Records
- class gretel_client.projects.records.RecordHandler(model: Model, *, record_id: str = None, data_source: DataSourceTypes | None = None, params: dict | None = None, ref_data: RefDataTypes | None = None)
Manages a model’s record handler. After a model has been created and trained, a record handler may be used to “run” the model.
- Parameters:
model – The model to generate a record handler for.
record_id – The id of an existing record handler.
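A minimal lifecycle sketch (assumes model is an already-trained Model instance and uses the poll helper from gretel_client.helpers; "num_records" is an illustrative, model-type-specific parameter):
>>> from gretel_client.helpers import poll
>>> # Create a record handler from the trained model and run it in Gretel Cloud
>>> record_handler = model.create_record_handler_obj(params={"num_records": 100})
>>> record_handler.submit_cloud()
>>> poll(record_handler)  # tail logs until the job completes or errors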
- property artifact_types: List[str]
Returns a list of valid artifacts for the record handler.
- property billing_details: dict
Get billing details for the current job.
- cancel()
Cancels the active job.
- property container_image: str
Return the container image for the job.
- property data_source: str | _DataFrameT | None
Returns the data source with which the record handler was configured, if any.
If the record handler has been submitted, returns the resolved artifact ID. Otherwise, returns the originally-supplied data_source argument.
- delete()
Deletes the record handler.
- download_artifacts(target_dir: str | Path)
Given a target directory, either as a string or a Path object, attempt to enumerate and download all artifacts associated with this Job.
- Parameters:
target_dir – The target directory to store artifacts in. If the directory does not exist, it will be created for you.
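For example (a sketch, assuming the record handler has already completed):
>>> from pathlib import Path
>>> # Download every known artifact for this run; the directory is created if missing
>>> record_handler.download_artifacts(Path("artifacts"))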
- property errors
Return any errors associated with the model.
- property external_data_source: bool
Returns True if the data source is external to Gretel Cloud. If the data source is a Gretel Artifact, returns False.
- property external_ref_data: bool
Returns True if the data refs are external to Gretel Cloud. If the data refs are Gretel Artifacts, returns False.
- get_artifact_handle(artifact_key: str) BinaryIO
Returns a reference to a remote artifact that can be used to read binary data within a context manager.
>>> with job.get_artifact_handle("report_json") as file:
...     print(file.read())
- Parameters:
artifact_key – Artifact type to download.
- Returns:
A file-like object.
- get_artifact_link(artifact_key: str) str
Retrieves a signed S3 link that will download the specified artifact type.
- Parameters:
artifact_key – Artifact type to download.
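A sketch of loading generated output with pandas (the "data" artifact key and gzip compression are assumptions based on typical synthetics output; check artifact_types for the keys available on your record handler):
>>> import pandas as pd
>>> # Fetch a signed link for the generated data and read it into a DataFrame
>>> link = record_handler.get_artifact_link("data")
>>> synthetic_df = pd.read_csv(link, compression="gzip")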
- get_artifacts() Iterator[Tuple[str, str]]
List artifact links for all known artifact types.
- get_report_summary(report_path: str | None = None) dict | None
Return a summary of the job results.
- Parameters:
report_path – If a report_path is passed, that report will be used for the summary. If no report path is passed, the function will check for a cloud report artifact.
- property instance_type: str
Return CPU or GPU based on the record handler’s run requirements.
- property logs
Returns run logs for the job.
- property model_type: str
Returns the parent model type of the record handler.
- property params: dict | None
Returns the params with which the record handler was configured, if any.
- peek_report(report_path: str | None = None) dict | None
Return a summary of the job results.
- Parameters:
report_path – If a report_path is passed, that report will be used for the summary. If no report path is passed, the function will check for a cloud based artifact.
- poll_logs_status(wait: int = -1, callback: Callable | None = None) Iterator[LogStatus]
Returns an iterator that may be used to tail the logs of a running Model.
- Parameters:
wait – The time in seconds to wait before closing the iterator. If wait is -1 (WAIT_UNTIL_DONE), the iterator will run until the model has reached a “completed” or “error” state.
callback – This function will be executed on every polling loop. A callback is useful for checking some external state that is working on a Job.
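A sketch of tailing logs manually (assumes each LogStatus update exposes a logs list of message dicts with a "msg" field, as returned by the jobs module):
>>> # Print new log messages until the job reaches a terminal state
>>> for update in record_handler.poll_logs_status():
...     for log in update.logs:
...         print(log.get("msg", ""))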
- property print_obj: dict
Returns a printable object representation of the job.
- property ref_data: str | Dict[str, str] | List[str] | Tuple[str] | _DataFrameT | List[_DataFrameT] | None
Returns the ref_data with which the record handler was configured, if any.
- refresh()
Update internal state of the job by making an API call to Gretel Cloud.
- property runner_mode: str
Returns the runner_mode of the job. May be one of manual or cloud.
- property status: Status
The status of the job. One of gretel_client.projects.jobs.Status.
- submit(runner_mode: str | RunnerMode | None = None, dry_run: bool = False) Job
Submit this Job to the Gretel Cloud API.
- Parameters:
runner_mode – Determines where to run the model. If not specified, the default runner configured on the session is used.
dry_run – If set to True the model config will be submitted for validation, but won’t be run. Ignored for record handlers.
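For example (a sketch; record_handler is assumed to be a not-yet-submitted RecordHandler, and passing a string runner mode is accepted per the signature alongside the RunnerMode enum):
>>> # Explicitly request a cloud run; omit runner_mode to use the session default
>>> record_handler.submit(runner_mode="cloud")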
- submit_cloud(dry_run: bool = False) Job
Submit this Job to the Gretel Cloud API to be scheduled for running in Gretel Cloud.
- Returns:
The response from the Gretel API.
- submit_hybrid(dry_run: bool = False) Job
Submit this Job to the Gretel Cloud API to be scheduled for running in a hybrid deployment.
- Returns:
The response from the Gretel API.
- submit_local(dry_run: bool = False) Job
Submit this Job to the Gretel Cloud API to be scheduled for running in a local container.
- Returns:
The response from the Gretel API.
- submit_manual(dry_run: bool = False) Job
Submit this Job to the Gretel Cloud API, which will create the job metadata but no runner will be started. The Model instance can now be passed into a dedicated runner.
- Returns:
The response from the Gretel API.
- property traceback: str | None
Returns the traceback associated with any job errors.
- upload_data_source(_validate: bool = True, _artifacts_handler: CloudArtifactsHandler | HybridArtifactsHandler | None = None) str | None
Resolves and uploads the data source specified in the model config.
If the data source is already a Gretel artifact, the artifact will not be uploaded.
- Returns:
A Gretel artifact key.
- upload_ref_data(_validate: bool = True, _artifacts_handler: ArtifactsHandler | None = None) RefData
Resolves and uploads ref data sources specified in the model config.
If the ref data are already Gretel artifacts, we’ll return the ref data as-is.
- Returns:
A RefData instance that contains the new Gretel artifact values.
- worker_key: str | None
Worker key used to launch the job.