
Deployers

zenml.deployers

Deployers are stack components responsible for deploying pipelines as HTTP services.

Deploying pipelines is the process of hosting and running machine-learning pipelines as part of a managed web service and exposing pipeline execution through an API endpoint over a protocol such as HTTP or gRPC. Once deployed, you can send execution requests to the pipeline through the web service's API and receive responses containing the pipeline results or execution status.
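As a sketch of what such an execution request looks like from the client side, the snippet below builds an HTTP POST against a deployed pipeline endpoint using only the standard library. The `/invoke` route, the `{"parameters": ...}` payload shape, and the bearer-token header are illustrative assumptions, not the documented contract; consult your deployment's generated endpoint docs for the real one.

```python
import json
from typing import Optional
from urllib import request


def build_invoke_request(
    base_url: str, parameters: dict, auth_key: Optional[str] = None
) -> request.Request:
    # Hypothetical route and payload shape -- check your deployment's
    # endpoint documentation for the actual contract.
    body = json.dumps({"parameters": parameters}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    if auth_key:
        headers["Authorization"] = f"Bearer {auth_key}"
    return request.Request(
        f"{base_url.rstrip('/')}/invoke", data=body, headers=headers, method="POST"
    )


req = build_invoke_request("http://localhost:8000", {"city": "Berlin"}, auth_key="secret")
# request.urlopen(req) would send the request and return the pipeline's response.
```

Sending the request with `urllib.request.urlopen(req)` (or any HTTP client) returns the pipeline results or execution status as described above.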

Add a deployer to your ZenML stack to provision pipeline deployments that transform your ML pipelines into long-running HTTP services for real-time, on-demand execution instead of traditional batch processing.

When present in a stack, the deployer also acts as a registry for pipeline endpoints that are deployed with ZenML. You can use the deployer to list all deployments currently provisioned for online execution, filter them by a particular snapshot or configuration, or delete an external deployment managed through ZenML.

Attributes

__all__ = ['BaseDeployer', 'BaseDeployerFlavor', 'BaseDeployerConfig', 'ContainerizedDeployer', 'DockerDeployer', 'DockerDeployerConfig', 'DockerDeployerFlavor', 'DockerDeployerSettings', 'LocalDeployer', 'LocalDeployerConfig', 'LocalDeployerFlavor', 'LocalDeployerSettings'] module-attribute

Classes

BaseDeployer(name: str, id: UUID, config: StackComponentConfig, flavor: str, type: StackComponentType, user: Optional[UUID], created: datetime, updated: datetime, environment: Optional[Dict[str, str]] = None, secrets: Optional[List[UUID]] = None, labels: Optional[Dict[str, Any]] = None, connector_requirements: Optional[ServiceConnectorRequirements] = None, connector: Optional[UUID] = None, connector_resource_id: Optional[str] = None, *args: Any, **kwargs: Any)

Bases: StackComponent, ABC

Base class for all ZenML deployers.

The deployer serves three major purposes:

  1. It contains all the stack related configuration attributes required to interact with the remote pipeline deployment tool, service or platform (e.g. hostnames, URLs, references to credentials, other client related configuration parameters).

  2. It implements the life-cycle management for deployments, including discovery, creation, deletion and updating.

  3. It acts as a ZenML deployment registry, where every pipeline deployment is stored as a database entity through the ZenML Client. This allows the deployer to keep track of all externally running pipeline deployments and to manage their lifecycle.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self,
    name: str,
    id: UUID,
    config: StackComponentConfig,
    flavor: str,
    type: StackComponentType,
    user: Optional[UUID],
    created: datetime,
    updated: datetime,
    environment: Optional[Dict[str, str]] = None,
    secrets: Optional[List[UUID]] = None,
    labels: Optional[Dict[str, Any]] = None,
    connector_requirements: Optional[ServiceConnectorRequirements] = None,
    connector: Optional[UUID] = None,
    connector_resource_id: Optional[str] = None,
    *args: Any,
    **kwargs: Any,
):
    """Initializes a StackComponent.

    Args:
        name: The name of the component.
        id: The unique ID of the component.
        config: The config of the component.
        flavor: The flavor of the component.
        type: The type of the component.
        user: The ID of the user who created the component.
        created: The creation time of the component.
        updated: The last update time of the component.
        environment: Environment variables to set when running on this
            component.
        secrets: Secrets to set as environment variables when running on
            this component.
        labels: The labels of the component.
        connector_requirements: The requirements for the connector.
        connector: The ID of a connector linked to the component.
        connector_resource_id: The custom resource ID to access through
            the connector.
        *args: Additional positional arguments.
        **kwargs: Additional keyword arguments.

    Raises:
        ValueError: If a secret reference is passed as name.
    """
    if secret_utils.is_secret_reference(name):
        raise ValueError(
            "Passing the `name` attribute of a stack component as a "
            "secret reference is not allowed."
        )

    self.id = id
    self.name = name
    self._config = config
    self.flavor = flavor
    self.type = type
    self.user = user
    self.created = created
    self.updated = updated
    self.labels = labels
    self.environment = environment or {}
    self.secrets = secrets or []
    self.connector_requirements = connector_requirements
    self.connector = connector
    self.connector_resource_id = connector_resource_id
    self._connector_instance: Optional[ServiceConnector] = None
Attributes
config: BaseDeployerConfig property

Returns the BaseDeployerConfig config.

Returns:

Type Description
BaseDeployerConfig

The configuration.

Functions
delete_deployment(deployment_name_or_id: Union[str, UUID], project: Optional[UUID] = None, force: bool = False, timeout: Optional[int] = None) -> None

Deprovision and delete a deployment.

Parameters:

Name Type Description Default
deployment_name_or_id Union[str, UUID]

The name or ID of the deployment to delete.

required
project Optional[UUID]

The project ID of the deployment to deprovision. Required if a name is provided.

None
force bool

if True, delete the deployment even if it cannot be deprovisioned.

False
timeout Optional[int]

The maximum time in seconds to wait for the pipeline deployment to be deprovisioned. If provided, will override the deployer's default timeout.

None

Raises:

Type Description
DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
def delete_deployment(
    self,
    deployment_name_or_id: Union[str, UUID],
    project: Optional[UUID] = None,
    force: bool = False,
    timeout: Optional[int] = None,
) -> None:
    """Deprovision and delete a deployment.

    Args:
        deployment_name_or_id: The name or ID of the deployment to
            delete.
        project: The project ID of the deployment to deprovision.
            Required if a name is provided.
force: if True, delete the deployment even if it
            cannot be deprovisioned.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be deprovisioned. If provided, will override the
            deployer's default timeout.

    Raises:
        DeployerError: if an unexpected error occurs.
    """
    client = Client()
    try:
        deployment = self.deprovision_deployment(
            deployment_name_or_id, project, timeout
        )
    except DeploymentNotFoundError:
        # The deployment was already deleted
        return
    except DeployerError as e:
        if force:
            logger.warning(
                f"Failed to deprovision deployment "
                f"{deployment_name_or_id}: {e}. Forcing deletion."
            )
            deployment = client.get_deployment(
                deployment_name_or_id, project=project
            )
            client.zen_store.delete_deployment(deployment_id=deployment.id)
        else:
            raise
    else:
        client.zen_store.delete_deployment(deployment_id=deployment.id)
deprovision_deployment(deployment_name_or_id: Union[str, UUID], project: Optional[UUID] = None, timeout: Optional[int] = None) -> DeploymentResponse

Deprovision a deployment.

Parameters:

Name Type Description Default
deployment_name_or_id Union[str, UUID]

The name or ID of the deployment to deprovision.

required
project Optional[UUID]

The project ID of the deployment to deprovision. Required if a name is provided.

None
timeout Optional[int]

The maximum time in seconds to wait for the pipeline deployment to deprovision. If provided, will override the deployer's default timeout.

None

Returns:

Type Description
DeploymentResponse

The deployment.

Raises:

Type Description
DeploymentNotFoundError

if the deployment is not found or is not managed by this deployer.

DeploymentDeprovisionError

if the deployment deprovision fails.

DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
def deprovision_deployment(
    self,
    deployment_name_or_id: Union[str, UUID],
    project: Optional[UUID] = None,
    timeout: Optional[int] = None,
) -> DeploymentResponse:
    """Deprovision a deployment.

    Args:
        deployment_name_or_id: The name or ID of the deployment to
            deprovision.
        project: The project ID of the deployment to deprovision.
            Required if a name is provided.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to deprovision. If provided, will override the
            deployer's default timeout.

    Returns:
        The deployment.

    Raises:
        DeploymentNotFoundError: if the deployment is not found
            or is not managed by this deployer.
        DeploymentDeprovisionError: if the deployment
            deprovision fails.
        DeployerError: if an unexpected error occurs.
    """
    client = Client()
    try:
        deployment = client.get_deployment(
            deployment_name_or_id, project=project
        )
    except KeyError:
        raise DeploymentNotFoundError(
            f"Deployment with name or ID '{deployment_name_or_id}' "
            f"not found"
        )

    self._check_deployment_deployer(deployment)

    if not timeout and deployment.snapshot:
        settings = cast(
            BaseDeployerSettings,
            self.get_settings(deployment.snapshot),
        )

        timeout = settings.lcm_timeout

    timeout = timeout or DEFAULT_DEPLOYMENT_LCM_TIMEOUT

    start_time = time.time()
    deployment_state = DeploymentOperationalState(
        status=DeploymentStatus.ERROR,
    )
    with track_handler(
        AnalyticsEvent.STOP_DEPLOYMENT
    ) as analytics_handler:
        try:
            deleted_deployment_state = self.do_deprovision_deployment(
                deployment, timeout
            )
            if not deleted_deployment_state:
                # When do_deprovision_deployment returns None, it signals
                # that the deployment is already fully deprovisioned.
                deployment_state.status = DeploymentStatus.ABSENT
        except DeploymentNotFoundError:
            deployment_state.status = DeploymentStatus.ABSENT
        except DeployerError as e:
            raise DeployerError(
                f"Failed to delete deployment {deployment_name_or_id}: {e}"
            ) from e
        except Exception as e:
            raise DeployerError(
                f"Unexpected error while deleting deployment for "
                f"{deployment_name_or_id}: {e}"
            ) from e
        finally:
            deployment = self._update_deployment(
                deployment, deployment_state
            )

        try:
            if deployment_state.status == DeploymentStatus.ABSENT:
                return deployment

            # Subtract the time spent deprovisioning the deployment from the timeout
            timeout = timeout - int(time.time() - start_time)
            deployment, _ = self._poll_deployment(
                deployment, DeploymentStatus.ABSENT, timeout
            )

            if deployment.status != DeploymentStatus.ABSENT:
                raise DeploymentDeprovisionError(
                    f"Failed to deprovision deployment {deployment_name_or_id}: "
                    f"Operational state: {deployment.status}"
                )

        finally:
            analytics_handler.metadata = (
                self._get_deployment_analytics_metadata(
                    deployment=deployment,
                    stack=None,
                )
            )

        return deployment
do_deprovision_deployment(deployment: DeploymentResponse, timeout: int) -> Optional[DeploymentOperationalState] abstractmethod

Abstract method to deprovision a deployment.

Concrete deployer subclasses must implement the following functionality in this method:

  • Deprovision the actual deployment infrastructure (e.g., FastAPI server, Kubernetes deployment, cloud function, etc.) based on the information in the deployment response.

  • Return a DeploymentOperationalState representing the operational state of the deleted deployment, or None if the deletion is completed before the call returns.

Note that the deployment infrastructure is not required to be deleted immediately. The deployer can return a DeploymentOperationalState with a status of DeploymentStatus.PENDING, and the base deployer will poll the deployment infrastructure by calling the do_get_deployment_state method until it is deleted or it times out.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to delete.

required
timeout int

The maximum time in seconds to wait for the pipeline deployment to be deprovisioned.

required

Returns:

Type Description
Optional[DeploymentOperationalState]

The DeploymentOperationalState object representing the operational state of the deprovisioned deployment, or None if the deprovision is completed before the call returns.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

DeploymentDeprovisionError

if the deployment deprovision fails.

DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
@abstractmethod
def do_deprovision_deployment(
    self,
    deployment: DeploymentResponse,
    timeout: int,
) -> Optional[DeploymentOperationalState]:
    """Abstract method to deprovision a deployment.

    Concrete deployer subclasses must implement the following
    functionality in this method:

    - Deprovision the actual deployment infrastructure (e.g.,
    FastAPI server, Kubernetes deployment, cloud function, etc.) based on
    the information in the deployment response.

    - Return a DeploymentOperationalState representing the operational
    state of the deleted deployment, or None if the deletion is
    completed before the call returns.

    Note that the deployment infrastructure is not required to be
    deleted immediately. The deployer can return a
    DeploymentOperationalState with a status of
    DeploymentStatus.PENDING, and the base deployer will poll
    the deployment infrastructure by calling the
    `do_get_deployment_state` method until it is deleted or it times out.

    Args:
        deployment: The deployment to delete.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be deprovisioned.

    Returns:
        The DeploymentOperationalState object representing the
        operational state of the deprovisioned deployment, or None
        if the deprovision is completed before the call returns.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
        DeploymentDeprovisionError: if the deployment
            deprovision fails.
        DeployerError: if an unexpected error occurs.
    """
do_get_deployment_state(deployment: DeploymentResponse) -> DeploymentOperationalState abstractmethod

Abstract method to get information about a deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to get information about.

required

Returns:

Type Description
DeploymentOperationalState

The DeploymentOperationalState object representing the updated operational state of the deployment.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

DeployerError

if the deployment information cannot be retrieved for any other reason or if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
@abstractmethod
def do_get_deployment_state(
    self,
    deployment: DeploymentResponse,
) -> DeploymentOperationalState:
    """Abstract method to get information about a deployment.

    Args:
        deployment: The deployment to get information about.

    Returns:
        The DeploymentOperationalState object representing the
        updated operational state of the deployment.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
        DeployerError: if the deployment information cannot
            be retrieved for any other reason or if an unexpected error
            occurs.
    """
do_get_deployment_state_logs(deployment: DeploymentResponse, follow: bool = False, tail: Optional[int] = None) -> Generator[str, bool, None] abstractmethod

Abstract method to get the logs of a deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to get the logs of.

required
follow bool

if True, the logs will be streamed as they are written.

False
tail Optional[int]

only retrieve the last NUM lines of log output.

None

Yields:

Type Description
str

The logs of the deployment.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

DeploymentLogsNotFoundError

if the deployment logs are not found.

DeployerError

if the deployment logs cannot be retrieved for any other reason or if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
@abstractmethod
def do_get_deployment_state_logs(
    self,
    deployment: DeploymentResponse,
    follow: bool = False,
    tail: Optional[int] = None,
) -> Generator[str, bool, None]:
    """Abstract method to get the logs of a deployment.

    Args:
        deployment: The deployment to get the logs of.
        follow: if True, the logs will be streamed as they are written
        tail: only retrieve the last NUM lines of log output.

    Yields:
        The logs of the deployment.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
        DeploymentLogsNotFoundError: if the deployment logs are not
            found.
        DeployerError: if the deployment logs cannot
            be retrieved for any other reason or if an unexpected error
            occurs.
    """
do_provision_deployment(deployment: DeploymentResponse, stack: Stack, environment: Dict[str, str], secrets: Dict[str, str], timeout: int) -> DeploymentOperationalState abstractmethod

Abstract method to deploy a pipeline as an HTTP deployment.

Concrete deployer subclasses must implement the following functionality in this method:

  • Create the actual deployment infrastructure (e.g., FastAPI server, Kubernetes deployment, cloud function, etc.) based on the information in the deployment response, particularly the pipeline snapshot. When determining how to name the external resources, do not rely on the deployment name as being immutable or unique.

  • If the deployment infrastructure is already provisioned, update it to match the information in the deployment response.

  • Return a DeploymentOperationalState representing the operational state of the provisioned deployment.

Note that the deployment infrastructure is not required to be deployed immediately. The deployer can return a DeploymentOperationalState with a status of DeploymentStatus.PENDING, and the base deployer will poll the deployment infrastructure by calling the do_get_deployment_state method until it is ready or it times out.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to deploy as an HTTP deployment.

required
stack Stack

The stack the pipeline will be deployed on.

required
environment Dict[str, str]

A dictionary of environment variables to set on the deployment.

required
secrets Dict[str, str]

A dictionary of secret environment variables to set on the deployment. These secret environment variables should not be exposed as regular environment variables on the deployer.

required
timeout int

The maximum time in seconds to wait for the pipeline deployment to be provisioned.

required

Returns:

Type Description
DeploymentOperationalState

The DeploymentOperationalState object representing the operational state of the provisioned deployment.

Raises:

Type Description
DeploymentProvisionError

if provisioning the deployment fails.

DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
@abstractmethod
def do_provision_deployment(
    self,
    deployment: DeploymentResponse,
    stack: "Stack",
    environment: Dict[str, str],
    secrets: Dict[str, str],
    timeout: int,
) -> DeploymentOperationalState:
    """Abstract method to deploy a pipeline as an HTTP deployment.

    Concrete deployer subclasses must implement the following
    functionality in this method:

    - Create the actual deployment infrastructure (e.g.,
    FastAPI server, Kubernetes deployment, cloud function, etc.) based on
    the information in the deployment response, particularly the
    pipeline snapshot. When determining how to name the external
    resources, do not rely on the deployment name as being immutable
    or unique.

    - If the deployment infrastructure is already provisioned, update
    it to match the information in the deployment response.

    - Return a DeploymentOperationalState representing the operational
    state of the provisioned deployment.

    Note that the deployment infrastructure is not required to be
    deployed immediately. The deployer can return a
    DeploymentOperationalState with a status of
    DeploymentStatus.PENDING, and the base deployer will poll
    the deployment infrastructure by calling the
    `do_get_deployment_state` method until it is ready or it times out.

    Args:
        deployment: The deployment to deploy as an HTTP deployment.
        stack: The stack the pipeline will be deployed on.
        environment: A dictionary of environment variables to set on the
            deployment.
        secrets: A dictionary of secret environment variables to set
            on the deployment. These secret environment variables
            should not be exposed as regular environment variables on the
            deployer.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be provisioned.

    Returns:
        The DeploymentOperationalState object representing the
        operational state of the provisioned deployment.

    Raises:
        DeploymentProvisionError: if provisioning the deployment
            fails.
        DeployerError: if an unexpected error occurs.
    """
get_active_deployer() -> BaseDeployer classmethod

Get the deployer registered in the active stack.

Returns:

Type Description
BaseDeployer

The deployer registered in the active stack.

Raises:

Type Description
TypeError

if a deployer is not part of the active stack.

Source code in src/zenml/deployers/base_deployer.py
@classmethod
def get_active_deployer(cls) -> "BaseDeployer":
    """Get the deployer registered in the active stack.

    Returns:
        The deployer registered in the active stack.

    Raises:
        TypeError: if a deployer is not part of the
            active stack.
    """
    client = Client()
    deployer = client.active_stack.deployer
    if not deployer or not isinstance(deployer, cls):
        raise TypeError(
            "The active stack needs to have a deployer "
            "component registered to be able to deploy pipelines. "
            "You can create a new stack with a deployer component "
            "or update your active stack to add this component, e.g.:\n\n"
            "  `zenml deployer register ...`\n"
            "  `zenml stack register <STACK-NAME> -D ...`\n"
            "  or:\n"
            "  `zenml stack update -D ...`\n\n"
        )

    return deployer
get_deployment_logs(deployment_name_or_id: Union[str, UUID], project: Optional[UUID] = None, follow: bool = False, tail: Optional[int] = None) -> Generator[str, bool, None]

Get the logs of a deployment.

Parameters:

Name Type Description Default
deployment_name_or_id Union[str, UUID]

The name or ID of the deployment to get the logs of.

required
project Optional[UUID]

The project ID of the deployment to get the logs of. Required if a name is provided.

None
follow bool

if True, the logs will be streamed as they are written.

False
tail Optional[int]

only retrieve the last NUM lines of log output.

None

Returns:

Type Description
Generator[str, bool, None]

A generator that yields the logs of the deployment.

Raises:

Type Description
DeploymentNotFoundError

if the deployment is not found.

DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
def get_deployment_logs(
    self,
    deployment_name_or_id: Union[str, UUID],
    project: Optional[UUID] = None,
    follow: bool = False,
    tail: Optional[int] = None,
) -> Generator[str, bool, None]:
    """Get the logs of a deployment.

    Args:
        deployment_name_or_id: The name or ID of the deployment to get
            the logs of.
        project: The project ID of the deployment to get the logs of.
            Required if a name is provided.
        follow: if True, the logs will be streamed as they are written.
        tail: only retrieve the last NUM lines of log output.

    Returns:
        A generator that yields the logs of the deployment.

    Raises:
        DeploymentNotFoundError: if the deployment is not found.
        DeployerError: if an unexpected error occurs.
    """
    client = Client()
    try:
        deployment = client.get_deployment(
            deployment_name_or_id, project=project
        )
    except KeyError:
        raise DeploymentNotFoundError(
            f"Deployment with name or ID '{deployment_name_or_id}' "
            f"not found"
        )

    self._check_deployment_deployer(deployment)

    try:
        return self.do_get_deployment_state_logs(deployment, follow, tail)
    except DeployerError as e:
        raise DeployerError(
            f"Failed to get logs for deployment {deployment_name_or_id}: {e}"
        ) from e
    except Exception as e:
        raise DeployerError(
            f"Unexpected error while getting logs for deployment for "
            f"{deployment_name_or_id}: {e}"
        ) from e
provision_deployment(snapshot: PipelineSnapshotResponse, stack: Stack, deployment_name_or_id: Union[str, UUID], replace: bool = True, timeout: Optional[int] = None) -> DeploymentResponse

Provision a deployment.

The provision_deployment method is the main entry point for provisioning deployments using the deployer. It is used to deploy a pipeline snapshot as an HTTP deployment, or update an existing deployment instance with the same name. The method returns a DeploymentResponse object that is a representation of the external deployment instance.

Parameters:

Name Type Description Default
snapshot PipelineSnapshotResponse

The pipeline snapshot to deploy as an HTTP deployment.

required
stack Stack

The stack the pipeline will be deployed on.

required
deployment_name_or_id Union[str, UUID]

Unique name or ID for the deployment. This name must be unique at the project level.

required
replace bool

If True, it will update in-place any existing pipeline deployment instance with the same name. If False, and the pipeline deployment instance already exists, it will raise a DeploymentAlreadyExistsError.

True
timeout Optional[int]

The maximum time in seconds to wait for the pipeline deployment to be provisioned. If provided, will override the deployer's default timeout.

None

Raises:

Type Description
DeploymentAlreadyExistsError

if the deployment already exists and replace is False.

DeploymentProvisionError

if the deployment fails.

DeploymentSnapshotMismatchError

if the pipeline snapshot was not created for this deployer.

DeploymentNotFoundError

if the deployment with the given ID is not found.

DeployerError

if an unexpected error occurs.

Returns:

Type Description
DeploymentResponse

The DeploymentResponse object representing the provisioned deployment.

Source code in src/zenml/deployers/base_deployer.py
def provision_deployment(
    self,
    snapshot: PipelineSnapshotResponse,
    stack: "Stack",
    deployment_name_or_id: Union[str, UUID],
    replace: bool = True,
    timeout: Optional[int] = None,
) -> DeploymentResponse:
    """Provision a deployment.

    The provision_deployment method is the main entry point for
    provisioning deployments using the deployer. It is used to deploy
    a pipeline snapshot as an HTTP deployment, or update an existing
    deployment instance with the same name. The method returns a
    DeploymentResponse object that is a representation of the
    external deployment instance.

    Args:
        snapshot: The pipeline snapshot to deploy as an HTTP deployment.
        stack: The stack the pipeline will be deployed on.
        deployment_name_or_id: Unique name or ID for the deployment.
            This name must be unique at the project level.
        replace: If True, it will update in-place any existing pipeline
            deployment instance with the same name. If False, and the pipeline
            deployment instance already exists, it will raise a
            DeploymentAlreadyExistsError.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be provisioned. If provided, will override the
            deployer's default timeout.

    Raises:
        DeploymentAlreadyExistsError: if the deployment already
            exists and replace is False.
        DeploymentProvisionError: if the deployment fails.
        DeploymentSnapshotMismatchError: if the pipeline snapshot
            was not created for this deployer.
        DeploymentNotFoundError: if the deployment with the
            given ID is not found.
        DeployerError: if an unexpected error occurs.

    Returns:
        The DeploymentResponse object representing the provisioned
        deployment.
    """
    if not replace and is_valid_uuid(deployment_name_or_id):
        raise DeploymentAlreadyExistsError(
            f"A deployment with ID '{deployment_name_or_id}' "
            "already exists"
        )

    self._check_deployment_inputs_outputs(snapshot)

    client = Client()

    settings = cast(
        BaseDeployerSettings,
        self.get_settings(snapshot),
    )

    timeout = timeout or settings.lcm_timeout
    auth_key = settings.auth_key
    if not auth_key and settings.generate_auth_key:
        auth_key = self._generate_auth_key()

    if snapshot.stack and snapshot.stack.id != stack.id:
        # When a different stack is used than the one the snapshot was
        # created for, the container image may not have the correct
        # dependencies installed, which leads to unexpected errors during
        # deployment. To avoid this, we raise an error here.
        raise DeploymentSnapshotMismatchError(
            f"The pipeline snapshot with ID '{snapshot.id}' "
            f"was not created for the stack {stack.name} and might not "
            "have the correct dependencies installed. This may "
            "lead to unexpected behavior during deployment. Please switch "
            f"to the correct active stack '{snapshot.stack.name}' or use "
            "a different snapshot."
        )

    try:
        # Get the existing deployment
        deployment = client.get_deployment(
            deployment_name_or_id, project=snapshot.project_id
        )

        self._check_snapshot_already_deployed(snapshot, deployment.id)

        logger.debug(
            f"Existing deployment found with name '{deployment.name}'"
        )
    except KeyError:
        if isinstance(deployment_name_or_id, UUID):
            raise DeploymentNotFoundError(
                f"Deployment with ID '{deployment_name_or_id}' not found"
            )

        self._check_snapshot_already_deployed(
            snapshot, deployment_name_or_id
        )

        logger.debug(
            f"Creating new deployment {deployment_name_or_id} with "
            f"snapshot ID: {snapshot.id}"
        )

        # Create the deployment request
        deployment_request = DeploymentRequest(
            name=deployment_name_or_id,
            project=snapshot.project_id,
            snapshot_id=snapshot.id,
            deployer_id=self.id,  # This deployer's ID
            auth_key=auth_key,
        )

        deployment = client.zen_store.create_deployment(deployment_request)
        logger.debug(
            f"Created new deployment with name '{deployment.name}' "
            f"and ID: {deployment.id}"
        )
    else:
        if not replace:
            raise DeploymentAlreadyExistsError(
                f"A deployment with name '{deployment.name}' "
                "already exists"
            )

        self._check_deployment_deployer(deployment)
        self._check_deployment_snapshot(snapshot)

        deployment_update = DeploymentUpdate(
            snapshot_id=snapshot.id,
        )
        if (
            deployment.auth_key
            and not auth_key
            or not deployment.auth_key
            and auth_key
        ):
            # Key was either added or removed
            deployment_update.auth_key = auth_key
        elif deployment.auth_key != auth_key and (
            settings.auth_key or not settings.generate_auth_key
        ):
            # Key was changed and not because of re-generation
            deployment_update.auth_key = auth_key

        # The deployment has been updated
        deployment = client.zen_store.update_deployment(
            deployment.id,
            deployment_update,
        )

    logger.info(
        f"Provisioning deployment {deployment.name} with "
        f"snapshot ID: {snapshot.id}"
    )

    environment, secrets = get_config_environment_vars(
        deployment_id=deployment.id,
    )

    # Make sure to use the correct active stack/project which correspond
    # to the supplied stack and snapshot, which may be different from the
    # active stack/project
    environment[ENV_ZENML_ACTIVE_STACK_ID] = str(stack.id)
    environment[ENV_ZENML_ACTIVE_PROJECT_ID] = str(snapshot.project_id)

    start_time = time.time()
    deployment_state = DeploymentOperationalState(
        status=DeploymentStatus.ERROR,
    )
    with track_handler(
        AnalyticsEvent.DEPLOY_PIPELINE
    ) as analytics_handler:
        try:
            deployment_state = self.do_provision_deployment(
                deployment,
                stack=stack,
                environment=environment,
                secrets=secrets,
                timeout=timeout,
            )
        except DeploymentProvisionError as e:
            raise DeploymentProvisionError(
                f"Failed to provision deployment {deployment.name}: {e}"
            ) from e
        except DeployerError as e:
            raise DeployerError(
                f"Failed to provision deployment {deployment.name}: {e}"
            ) from e
        except Exception as e:
            raise DeployerError(
                f"Unexpected error while provisioning deployment for "
                f"{deployment.name}: {e}"
            ) from e
        finally:
            deployment = self._update_deployment(
                deployment, deployment_state
            )

        logger.info(
            f"Provisioned deployment {deployment.name} with "
            f"snapshot ID: {snapshot.id}. Operational state is: "
            f"{deployment_state.status}"
        )

        try:
            if deployment_state.status == DeploymentStatus.RUNNING:
                return deployment

            # Subtract the time spent deploying the deployment from the
            # timeout
            timeout = timeout - int(time.time() - start_time)
            deployment, _ = self._poll_deployment(
                deployment, DeploymentStatus.RUNNING, timeout
            )

            if deployment.status != DeploymentStatus.RUNNING:
                raise DeploymentProvisionError(
                    f"Failed to provision deployment {deployment.name}: "
                    f"The deployment's operational state is "
                    f"{deployment.status}. Please check the status or logs "
                    "of the deployment for more information."
                )

        finally:
            analytics_handler.metadata = (
                self._get_deployment_analytics_metadata(
                    deployment=deployment,
                    stack=stack,
                )
            )

        return deployment
refresh_deployment(deployment_name_or_id: Union[str, UUID], project: Optional[UUID] = None) -> DeploymentResponse

Refresh the status of a deployment by name or ID.

Call this to refresh the operational state of a deployment.

Parameters:

Name Type Description Default
deployment_name_or_id Union[str, UUID]

The name or ID of the deployment to get.

required
project Optional[UUID]

The project ID of the deployment to get. Required if a name is provided.

None

Returns:

Type Description
DeploymentResponse

The deployment.

Raises:

Type Description
DeploymentNotFoundError

if the deployment is not found.

DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
def refresh_deployment(
    self,
    deployment_name_or_id: Union[str, UUID],
    project: Optional[UUID] = None,
) -> DeploymentResponse:
    """Refresh the status of a deployment by name or ID.

    Call this to refresh the operational state of a deployment.

    Args:
        deployment_name_or_id: The name or ID of the deployment to get.
        project: The project ID of the deployment to get. Required
            if a name is provided.

    Returns:
        The deployment.

    Raises:
        DeploymentNotFoundError: if the deployment is not found.
        DeployerError: if an unexpected error occurs.
    """
    client = Client()
    try:
        deployment = client.get_deployment(
            deployment_name_or_id, project=project
        )
    except KeyError:
        raise DeploymentNotFoundError(
            f"Deployment with name or ID '{deployment_name_or_id}' "
            f"not found"
        )

    self._check_deployment_deployer(deployment)

    deployment_state = DeploymentOperationalState(
        status=DeploymentStatus.ERROR,
    )
    try:
        deployment_state = self.do_get_deployment_state(deployment)
    except DeploymentNotFoundError:
        deployment_state.status = DeploymentStatus.ABSENT
    except DeployerError as e:
        raise DeployerError(
            f"Failed to refresh deployment {deployment_name_or_id}: {e}"
        ) from e
    except Exception as e:
        raise DeployerError(
            f"Unexpected error while refreshing deployment for "
            f"{deployment_name_or_id}: {e}"
        ) from e
    finally:
        deployment = self._update_deployment(deployment, deployment_state)

    return deployment
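Note the error handling: a missing backend resource is mapped to the ABSENT status rather than surfaced as a failure. A minimal sketch of that fallback, with illustrative stand-ins for the real ZenML classes:

```python
from enum import Enum


class Status(str, Enum):
    RUNNING = "running"
    ERROR = "error"
    ABSENT = "absent"


class NotFound(Exception):
    """Stand-in for DeploymentNotFoundError."""


def resolve_state(fetch) -> Status:
    # Start from ERROR so an unexpected failure is still recorded.
    state = Status.ERROR
    try:
        state = fetch()
    except NotFound:
        # The backend resource is gone: report ABSENT, don't raise.
        state = Status.ABSENT
    return state
```

This is why `refresh_deployment` can be called safely on deployments whose backend containers were deleted out-of-band: the record is updated to ABSENT instead of raising.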

BaseDeployerConfig(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: StackComponentConfig

Base config for all deployers.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references don't clash with pydantic validation.

    StackComponents allow the specification of all their string attributes
    using secret references of the form `{{secret_name.key}}`. This however
    is only possible when the stack component does not perform any explicit
    validation of this attribute using pydantic validators. If this were
    the case, the validation would run on the secret reference and would
    fail or in the worst case, modify the secret reference and lead to
    unexpected behavior. This method ensures that no attributes that require
    custom pydantic validation are set as secret references.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using
            plain-text secrets.
        **kwargs: Arguments to initialize this stack component.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            is passed as a secret reference, or if the `name` attribute
            was passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}` for a `{self.__class__.__name__}` "
                    "stack component. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure your stack "
                    "components with secrets here: "
                    "https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if pydantic_utils.has_validators(
            pydantic_class=self.__class__, field_name=key
        ):
            raise ValueError(
                f"Passing the stack component attribute `{key}` as a "
                "secret reference is not allowed as additional validation "
                "is required for this attribute."
            )

    super().__init__(**kwargs)
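Secret references have the form `{{secret_name.key}}`. A rough, hypothetical re-implementation of the reference check (the real helper lives in `zenml.utils.secret_utils`; the regex here is an assumption):

```python
import re

# Assumed pattern: "{{secret_name.key}}", whitespace-tolerant.
_SECRET_REF = re.compile(r"^\{\{\s*\w+\.\w+\s*\}\}$")


def is_secret_reference(value) -> bool:
    """Return True if the value looks like a {{secret.key}} reference."""
    return isinstance(value, str) and bool(_SECRET_REF.match(value.strip()))
```

Values that match are skipped by pydantic validation at init time, which is exactly why attributes that require custom validators may not be set as secret references.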

BaseDeployerFlavor

Bases: Flavor

Base class for deployer flavors.

Attributes
config_class: Type[BaseDeployerConfig] property

Returns BaseDeployerConfig config class.

Returns:

Type Description
Type[BaseDeployerConfig]

The config class.

implementation_class: Type[BaseDeployer] abstractmethod property

The class that implements the deployer.

type: StackComponentType property

Returns the flavor type.

Returns:

Type Description
StackComponentType

The flavor type.

ContainerizedDeployer(name: str, id: UUID, config: StackComponentConfig, flavor: str, type: StackComponentType, user: Optional[UUID], created: datetime, updated: datetime, environment: Optional[Dict[str, str]] = None, secrets: Optional[List[UUID]] = None, labels: Optional[Dict[str, Any]] = None, connector_requirements: Optional[ServiceConnectorRequirements] = None, connector: Optional[UUID] = None, connector_resource_id: Optional[str] = None, *args: Any, **kwargs: Any)

Bases: BaseDeployer, ABC

Base class for all containerized deployers.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self,
    name: str,
    id: UUID,
    config: StackComponentConfig,
    flavor: str,
    type: StackComponentType,
    user: Optional[UUID],
    created: datetime,
    updated: datetime,
    environment: Optional[Dict[str, str]] = None,
    secrets: Optional[List[UUID]] = None,
    labels: Optional[Dict[str, Any]] = None,
    connector_requirements: Optional[ServiceConnectorRequirements] = None,
    connector: Optional[UUID] = None,
    connector_resource_id: Optional[str] = None,
    *args: Any,
    **kwargs: Any,
):
    """Initializes a StackComponent.

    Args:
        name: The name of the component.
        id: The unique ID of the component.
        config: The config of the component.
        flavor: The flavor of the component.
        type: The type of the component.
        user: The ID of the user who created the component.
        created: The creation time of the component.
        updated: The last update time of the component.
        environment: Environment variables to set when running on this
            component.
        secrets: Secrets to set as environment variables when running on
            this component.
        labels: The labels of the component.
        connector_requirements: The requirements for the connector.
        connector: The ID of a connector linked to the component.
        connector_resource_id: The custom resource ID to access through
            the connector.
        *args: Additional positional arguments.
        **kwargs: Additional keyword arguments.

    Raises:
        ValueError: If a secret reference is passed as name.
    """
    if secret_utils.is_secret_reference(name):
        raise ValueError(
            "Passing the `name` attribute of a stack component as a "
            "secret reference is not allowed."
        )

    self.id = id
    self.name = name
    self._config = config
    self.flavor = flavor
    self.type = type
    self.user = user
    self.created = created
    self.updated = updated
    self.labels = labels
    self.environment = environment or {}
    self.secrets = secrets or []
    self.connector_requirements = connector_requirements
    self.connector = connector
    self.connector_resource_id = connector_resource_id
    self._connector_instance: Optional[ServiceConnector] = None
Attributes
requirements: Set[str] property

Set of PyPI requirements for the deployer.

Returns:

Type Description
Set[str]

A set of PyPI requirements for the deployer.

Functions
get_docker_builds(snapshot: PipelineSnapshotBase) -> List[BuildConfiguration]

Gets the Docker builds required for the component.

Parameters:

Name Type Description Default
snapshot PipelineSnapshotBase

The pipeline snapshot for which to get the builds.

required

Returns:

Type Description
List[BuildConfiguration]

The required Docker builds.

Source code in src/zenml/deployers/containerized_deployer.py
def get_docker_builds(
    self, snapshot: "PipelineSnapshotBase"
) -> List["BuildConfiguration"]:
    """Gets the Docker builds required for the component.

    Args:
        snapshot: The pipeline snapshot for which to get the builds.

    Returns:
        The required Docker builds.
    """
    deployment_settings = (
        snapshot.pipeline_configuration.deployment_settings
    )
    docker_settings = snapshot.pipeline_configuration.docker_settings
    if not docker_settings.install_deployment_requirements:
        return []

    deployment_requirements = load_deployment_requirements(
        deployment_settings
    )
    return [
        BuildConfiguration(
            key=DEPLOYER_DOCKER_IMAGE_KEY,
            settings=snapshot.pipeline_configuration.docker_settings,
            extra_requirements_files={
                ".zenml_deployment_requirements": deployment_requirements,
            },
        )
    ]
get_image(snapshot: PipelineSnapshotResponse) -> str staticmethod

Get the docker image used to deploy a pipeline snapshot.

Parameters:

Name Type Description Default
snapshot PipelineSnapshotResponse

The pipeline snapshot to get the image for.

required

Returns:

Type Description
str

The docker image used to deploy the pipeline snapshot.

Raises:

Type Description
RuntimeError

if the pipeline snapshot does not have a build or if the deployer image is not in the build.

Source code in src/zenml/deployers/containerized_deployer.py
@staticmethod
def get_image(snapshot: PipelineSnapshotResponse) -> str:
    """Get the docker image used to deploy a pipeline snapshot.

    Args:
        snapshot: The pipeline snapshot to get the image for.

    Returns:
        The docker image used to deploy the pipeline snapshot.

    Raises:
        RuntimeError: if the pipeline snapshot does not have a build or
            if the deployer image is not in the build.
    """
    if snapshot.build is None:
        raise RuntimeError("Pipeline snapshot does not have a build. ")
    if DEPLOYER_DOCKER_IMAGE_KEY not in snapshot.build.images:
        raise RuntimeError(
            "Pipeline snapshot build does not have a deployer image. "
        )
    return snapshot.build.images[DEPLOYER_DOCKER_IMAGE_KEY].image
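The lookup reduces to two guarded dictionary accesses. A sketch using plain dicts in place of the response models (the actual value of `DEPLOYER_DOCKER_IMAGE_KEY` is an assumption here):

```python
DEPLOYER_DOCKER_IMAGE_KEY = "deployer"  # assumed value, for illustration


def get_image(build):
    """Return the deployer image from a build dict, mirroring the checks above."""
    if build is None:
        raise RuntimeError("Pipeline snapshot does not have a build.")
    images = build.get("images", {})
    if DEPLOYER_DOCKER_IMAGE_KEY not in images:
        raise RuntimeError(
            "Pipeline snapshot build does not have a deployer image."
        )
    return images[DEPLOYER_DOCKER_IMAGE_KEY]["image"]
```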

DockerDeployer(name: str, id: UUID, config: StackComponentConfig, flavor: str, type: StackComponentType, user: Optional[UUID], created: datetime, updated: datetime, environment: Optional[Dict[str, str]] = None, secrets: Optional[List[UUID]] = None, labels: Optional[Dict[str, Any]] = None, connector_requirements: Optional[ServiceConnectorRequirements] = None, connector: Optional[UUID] = None, connector_resource_id: Optional[str] = None, *args: Any, **kwargs: Any)

Bases: ContainerizedDeployer

Deployer responsible for deploying pipelines locally using Docker.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self,
    name: str,
    id: UUID,
    config: StackComponentConfig,
    flavor: str,
    type: StackComponentType,
    user: Optional[UUID],
    created: datetime,
    updated: datetime,
    environment: Optional[Dict[str, str]] = None,
    secrets: Optional[List[UUID]] = None,
    labels: Optional[Dict[str, Any]] = None,
    connector_requirements: Optional[ServiceConnectorRequirements] = None,
    connector: Optional[UUID] = None,
    connector_resource_id: Optional[str] = None,
    *args: Any,
    **kwargs: Any,
):
    """Initializes a StackComponent.

    Args:
        name: The name of the component.
        id: The unique ID of the component.
        config: The config of the component.
        flavor: The flavor of the component.
        type: The type of the component.
        user: The ID of the user who created the component.
        created: The creation time of the component.
        updated: The last update time of the component.
        environment: Environment variables to set when running on this
            component.
        secrets: Secrets to set as environment variables when running on
            this component.
        labels: The labels of the component.
        connector_requirements: The requirements for the connector.
        connector: The ID of a connector linked to the component.
        connector_resource_id: The custom resource ID to access through
            the connector.
        *args: Additional positional arguments.
        **kwargs: Additional keyword arguments.

    Raises:
        ValueError: If a secret reference is passed as name.
    """
    if secret_utils.is_secret_reference(name):
        raise ValueError(
            "Passing the `name` attribute of a stack component as a "
            "secret reference is not allowed."
        )

    self.id = id
    self.name = name
    self._config = config
    self.flavor = flavor
    self.type = type
    self.user = user
    self.created = created
    self.updated = updated
    self.labels = labels
    self.environment = environment or {}
    self.secrets = secrets or []
    self.connector_requirements = connector_requirements
    self.connector = connector
    self.connector_resource_id = connector_resource_id
    self._connector_instance: Optional[ServiceConnector] = None
Attributes
config: DockerDeployerConfig property

Returns the DockerDeployerConfig config.

Returns:

Type Description
DockerDeployerConfig

The configuration.

docker_client: DockerClient property

Initialize and/or return the docker client.

Returns:

Type Description
DockerClient

The docker client.

settings_class: Optional[Type[BaseSettings]] property

Settings class for the Docker deployer.

Returns:

Type Description
Optional[Type[BaseSettings]]

The settings class.

validator: Optional[StackValidator] property

Ensures there is an image builder in the stack.

Returns:

Type Description
Optional[StackValidator]

A StackValidator instance.

Functions
do_deprovision_deployment(deployment: DeploymentResponse, timeout: int) -> Optional[DeploymentOperationalState]

Deprovision a docker deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to deprovision.

required
timeout int

The maximum time in seconds to wait for the pipeline deployment to be deprovisioned.

required

Returns:

Type Description
Optional[DeploymentOperationalState]

The DeploymentOperationalState object representing the operational state of the deleted deployment, or None if the deletion is completed before the call returns.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

DeploymentDeprovisionError

if the deployment deprovision fails.

Source code in src/zenml/deployers/docker/docker_deployer.py
def do_deprovision_deployment(
    self,
    deployment: DeploymentResponse,
    timeout: int,
) -> Optional[DeploymentOperationalState]:
    """Deprovision a docker deployment.

    Args:
        deployment: The deployment to deprovision.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be deprovisioned.

    Returns:
        The DeploymentOperationalState object representing the
        operational state of the deleted deployment, or None if the
        deletion is completed before the call returns.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
        DeploymentDeprovisionError: if the deployment
            deprovision fails.
    """
    container = self._get_container(deployment)
    if container is None:
        raise DeploymentNotFoundError(
            f"Docker container for deployment '{deployment.name}' "
            "not found"
        )

    try:
        container.stop(timeout=timeout)
        container.remove()
    except docker_errors.DockerException as e:
        raise DeploymentDeprovisionError(
            f"Docker container for deployment '{deployment.name}' "
            f"failed to delete: {e}"
        )

    return None
do_get_deployment_state(deployment: DeploymentResponse) -> DeploymentOperationalState

Get information about a docker deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to get information about.

required

Returns:

Type Description
DeploymentOperationalState

The DeploymentOperationalState object representing the updated operational state of the deployment.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

Source code in src/zenml/deployers/docker/docker_deployer.py
def do_get_deployment_state(
    self,
    deployment: DeploymentResponse,
) -> DeploymentOperationalState:
    """Get information about a docker deployment.

    Args:
        deployment: The deployment to get information about.

    Returns:
        The DeploymentOperationalState object representing the
        updated operational state of the deployment.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
    """
    container = self._get_container(deployment)
    if container is None:
        raise DeploymentNotFoundError(
            f"Docker container for deployment '{deployment.name}' "
            "not found"
        )

    return self._get_container_operational_state(container)
do_get_deployment_state_logs(deployment: DeploymentResponse, follow: bool = False, tail: Optional[int] = None) -> Generator[str, bool, None]

Get the logs of a Docker deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to get the logs of.

required
follow bool

if True, the logs will be streamed as they are written

False
tail Optional[int]

only retrieve the last NUM lines of log output.

None

Yields:

Type Description
str

The logs of the deployment.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

DeploymentLogsNotFoundError

if the deployment logs are not found.

DeployerError

if the deployment logs cannot be retrieved for any other reason or if an unexpected error occurs.

Source code in src/zenml/deployers/docker/docker_deployer.py
def do_get_deployment_state_logs(
    self,
    deployment: DeploymentResponse,
    follow: bool = False,
    tail: Optional[int] = None,
) -> Generator[str, bool, None]:
    """Get the logs of a Docker deployment.

    Args:
        deployment: The deployment to get the logs of.
        follow: if True, the logs will be streamed as they are written
        tail: only retrieve the last NUM lines of log output.

    Yields:
        The logs of the deployment.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
        DeploymentLogsNotFoundError: if the deployment logs are not
            found.
        DeployerError: if the deployment logs cannot
            be retrieved for any other reason or if an unexpected error
            occurs.
    """
    container = self._get_container(deployment)
    if container is None:
        raise DeploymentNotFoundError(
            f"Docker container for deployment '{deployment.name}' "
            "not found"
        )

    try:
        log_kwargs: Dict[str, Any] = {
            "stdout": True,
            "stderr": True,
            "stream": follow,
            "follow": follow,
            "timestamps": True,
        }

        if tail is not None and tail > 0:
            log_kwargs["tail"] = tail

        log_stream = container.logs(**log_kwargs)

        if follow:
            for log_line in log_stream:
                if isinstance(log_line, bytes):
                    yield log_line.decode(
                        "utf-8", errors="replace"
                    ).rstrip()
                else:
                    yield str(log_line).rstrip()
        else:
            if isinstance(log_stream, bytes):
                log_text = log_stream.decode("utf-8", errors="replace")
                for line in log_text.splitlines():
                    yield line
            else:
                for log_line in log_stream:
                    if isinstance(log_line, bytes):
                        yield log_line.decode(
                            "utf-8", errors="replace"
                        ).rstrip()
                    else:
                        yield str(log_line).rstrip()

    except docker_errors.NotFound as e:
        raise DeploymentLogsNotFoundError(
            f"Logs for deployment '{deployment.name}' not found: {e}"
        )
    except docker_errors.APIError as e:
        raise DeployerError(
            f"Docker API error while retrieving logs for deployment "
            f"'{deployment.name}': {e}"
        )
    except docker_errors.DockerException as e:
        raise DeployerError(
            f"Docker error while retrieving logs for deployment "
            f"'{deployment.name}': {e}"
        )
    except Exception as e:
        raise DeployerError(
            f"Unexpected error while retrieving logs for deployment "
            f"'{deployment.name}': {e}"
        )
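
The log handling above has to cope with the Docker SDK returning either an iterable line stream (when `follow=True`) or a single `bytes` blob. A minimal, Docker-independent sketch of that normalization (the `normalize_log_lines` helper is a name invented here for illustration):

```python
from typing import Iterable, Iterator, Union


def normalize_log_lines(
    log_output: Union[bytes, Iterable[Union[bytes, str]]],
) -> Iterator[str]:
    """Yield decoded, stripped log lines from either a bytes blob
    (non-follow mode) or an iterable of lines (follow mode)."""
    if isinstance(log_output, bytes):
        # Non-streaming case: one blob containing all lines.
        text = log_output.decode("utf-8", errors="replace")
        yield from text.splitlines()
    else:
        # Streaming case: decode each line as it arrives.
        for line in log_output:
            if isinstance(line, bytes):
                yield line.decode("utf-8", errors="replace").rstrip()
            else:
                yield str(line).rstrip()


print(list(normalize_log_lines(b"a\nb\n")))      # ['a', 'b']
print(list(normalize_log_lines([b"x\n", "y\n"])))  # ['x', 'y']
```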
do_provision_deployment(deployment: DeploymentResponse, stack: Stack, environment: Dict[str, str], secrets: Dict[str, str], timeout: int) -> DeploymentOperationalState

Deploy a pipeline as a Docker container.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `deployment` | `DeploymentResponse` | The deployment to run as a Docker container. | *required* |
| `stack` | `Stack` | The stack the pipeline will be deployed on. | *required* |
| `environment` | `Dict[str, str]` | A dictionary of environment variables to set on the deployment. | *required* |
| `secrets` | `Dict[str, str]` | A dictionary of secret environment variables to set on the deployment. These secret environment variables should not be exposed as regular environment variables on the deployer. | *required* |
| `timeout` | `int` | The maximum time in seconds to wait for the pipeline deployment to be provisioned. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `DeploymentOperationalState` | The `DeploymentOperationalState` object representing the operational state of the provisioned deployment. |

Raises:

| Type | Description |
|------|-------------|
| `DeploymentProvisionError` | If provisioning the deployment fails. |

Source code in src/zenml/deployers/docker/docker_deployer.py
def do_provision_deployment(
    self,
    deployment: DeploymentResponse,
    stack: "Stack",
    environment: Dict[str, str],
    secrets: Dict[str, str],
    timeout: int,
) -> DeploymentOperationalState:
    """Deploy a pipeline as a Docker container.

    Args:
        deployment: The deployment to run as a Docker container.
        stack: The stack the pipeline will be deployed on.
        environment: A dictionary of environment variables to set on the
            deployment.
        secrets: A dictionary of secret environment variables to set
            on the deployment. These secret environment variables
            should not be exposed as regular environment variables on the
            deployer.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be provisioned.

    Returns:
        The DeploymentOperationalState object representing the
        operational state of the provisioned deployment.

    Raises:
        DeploymentProvisionError: if provisioning the deployment
            fails.
    """
    assert deployment.snapshot, "Pipeline snapshot not found"
    snapshot = deployment.snapshot

    # Currently, there is no safe way to pass secrets to a docker
    # container, so we simply merge them into the environment variables.
    environment.update(secrets)

    settings = cast(
        DockerDeployerSettings,
        self.get_settings(snapshot),
    )

    existing_metadata = DockerDeploymentMetadata.from_deployment(
        deployment
    )

    entrypoint = DeploymentEntrypointConfiguration.get_entrypoint_command()

    entrypoint_kwargs = {
        DEPLOYMENT_ID_OPTION: deployment.id,
    }

    arguments = DeploymentEntrypointConfiguration.get_entrypoint_arguments(
        **entrypoint_kwargs
    )

    # Add the local stores path as a volume mount
    stack.check_local_paths()
    local_stores_path = GlobalConfiguration().local_stores_path
    volumes = {
        local_stores_path: {
            "bind": local_stores_path,
            "mode": "rw",
        }
    }
    environment[ENV_ZENML_LOCAL_STORES_PATH] = local_stores_path

    # check if a container already exists for the deployment
    container = self._get_container(deployment)

    if container:
        # the container exists, check if it is running
        if container.status == "running":
            logger.debug(
                f"Container for deployment '{deployment.name}' is "
                "already running",
            )
            container.stop(timeout=timeout)

        # the container is stopped or in an error state, remove it
        logger.debug(
            f"Removing previous container for deployment "
            f"'{deployment.name}'",
        )
        container.remove(force=True)

    logger.debug(
        f"Starting container for deployment '{deployment.name}'..."
    )

    image = self.get_image(deployment.snapshot)

    try:
        self.docker_client.images.get(image)
    except docker_errors.ImageNotFound:
        logger.debug(
            f"Pulling container image '{image}' for deployment "
            f"'{deployment.name}'...",
        )
        self.docker_client.images.pull(image)

    preferred_ports: List[int] = []
    if settings.port:
        preferred_ports.append(settings.port)
    if existing_metadata.port:
        preferred_ports.append(existing_metadata.port)
    port = lookup_preferred_or_free_port(
        preferred_ports=preferred_ports,
        allocate_port_if_busy=settings.allocate_port_if_busy,
        range=settings.port_range,
        address="0.0.0.0",  # nosec
    )
    container_port = (
        snapshot.pipeline_configuration.deployment_settings.uvicorn_port
    )
    ports: Dict[str, Optional[int]] = {f"{container_port}/tcp": port}

    uid_args: Dict[str, Any] = {}
    if sys.platform == "win32":
        # File permissions are not checked on Windows. This if clause
        # prevents mypy from complaining about unused 'type: ignore'
        # statements
        pass
    else:
        # Run the container in the context of the local UID/GID
        # to ensure that the local database can be shared
        # with the container.
        logger.debug(
            "Setting UID and GID to local user/group in container."
        )
        uid_args = dict(
            user=os.getuid(),
            group_add=[os.getgid()],
        )

    run_args = copy.deepcopy(settings.run_args)
    docker_environment = run_args.pop("environment", {})
    docker_environment.update(environment)

    docker_volumes = run_args.pop("volumes", {})
    docker_volumes.update(volumes)

    extra_hosts = run_args.pop("extra_hosts", {})
    extra_hosts["host.docker.internal"] = "host-gateway"

    run_args.update(uid_args)

    try:
        container = self.docker_client.containers.run(
            image=image,
            name=self._get_container_id(deployment),
            entrypoint=entrypoint,
            command=arguments,
            detach=True,
            volumes=docker_volumes,
            environment=docker_environment,
            remove=False,
            auto_remove=False,
            ports=ports,
            labels={
                "zenml-deployment-id": str(deployment.id),
                "zenml-deployment-name": deployment.name,
                "zenml-deployer-name": str(self.name),
                "zenml-deployer-id": str(self.id),
                "managed-by": "zenml",
            },
            extra_hosts=extra_hosts,
            **run_args,
        )

        logger.debug(
            f"Docker container for deployment '{deployment.name}' "
            f"started with ID {self._get_container_id(deployment)}",
        )

    except docker_errors.DockerException as e:
        raise DeploymentProvisionError(
            f"Docker container for deployment '{deployment.name}' "
            f"failed to start: {e}"
        )

    return self._get_container_operational_state(container)
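
The `lookup_preferred_or_free_port` call above tries preferred ports first and falls back to scanning a range for a free one. Roughly, that amounts to the following sketch (the function names here are illustrative, not ZenML's actual utility):

```python
import socket
from typing import List, Tuple


def port_is_free(port: int, address: str = "127.0.0.1") -> bool:
    """Return True if we can bind the port (i.e. nothing is listening)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            sock.bind((address, port))
            return True
        except OSError:
            return False


def pick_port(
    preferred_ports: List[int],
    allocate_port_if_busy: bool,
    port_range: Tuple[int, int] = (8000, 9000),
) -> int:
    """Pick the first free preferred port, else scan the range."""
    for port in preferred_ports:
        if port_is_free(port):
            return port
    if not allocate_port_if_busy and preferred_ports:
        raise IOError(f"Preferred ports {preferred_ports} are all busy")
    for port in range(*port_range):
        if port_is_free(port):
            return port
    raise IOError(f"No free port found in range {port_range}")


print(pick_port([], allocate_port_if_busy=True))
```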

DockerDeployerConfig(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: BaseDeployerConfig, DockerDeployerSettings

Docker deployer config.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references don't clash with pydantic validation.

    StackComponents allow the specification of all their string attributes
    using secret references of the form `{{secret_name.key}}`. This however
    is only possible when the stack component does not perform any explicit
    validation of this attribute using pydantic validators. If this were
    the case, the validation would run on the secret reference and would
    fail or in the worst case, modify the secret reference and lead to
    unexpected behavior. This method ensures that no attributes that require
    custom pydantic validation are set as secret references.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using
            plain-text secrets.
        **kwargs: Arguments to initialize this stack component.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            is passed as a secret reference, or if the `name` attribute
            was passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}` for a `{self.__class__.__name__}` "
                    "stack component. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure your stack "
                    "components with secrets here: "
                    "https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if pydantic_utils.has_validators(
            pydantic_class=self.__class__, field_name=key
        ):
            raise ValueError(
                f"Passing the stack component attribute `{key}` as a "
                "secret reference is not allowed as additional validation "
                "is required for this attribute."
            )

    super().__init__(**kwargs)
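
The secret references mentioned in the docstring take the form `{{secret_name.key}}`. A simple regex-based check in the same spirit (this `looks_like_secret_reference` helper is illustrative, not ZenML's actual `secret_utils` implementation):

```python
import re

# Matches the documented `{{secret_name.key}}` form.
_SECRET_REF_PATTERN = re.compile(r"^\{\{\s*\w+\.\w+\s*\}\}$")


def looks_like_secret_reference(value: object) -> bool:
    """Return True if the value resembles a `{{secret_name.key}}` reference."""
    return isinstance(value, str) and bool(
        _SECRET_REF_PATTERN.match(value.strip())
    )


print(looks_like_secret_reference("{{my_secret.api_key}}"))  # True
print(looks_like_secret_reference("plain-text-value"))       # False
```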
Attributes
is_local: bool property

Checks if this stack component is running locally.

Returns:

| Type | Description |
|------|-------------|
| `bool` | True if this config is for a local component, False otherwise. |

DockerDeployerFlavor

Bases: BaseDeployerFlavor

Flavor for the Docker deployer.

Attributes
config_class: Type[BaseDeployerConfig] property

Config class for the base deployer flavor.

Returns:

| Type | Description |
|------|-------------|
| `Type[BaseDeployerConfig]` | The config class. |

docs_url: Optional[str] property

A url to point at docs explaining this flavor.

Returns:

| Type | Description |
|------|-------------|
| `Optional[str]` | A flavor docs url. |

implementation_class: Type[DockerDeployer] property

Implementation class for this flavor.

Returns:

| Type | Description |
|------|-------------|
| `Type[DockerDeployer]` | Implementation class for this flavor. |

logo_url: str property

A url to represent the flavor in the dashboard.

Returns:

| Type | Description |
|------|-------------|
| `str` | The flavor logo. |

name: str property

Name of the deployer flavor.

Returns:

| Type | Description |
|------|-------------|
| `str` | Name of the deployer flavor. |

sdk_docs_url: Optional[str] property

A url to point at SDK docs explaining this flavor.

Returns:

| Type | Description |
|------|-------------|
| `Optional[str]` | A flavor SDK docs url. |

DockerDeployerSettings(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: BaseDeployerSettings

Docker deployer settings.

Attributes:

| Name | Type | Description |
|------|------|-------------|
| `port` | `Optional[int]` | The port to expose the deployment on. |
| `allocate_port_if_busy` | `bool` | If True, allocate a free port if the configured port is busy. |
| `port_range` | `Tuple[int, int]` | The range of ports to search for a free port. |
| `run_args` | `Dict[str, Any]` | Arguments to pass to the `docker run` call. (See https://docker-py.readthedocs.io/en/stable/containers.html for a list of what can be passed.) |
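
As the deployer source shows, user-provided `run_args` are merged with the deployer's own values, and the deployer's entries win on conflicts for `environment` and `volumes`. A condensed sketch of that merge:

```python
import copy
from typing import Any, Dict


def merge_run_args(
    run_args: Dict[str, Any],
    deployer_env: Dict[str, str],
    deployer_volumes: Dict[str, Any],
) -> Dict[str, Any]:
    """Merge user run_args with deployer-managed values; the deployer's
    environment/volume entries win on key conflicts."""
    merged = copy.deepcopy(run_args)
    environment = merged.pop("environment", {})
    environment.update(deployer_env)
    volumes = merged.pop("volumes", {})
    volumes.update(deployer_volumes)
    merged["environment"] = environment
    merged["volumes"] = volumes
    return merged


result = merge_run_args(
    {"environment": {"A": "user", "B": "user"}, "mem_limit": "1g"},
    deployer_env={"A": "deployer"},
    deployer_volumes={"/data": {"bind": "/data", "mode": "rw"}},
)
print(result["environment"])  # deployer value wins for "A"
```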

Source code in src/zenml/config/secret_reference_mixin.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references are only passed for valid fields.

    This method ensures that secret references are not passed for fields
    that explicitly prevent them or require pydantic validation.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using plain-text secrets.
        **kwargs: Arguments to initialize this object.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            or an attribute which explicitly disallows secret references
            is passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}`. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure values with secrets "
                    "here: https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if secret_utils.is_clear_text_field(field):
            raise ValueError(
                f"Passing the `{key}` attribute as a secret reference is "
                "not allowed."
            )

        requires_validation = has_validators(
            pydantic_class=self.__class__, field_name=key
        )
        if requires_validation:
            raise ValueError(
                f"Passing the attribute `{key}` as a secret reference is "
                "not allowed as additional validation is required for "
                "this attribute."
            )

    super().__init__(**kwargs)

LocalDeployer(name: str, id: UUID, config: StackComponentConfig, flavor: str, type: StackComponentType, user: Optional[UUID], created: datetime, updated: datetime, environment: Optional[Dict[str, str]] = None, secrets: Optional[List[UUID]] = None, labels: Optional[Dict[str, Any]] = None, connector_requirements: Optional[ServiceConnectorRequirements] = None, connector: Optional[UUID] = None, connector_resource_id: Optional[str] = None, *args: Any, **kwargs: Any)

Bases: BaseDeployer

Deployer that runs deployments as local daemon processes.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self,
    name: str,
    id: UUID,
    config: StackComponentConfig,
    flavor: str,
    type: StackComponentType,
    user: Optional[UUID],
    created: datetime,
    updated: datetime,
    environment: Optional[Dict[str, str]] = None,
    secrets: Optional[List[UUID]] = None,
    labels: Optional[Dict[str, Any]] = None,
    connector_requirements: Optional[ServiceConnectorRequirements] = None,
    connector: Optional[UUID] = None,
    connector_resource_id: Optional[str] = None,
    *args: Any,
    **kwargs: Any,
):
    """Initializes a StackComponent.

    Args:
        name: The name of the component.
        id: The unique ID of the component.
        config: The config of the component.
        flavor: The flavor of the component.
        type: The type of the component.
        user: The ID of the user who created the component.
        created: The creation time of the component.
        updated: The last update time of the component.
        environment: Environment variables to set when running on this
            component.
        secrets: Secrets to set as environment variables when running on
            this component.
        labels: The labels of the component.
        connector_requirements: The requirements for the connector.
        connector: The ID of a connector linked to the component.
        connector_resource_id: The custom resource ID to access through
            the connector.
        *args: Additional positional arguments.
        **kwargs: Additional keyword arguments.

    Raises:
        ValueError: If a secret reference is passed as name.
    """
    if secret_utils.is_secret_reference(name):
        raise ValueError(
            "Passing the `name` attribute of a stack component as a "
            "secret reference is not allowed."
        )

    self.id = id
    self.name = name
    self._config = config
    self.flavor = flavor
    self.type = type
    self.user = user
    self.created = created
    self.updated = updated
    self.labels = labels
    self.environment = environment or {}
    self.secrets = secrets or []
    self.connector_requirements = connector_requirements
    self.connector = connector
    self.connector_resource_id = connector_resource_id
    self._connector_instance: Optional[ServiceConnector] = None
Attributes
config: LocalDeployerConfig property

Returns the LocalDeployerConfig config.

Returns:

| Type | Description |
|------|-------------|
| `LocalDeployerConfig` | The configuration. |

settings_class: Optional[Type[BaseSettings]] property

Settings class for the local deployer.

Returns:

| Type | Description |
|------|-------------|
| `Optional[Type[BaseSettings]]` | The settings class. |

Functions
do_deprovision_deployment(deployment: DeploymentResponse, timeout: int) -> Optional[DeploymentOperationalState]

Deprovision a local daemon deployment.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `deployment` | `DeploymentResponse` | The deployment to stop. | *required* |
| `timeout` | `int` | Unused for local daemon stop. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `Optional[DeploymentOperationalState]` | None, indicating immediate deletion completed. |

Raises:

| Type | Description |
|------|-------------|
| `DeploymentNotFoundError` | If the daemon is not found. |
| `DeploymentDeprovisionError` | If stopping fails. |

Source code in src/zenml/deployers/local/local_deployer.py
def do_deprovision_deployment(
    self, deployment: DeploymentResponse, timeout: int
) -> Optional[DeploymentOperationalState]:
    """Deprovision a local daemon deployment.

    Args:
        deployment: The deployment to stop.
        timeout: Unused for local daemon stop.

    Returns:
        None, indicating immediate deletion completed.

    Raises:
        DeploymentNotFoundError: If the daemon is not found.
        DeploymentDeprovisionError: If stopping fails.
    """
    meta = LocalDeploymentMetadata.from_deployment(deployment)
    if not meta.pid:
        raise DeploymentNotFoundError(
            f"Daemon for deployment '{deployment.name}' missing."
        )

    try:
        stop_process(meta.pid)
    except Exception as e:
        raise DeploymentDeprovisionError(
            f"Failed to stop daemon for deployment '{deployment.name}': "
            f"{e}"
        ) from e
    else:
        shutil.rmtree(self._runtime_dir(deployment.id))

    return None
do_get_deployment_state(deployment: DeploymentResponse) -> DeploymentOperationalState

Get information about a local daemon deployment.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `deployment` | `DeploymentResponse` | The deployment to inspect. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `DeploymentOperationalState` | Operational state of the deployment. |

Source code in src/zenml/deployers/local/local_deployer.py
def do_get_deployment_state(
    self, deployment: DeploymentResponse
) -> DeploymentOperationalState:
    """Get information about a local daemon deployment.

    Args:
        deployment: The deployment to inspect.

    Returns:
        Operational state of the deployment.
    """
    assert deployment.snapshot, "Pipeline snapshot not found"
    meta = LocalDeploymentMetadata.from_deployment(deployment)

    state = DeploymentOperationalState(
        status=DeploymentStatus.ERROR,
        metadata=meta.model_dump(exclude_none=True),
    )

    if not meta.pid:
        state.status = DeploymentStatus.ABSENT
        return state

    if not psutil.pid_exists(meta.pid):
        return state

    if not meta.port or not meta.address:
        return state

    # Use pending until we can confirm the daemon is reachable
    state.status = DeploymentStatus.PENDING
    address = meta.address
    if address == "0.0.0.0":  # nosec
        address = "localhost"
    state.url = f"http://{address}:{meta.port}"

    settings = (
        deployment.snapshot.pipeline_configuration.deployment_settings
    )
    health_check_path = f"{settings.root_url_path}{settings.api_url_path}{settings.health_url_path}"
    health_check_url = f"{state.url}{health_check_path}"

    # Attempt to connect to the daemon and set the status to RUNNING
    # if successful
    try:
        response = requests.get(health_check_url, timeout=3)
        if response.status_code == 200:
            state.status = DeploymentStatus.RUNNING
        else:
            logger.debug(
                f"Daemon for deployment '{deployment.name}' returned "
                f"status code {response.status_code} for health check "
                f"at '{health_check_url}'"
            )
    except Exception as e:
        logger.debug(
            f"Daemon for deployment '{deployment.name}' is not "
            f"reachable at '{health_check_url}': {e}"
        )
        # It can take a long time after the deployment is started until
        # the deployment is ready to serve requests, but this isn't an
        # error condition. We return PENDING instead of ERROR here to
        # signal to the polling in the base deployer class to keep trying.
        state.status = DeploymentStatus.PENDING

    state.metadata = meta.model_dump(exclude_none=True)

    return state
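
The health-check URL above is assembled from the deployment settings' path segments, with the wildcard bind address `0.0.0.0` rewritten to `localhost` for client-side access. A standalone sketch (the default path segments used here are illustrative, not ZenML's actual defaults):

```python
def build_health_check_url(
    address: str,
    port: int,
    root_url_path: str = "",
    api_url_path: str = "/api",
    health_url_path: str = "/health",
) -> str:
    """Build the URL polled to decide between PENDING and RUNNING."""
    # A wildcard bind address is not routable from the client side.
    if address == "0.0.0.0":  # nosec
        address = "localhost"
    return f"http://{address}:{port}{root_url_path}{api_url_path}{health_url_path}"


print(build_health_check_url("0.0.0.0", 8000))
```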
do_get_deployment_state_logs(deployment: DeploymentResponse, follow: bool = False, tail: Optional[int] = None) -> Generator[str, bool, None]

Read logs from the local daemon log file.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `deployment` | `DeploymentResponse` | The deployment to read logs for. | *required* |
| `follow` | `bool` | Stream logs if True. | `False` |
| `tail` | `Optional[int]` | Return only last N lines if set. | `None` |

Yields:

| Type | Description |
|------|-------------|
| `str` | Log lines. |

Raises:

| Type | Description |
|------|-------------|
| `DeploymentLogsNotFoundError` | If the log file is missing. |
| `DeployerError` | For unexpected errors. |

Source code in src/zenml/deployers/local/local_deployer.py
def do_get_deployment_state_logs(
    self,
    deployment: DeploymentResponse,
    follow: bool = False,
    tail: Optional[int] = None,
) -> Generator[str, bool, None]:
    """Read logs from the local daemon log file.

    Args:
        deployment: The deployment to read logs for.
        follow: Stream logs if True.
        tail: Return only last N lines if set.

    Yields:
        Log lines.

    Raises:
        DeploymentLogsNotFoundError: If the log file is missing.
        DeployerError: For unexpected errors.
    """
    meta = LocalDeploymentMetadata.from_deployment(deployment)
    log_file = meta.log_file
    if not log_file or not os.path.exists(log_file):
        raise DeploymentLogsNotFoundError(
            f"Log file not found for deployment '{deployment.name}'"
        )

    try:

        def _read_tail(path: str, n: int) -> Generator[str, bool, None]:
            with open(path, "r", encoding="utf-8", errors="ignore") as f:
                lines = f.readlines()
                for line in lines[-n:]:
                    yield line.rstrip("\n")

        if not follow:
            if tail and tail > 0:
                yield from _read_tail(log_file, tail)
            else:
                with open(
                    log_file, "r", encoding="utf-8", errors="ignore"
                ) as f:
                    for line in f:
                        yield line.rstrip("\n")
            return

        with open(log_file, "r", encoding="utf-8", errors="ignore") as f:
            if not tail:
                tail = DEFAULT_TAIL_FOLLOW_LINES
            lines = f.readlines()
            for line in lines[-tail:]:
                yield line.rstrip("\n")

            while True:
                where = f.tell()
                line = f.readline()
                if not line:
                    time.sleep(0.2)
                    f.seek(where)
                    continue
                yield line.rstrip("\n")

    except DeploymentLogsNotFoundError:
        raise
    except Exception as e:
        raise DeployerError(
            f"Unexpected error while reading logs for deployment "
            f"'{deployment.name}': {e}"
        ) from e
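
The non-follow tail path above boils down to reading the file and keeping only the last N lines; `collections.deque` with `maxlen` gives the same result without materializing the whole file as a list. A sketch:

```python
import os
import tempfile
from collections import deque
from typing import Iterator


def tail_lines(path: str, n: int) -> Iterator[str]:
    """Yield the last n lines of a text file, stripped of newlines."""
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        # A deque with maxlen keeps only the trailing n lines while iterating.
        for line in deque(f, maxlen=n):
            yield line.rstrip("\n")


# Demo on a throwaway log file.
with tempfile.NamedTemporaryFile("w", suffix=".log", delete=False) as tmp:
    tmp.write("one\ntwo\nthree\nfour\n")
    log_path = tmp.name

result = list(tail_lines(log_path, 2))
os.remove(log_path)
print(result)  # ['three', 'four']
```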
do_provision_deployment(deployment: DeploymentResponse, stack: Stack, environment: Dict[str, str], secrets: Dict[str, str], timeout: int) -> DeploymentOperationalState

Provision a local daemon deployment.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| `deployment` | `DeploymentResponse` | The deployment to run. | *required* |
| `stack` | `Stack` | The active stack (unused by local deployer). | *required* |
| `environment` | `Dict[str, str]` | Environment variables for the app. | *required* |
| `secrets` | `Dict[str, str]` | Secret environment variables for the app. | *required* |
| `timeout` | `int` | Unused for immediate daemonization. | *required* |

Returns:

| Type | Description |
|------|-------------|
| `DeploymentOperationalState` | Operational state of the provisioned deployment. |

Raises:

| Type | Description |
|------|-------------|
| `DeploymentProvisionError` | If the daemon cannot be started. |

Source code in src/zenml/deployers/local/local_deployer.py
def do_provision_deployment(
    self,
    deployment: DeploymentResponse,
    stack: "Stack",
    environment: Dict[str, str],
    secrets: Dict[str, str],
    timeout: int,
) -> DeploymentOperationalState:
    """Provision a local daemon deployment.

    Args:
        deployment: The deployment to run.
        stack: The active stack (unused by local deployer).
        environment: Environment variables for the app.
        secrets: Secret environment variables for the app.
        timeout: Unused for immediate daemonization.

    Returns:
        Operational state of the provisioned deployment.

    Raises:
        DeploymentProvisionError: If the daemon cannot be started.
    """
    assert deployment.snapshot, "Pipeline snapshot not found"

    child_env: Dict[str, str] = dict(os.environ)
    child_env.update(environment)
    child_env.update(secrets)

    settings = cast(
        LocalDeployerSettings,
        self.get_settings(deployment.snapshot),
    )

    existing_meta = LocalDeploymentMetadata.from_deployment(deployment)

    preferred_ports: List[int] = []
    if settings.port:
        preferred_ports.append(settings.port)
    if existing_meta.port:
        preferred_ports.append(existing_meta.port)

    try:
        port = lookup_preferred_or_free_port(
            preferred_ports=preferred_ports,
            allocate_port_if_busy=settings.allocate_port_if_busy,
            range=settings.port_range,
            address=settings.address,
        )
    except IOError as e:
        raise DeploymentProvisionError(str(e))

    address = settings.address
    # Validate that the address is a valid IP address
    try:
        ipaddress.ip_address(address)
    except ValueError:
        raise DeploymentProvisionError(
            f"Invalid address: {address}. Must be a valid IP address."
        )

    if address == "0.0.0.0":  # nosec
        address = "localhost"
    url = f"http://{address}:{port}"

    log_file = existing_meta.log_file or self._log_file_path(deployment.id)

    runtime_dir = self._runtime_dir(deployment.id)
    if not os.path.exists(runtime_dir):
        os.makedirs(runtime_dir, exist_ok=True)

    if existing_meta.pid:
        try:
            stop_process(existing_meta.pid)
        except Exception as e:
            logger.warning(
                f"Failed to stop existing daemon process for deployment "
                f"'{deployment.name}' with PID {existing_meta.pid}: {e}"
            )

    if settings.blocking:
        self._update_deployment(
            deployment,
            DeploymentOperationalState(
                status=DeploymentStatus.RUNNING,
                url=url,
                metadata=LocalDeploymentMetadata(
                    pid=os.getpid(),
                    port=port,
                    address=settings.address,
                ).model_dump(exclude_none=True),
            ),
        )
        start_deployment_app(
            deployment_id=deployment.id,
            host=settings.address,
            port=port,
        )
        self._update_deployment(
            deployment,
            DeploymentOperationalState(
                status=DeploymentStatus.ABSENT,
                metadata=None,
            ),
        )
        # Exiting early here because the deployment takes over the current
        # process and anything else is irrelevant.
        sys.exit(0)

    # Launch the deployment app as a background subprocess.
    python_exe = sys.executable
    module = "zenml.deployers.server.app"
    cmd = [
        python_exe,
        "-m",
        module,
        "--deployment_id",
        str(deployment.id),
        "--log_file",
        os.path.abspath(log_file),
        "--host",
        settings.address,
        "--port",
        str(port),
    ]

    try:
        os.makedirs(os.path.dirname(log_file), exist_ok=True)
        proc = subprocess.Popen(
            cmd,
            cwd=os.getcwd(),
            env=child_env,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            start_new_session=True,
            close_fds=True,
        )
    except Exception as e:
        raise DeploymentProvisionError(
            f"Failed to start subprocess for deployment "
            f"'{deployment.name}': {e}"
        ) from e

    metadata = LocalDeploymentMetadata(
        pid=proc.pid,
        port=port,
        address=settings.address,
        log_file=log_file,
    )

    state = DeploymentOperationalState(
        status=DeploymentStatus.PENDING,
        url=url,
        metadata=metadata.model_dump(exclude_none=True),
    )

    return state
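The address handling above can be mirrored in a standalone sketch (stdlib only, not the actual `LocalDeployer` code): the bind address must parse as an IP, and the wildcard `0.0.0.0` is rewritten to `localhost` so the returned URL is actually reachable from the host.

```python
import ipaddress


def service_url(address: str, port: int) -> str:
    """Validate a bind address and derive the local service URL.

    Hypothetical helper mirroring the validation in the source above:
    the address must parse as an IP, and the wildcard 0.0.0.0 is
    rewritten to localhost in the URL.
    """
    try:
        ipaddress.ip_address(address)
    except ValueError:
        raise ValueError(
            f"Invalid address: {address}. Must be a valid IP address."
        )
    if address == "0.0.0.0":  # nosec - wildcard bind, reachable via loopback
        address = "localhost"
    return f"http://{address}:{port}"
```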

LocalDeployerConfig(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: BaseDeployerConfig, LocalDeployerSettings

Local deployer config.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references don't clash with pydantic validation.

    StackComponents allow the specification of all their string attributes
    using secret references of the form `{{secret_name.key}}`. This however
    is only possible when the stack component does not perform any explicit
    validation of this attribute using pydantic validators. If this were
    the case, the validation would run on the secret reference and would
    fail or in the worst case, modify the secret reference and lead to
    unexpected behavior. This method ensures that no attributes that require
    custom pydantic validation are set as secret references.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using
            plain-text secrets.
        **kwargs: Arguments to initialize this stack component.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            is passed as a secret reference, or if the `name` attribute
            was passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}` for a `{self.__class__.__name__}` "
                    "stack component. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure your stack "
                    "components with secrets here: "
                    "https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if pydantic_utils.has_validators(
            pydantic_class=self.__class__, field_name=key
        ):
            raise ValueError(
                f"Passing the stack component attribute `{key}` as a "
                "secret reference is not allowed as additional validation "
                "is required for this attribute."
            )

    super().__init__(**kwargs)
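The secret-reference mechanism described in the docstring can be illustrated with a hypothetical stand-in for `secret_utils.is_secret_reference` (ZenML's actual parsing rules may differ); a reference is assumed to take the form `{{secret_name.key}}`:

```python
import re

# Hypothetical stand-in for zenml.utils.secret_utils.is_secret_reference;
# the exact rules in ZenML may differ.
_SECRET_REF = re.compile(r"^\{\{\s*\w+\.\w+\s*\}\}$")


def is_secret_reference(value: object) -> bool:
    """Return True if `value` looks like a `{{secret_name.key}}` reference."""
    return isinstance(value, str) and _SECRET_REF.match(value.strip()) is not None
```

Attributes that carry such a reference skip pydantic validation until the secret is resolved, which is why fields with custom validators must not be set this way.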
Attributes
is_local: bool property

Checks if this stack component is running locally.

Returns:

Type Description
bool

True if this config is for a local component.

LocalDeployerFlavor

Bases: BaseDeployerFlavor

Flavor for the Local daemon deployer.

Attributes
config_class: Type[BaseDeployerConfig] property

Config class for the flavor.

Returns:

Type Description
Type[BaseDeployerConfig]

The config class.

docs_url: Optional[str] property

A url to point at docs explaining this flavor.

Returns:

Type Description
Optional[str]

A flavor docs url.

implementation_class: Type[LocalDeployer] property

Implementation class for this flavor.

Returns:

Type Description
Type[LocalDeployer]

The implementation class.

logo_url: str property

A url to represent the flavor in the dashboard.

Returns:

Type Description
str

The flavor logo.

name: str property

Name of the deployer flavor.

Returns:

Type Description
str

Flavor name.

sdk_docs_url: Optional[str] property

A url to point at SDK docs explaining this flavor.

Returns:

Type Description
Optional[str]

A flavor SDK docs url.

LocalDeployerSettings(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: BaseDeployerSettings

Local deployer settings.

Attributes:

Name Type Description
port Optional[int]

Preferred port to run on.

allocate_port_if_busy bool

Whether to allocate a free port if busy.

port_range Tuple[int, int]

Range to scan when allocating a free port.

address str

Address to bind the server to.

blocking bool

Whether to run the deployment in the current process instead of running it as a daemon process.
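A minimal sketch of the port-selection semantics implied by these settings (`port`, `allocate_port_if_busy`, `port_range`), using only the standard library; `pick_port` is a hypothetical helper, not ZenML's implementation:

```python
import socket


def pick_port(preferred=None, allocate_if_busy=True, port_range=(8000, 8100)):
    """Return a usable local port following the settings semantics above.

    Prefer `preferred` if it is free; otherwise, if `allocate_if_busy`
    is set, scan `port_range` for the first free port.
    """

    def is_free(port: int) -> bool:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", port))
                return True
            except OSError:
                return False

    if preferred is not None and is_free(preferred):
        return preferred
    if not allocate_if_busy:
        raise RuntimeError(f"Port {preferred} is busy or unavailable.")
    for port in range(port_range[0], port_range[1] + 1):
        if is_free(port):
            return port
    raise RuntimeError(f"No free port in range {port_range}.")
```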

Source code in src/zenml/config/secret_reference_mixin.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references are only passed for valid fields.

    This method ensures that secret references are not passed for fields
    that explicitly prevent them or require pydantic validation.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using plain-text secrets.
        **kwargs: Arguments to initialize this object.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            or an attribute which explicitly disallows secret references
            is passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}`. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure values with secrets "
                    "here: https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if secret_utils.is_clear_text_field(field):
            raise ValueError(
                f"Passing the `{key}` attribute as a secret reference is "
                "not allowed."
            )

        requires_validation = has_validators(
            pydantic_class=self.__class__, field_name=key
        )
        if requires_validation:
            raise ValueError(
                f"Passing the attribute `{key}` as a secret reference is "
                "not allowed as additional validation is required for "
                "this attribute."
            )

    super().__init__(**kwargs)

Modules

base_deployer

Base class for all ZenML deployers.

Classes
BaseDeployer(name: str, id: UUID, config: StackComponentConfig, flavor: str, type: StackComponentType, user: Optional[UUID], created: datetime, updated: datetime, environment: Optional[Dict[str, str]] = None, secrets: Optional[List[UUID]] = None, labels: Optional[Dict[str, Any]] = None, connector_requirements: Optional[ServiceConnectorRequirements] = None, connector: Optional[UUID] = None, connector_resource_id: Optional[str] = None, *args: Any, **kwargs: Any)

Bases: StackComponent, ABC

Base class for all ZenML deployers.

The deployer serves three major purposes:

  1. It contains all the stack related configuration attributes required to interact with the remote pipeline deployment tool, service or platform (e.g. hostnames, URLs, references to credentials, other client related configuration parameters).

  2. It implements the life-cycle management for deployments, including discovery, creation, deletion and updating.

  3. It acts as a ZenML deployment registry, where every pipeline deployment is stored as a database entity through the ZenML Client. This allows the deployer to keep track of all externally running pipeline deployments and to manage their lifecycle.
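The three responsibilities can be sketched with a stripped-down stand-in (toy classes, not ZenML's actual API): subclasses supply the platform-specific `do_*` hooks, while the base class owns the registry and drives the lifecycle:

```python
from abc import ABC, abstractmethod
from typing import Dict


class Deployment:
    """Toy record standing in for a registered deployment."""

    def __init__(self, name: str) -> None:
        self.name = name
        self.status = "pending"


class MiniDeployer(ABC):
    """Hypothetical sketch of the deployer pattern described above."""

    def __init__(self) -> None:
        # Purpose 3: act as a registry of known deployments.
        self._registry: Dict[str, Deployment] = {}

    def provision(self, name: str) -> Deployment:
        # Purpose 2: lifecycle management delegates to the concrete hook.
        deployment = self._registry.setdefault(name, Deployment(name))
        deployment.status = self.do_provision(deployment)
        return deployment

    def deprovision(self, name: str) -> None:
        deployment = self._registry.pop(name, None)
        if deployment is not None:
            self.do_deprovision(deployment)

    @abstractmethod
    def do_provision(self, deployment: Deployment) -> str: ...

    @abstractmethod
    def do_deprovision(self, deployment: Deployment) -> None: ...


class InMemoryDeployer(MiniDeployer):
    """Trivial concrete flavor: no real infrastructure behind it."""

    def do_provision(self, deployment: Deployment) -> str:
        return "running"

    def do_deprovision(self, deployment: Deployment) -> None:
        pass
```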

Source code in src/zenml/stack/stack_component.py
def __init__(
    self,
    name: str,
    id: UUID,
    config: StackComponentConfig,
    flavor: str,
    type: StackComponentType,
    user: Optional[UUID],
    created: datetime,
    updated: datetime,
    environment: Optional[Dict[str, str]] = None,
    secrets: Optional[List[UUID]] = None,
    labels: Optional[Dict[str, Any]] = None,
    connector_requirements: Optional[ServiceConnectorRequirements] = None,
    connector: Optional[UUID] = None,
    connector_resource_id: Optional[str] = None,
    *args: Any,
    **kwargs: Any,
):
    """Initializes a StackComponent.

    Args:
        name: The name of the component.
        id: The unique ID of the component.
        config: The config of the component.
        flavor: The flavor of the component.
        type: The type of the component.
        user: The ID of the user who created the component.
        created: The creation time of the component.
        updated: The last update time of the component.
        environment: Environment variables to set when running on this
            component.
        secrets: Secrets to set as environment variables when running on
            this component.
        labels: The labels of the component.
        connector_requirements: The requirements for the connector.
        connector: The ID of a connector linked to the component.
        connector_resource_id: The custom resource ID to access through
            the connector.
        *args: Additional positional arguments.
        **kwargs: Additional keyword arguments.

    Raises:
        ValueError: If a secret reference is passed as name.
    """
    if secret_utils.is_secret_reference(name):
        raise ValueError(
            "Passing the `name` attribute of a stack component as a "
            "secret reference is not allowed."
        )

    self.id = id
    self.name = name
    self._config = config
    self.flavor = flavor
    self.type = type
    self.user = user
    self.created = created
    self.updated = updated
    self.labels = labels
    self.environment = environment or {}
    self.secrets = secrets or []
    self.connector_requirements = connector_requirements
    self.connector = connector
    self.connector_resource_id = connector_resource_id
    self._connector_instance: Optional[ServiceConnector] = None
Attributes
config: BaseDeployerConfig property

Returns the BaseDeployerConfig config.

Returns:

Type Description
BaseDeployerConfig

The configuration.

Functions
delete_deployment(deployment_name_or_id: Union[str, UUID], project: Optional[UUID] = None, force: bool = False, timeout: Optional[int] = None) -> None

Deprovision and delete a deployment.

Parameters:

Name Type Description Default
deployment_name_or_id Union[str, UUID]

The name or ID of the deployment to delete.

required
project Optional[UUID]

The project ID of the deployment to deprovision. Required if a name is provided.

None
force bool

If True, delete the deployment even if it cannot be deprovisioned.

False
timeout Optional[int]

The maximum time in seconds to wait for the pipeline deployment to be deprovisioned. If provided, will override the deployer's default timeout.

None

Raises:

Type Description
DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
def delete_deployment(
    self,
    deployment_name_or_id: Union[str, UUID],
    project: Optional[UUID] = None,
    force: bool = False,
    timeout: Optional[int] = None,
) -> None:
    """Deprovision and delete a deployment.

    Args:
        deployment_name_or_id: The name or ID of the deployment to
            delete.
        project: The project ID of the deployment to deprovision.
            Required if a name is provided.
        force: if True, force the deployment to delete even if it
            cannot be deprovisioned.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be deprovisioned. If provided, will override the
            deployer's default timeout.

    Raises:
        DeployerError: if an unexpected error occurs.
    """
    client = Client()
    try:
        deployment = self.deprovision_deployment(
            deployment_name_or_id, project, timeout
        )
    except DeploymentNotFoundError:
        # The deployment was already deleted
        return
    except DeployerError as e:
        if force:
            logger.warning(
                f"Failed to deprovision deployment "
                f"{deployment_name_or_id}: {e}. Forcing deletion."
            )
            deployment = client.get_deployment(
                deployment_name_or_id, project=project
            )
            client.zen_store.delete_deployment(deployment_id=deployment.id)
        else:
            raise
    else:
        client.zen_store.delete_deployment(deployment_id=deployment.id)
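The force-deletion flow above can be reduced to a hypothetical stdlib sketch: deprovision the live infrastructure first, treat not-found as already deleted, and remove the registry record on failure only when `force` is set:

```python
def delete_deployment(deprovision, delete_record, name, force=False):
    """Hypothetical stand-in for the deletion flow above.

    `deprovision` and `delete_record` are caller-supplied callables
    (stand-ins for the deployer and registry); KeyError models
    "not found" and RuntimeError models a deprovision failure.
    """
    try:
        deprovision(name)
    except KeyError:
        # Not found: the deployment was already deleted.
        return
    except RuntimeError:
        if not force:
            raise
        # Forced: fall through and delete the record anyway.
    delete_record(name)
```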
deprovision_deployment(deployment_name_or_id: Union[str, UUID], project: Optional[UUID] = None, timeout: Optional[int] = None) -> DeploymentResponse

Deprovision a deployment.

Parameters:

Name Type Description Default
deployment_name_or_id Union[str, UUID]

The name or ID of the deployment to deprovision.

required
project Optional[UUID]

The project ID of the deployment to deprovision. Required if a name is provided.

None
timeout Optional[int]

The maximum time in seconds to wait for the pipeline deployment to deprovision. If provided, will override the deployer's default timeout.

None

Returns:

Type Description
DeploymentResponse

The deployment.

Raises:

Type Description
DeploymentNotFoundError

if the deployment is not found or is not managed by this deployer.

DeploymentDeprovisionError

if the deployment deprovision fails.

DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
def deprovision_deployment(
    self,
    deployment_name_or_id: Union[str, UUID],
    project: Optional[UUID] = None,
    timeout: Optional[int] = None,
) -> DeploymentResponse:
    """Deprovision a deployment.

    Args:
        deployment_name_or_id: The name or ID of the deployment to
            deprovision.
        project: The project ID of the deployment to deprovision.
            Required if a name is provided.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to deprovision. If provided, will override the
            deployer's default timeout.

    Returns:
        The deployment.

    Raises:
        DeploymentNotFoundError: if the deployment is not found
            or is not managed by this deployer.
        DeploymentDeprovisionError: if the deployment
            deprovision fails.
        DeployerError: if an unexpected error occurs.
    """
    client = Client()
    try:
        deployment = client.get_deployment(
            deployment_name_or_id, project=project
        )
    except KeyError:
        raise DeploymentNotFoundError(
            f"Deployment with name or ID '{deployment_name_or_id}' "
            f"not found"
        )

    self._check_deployment_deployer(deployment)

    if not timeout and deployment.snapshot:
        settings = cast(
            BaseDeployerSettings,
            self.get_settings(deployment.snapshot),
        )

        timeout = settings.lcm_timeout

    timeout = timeout or DEFAULT_DEPLOYMENT_LCM_TIMEOUT

    start_time = time.time()
    deployment_state = DeploymentOperationalState(
        status=DeploymentStatus.ERROR,
    )
    with track_handler(
        AnalyticsEvent.STOP_DEPLOYMENT
    ) as analytics_handler:
        try:
            deleted_deployment_state = self.do_deprovision_deployment(
                deployment, timeout
            )
            if not deleted_deployment_state:
                # When do_deprovision_deployment returns None, this
                # signals that the deployment is already fully deprovisioned.
                deployment_state.status = DeploymentStatus.ABSENT
        except DeploymentNotFoundError:
            deployment_state.status = DeploymentStatus.ABSENT
        except DeployerError as e:
            raise DeployerError(
                f"Failed to delete deployment {deployment_name_or_id}: {e}"
            ) from e
        except Exception as e:
            raise DeployerError(
                f"Unexpected error while deleting deployment for "
                f"{deployment_name_or_id}: {e}"
            ) from e
        finally:
            deployment = self._update_deployment(
                deployment, deployment_state
            )

        try:
            if deployment_state.status == DeploymentStatus.ABSENT:
                return deployment

            # Subtract the time spent deprovisioning the deployment from the timeout
            timeout = timeout - int(time.time() - start_time)
            deployment, _ = self._poll_deployment(
                deployment, DeploymentStatus.ABSENT, timeout
            )

            if deployment.status != DeploymentStatus.ABSENT:
                raise DeploymentDeprovisionError(
                    f"Failed to deprovision deployment {deployment_name_or_id}: "
                    f"Operational state: {deployment.status}"
                )

        finally:
            analytics_handler.metadata = (
                self._get_deployment_analytics_metadata(
                    deployment=deployment,
                    stack=None,
                )
            )

        return deployment
do_deprovision_deployment(deployment: DeploymentResponse, timeout: int) -> Optional[DeploymentOperationalState] abstractmethod

Abstract method to deprovision a deployment.

Concrete deployer subclasses must implement the following functionality in this method:

  • Deprovision the actual deployment infrastructure (e.g., FastAPI server, Kubernetes deployment, cloud function, etc.) based on the information in the deployment response.

  • Return a DeploymentOperationalState representing the operational state of the deleted deployment, or None if the deletion is completed before the call returns.

Note that the deployment infrastructure is not required to be deleted immediately. The deployer can return a DeploymentOperationalState with a status of DeploymentStatus.PENDING, and the base deployer will poll the deployment infrastructure by calling the do_get_deployment_state method until it is deleted or it times out.
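The polling contract described in this note can be sketched as a generic loop (a simplified stand-in for the base deployer's polling logic, whose actual signature may differ):

```python
import time


def poll_until(get_status, target="absent", timeout=30.0, interval=1.0):
    """Poll `get_status()` until it returns `target` or `timeout` elapses.

    Hypothetical sketch of the base deployer's behavior: after the
    do_* hook returns a PENDING state, the operational state is
    refreshed repeatedly until it reaches the target status or the
    deadline passes; the last observed status is returned either way.
    """
    deadline = time.monotonic() + timeout
    status = get_status()
    while status != target and time.monotonic() < deadline:
        time.sleep(interval)
        status = get_status()
    return status
```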

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to delete.

required
timeout int

The maximum time in seconds to wait for the pipeline deployment to be deprovisioned.

required

Returns:

Type Description
Optional[DeploymentOperationalState]

The DeploymentOperationalState object representing the operational state of the deprovisioned deployment, or None if the deprovision is completed before the call returns.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

DeploymentDeprovisionError

if the deployment deprovision fails.

DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
@abstractmethod
def do_deprovision_deployment(
    self,
    deployment: DeploymentResponse,
    timeout: int,
) -> Optional[DeploymentOperationalState]:
    """Abstract method to deprovision a deployment.

    Concrete deployer subclasses must implement the following
    functionality in this method:

    - Deprovision the actual deployment infrastructure (e.g.,
    FastAPI server, Kubernetes deployment, cloud function, etc.) based on
    the information in the deployment response.

    - Return a DeploymentOperationalState representing the operational
    state of the deleted deployment, or None if the deletion is
    completed before the call returns.

    Note that the deployment infrastructure is not required to be
    deleted immediately. The deployer can return a
    DeploymentOperationalState with a status of
    DeploymentStatus.PENDING, and the base deployer will poll
    the deployment infrastructure by calling the
    `do_get_deployment_state` method until it is deleted or it times out.

    Args:
        deployment: The deployment to delete.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be deprovisioned.

    Returns:
        The DeploymentOperationalState object representing the
        operational state of the deprovisioned deployment, or None
        if the deprovision is completed before the call returns.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
        DeploymentDeprovisionError: if the deployment
            deprovision fails.
        DeployerError: if an unexpected error occurs.
    """
do_get_deployment_state(deployment: DeploymentResponse) -> DeploymentOperationalState abstractmethod

Abstract method to get information about a deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to get information about.

required

Returns:

Type Description
DeploymentOperationalState

The DeploymentOperationalState object representing the updated operational state of the deployment.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

DeployerError

if the deployment information cannot be retrieved for any other reason or if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
@abstractmethod
def do_get_deployment_state(
    self,
    deployment: DeploymentResponse,
) -> DeploymentOperationalState:
    """Abstract method to get information about a deployment.

    Args:
        deployment: The deployment to get information about.

    Returns:
        The DeploymentOperationalState object representing the
        updated operational state of the deployment.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
        DeployerError: if the deployment information cannot
            be retrieved for any other reason or if an unexpected error
            occurs.
    """
do_get_deployment_state_logs(deployment: DeploymentResponse, follow: bool = False, tail: Optional[int] = None) -> Generator[str, bool, None] abstractmethod

Abstract method to get the logs of a deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to get the logs of.

required
follow bool

if True, the logs will be streamed as they are written

False
tail Optional[int]

only retrieve the last NUM lines of log output.

None

Yields:

Type Description
str

The logs of the deployment.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

DeploymentLogsNotFoundError

if the deployment logs are not found.

DeployerError

if the deployment logs cannot be retrieved for any other reason or if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
@abstractmethod
def do_get_deployment_state_logs(
    self,
    deployment: DeploymentResponse,
    follow: bool = False,
    tail: Optional[int] = None,
) -> Generator[str, bool, None]:
    """Abstract method to get the logs of a deployment.

    Args:
        deployment: The deployment to get the logs of.
        follow: if True, the logs will be streamed as they are written
        tail: only retrieve the last NUM lines of log output.

    Yields:
        The logs of the deployment.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
        DeploymentLogsNotFoundError: if the deployment logs are not
            found.
        DeployerError: if the deployment logs cannot
            be retrieved for any other reason or if an unexpected error
            occurs.
    """
do_provision_deployment(deployment: DeploymentResponse, stack: Stack, environment: Dict[str, str], secrets: Dict[str, str], timeout: int) -> DeploymentOperationalState abstractmethod

Abstract method to deploy a pipeline as an HTTP deployment.

Concrete deployer subclasses must implement the following functionality in this method:

  • Create the actual deployment infrastructure (e.g., FastAPI server, Kubernetes deployment, cloud function, etc.) based on the information in the deployment response, particularly the pipeline snapshot. When determining how to name the external resources, do not rely on the deployment name as being immutable or unique.

  • If the deployment infrastructure is already provisioned, update it to match the information in the deployment response.

  • Return a DeploymentOperationalState representing the operational state of the provisioned deployment.

Note that the deployment infrastructure is not required to be deployed immediately. The deployer can return a DeploymentOperationalState with a status of DeploymentStatus.PENDING, and the base deployer will poll the deployment infrastructure by calling the do_get_deployment_state method until it is ready or it times out.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to deploy as an HTTP deployment.

required
stack Stack

The stack the pipeline will be deployed on.

required
environment Dict[str, str]

A dictionary of environment variables to set on the deployment.

required
secrets Dict[str, str]

A dictionary of secret environment variables to set on the deployment. These secret environment variables should not be exposed as regular environment variables on the deployer.

required
timeout int

The maximum time in seconds to wait for the pipeline deployment to be provisioned.

required

Returns:

Type Description
DeploymentOperationalState

The DeploymentOperationalState object representing the operational state of the provisioned deployment.

Raises:

Type Description
DeploymentProvisionError

if provisioning the deployment fails.

DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
@abstractmethod
def do_provision_deployment(
    self,
    deployment: DeploymentResponse,
    stack: "Stack",
    environment: Dict[str, str],
    secrets: Dict[str, str],
    timeout: int,
) -> DeploymentOperationalState:
    """Abstract method to deploy a pipeline as an HTTP deployment.

    Concrete deployer subclasses must implement the following
    functionality in this method:

    - Create the actual deployment infrastructure (e.g.,
    FastAPI server, Kubernetes deployment, cloud function, etc.) based on
    the information in the deployment response, particularly the
    pipeline snapshot. When determining how to name the external
    resources, do not rely on the deployment name as being immutable
    or unique.

    - If the deployment infrastructure is already provisioned, update
    it to match the information in the deployment response.

    - Return a DeploymentOperationalState representing the operational
    state of the provisioned deployment.

    Note that the deployment infrastructure is not required to be
    deployed immediately. The deployer can return a
    DeploymentOperationalState with a status of
    DeploymentStatus.PENDING, and the base deployer will poll
    the deployment infrastructure by calling the
    `do_get_deployment_state` method until it is ready or it times out.

    Args:
        deployment: The deployment to deploy as an HTTP deployment.
        stack: The stack the pipeline will be deployed on.
        environment: A dictionary of environment variables to set on the
            deployment.
        secrets: A dictionary of secret environment variables to set
            on the deployment. These secret environment variables
            should not be exposed as regular environment variables on the
            deployer.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be provisioned.

    Returns:
        The DeploymentOperationalState object representing the
        operational state of the provisioned deployment.

    Raises:
        DeploymentProvisionError: if provisioning the deployment
            fails.
        DeployerError: if an unexpected error occurs.
    """
get_active_deployer() -> BaseDeployer classmethod

Get the deployer registered in the active stack.

Returns:

Type Description
BaseDeployer

The deployer registered in the active stack.

Raises:

Type Description
TypeError

if a deployer is not part of the active stack.

Source code in src/zenml/deployers/base_deployer.py
@classmethod
def get_active_deployer(cls) -> "BaseDeployer":
    """Get the deployer registered in the active stack.

    Returns:
        The deployer registered in the active stack.

    Raises:
        TypeError: if a deployer is not part of the
            active stack.
    """
    client = Client()
    deployer = client.active_stack.deployer
    if not deployer or not isinstance(deployer, cls):
        raise TypeError(
            "The active stack needs to have a deployer "
            "component registered to be able to deploy pipelines. "
            "You can create a new stack with a deployer component "
            "or update your active stack to add this component, e.g.:\n\n"
            "  `zenml deployer register ...`\n"
            "  `zenml stack register <STACK-NAME> -D ...`\n"
            "  or:\n"
            "  `zenml stack update -D ...`\n\n"
        )

    return deployer
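The guard pattern used above (look up the component on the active stack, raise a `TypeError` with remediation hints when it is missing) can be illustrated with a stripped-down sketch; `Stack` and `Deployer` here are hypothetical stand-ins, not the real ZenML classes:

```python
from typing import Optional


class Deployer:
    """Stand-in for a registered deployer stack component."""


class Stack:
    """Stand-in for the active ZenML stack."""

    def __init__(self, deployer: Optional[Deployer] = None) -> None:
        self.deployer = deployer


def get_active_deployer(stack: Stack) -> Deployer:
    # Mirror the guard above: fail fast with a TypeError when the
    # active stack has no deployer component registered.
    if not isinstance(stack.deployer, Deployer):
        raise TypeError(
            "The active stack needs a deployer component registered "
            "to be able to deploy pipelines."
        )
    return stack.deployer
```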
get_deployment_logs(deployment_name_or_id: Union[str, UUID], project: Optional[UUID] = None, follow: bool = False, tail: Optional[int] = None) -> Generator[str, bool, None]

Get the logs of a deployment.

Parameters:

Name Type Description Default
deployment_name_or_id Union[str, UUID]

The name or ID of the deployment to get the logs of.

required
project Optional[UUID]

The project ID of the deployment to get the logs of. Required if a name is provided.

None
follow bool

if True, the logs will be streamed as they are written.

False
tail Optional[int]

only retrieve the last NUM lines of log output.

None

Returns:

Type Description
Generator[str, bool, None]

A generator that yields the logs of the deployment.

Raises:

Type Description
DeploymentNotFoundError

if the deployment is not found.

DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
def get_deployment_logs(
    self,
    deployment_name_or_id: Union[str, UUID],
    project: Optional[UUID] = None,
    follow: bool = False,
    tail: Optional[int] = None,
) -> Generator[str, bool, None]:
    """Get the logs of a deployment.

    Args:
        deployment_name_or_id: The name or ID of the deployment to get
            the logs of.
        project: The project ID of the deployment to get the logs of.
            Required if a name is provided.
        follow: if True, the logs will be streamed as they are written.
        tail: only retrieve the last NUM lines of log output.

    Returns:
        A generator that yields the logs of the deployment.

    Raises:
        DeploymentNotFoundError: if the deployment is not found.
        DeployerError: if an unexpected error occurs.
    """
    client = Client()
    try:
        deployment = client.get_deployment(
            deployment_name_or_id, project=project
        )
    except KeyError:
        raise DeploymentNotFoundError(
            f"Deployment with name or ID '{deployment_name_or_id}' "
            f"not found"
        )

    self._check_deployment_deployer(deployment)

    try:
        return self.do_get_deployment_state_logs(deployment, follow, tail)
    except DeployerError as e:
        raise DeployerError(
            f"Failed to get logs for deployment {deployment_name_or_id}: {e}"
        ) from e
    except Exception as e:
        raise DeployerError(
            f"Unexpected error while getting logs for deployment "
            f"{deployment_name_or_id}: {e}"
        ) from e
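The `tail` and `follow` semantics described above can be sketched with a small generator. This is an illustrative stand-in for what a concrete deployer's `do_get_deployment_state_logs` might do with a buffered log source, not the actual implementation:

```python
from collections import deque
from typing import Iterable, Iterator, Optional


def tail_logs(
    lines: Iterable[str], tail: Optional[int] = None
) -> Iterator[str]:
    """Yield log lines, optionally only the last `tail` of them.

    When `tail` is given, only the last NUM lines of output are
    emitted; otherwise the whole log is streamed in order.
    """
    if tail is None:
        yield from lines
    else:
        # A deque with maxlen keeps only the most recent `tail` lines.
        yield from deque(lines, maxlen=tail)
```

With `follow=True`, a real implementation would keep the generator open and yield new lines as the deployment writes them, instead of stopping at the end of the buffer.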
provision_deployment(snapshot: PipelineSnapshotResponse, stack: Stack, deployment_name_or_id: Union[str, UUID], replace: bool = True, timeout: Optional[int] = None) -> DeploymentResponse

Provision a deployment.

The provision_deployment method is the main entry point for provisioning deployments using the deployer. It is used to deploy a pipeline snapshot as an HTTP deployment, or update an existing deployment instance with the same name. The method returns a DeploymentResponse object that is a representation of the external deployment instance.

Parameters:

Name Type Description Default
snapshot PipelineSnapshotResponse

The pipeline snapshot to deploy as an HTTP deployment.

required
stack Stack

The stack the pipeline will be deployed on.

required
deployment_name_or_id Union[str, UUID]

Unique name or ID for the deployment. This name must be unique at the project level.

required
replace bool

If True, it will update in-place any existing pipeline deployment instance with the same name. If False, and the pipeline deployment instance already exists, it will raise a DeploymentAlreadyExistsError.

True
timeout Optional[int]

The maximum time in seconds to wait for the pipeline deployment to be provisioned. If provided, will override the deployer's default timeout.

None

Raises:

Type Description
DeploymentAlreadyExistsError

if the deployment already exists and replace is False.

DeploymentProvisionError

if the deployment fails.

DeploymentSnapshotMismatchError

if the pipeline snapshot was not created for this deployer.

DeploymentNotFoundError

if the deployment with the given ID is not found.

DeployerError

if an unexpected error occurs.

Returns:

Type Description
DeploymentResponse

The DeploymentResponse object representing the provisioned deployment.

Source code in src/zenml/deployers/base_deployer.py
def provision_deployment(
    self,
    snapshot: PipelineSnapshotResponse,
    stack: "Stack",
    deployment_name_or_id: Union[str, UUID],
    replace: bool = True,
    timeout: Optional[int] = None,
) -> DeploymentResponse:
    """Provision a deployment.

    The provision_deployment method is the main entry point for
    provisioning deployments using the deployer. It is used to deploy
    a pipeline snapshot as an HTTP deployment, or update an existing
    deployment instance with the same name. The method returns a
    DeploymentResponse object that is a representation of the
    external deployment instance.

    Args:
        snapshot: The pipeline snapshot to deploy as an HTTP deployment.
        stack: The stack the pipeline will be deployed on.
        deployment_name_or_id: Unique name or ID for the deployment.
            This name must be unique at the project level.
        replace: If True, it will update in-place any existing pipeline
            deployment instance with the same name. If False, and the pipeline
            deployment instance already exists, it will raise a
            DeploymentAlreadyExistsError.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be provisioned. If provided, will override the
            deployer's default timeout.

    Raises:
        DeploymentAlreadyExistsError: if the deployment already
            exists and replace is False.
        DeploymentProvisionError: if the deployment fails.
        DeploymentSnapshotMismatchError: if the pipeline snapshot
            was not created for this deployer.
        DeploymentNotFoundError: if the deployment with the
            given ID is not found.
        DeployerError: if an unexpected error occurs.

    Returns:
        The DeploymentResponse object representing the provisioned
        deployment.
    """
    if not replace and is_valid_uuid(deployment_name_or_id):
        raise DeploymentAlreadyExistsError(
            f"A deployment with ID '{deployment_name_or_id}' "
            "already exists"
        )

    self._check_deployment_inputs_outputs(snapshot)

    client = Client()

    settings = cast(
        BaseDeployerSettings,
        self.get_settings(snapshot),
    )

    timeout = timeout or settings.lcm_timeout
    auth_key = settings.auth_key
    if not auth_key and settings.generate_auth_key:
        auth_key = self._generate_auth_key()

    if snapshot.stack and snapshot.stack.id != stack.id:
        # When a different stack is used than the one the snapshot was
        # created for, the container image may not have the correct
        # dependencies installed, which leads to unexpected errors during
        # deployment. To avoid this, we raise an error here.
        raise DeploymentSnapshotMismatchError(
            f"The pipeline snapshot with ID '{snapshot.id}' "
            f"was not created for the stack {stack.name} and might not "
            "have the correct dependencies installed. This may "
            "lead to unexpected behavior during deployment. Please switch "
            f"to the correct active stack '{snapshot.stack.name}' or use "
            "a different snapshot."
        )

    try:
        # Get the existing deployment
        deployment = client.get_deployment(
            deployment_name_or_id, project=snapshot.project_id
        )

        self._check_snapshot_already_deployed(snapshot, deployment.id)

        logger.debug(
            f"Existing deployment found with name '{deployment.name}'"
        )
    except KeyError:
        if isinstance(deployment_name_or_id, UUID):
            raise DeploymentNotFoundError(
                f"Deployment with ID '{deployment_name_or_id}' not found"
            )

        self._check_snapshot_already_deployed(
            snapshot, deployment_name_or_id
        )

        logger.debug(
            f"Creating new deployment {deployment_name_or_id} with "
            f"snapshot ID: {snapshot.id}"
        )

        # Create the deployment request
        deployment_request = DeploymentRequest(
            name=deployment_name_or_id,
            project=snapshot.project_id,
            snapshot_id=snapshot.id,
            deployer_id=self.id,  # This deployer's ID
            auth_key=auth_key,
        )

        deployment = client.zen_store.create_deployment(deployment_request)
        logger.debug(
            f"Created new deployment with name '{deployment.name}' "
            f"and ID: {deployment.id}"
        )
    else:
        if not replace:
            raise DeploymentAlreadyExistsError(
                f"A deployment with name '{deployment.name}' "
                "already exists"
            )

        self._check_deployment_deployer(deployment)
        self._check_deployment_snapshot(snapshot)

        deployment_update = DeploymentUpdate(
            snapshot_id=snapshot.id,
        )
        if (
            deployment.auth_key
            and not auth_key
            or not deployment.auth_key
            and auth_key
        ):
            # Key was either added or removed
            deployment_update.auth_key = auth_key
        elif deployment.auth_key != auth_key and (
            settings.auth_key or not settings.generate_auth_key
        ):
            # Key was changed and not because of re-generation
            deployment_update.auth_key = auth_key

        # The deployment has been updated
        deployment = client.zen_store.update_deployment(
            deployment.id,
            deployment_update,
        )

    logger.info(
        f"Provisioning deployment {deployment.name} with "
        f"snapshot ID: {snapshot.id}"
    )

    environment, secrets = get_config_environment_vars(
        deployment_id=deployment.id,
    )

    # Make sure to use the correct active stack/project which correspond
    # to the supplied stack and snapshot, which may be different from the
    # active stack/project
    environment[ENV_ZENML_ACTIVE_STACK_ID] = str(stack.id)
    environment[ENV_ZENML_ACTIVE_PROJECT_ID] = str(snapshot.project_id)

    start_time = time.time()
    deployment_state = DeploymentOperationalState(
        status=DeploymentStatus.ERROR,
    )
    with track_handler(
        AnalyticsEvent.DEPLOY_PIPELINE
    ) as analytics_handler:
        try:
            deployment_state = self.do_provision_deployment(
                deployment,
                stack=stack,
                environment=environment,
                secrets=secrets,
                timeout=timeout,
            )
        except DeploymentProvisionError as e:
            raise DeploymentProvisionError(
                f"Failed to provision deployment {deployment.name}: {e}"
            ) from e
        except DeployerError as e:
            raise DeployerError(
                f"Failed to provision deployment {deployment.name}: {e}"
            ) from e
        except Exception as e:
            raise DeployerError(
                f"Unexpected error while provisioning deployment "
                f"{deployment.name}: {e}"
            ) from e
        finally:
            deployment = self._update_deployment(
                deployment, deployment_state
            )

        logger.info(
            f"Provisioned deployment {deployment.name} with "
            f"snapshot ID: {snapshot.id}. Operational state is: "
            f"{deployment_state.status}"
        )

        try:
            if deployment_state.status == DeploymentStatus.RUNNING:
                return deployment

            # Subtract the time spent deploying the deployment from the
            # timeout
            timeout = timeout - int(time.time() - start_time)
            deployment, _ = self._poll_deployment(
                deployment, DeploymentStatus.RUNNING, timeout
            )

            if deployment.status != DeploymentStatus.RUNNING:
                raise DeploymentProvisionError(
                    f"Failed to provision deployment {deployment.name}: "
                    f"The deployment's operational state is "
                    f"{deployment.status}. Please check the status or logs "
                    "of the deployment for more information."
                )

        finally:
            analytics_handler.metadata = (
                self._get_deployment_analytics_metadata(
                    deployment=deployment,
                    stack=stack,
                )
            )

        return deployment
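The auth-key branch in the update path above is subtle: a key that was added or removed is always persisted, while a changed key is persisted only when it was explicitly configured (so a merely re-generated random key does not churn the stored value). That decision can be condensed into a pure function; the name and signature are illustrative, not part of the ZenML API:

```python
from typing import Optional, Tuple


def auth_key_update(
    current: Optional[str],
    new: Optional[str],
    explicitly_configured: bool,
) -> Tuple[bool, Optional[str]]:
    """Decide whether a deployment's stored auth key needs updating.

    Returns a (should_update, value_to_store) pair.
    """
    if bool(current) != bool(new):
        # Key was either added or removed: always persist the change.
        return True, new
    if current != new and explicitly_configured:
        # Key was changed deliberately (explicitly configured), not as
        # a side effect of re-generating a random key.
        return True, new
    # Unchanged, or changed only by re-generation: keep the stored key.
    return False, current
```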
refresh_deployment(deployment_name_or_id: Union[str, UUID], project: Optional[UUID] = None) -> DeploymentResponse

Refresh the status of a deployment by name or ID.

Call this to refresh the operational state of a deployment.

Parameters:

Name Type Description Default
deployment_name_or_id Union[str, UUID]

The name or ID of the deployment to get.

required
project Optional[UUID]

The project ID of the deployment to get. Required if a name is provided.

None

Returns:

Type Description
DeploymentResponse

The deployment.

Raises:

Type Description
DeploymentNotFoundError

if the deployment is not found.

DeployerError

if an unexpected error occurs.

Source code in src/zenml/deployers/base_deployer.py
def refresh_deployment(
    self,
    deployment_name_or_id: Union[str, UUID],
    project: Optional[UUID] = None,
) -> DeploymentResponse:
    """Refresh the status of a deployment by name or ID.

    Call this to refresh the operational state of a deployment.

    Args:
        deployment_name_or_id: The name or ID of the deployment to get.
        project: The project ID of the deployment to get. Required
            if a name is provided.

    Returns:
        The deployment.

    Raises:
        DeploymentNotFoundError: if the deployment is not found.
        DeployerError: if an unexpected error occurs.
    """
    client = Client()
    try:
        deployment = client.get_deployment(
            deployment_name_or_id, project=project
        )
    except KeyError:
        raise DeploymentNotFoundError(
            f"Deployment with name or ID '{deployment_name_or_id}' "
            f"not found"
        )

    self._check_deployment_deployer(deployment)

    deployment_state = DeploymentOperationalState(
        status=DeploymentStatus.ERROR,
    )
    try:
        deployment_state = self.do_get_deployment_state(deployment)
    except DeploymentNotFoundError:
        deployment_state.status = DeploymentStatus.ABSENT
    except DeployerError as e:
        raise DeployerError(
            f"Failed to refresh deployment {deployment_name_or_id}: {e}"
        ) from e
    except Exception as e:
        raise DeployerError(
            f"Unexpected error while refreshing deployment "
            f"{deployment_name_or_id}: {e}"
        ) from e
    finally:
        deployment = self._update_deployment(deployment, deployment_state)

    return deployment
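A noteworthy detail in the refresh logic above is the status mapping: the state defaults to ERROR, and a `DeploymentNotFoundError` from the probe is translated into an ABSENT status rather than re-raised as a failure. A minimal sketch of that pattern, using stand-in types:

```python
from enum import Enum
from typing import Callable


class DeploymentStatus(str, Enum):
    RUNNING = "running"
    ABSENT = "absent"
    ERROR = "error"


class DeploymentNotFoundError(Exception):
    """Stand-in for ZenML's DeploymentNotFoundError."""


def refresh_status(
    probe: Callable[[], DeploymentStatus],
) -> DeploymentStatus:
    # Default to ERROR, probe the infrastructure, and map a missing
    # deployment to ABSENT rather than treating it as a failure.
    status = DeploymentStatus.ERROR
    try:
        status = probe()
    except DeploymentNotFoundError:
        status = DeploymentStatus.ABSENT
    return status
```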
BaseDeployerConfig(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: StackComponentConfig

Base config for all deployers.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references don't clash with pydantic validation.

    StackComponents allow the specification of all their string attributes
    using secret references of the form `{{secret_name.key}}`. This however
    is only possible when the stack component does not perform any explicit
    validation of this attribute using pydantic validators. If this were
    the case, the validation would run on the secret reference and would
    fail or, in the worst case, modify the secret reference and lead to
    unexpected behavior. This method ensures that no attributes that require
    custom pydantic validation are set as secret references.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using
            plain-text secrets.
        **kwargs: Arguments to initialize this stack component.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            is passed as a secret reference, or if the `name` attribute
            was passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}` for a `{self.__class__.__name__}` "
                    "stack component. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure your stack "
                    "components with secrets here: "
                    "https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if pydantic_utils.has_validators(
            pydantic_class=self.__class__, field_name=key
        ):
            raise ValueError(
                f"Passing the stack component attribute `{key}` as a "
                "secret reference is not allowed as additional validation "
                "is required for this attribute."
            )

    super().__init__(**kwargs)
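The `{{secret_name.key}}` reference format mentioned in the docstring above can be recognized with a simple check. The regular expression below is an approximation for illustration, not ZenML's actual parser:

```python
import re

# Approximate pattern for the `{{secret_name.key}}` secret-reference
# syntax that stack component string attributes may use.
_SECRET_REF = re.compile(r"^\{\{\s*(\w+)\.(\w+)\s*\}\}$")


def is_secret_reference(value: object) -> bool:
    # Only strings of the form {{<secret>.<key>}} count as references;
    # everything else is treated as a plain-text value.
    return isinstance(value, str) and _SECRET_REF.match(value) is not None
```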
BaseDeployerFlavor

Bases: Flavor

Base class for deployer flavors.

Attributes
config_class: Type[BaseDeployerConfig] property

Returns BaseDeployerConfig config class.

Returns:

Type Description
Type[BaseDeployerConfig]

The config class.

implementation_class: Type[BaseDeployer] abstractmethod property

The class that implements the deployer.

type: StackComponentType property

Returns the flavor type.

Returns:

Type Description
StackComponentType

The flavor type.

BaseDeployerSettings(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: BaseSettings

Base settings for all deployers.

Source code in src/zenml/config/secret_reference_mixin.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references are only passed for valid fields.

    This method ensures that secret references are not passed for fields
    that explicitly prevent them or require pydantic validation.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using plain-text secrets.
        **kwargs: Arguments to initialize this object.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            or an attribute which explicitly disallows secret references
            is passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}`. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure values with secrets "
                    "here: https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if secret_utils.is_clear_text_field(field):
            raise ValueError(
                f"Passing the `{key}` attribute as a secret reference is "
                "not allowed."
            )

        requires_validation = has_validators(
            pydantic_class=self.__class__, field_name=key
        )
        if requires_validation:
            raise ValueError(
                f"Passing the attribute `{key}` as a secret reference is "
                "not allowed as additional validation is required for "
                "this attribute."
            )

    super().__init__(**kwargs)
Functions

containerized_deployer

Base class for all containerized deployers.

Classes
ContainerizedDeployer(name: str, id: UUID, config: StackComponentConfig, flavor: str, type: StackComponentType, user: Optional[UUID], created: datetime, updated: datetime, environment: Optional[Dict[str, str]] = None, secrets: Optional[List[UUID]] = None, labels: Optional[Dict[str, Any]] = None, connector_requirements: Optional[ServiceConnectorRequirements] = None, connector: Optional[UUID] = None, connector_resource_id: Optional[str] = None, *args: Any, **kwargs: Any)

Bases: BaseDeployer, ABC

Base class for all containerized deployers.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self,
    name: str,
    id: UUID,
    config: StackComponentConfig,
    flavor: str,
    type: StackComponentType,
    user: Optional[UUID],
    created: datetime,
    updated: datetime,
    environment: Optional[Dict[str, str]] = None,
    secrets: Optional[List[UUID]] = None,
    labels: Optional[Dict[str, Any]] = None,
    connector_requirements: Optional[ServiceConnectorRequirements] = None,
    connector: Optional[UUID] = None,
    connector_resource_id: Optional[str] = None,
    *args: Any,
    **kwargs: Any,
):
    """Initializes a StackComponent.

    Args:
        name: The name of the component.
        id: The unique ID of the component.
        config: The config of the component.
        flavor: The flavor of the component.
        type: The type of the component.
        user: The ID of the user who created the component.
        created: The creation time of the component.
        updated: The last update time of the component.
        environment: Environment variables to set when running on this
            component.
        secrets: Secrets to set as environment variables when running on
            this component.
        labels: The labels of the component.
        connector_requirements: The requirements for the connector.
        connector: The ID of a connector linked to the component.
        connector_resource_id: The custom resource ID to access through
            the connector.
        *args: Additional positional arguments.
        **kwargs: Additional keyword arguments.

    Raises:
        ValueError: If a secret reference is passed as name.
    """
    if secret_utils.is_secret_reference(name):
        raise ValueError(
            "Passing the `name` attribute of a stack component as a "
            "secret reference is not allowed."
        )

    self.id = id
    self.name = name
    self._config = config
    self.flavor = flavor
    self.type = type
    self.user = user
    self.created = created
    self.updated = updated
    self.labels = labels
    self.environment = environment or {}
    self.secrets = secrets or []
    self.connector_requirements = connector_requirements
    self.connector = connector
    self.connector_resource_id = connector_resource_id
    self._connector_instance: Optional[ServiceConnector] = None
Attributes
requirements: Set[str] property

Set of PyPI requirements for the deployer.

Returns:

Type Description
Set[str]

A set of PyPI requirements for the deployer.

Functions
get_docker_builds(snapshot: PipelineSnapshotBase) -> List[BuildConfiguration]

Gets the Docker builds required for the component.

Parameters:

Name Type Description Default
snapshot PipelineSnapshotBase

The pipeline snapshot for which to get the builds.

required

Returns:

Type Description
List[BuildConfiguration]

The required Docker builds.

Source code in src/zenml/deployers/containerized_deployer.py
def get_docker_builds(
    self, snapshot: "PipelineSnapshotBase"
) -> List["BuildConfiguration"]:
    """Gets the Docker builds required for the component.

    Args:
        snapshot: The pipeline snapshot for which to get the builds.

    Returns:
        The required Docker builds.
    """
    deployment_settings = (
        snapshot.pipeline_configuration.deployment_settings
    )
    docker_settings = snapshot.pipeline_configuration.docker_settings
    if not docker_settings.install_deployment_requirements:
        return []

    deployment_requirements = load_deployment_requirements(
        deployment_settings
    )
    return [
        BuildConfiguration(
            key=DEPLOYER_DOCKER_IMAGE_KEY,
            settings=snapshot.pipeline_configuration.docker_settings,
            extra_requirements_files={
                ".zenml_deployment_requirements": deployment_requirements,
            },
        )
    ]
get_image(snapshot: PipelineSnapshotResponse) -> str staticmethod

Get the docker image used to deploy a pipeline snapshot.

Parameters:

Name Type Description Default
snapshot PipelineSnapshotResponse

The pipeline snapshot to get the image for.

required

Returns:

Type Description
str

The docker image used to deploy the pipeline snapshot.

Raises:

Type Description
RuntimeError

if the pipeline snapshot does not have a build or if the deployer image is not in the build.

Source code in src/zenml/deployers/containerized_deployer.py
@staticmethod
def get_image(snapshot: PipelineSnapshotResponse) -> str:
    """Get the docker image used to deploy a pipeline snapshot.

    Args:
        snapshot: The pipeline snapshot to get the image for.

    Returns:
        The docker image used to deploy the pipeline snapshot.

    Raises:
        RuntimeError: if the pipeline snapshot does not have a build or
            if the deployer image is not in the build.
    """
    if snapshot.build is None:
        raise RuntimeError("Pipeline snapshot does not have a build. ")
    if DEPLOYER_DOCKER_IMAGE_KEY not in snapshot.build.images:
        raise RuntimeError(
            "Pipeline snapshot build does not have a deployer image. "
        )
    return snapshot.build.images[DEPLOYER_DOCKER_IMAGE_KEY].image
Functions

docker

Implementation for the local Docker deployer.

Modules
docker_deployer

Implementation of the ZenML Docker deployer.

Classes
DockerDeployer(name: str, id: UUID, config: StackComponentConfig, flavor: str, type: StackComponentType, user: Optional[UUID], created: datetime, updated: datetime, environment: Optional[Dict[str, str]] = None, secrets: Optional[List[UUID]] = None, labels: Optional[Dict[str, Any]] = None, connector_requirements: Optional[ServiceConnectorRequirements] = None, connector: Optional[UUID] = None, connector_resource_id: Optional[str] = None, *args: Any, **kwargs: Any)

Bases: ContainerizedDeployer

Deployer responsible for deploying pipelines locally using Docker.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self,
    name: str,
    id: UUID,
    config: StackComponentConfig,
    flavor: str,
    type: StackComponentType,
    user: Optional[UUID],
    created: datetime,
    updated: datetime,
    environment: Optional[Dict[str, str]] = None,
    secrets: Optional[List[UUID]] = None,
    labels: Optional[Dict[str, Any]] = None,
    connector_requirements: Optional[ServiceConnectorRequirements] = None,
    connector: Optional[UUID] = None,
    connector_resource_id: Optional[str] = None,
    *args: Any,
    **kwargs: Any,
):
    """Initializes a StackComponent.

    Args:
        name: The name of the component.
        id: The unique ID of the component.
        config: The config of the component.
        flavor: The flavor of the component.
        type: The type of the component.
        user: The ID of the user who created the component.
        created: The creation time of the component.
        updated: The last update time of the component.
        environment: Environment variables to set when running on this
            component.
        secrets: Secrets to set as environment variables when running on
            this component.
        labels: The labels of the component.
        connector_requirements: The requirements for the connector.
        connector: The ID of a connector linked to the component.
        connector_resource_id: The custom resource ID to access through
            the connector.
        *args: Additional positional arguments.
        **kwargs: Additional keyword arguments.

    Raises:
        ValueError: If a secret reference is passed as name.
    """
    if secret_utils.is_secret_reference(name):
        raise ValueError(
            "Passing the `name` attribute of a stack component as a "
            "secret reference is not allowed."
        )

    self.id = id
    self.name = name
    self._config = config
    self.flavor = flavor
    self.type = type
    self.user = user
    self.created = created
    self.updated = updated
    self.labels = labels
    self.environment = environment or {}
    self.secrets = secrets or []
    self.connector_requirements = connector_requirements
    self.connector = connector
    self.connector_resource_id = connector_resource_id
    self._connector_instance: Optional[ServiceConnector] = None
Attributes
config: DockerDeployerConfig property

Returns the DockerDeployerConfig config.

Returns:

Type Description
DockerDeployerConfig

The configuration.

docker_client: DockerClient property

Initialize and/or return the docker client.

Returns:

Type Description
DockerClient

The docker client.

settings_class: Optional[Type[BaseSettings]] property

Settings class for the Docker deployer.

Returns:

Type Description
Optional[Type[BaseSettings]]

The settings class.

validator: Optional[StackValidator] property

Ensures there is an image builder in the stack.

Returns:

Type Description
Optional[StackValidator]

A StackValidator instance.

Functions
do_deprovision_deployment(deployment: DeploymentResponse, timeout: int) -> Optional[DeploymentOperationalState]

Deprovision a docker deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to deprovision.

required
timeout int

The maximum time in seconds to wait for the pipeline deployment to be deprovisioned.

required

Returns:

Type Description
Optional[DeploymentOperationalState]

The DeploymentOperationalState object representing the operational state of the deleted deployment, or None if the deletion is completed before the call returns.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

DeploymentDeprovisionError

if the deployment deprovision fails.

Source code in src/zenml/deployers/docker/docker_deployer.py
def do_deprovision_deployment(
    self,
    deployment: DeploymentResponse,
    timeout: int,
) -> Optional[DeploymentOperationalState]:
    """Deprovision a docker deployment.

    Args:
        deployment: The deployment to deprovision.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be deprovisioned.

    Returns:
        The DeploymentOperationalState object representing the
        operational state of the deleted deployment, or None if the
        deletion is completed before the call returns.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
        DeploymentDeprovisionError: if the deployment
            deprovision fails.
    """
    container = self._get_container(deployment)
    if container is None:
        raise DeploymentNotFoundError(
            f"Docker container for deployment '{deployment.name}' "
            "not found"
        )

    try:
        container.stop(timeout=timeout)
        container.remove()
    except docker_errors.DockerException as e:
        raise DeploymentDeprovisionError(
            f"Docker container for deployment '{deployment.name}' "
            f"failed to delete: {e}"
        )

    return None
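
The stop-then-remove sequence above translates low-level Docker SDK failures into a deployer-specific error. A minimal, self-contained sketch of that pattern, using stand-in classes (all names here are hypothetical substitutes for the real docker SDK and ZenML types):

```python
class DockerException(Exception):
    """Stand-in for docker.errors.DockerException."""


class DeploymentDeprovisionError(Exception):
    """Stand-in for the ZenML deprovision error."""


class FakeContainer:
    """Hypothetical container object with the two calls used above."""

    def __init__(self, fail: bool = False):
        self.fail = fail
        self.stopped = False
        self.removed = False

    def stop(self, timeout: int) -> None:
        if self.fail:
            raise DockerException("daemon unreachable")
        self.stopped = True

    def remove(self) -> None:
        self.removed = True


def deprovision(container: FakeContainer, timeout: int = 30) -> None:
    # Mirror the deployer's pattern: stop, then remove, and translate
    # any Docker-level error into a deployer-specific exception.
    try:
        container.stop(timeout=timeout)
        container.remove()
    except DockerException as e:
        raise DeploymentDeprovisionError(f"failed to delete: {e}")
```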
do_get_deployment_state(deployment: DeploymentResponse) -> DeploymentOperationalState

Get information about a docker deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to get information about.

required

Returns:

Type Description
DeploymentOperationalState

The DeploymentOperationalState object representing the updated operational state of the deployment.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

Source code in src/zenml/deployers/docker/docker_deployer.py
def do_get_deployment_state(
    self,
    deployment: DeploymentResponse,
) -> DeploymentOperationalState:
    """Get information about a docker deployment.

    Args:
        deployment: The deployment to get information about.

    Returns:
        The DeploymentOperationalState object representing the
        updated operational state of the deployment.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
    """
    container = self._get_container(deployment)
    if container is None:
        raise DeploymentNotFoundError(
            f"Docker container for deployment '{deployment.name}' "
            "not found"
        )

    return self._get_container_operational_state(container)
do_get_deployment_state_logs(deployment: DeploymentResponse, follow: bool = False, tail: Optional[int] = None) -> Generator[str, bool, None]

Get the logs of a Docker deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to get the logs of.

required
follow bool

if True, the logs will be streamed as they are written

False
tail Optional[int]

only retrieve the last NUM lines of log output.

None

Yields:

Type Description
str

The logs of the deployment.

Raises:

Type Description
DeploymentNotFoundError

if no deployment is found corresponding to the provided DeploymentResponse.

DeploymentLogsNotFoundError

if the deployment logs are not found.

DeployerError

if the deployment logs cannot be retrieved for any other reason or if an unexpected error occurs.

Source code in src/zenml/deployers/docker/docker_deployer.py
def do_get_deployment_state_logs(
    self,
    deployment: DeploymentResponse,
    follow: bool = False,
    tail: Optional[int] = None,
) -> Generator[str, bool, None]:
    """Get the logs of a Docker deployment.

    Args:
        deployment: The deployment to get the logs of.
        follow: if True, the logs will be streamed as they are written
        tail: only retrieve the last NUM lines of log output.

    Yields:
        The logs of the deployment.

    Raises:
        DeploymentNotFoundError: if no deployment is found
            corresponding to the provided DeploymentResponse.
        DeploymentLogsNotFoundError: if the deployment logs are not
            found.
        DeployerError: if the deployment logs cannot
            be retrieved for any other reason or if an unexpected error
            occurs.
    """
    container = self._get_container(deployment)
    if container is None:
        raise DeploymentNotFoundError(
            f"Docker container for deployment '{deployment.name}' "
            "not found"
        )

    try:
        log_kwargs: Dict[str, Any] = {
            "stdout": True,
            "stderr": True,
            "stream": follow,
            "follow": follow,
            "timestamps": True,
        }

        if tail is not None and tail > 0:
            log_kwargs["tail"] = tail

        log_stream = container.logs(**log_kwargs)

        if follow:
            for log_line in log_stream:
                if isinstance(log_line, bytes):
                    yield log_line.decode(
                        "utf-8", errors="replace"
                    ).rstrip()
                else:
                    yield str(log_line).rstrip()
        else:
            if isinstance(log_stream, bytes):
                log_text = log_stream.decode("utf-8", errors="replace")
                for line in log_text.splitlines():
                    yield line
            else:
                for log_line in log_stream:
                    if isinstance(log_line, bytes):
                        yield log_line.decode(
                            "utf-8", errors="replace"
                        ).rstrip()
                    else:
                        yield str(log_line).rstrip()

    except docker_errors.NotFound as e:
        raise DeploymentLogsNotFoundError(
            f"Logs for deployment '{deployment.name}' not found: {e}"
        )
    except docker_errors.APIError as e:
        raise DeployerError(
            f"Docker API error while retrieving logs for deployment "
            f"'{deployment.name}': {e}"
        )
    except docker_errors.DockerException as e:
        raise DeployerError(
            f"Docker error while retrieving logs for deployment "
            f"'{deployment.name}': {e}"
        )
    except Exception as e:
        raise DeployerError(
            f"Unexpected error while retrieving logs for deployment "
            f"'{deployment.name}': {e}"
        )
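
The decoding branches above exist because `container.logs()` returns a single bytes blob when not following and an iterator of lines when streaming. That normalization can be sketched in isolation (illustrative only, not the actual ZenML implementation):

```python
from typing import Iterable, Iterator, Union


def normalize_log_lines(
    stream: Union[bytes, Iterable[Union[bytes, str]]],
) -> Iterator[str]:
    """Yield decoded, stripped log lines from a Docker-style log stream.

    Mirrors the branching above: a bytes blob is split into lines,
    while an iterator of lines is decoded item by item.
    """
    if isinstance(stream, bytes):
        # Non-streaming case: one blob containing all lines.
        yield from stream.decode("utf-8", errors="replace").splitlines()
    else:
        # Streaming case: one (possibly bytes) item per log line.
        for line in stream:
            if isinstance(line, bytes):
                yield line.decode("utf-8", errors="replace").rstrip()
            else:
                yield str(line).rstrip()
```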
do_provision_deployment(deployment: DeploymentResponse, stack: Stack, environment: Dict[str, str], secrets: Dict[str, str], timeout: int) -> DeploymentOperationalState

Deploy a pipeline as a Docker container.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to run as a Docker container.

required
stack Stack

The stack the pipeline will be deployed on.

required
environment Dict[str, str]

A dictionary of environment variables to set on the deployment.

required
secrets Dict[str, str]

A dictionary of secret environment variables to set on the deployment. These secret environment variables should not be exposed as regular environment variables on the deployer.

required
timeout int

The maximum time in seconds to wait for the pipeline deployment to be provisioned.

required

Returns:

Type Description
DeploymentOperationalState

The DeploymentOperationalState object representing the operational state of the provisioned deployment.

Raises:

Type Description
DeploymentProvisionError

if provisioning the deployment fails.

Source code in src/zenml/deployers/docker/docker_deployer.py
def do_provision_deployment(
    self,
    deployment: DeploymentResponse,
    stack: "Stack",
    environment: Dict[str, str],
    secrets: Dict[str, str],
    timeout: int,
) -> DeploymentOperationalState:
    """Deploy a pipeline as a Docker container.

    Args:
        deployment: The deployment to run as a Docker container.
        stack: The stack the pipeline will be deployed on.
        environment: A dictionary of environment variables to set on the
            deployment.
        secrets: A dictionary of secret environment variables to set
            on the deployment. These secret environment variables
            should not be exposed as regular environment variables on the
            deployer.
        timeout: The maximum time in seconds to wait for the pipeline
            deployment to be provisioned.

    Returns:
        The DeploymentOperationalState object representing the
        operational state of the provisioned deployment.

    Raises:
        DeploymentProvisionError: if provisioning the deployment
            fails.
    """
    assert deployment.snapshot, "Pipeline snapshot not found"
    snapshot = deployment.snapshot

    # Currently, there is no safe way to pass secrets to a docker
    # container, so we simply merge them into the environment variables.
    environment.update(secrets)

    settings = cast(
        DockerDeployerSettings,
        self.get_settings(snapshot),
    )

    existing_metadata = DockerDeploymentMetadata.from_deployment(
        deployment
    )

    entrypoint = DeploymentEntrypointConfiguration.get_entrypoint_command()

    entrypoint_kwargs = {
        DEPLOYMENT_ID_OPTION: deployment.id,
    }

    arguments = DeploymentEntrypointConfiguration.get_entrypoint_arguments(
        **entrypoint_kwargs
    )

    # Add the local stores path as a volume mount
    stack.check_local_paths()
    local_stores_path = GlobalConfiguration().local_stores_path
    volumes = {
        local_stores_path: {
            "bind": local_stores_path,
            "mode": "rw",
        }
    }
    environment[ENV_ZENML_LOCAL_STORES_PATH] = local_stores_path

    # check if a container already exists for the deployment
    container = self._get_container(deployment)

    if container:
        # the container exists, check if it is running
        if container.status == "running":
            logger.debug(
                f"Container for deployment '{deployment.name}' is "
                "already running",
            )
            container.stop(timeout=timeout)

        # the container is stopped or in an error state, remove it
        logger.debug(
            f"Removing previous container for deployment "
            f"'{deployment.name}'",
        )
        container.remove(force=True)

    logger.debug(
        f"Starting container for deployment '{deployment.name}'..."
    )

    image = self.get_image(deployment.snapshot)

    try:
        self.docker_client.images.get(image)
    except docker_errors.ImageNotFound:
        logger.debug(
            f"Pulling container image '{image}' for deployment "
            f"'{deployment.name}'...",
        )
        self.docker_client.images.pull(image)

    preferred_ports: List[int] = []
    if settings.port:
        preferred_ports.append(settings.port)
    if existing_metadata.port:
        preferred_ports.append(existing_metadata.port)
    port = lookup_preferred_or_free_port(
        preferred_ports=preferred_ports,
        allocate_port_if_busy=settings.allocate_port_if_busy,
        range=settings.port_range,
        address="0.0.0.0",  # nosec
    )
    container_port = (
        snapshot.pipeline_configuration.deployment_settings.uvicorn_port
    )
    ports: Dict[str, Optional[int]] = {f"{container_port}/tcp": port}

    uid_args: Dict[str, Any] = {}
    if sys.platform == "win32":
        # File permissions are not checked on Windows. This if clause
        # prevents mypy from complaining about unused 'type: ignore'
        # statements
        pass
    else:
        # Run the container in the context of the local UID/GID
        # to ensure that the local database can be shared
        # with the container.
        logger.debug(
            "Setting UID and GID to local user/group in container."
        )
        uid_args = dict(
            user=os.getuid(),
            group_add=[os.getgid()],
        )

    run_args = copy.deepcopy(settings.run_args)
    docker_environment = run_args.pop("environment", {})
    docker_environment.update(environment)

    docker_volumes = run_args.pop("volumes", {})
    docker_volumes.update(volumes)

    extra_hosts = run_args.pop("extra_hosts", {})
    extra_hosts["host.docker.internal"] = "host-gateway"

    run_args.update(uid_args)

    try:
        container = self.docker_client.containers.run(
            image=image,
            name=self._get_container_id(deployment),
            entrypoint=entrypoint,
            command=arguments,
            detach=True,
            volumes=docker_volumes,
            environment=docker_environment,
            remove=False,
            auto_remove=False,
            ports=ports,
            labels={
                "zenml-deployment-id": str(deployment.id),
                "zenml-deployment-name": deployment.name,
                "zenml-deployer-name": str(self.name),
                "zenml-deployer-id": str(self.id),
                "managed-by": "zenml",
            },
            extra_hosts=extra_hosts,
            **run_args,
        )

        logger.debug(
            f"Docker container for deployment '{deployment.name}' "
            f"started with ID {self._get_container_id(deployment)}",
        )

    except docker_errors.DockerException as e:
        raise DeploymentProvisionError(
            f"Docker container for deployment '{deployment.name}' "
            f"failed to start: {e}"
        )

    return self._get_container_operational_state(container)
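
The port selection step above first tries the configured and previously used ports, then falls back to a free port in the configured range. A rough, self-contained sketch of that behavior (an illustration of what `lookup_preferred_or_free_port` does, not its actual implementation):

```python
import socket
from typing import List, Tuple


def pick_port(
    preferred_ports: List[int],
    allocate_port_if_busy: bool = True,
    port_range: Tuple[int, int] = (8000, 8100),
    address: str = "127.0.0.1",
) -> int:
    """Return the first free preferred port, or a free port in the range."""

    def is_free(port: int) -> bool:
        # A port is considered free if we can bind a TCP socket to it.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind((address, port))
                return True
            except OSError:
                return False

    for port in preferred_ports:
        if is_free(port):
            return port
    if preferred_ports and not allocate_port_if_busy:
        raise RuntimeError("Preferred port(s) busy and reallocation disabled")
    for port in range(*port_range):
        if is_free(port):
            return port
    raise RuntimeError("No free port found in range")
```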
DockerDeployerConfig(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: BaseDeployerConfig, DockerDeployerSettings

Docker deployer config.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references don't clash with pydantic validation.

    StackComponents allow the specification of all their string attributes
    using secret references of the form `{{secret_name.key}}`. This however
    is only possible when the stack component does not perform any explicit
    validation of this attribute using pydantic validators. If this were
    the case, the validation would run on the secret reference and would
    fail or in the worst case, modify the secret reference and lead to
    unexpected behavior. This method ensures that no attributes that require
    custom pydantic validation are set as secret references.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using
            plain-text secrets.
        **kwargs: Arguments to initialize this stack component.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            is passed as a secret reference, or if the `name` attribute
            was passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}` for a `{self.__class__.__name__}` "
                    "stack component. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure your stack "
                    "components with secrets here: "
                    "https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if pydantic_utils.has_validators(
            pydantic_class=self.__class__, field_name=key
        ):
            raise ValueError(
                f"Passing the stack component attribute `{key}` as a "
                "secret reference is not allowed as additional validation "
                "is required for this attribute."
            )

    super().__init__(**kwargs)
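
The `{{secret_name.key}}` reference form described in the docstring can be detected with a simple pattern check. A hedged sketch of what such a check might look like (the real `secret_utils.is_secret_reference` may differ in details):

```python
import re

# Matches the documented `{{secret_name.key}}` form.
_SECRET_REF = re.compile(r"^\{\{\s*\w+\.\w+\s*\}\}$")


def is_secret_reference(value: object) -> bool:
    """Heuristic check for a `{{secret_name.key}}` reference.

    Illustrative only: shows the shape of the documented reference
    syntax, not ZenML's actual implementation.
    """
    return isinstance(value, str) and bool(_SECRET_REF.match(value.strip()))
```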
Attributes
is_local: bool property

Checks if this stack component is running locally.

Returns:

Type Description
bool

True if this config is for a local component, False otherwise.

DockerDeployerFlavor

Bases: BaseDeployerFlavor

Flavor for the Docker deployer.

Attributes
config_class: Type[BaseDeployerConfig] property

Config class for the base deployer flavor.

Returns:

Type Description
Type[BaseDeployerConfig]

The config class.

docs_url: Optional[str] property

A url to point at docs explaining this flavor.

Returns:

Type Description
Optional[str]

A flavor docs url.

implementation_class: Type[DockerDeployer] property

Implementation class for this flavor.

Returns:

Type Description
Type[DockerDeployer]

Implementation class for this flavor.

logo_url: str property

A url to represent the flavor in the dashboard.

Returns:

Type Description
str

The flavor logo.

name: str property

Name of the deployer flavor.

Returns:

Type Description
str

Name of the deployer flavor.

sdk_docs_url: Optional[str] property

A url to point at SDK docs explaining this flavor.

Returns:

Type Description
Optional[str]

A flavor SDK docs url.

DockerDeployerSettings(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: BaseDeployerSettings

Docker deployer settings.

Attributes:

Name Type Description
port Optional[int]

The port to expose the deployment on.

allocate_port_if_busy bool

If True, allocate a free port if the configured port is busy.

port_range Tuple[int, int]

The range of ports to search for a free port.

run_args Dict[str, Any]

Arguments to pass to the docker run call. (See https://docker-py.readthedocs.io/en/stable/containers.html for a list of what can be passed.)

Source code in src/zenml/config/secret_reference_mixin.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references are only passed for valid fields.

    This method ensures that secret references are not passed for fields
    that explicitly prevent them or require pydantic validation.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using plain-text secrets.
        **kwargs: Arguments to initialize this object.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            or an attribute which explicitly disallows secret references
            is passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}`. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure values with secrets "
                    "here: https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if secret_utils.is_clear_text_field(field):
            raise ValueError(
                f"Passing the `{key}` attribute as a secret reference is "
                "not allowed."
            )

        requires_validation = has_validators(
            pydantic_class=self.__class__, field_name=key
        )
        if requires_validation:
            raise ValueError(
                f"Passing the attribute `{key}` as a secret reference is "
                "not allowed as additional validation is required for "
                "this attribute."
            )

    super().__init__(**kwargs)
DockerDeploymentMetadata

Bases: BaseModel

Metadata for a Docker deployment.

Functions
from_container(container: Container) -> DockerDeploymentMetadata classmethod

Create a DockerDeploymentMetadata from a docker container.

Parameters:

Name Type Description Default
container Container

The docker container to get the metadata for.

required

Returns:

Type Description
DockerDeploymentMetadata

The metadata for the docker container.

Source code in src/zenml/deployers/docker/docker_deployer.py
@classmethod
def from_container(
    cls, container: Container
) -> "DockerDeploymentMetadata":
    """Create a DockerDeploymentMetadata from a docker container.

    Args:
        container: The docker container to get the metadata for.

    Returns:
        The metadata for the docker container.
    """
    image = container.image
    if image is not None:
        image_url = image.attrs["RepoTags"][0]
        image_id = image.attrs["Id"]
    else:
        image_url = None
        image_id = None
    if container.ports:
        ports = list(container.ports.values())
        if len(ports) > 0:
            port = int(ports[0][0]["HostPort"])
        else:
            port = None
    else:
        port = None
    return cls(
        port=port,
        container_id=container.id,
        container_name=container.name,
        container_image_uri=image_url,
        container_image_id=image_id,
        container_status=container.status,
    )
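
The port-extraction branch above digs the published host port out of docker-py's `container.ports` mapping. That logic can be isolated as a small pure function (illustrative sketch; the mapping shape shown is the one docker-py produces for published ports):

```python
from typing import Any, Dict, List, Optional


def host_port_from_ports(
    ports: Dict[str, List[Dict[str, Any]]],
) -> Optional[int]:
    """Extract the first published host port from a docker-py
    `container.ports` mapping, mirroring the logic above.

    Example shape:
        {"8000/tcp": [{"HostIp": "0.0.0.0", "HostPort": "32768"}]}
    """
    if not ports:
        return None
    bindings = list(ports.values())
    if not bindings:
        return None
    return int(bindings[0][0]["HostPort"])
```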
from_deployment(deployment: DeploymentResponse) -> DockerDeploymentMetadata classmethod

Create a DockerDeploymentMetadata from a deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to get the metadata for.

required

Returns:

Type Description
DockerDeploymentMetadata

The metadata for the deployment.

Source code in src/zenml/deployers/docker/docker_deployer.py
@classmethod
def from_deployment(
    cls, deployment: DeploymentResponse
) -> "DockerDeploymentMetadata":
    """Create a DockerDeploymentMetadata from a deployment.

    Args:
        deployment: The deployment to get the metadata for.

    Returns:
        The metadata for the deployment.
    """
    return cls.model_validate(deployment.deployment_metadata)
Functions

Modules

exceptions

Base class for all ZenML deployers.

Classes
DeployerError

Bases: Exception

Base class for deployer errors.

DeploymentAlreadyExistsError(message: Optional[str] = None, url: Optional[str] = None)

Bases: EntityExistsError, DeployerError

Error raised when a deployment already exists.

Source code in src/zenml/exceptions.py
def __init__(
    self,
    message: Optional[str] = None,
    url: Optional[str] = None,
):
    """The BaseException used to format messages displayed to the user.

    Args:
        message: Message with details of exception. This message
                 will be appended with another message directing user to
                 `url` for more information. If `None`, then default
                 Exception behavior is used.
        url: URL to point to in exception message. If `None`, then no url
             is appended.
    """
    if message and url:
        message += f" For more information, visit {url}."
    super().__init__(message)
DeploymentDeployerMismatchError

Bases: DeployerError

Error raised when a deployment is not managed by this deployer.

DeploymentDeprovisionError

Bases: DeployerError

Error raised when a deployment deprovisioning fails.

DeploymentHTTPError

Bases: DeployerError

Error raised when an HTTP request to a deployment fails.

DeploymentInvalidParametersError

Bases: DeployerError

Error raised when the parameters for a deployment are invalid.

DeploymentLogsNotFoundError

Bases: KeyError, DeployerError

Error raised when pipeline logs are not found.

DeploymentNotFoundError

Bases: KeyError, DeployerError

Error raised when a deployment is not found.

DeploymentProvisionError

Bases: DeployerError

Error raised when a deployment provisioning fails.

DeploymentSnapshotMismatchError

Bases: DeployerError

Error raised when a deployment snapshot does not match the current deployer.

DeploymentTimeoutError

Bases: DeployerError

Error raised when a deployment provisioning or deprovisioning times out.
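
Several of the errors above (e.g. `DeploymentNotFoundError`, `DeploymentLogsNotFoundError`) inherit from both `KeyError` and `DeployerError`, so callers can catch them either as a generic lookup failure or as a deployer-specific error. A minimal sketch with stand-in classes (not the real ZenML definitions):

```python
class DeployerError(Exception):
    """Stand-in for the base deployer error class."""


class DeploymentNotFoundError(KeyError, DeployerError):
    """Dual inheritance: catchable as KeyError or as DeployerError."""


def lookup(name: str) -> str:
    # Hypothetical lookup that always fails, to exercise the hierarchy.
    raise DeploymentNotFoundError(f"deployment '{name}' not found")
```

A caller handling only `DeployerError` still catches the lookup failure, which keeps generic error handling in one place.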

Functions

local

Local daemon deployer implementation.

Modules
local_deployer

Implementation of the ZenML Local deployer.

Classes
LocalDeployer(name: str, id: UUID, config: StackComponentConfig, flavor: str, type: StackComponentType, user: Optional[UUID], created: datetime, updated: datetime, environment: Optional[Dict[str, str]] = None, secrets: Optional[List[UUID]] = None, labels: Optional[Dict[str, Any]] = None, connector_requirements: Optional[ServiceConnectorRequirements] = None, connector: Optional[UUID] = None, connector_resource_id: Optional[str] = None, *args: Any, **kwargs: Any)

Bases: BaseDeployer

Deployer that runs deployments as local daemon processes.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self,
    name: str,
    id: UUID,
    config: StackComponentConfig,
    flavor: str,
    type: StackComponentType,
    user: Optional[UUID],
    created: datetime,
    updated: datetime,
    environment: Optional[Dict[str, str]] = None,
    secrets: Optional[List[UUID]] = None,
    labels: Optional[Dict[str, Any]] = None,
    connector_requirements: Optional[ServiceConnectorRequirements] = None,
    connector: Optional[UUID] = None,
    connector_resource_id: Optional[str] = None,
    *args: Any,
    **kwargs: Any,
):
    """Initializes a StackComponent.

    Args:
        name: The name of the component.
        id: The unique ID of the component.
        config: The config of the component.
        flavor: The flavor of the component.
        type: The type of the component.
        user: The ID of the user who created the component.
        created: The creation time of the component.
        updated: The last update time of the component.
        environment: Environment variables to set when running on this
            component.
        secrets: Secrets to set as environment variables when running on
            this component.
        labels: The labels of the component.
        connector_requirements: The requirements for the connector.
        connector: The ID of a connector linked to the component.
        connector_resource_id: The custom resource ID to access through
            the connector.
        *args: Additional positional arguments.
        **kwargs: Additional keyword arguments.

    Raises:
        ValueError: If a secret reference is passed as name.
    """
    if secret_utils.is_secret_reference(name):
        raise ValueError(
            "Passing the `name` attribute of a stack component as a "
            "secret reference is not allowed."
        )

    self.id = id
    self.name = name
    self._config = config
    self.flavor = flavor
    self.type = type
    self.user = user
    self.created = created
    self.updated = updated
    self.labels = labels
    self.environment = environment or {}
    self.secrets = secrets or []
    self.connector_requirements = connector_requirements
    self.connector = connector
    self.connector_resource_id = connector_resource_id
    self._connector_instance: Optional[ServiceConnector] = None
Attributes
config: LocalDeployerConfig property

Returns the LocalDeployerConfig config.

Returns:

Type Description
LocalDeployerConfig

The configuration.

settings_class: Optional[Type[BaseSettings]] property

Settings class for the local deployer.

Returns:

Type Description
Optional[Type[BaseSettings]]

The settings class.

Functions
do_deprovision_deployment(deployment: DeploymentResponse, timeout: int) -> Optional[DeploymentOperationalState]

Deprovision a local daemon deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to stop.

required
timeout int

Unused for local daemon stop.

required

Returns:

Type Description
Optional[DeploymentOperationalState]

None, indicating immediate deletion completed.

Raises:

Type Description
DeploymentNotFoundError

If the daemon is not found.

DeploymentDeprovisionError

If stopping fails.

Source code in src/zenml/deployers/local/local_deployer.py
def do_deprovision_deployment(
    self, deployment: DeploymentResponse, timeout: int
) -> Optional[DeploymentOperationalState]:
    """Deprovision a local daemon deployment.

    Args:
        deployment: The deployment to stop.
        timeout: Unused for local daemon stop.

    Returns:
        None, indicating immediate deletion completed.

    Raises:
        DeploymentNotFoundError: If the daemon is not found.
        DeploymentDeprovisionError: If stopping fails.
    """
    meta = LocalDeploymentMetadata.from_deployment(deployment)
    if not meta.pid:
        raise DeploymentNotFoundError(
            f"Daemon for deployment '{deployment.name}' missing."
        )

    try:
        stop_process(meta.pid)
    except Exception as e:
        raise DeploymentDeprovisionError(
            f"Failed to stop daemon for deployment '{deployment.name}': "
            f"{e}"
        ) from e
    else:
        shutil.rmtree(self._runtime_dir(deployment.id))

    return None
do_get_deployment_state(deployment: DeploymentResponse) -> DeploymentOperationalState

Get information about a local daemon deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to inspect.

required

Returns:

Type Description
DeploymentOperationalState

Operational state of the deployment.

Source code in src/zenml/deployers/local/local_deployer.py
def do_get_deployment_state(
    self, deployment: DeploymentResponse
) -> DeploymentOperationalState:
    """Get information about a local daemon deployment.

    Args:
        deployment: The deployment to inspect.

    Returns:
        Operational state of the deployment.
    """
    assert deployment.snapshot, "Pipeline snapshot not found"
    meta = LocalDeploymentMetadata.from_deployment(deployment)

    state = DeploymentOperationalState(
        status=DeploymentStatus.ERROR,
        metadata=meta.model_dump(exclude_none=True),
    )

    if not meta.pid:
        state.status = DeploymentStatus.ABSENT
        return state

    if not psutil.pid_exists(meta.pid):
        return state

    if not meta.port or not meta.address:
        return state

    # Use pending until we can confirm the daemon is reachable
    state.status = DeploymentStatus.PENDING
    address = meta.address
    if address == "0.0.0.0":  # nosec
        address = "localhost"
    state.url = f"http://{address}:{meta.port}"

    settings = (
        deployment.snapshot.pipeline_configuration.deployment_settings
    )
    health_check_path = f"{settings.root_url_path}{settings.api_url_path}{settings.health_url_path}"
    health_check_url = f"{state.url}{health_check_path}"

    # Attempt to connect to the daemon and set the status to RUNNING
    # if successful
    try:
        response = requests.get(health_check_url, timeout=3)
        if response.status_code == 200:
            state.status = DeploymentStatus.RUNNING
        else:
            logger.debug(
                f"Daemon for deployment '{deployment.name}' returned "
                f"status code {response.status_code} for health check "
                f"at '{health_check_url}'"
            )
    except Exception as e:
        logger.debug(
            f"Daemon for deployment '{deployment.name}' is not "
            f"reachable at '{health_check_url}': {e}"
        )
        # It can take a long time after the deployment is started until
        # the deployment is ready to serve requests, but this isn't an
        # error condition. We return PENDING instead of ERROR here to
        # signal to the polling in the base deployer class to keep trying.
        state.status = DeploymentStatus.PENDING

    state.metadata = meta.model_dump(exclude_none=True)

    return state
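The status logic above composes a health-check URL from the deployment's address, port, and URL-path settings, mapping the bind-all address to one that is reachable from the local machine. The following is an illustrative sketch of that composition, not ZenML's actual code; the default path values are assumptions for the example:

```python
def health_check_url(
    address: str,
    port: int,
    root_url_path: str = "",
    api_url_path: str = "/api",
    health_url_path: str = "/health",
) -> str:
    """Compose the health-check URL the deployer probes (sketch)."""
    # "0.0.0.0" means "bind on all interfaces"; clients connect via localhost.
    if address == "0.0.0.0":  # nosec
        address = "localhost"
    return (
        f"http://{address}:{port}"
        f"{root_url_path}{api_url_path}{health_url_path}"
    )
```

A 200 response from this URL is what flips the status from PENDING to RUNNING; anything else leaves the deployment in PENDING so the base deployer keeps polling.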
do_get_deployment_state_logs(deployment: DeploymentResponse, follow: bool = False, tail: Optional[int] = None) -> Generator[str, bool, None]

Read logs from the local daemon log file.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to read logs for.

required
follow bool

Stream logs if True.

False
tail Optional[int]

Return only last N lines if set.

None

Yields:

Type Description
str

Log lines.

Raises:

Type Description
DeploymentLogsNotFoundError

If the log file is missing.

DeployerError

For unexpected errors.

Source code in src/zenml/deployers/local/local_deployer.py
def do_get_deployment_state_logs(
    self,
    deployment: DeploymentResponse,
    follow: bool = False,
    tail: Optional[int] = None,
) -> Generator[str, bool, None]:
    """Read logs from the local daemon log file.

    Args:
        deployment: The deployment to read logs for.
        follow: Stream logs if True.
        tail: Return only last N lines if set.

    Yields:
        Log lines.

    Raises:
        DeploymentLogsNotFoundError: If the log file is missing.
        DeployerError: For unexpected errors.
    """
    meta = LocalDeploymentMetadata.from_deployment(deployment)
    log_file = meta.log_file
    if not log_file or not os.path.exists(log_file):
        raise DeploymentLogsNotFoundError(
            f"Log file not found for deployment '{deployment.name}'"
        )

    try:

        def _read_tail(path: str, n: int) -> Generator[str, bool, None]:
            with open(path, "r", encoding="utf-8", errors="ignore") as f:
                lines = f.readlines()
                for line in lines[-n:]:
                    yield line.rstrip("\n")

        if not follow:
            if tail and tail > 0:
                yield from _read_tail(log_file, tail)
            else:
                with open(
                    log_file, "r", encoding="utf-8", errors="ignore"
                ) as f:
                    for line in f:
                        yield line.rstrip("\n")
            return

        with open(log_file, "r", encoding="utf-8", errors="ignore") as f:
            if not tail:
                tail = DEFAULT_TAIL_FOLLOW_LINES
            lines = f.readlines()
            for line in lines[-tail:]:
                yield line.rstrip("\n")

            while True:
                where = f.tell()
                line = f.readline()
                if not line:
                    time.sleep(0.2)
                    f.seek(where)
                    continue
                yield line.rstrip("\n")

    except DeploymentLogsNotFoundError:
        raise
    except Exception as e:
        raise DeployerError(
            f"Unexpected error while reading logs for deployment "
            f"'{deployment.name}': {e}"
        ) from e
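The non-following branch reads the whole log file into memory and yields only the last `n` lines, as the nested `_read_tail` helper does. A minimal standalone sketch of that behavior:

```python
from typing import Iterator


def read_tail(path: str, n: int) -> Iterator[str]:
    """Yield the last ``n`` lines of a log file, newline-stripped.

    Sketch of the non-following tail branch above; like the shown
    implementation, it reads the full file into memory first.
    """
    with open(path, "r", encoding="utf-8", errors="ignore") as f:
        for line in f.readlines()[-n:]:
            yield line.rstrip("\n")
```

For very large log files a production implementation might seek backwards from the end instead, but reading everything keeps the logic simple and matches the code shown here.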
do_provision_deployment(deployment: DeploymentResponse, stack: Stack, environment: Dict[str, str], secrets: Dict[str, str], timeout: int) -> DeploymentOperationalState

Provision a local daemon deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to run.

required
stack Stack

The active stack (unused by local deployer).

required
environment Dict[str, str]

Environment variables for the app.

required
secrets Dict[str, str]

Secret environment variables for the app.

required
timeout int

Unused for immediate daemonization.

required

Returns:

Type Description
DeploymentOperationalState

Operational state of the provisioned deployment.

Raises:

Type Description
DeploymentProvisionError

If the daemon cannot be started.

Source code in src/zenml/deployers/local/local_deployer.py
def do_provision_deployment(
    self,
    deployment: DeploymentResponse,
    stack: "Stack",
    environment: Dict[str, str],
    secrets: Dict[str, str],
    timeout: int,
) -> DeploymentOperationalState:
    """Provision a local daemon deployment.

    Args:
        deployment: The deployment to run.
        stack: The active stack (unused by local deployer).
        environment: Environment variables for the app.
        secrets: Secret environment variables for the app.
        timeout: Unused for immediate daemonization.

    Returns:
        Operational state of the provisioned deployment.

    Raises:
        DeploymentProvisionError: If the daemon cannot be started.
    """
    assert deployment.snapshot, "Pipeline snapshot not found"

    child_env: Dict[str, str] = dict(os.environ)
    child_env.update(environment)
    child_env.update(secrets)

    settings = cast(
        LocalDeployerSettings,
        self.get_settings(deployment.snapshot),
    )

    existing_meta = LocalDeploymentMetadata.from_deployment(deployment)

    preferred_ports: List[int] = []
    if settings.port:
        preferred_ports.append(settings.port)
    if existing_meta.port:
        preferred_ports.append(existing_meta.port)

    try:
        port = lookup_preferred_or_free_port(
            preferred_ports=preferred_ports,
            allocate_port_if_busy=settings.allocate_port_if_busy,
            range=settings.port_range,
            address=settings.address,
        )
    except IOError as e:
        raise DeploymentProvisionError(str(e))

    address = settings.address
    # Validate that the address is a valid IP address
    try:
        ipaddress.ip_address(address)
    except ValueError:
        raise DeploymentProvisionError(
            f"Invalid address: {address}. Must be a valid IP address."
        )

    if address == "0.0.0.0":  # nosec
        address = "localhost"
    url = f"http://{address}:{port}"

    log_file = existing_meta.log_file or self._log_file_path(deployment.id)

    runtime_dir = self._runtime_dir(deployment.id)
    if not os.path.exists(runtime_dir):
        os.makedirs(runtime_dir, exist_ok=True)

    if existing_meta.pid:
        try:
            stop_process(existing_meta.pid)
        except Exception as e:
            logger.warning(
                f"Failed to stop existing daemon process for deployment "
                f"'{deployment.name}' with PID {existing_meta.pid}: {e}"
            )

    if settings.blocking:
        self._update_deployment(
            deployment,
            DeploymentOperationalState(
                status=DeploymentStatus.RUNNING,
                url=url,
                metadata=LocalDeploymentMetadata(
                    pid=os.getpid(),
                    port=port,
                    address=settings.address,
                ).model_dump(exclude_none=True),
            ),
        )
        start_deployment_app(
            deployment_id=deployment.id,
            host=settings.address,
            port=port,
        )
        self._update_deployment(
            deployment,
            DeploymentOperationalState(
                status=DeploymentStatus.ABSENT,
                metadata=None,
            ),
        )
        # Exiting early here because the deployment takes over the current
        # process and anything else is irrelevant.
        sys.exit(0)

    # Launch the deployment app as a background subprocess.
    python_exe = sys.executable
    module = "zenml.deployers.server.app"
    cmd = [
        python_exe,
        "-m",
        module,
        "--deployment_id",
        str(deployment.id),
        "--log_file",
        os.path.abspath(log_file),
        "--host",
        settings.address,
        "--port",
        str(port),
    ]

    try:
        os.makedirs(os.path.dirname(log_file), exist_ok=True)
        proc = subprocess.Popen(
            cmd,
            cwd=os.getcwd(),
            env=child_env,
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
            start_new_session=True,
            close_fds=True,
        )
    except Exception as e:
        raise DeploymentProvisionError(
            f"Failed to start subprocess for deployment "
            f"'{deployment.name}': {e}"
        ) from e

    metadata = LocalDeploymentMetadata(
        pid=proc.pid,
        port=port,
        address=settings.address,
        log_file=log_file,
    )

    state = DeploymentOperationalState(
        status=DeploymentStatus.PENDING,
        url=url,
        metadata=metadata.model_dump(exclude_none=True),
    )

    return state
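Port selection tries the configured port first, then the port recorded from a previous run, and only then scans a range for a free port. The helper below is a hypothetical stand-in for `lookup_preferred_or_free_port` (whose real signature may differ), showing the try-preferred-then-scan pattern with a plain bind probe:

```python
import socket
from typing import List


def pick_port(preferred: List[int], low: int = 8000, high: int = 8010) -> int:
    """Return the first bindable port: preferred ports first, then a range scan.

    Hypothetical sketch mirroring `lookup_preferred_or_free_port`.
    """

    def is_free(port: int) -> bool:
        # A successful bind means nothing else is listening on the port.
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
            try:
                s.bind(("127.0.0.1", port))
                return True
            except OSError:
                return False

    for port in preferred:
        if is_free(port):
            return port
    for port in range(low, high + 1):
        if is_free(port):
            return port
    raise IOError(f"No free port available in range {low}-{high}")
```

Raising `IOError` on exhaustion matches the exception the provisioning code converts into a `DeploymentProvisionError`.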
LocalDeployerConfig(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: BaseDeployerConfig, LocalDeployerSettings

Local deployer config.

Source code in src/zenml/stack/stack_component.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references don't clash with pydantic validation.

    StackComponents allow the specification of all their string attributes
    using secret references of the form `{{secret_name.key}}`. This however
    is only possible when the stack component does not perform any explicit
    validation of this attribute using pydantic validators. If this were
    the case, the validation would run on the secret reference and would
    fail or in the worst case, modify the secret reference and lead to
    unexpected behavior. This method ensures that no attributes that require
    custom pydantic validation are set as secret references.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using
            plain-text secrets.
        **kwargs: Arguments to initialize this stack component.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            is passed as a secret reference, or if the `name` attribute
            was passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}` for a `{self.__class__.__name__}` "
                    "stack component. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure your stack "
                    "components with secrets here: "
                    "https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if pydantic_utils.has_validators(
            pydantic_class=self.__class__, field_name=key
        ):
            raise ValueError(
                f"Passing the stack component attribute `{key}` as a "
                "secret reference is not allowed as additional validation "
                "is required for this attribute."
            )

    super().__init__(**kwargs)
Attributes
is_local: bool property

Checks if this stack component is running locally.

Returns:

Type Description
bool

True if this config is for a local component.

LocalDeployerFlavor

Bases: BaseDeployerFlavor

Flavor for the Local daemon deployer.

Attributes
config_class: Type[BaseDeployerConfig] property

Config class for the flavor.

Returns:

Type Description
Type[BaseDeployerConfig]

The config class.

docs_url: Optional[str] property

A url to point at docs explaining this flavor.

Returns:

Type Description
Optional[str]

A flavor docs url.

implementation_class: Type[LocalDeployer] property

Implementation class for this flavor.

Returns:

Type Description
Type[LocalDeployer]

The implementation class.

logo_url: str property

A url to represent the flavor in the dashboard.

Returns:

Type Description
str

The flavor logo.

name: str property

Name of the deployer flavor.

Returns:

Type Description
str

Flavor name.

sdk_docs_url: Optional[str] property

A url to point at SDK docs explaining this flavor.

Returns:

Type Description
Optional[str]

A flavor SDK docs url.

LocalDeployerSettings(warn_about_plain_text_secrets: bool = False, **kwargs: Any)

Bases: BaseDeployerSettings

Local deployer settings.

Attributes:

Name Type Description
port Optional[int]

Preferred port to run on.

allocate_port_if_busy bool

Whether to allocate a free port if busy.

port_range Tuple[int, int]

Range to scan when allocating a free port.

address str

Address to bind the server to.

blocking bool

Whether to run the deployment in the current process instead of running it as a daemon process.
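As a configuration sketch, these settings could be expressed as a plain dictionary like the one below. This is an assumption for illustration: the settings key under which ZenML expects them (e.g. `"deployer"` vs. `"deployer.local"`) depends on your ZenML version, so check your installation's docs before relying on it.

```python
# Hypothetical LocalDeployerSettings values as a plain dict (illustrative only).
local_deployer_settings = {
    "port": 8000,                   # preferred port to run on
    "allocate_port_if_busy": True,  # fall back to a free port if 8000 is taken
    "port_range": (8000, 8100),     # range scanned for the fallback port
    "address": "127.0.0.1",         # bind address; must be a valid IP address
    "blocking": False,              # False = run as a background daemon process
}
```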

Source code in src/zenml/config/secret_reference_mixin.py
def __init__(
    self, warn_about_plain_text_secrets: bool = False, **kwargs: Any
) -> None:
    """Ensures that secret references are only passed for valid fields.

    This method ensures that secret references are not passed for fields
    that explicitly prevent them or require pydantic validation.

    Args:
        warn_about_plain_text_secrets: If true, then warns about using plain-text secrets.
        **kwargs: Arguments to initialize this object.

    Raises:
        ValueError: If an attribute that requires custom pydantic validation
            or an attribute which explicitly disallows secret references
            is passed as a secret reference.
    """
    for key, value in kwargs.items():
        try:
            field = self.__class__.model_fields[key]
        except KeyError:
            # Value for a private attribute or non-existing field, this
            # will fail during the upcoming pydantic validation
            continue

        if value is None:
            continue

        if not secret_utils.is_secret_reference(value):
            if (
                secret_utils.is_secret_field(field)
                and warn_about_plain_text_secrets
            ):
                logger.warning(
                    "You specified a plain-text value for the sensitive "
                    f"attribute `{key}`. This is currently only a warning, "
                    "but future versions of ZenML will require you to pass "
                    "in sensitive information as secrets. Check out the "
                    "documentation on how to configure values with secrets "
                    "here: https://docs.zenml.io/deploying-zenml/deploying-zenml/secret-management"
                )
            continue

        if secret_utils.is_clear_text_field(field):
            raise ValueError(
                f"Passing the `{key}` attribute as a secret reference is "
                "not allowed."
            )

        requires_validation = has_validators(
            pydantic_class=self.__class__, field_name=key
        )
        if requires_validation:
            raise ValueError(
                f"Passing the attribute `{key}` as a secret reference is "
                "not allowed as additional validation is required for "
                "this attribute."
            )

    super().__init__(**kwargs)
LocalDeploymentMetadata

Bases: BaseModel

Metadata for a local daemon deployment.

Attributes:

Name Type Description
pid Optional[int]

PID of the daemon process.

port Optional[int]

TCP port the app listens on.

address Optional[str]

IP address the app binds to.

log_file Optional[str]

Path to log file.

Functions
from_deployment(deployment: DeploymentResponse) -> LocalDeploymentMetadata classmethod

Build metadata object from a deployment record.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment to read metadata from.

required

Returns:

Type Description
LocalDeploymentMetadata

Parsed local deployment metadata.

Source code in src/zenml/deployers/local/local_deployer.py
@classmethod
def from_deployment(
    cls, deployment: DeploymentResponse
) -> "LocalDeploymentMetadata":
    """Build metadata object from a deployment record.

    Args:
        deployment: The deployment to read metadata from.

    Returns:
        Parsed local deployment metadata.
    """
    return cls.model_validate(deployment.deployment_metadata or {})
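The key detail is the `or {}` fallback: a deployment with no recorded metadata still parses into an object whose optional fields are all `None`. A stdlib dataclass sketch of the same tolerant-parsing pattern (`Meta` here is an illustrative stand-in, not the real pydantic model):

```python
from dataclasses import dataclass
from typing import Any, Dict, Optional


@dataclass
class Meta:
    """Illustrative stand-in for LocalDeploymentMetadata."""

    pid: Optional[int] = None
    port: Optional[int] = None
    address: Optional[str] = None
    log_file: Optional[str] = None

    @classmethod
    def from_metadata(cls, metadata: Optional[Dict[str, Any]]) -> "Meta":
        # Tolerate a missing or empty metadata dict, as from_deployment
        # does with `deployment.deployment_metadata or {}`.
        data = metadata or {}
        known = ("pid", "port", "address", "log_file")
        return cls(**{k: data[k] for k in known if k in data})
```

With pydantic, `model_validate` plays the same role and `model_dump(exclude_none=True)` drops the unset fields when the metadata is written back.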
Functions

server

Deployment server web application implementation.

Classes
BaseAppExtension

Bases: ABC

Abstract base for app extensions.

Extensions provide advanced framework-specific capabilities like:

  • Custom authentication/authorization
  • Observability (logging, tracing, metrics)
  • Complex routers with framework-specific features
  • OpenAPI customizations
  • Advanced middleware patterns

Subclasses must implement install() to modify the app.

Functions
install(app_runner: BaseDeploymentAppRunner) -> None abstractmethod

Install extension into the application.

Parameters:

Name Type Description Default
app_runner BaseDeploymentAppRunner

The deployment app runner instance being used to build and run the web application.

required

Raises:

Type Description
RuntimeError

If installation fails.

Source code in src/zenml/deployers/server/extensions.py
@abstractmethod
def install(
    self,
    app_runner: "BaseDeploymentAppRunner",
) -> None:
    """Install extension into the application.

    Args:
        app_runner: The deployment app runner instance being used to build
            and run the web application.

    Raises:
        RuntimeError: If installation fails.
    """
BaseDeploymentAppRunner(deployment: Union[str, UUID, DeploymentResponse], **kwargs: Any)

Bases: ABC

Base class for deployment app runners.

This class is responsible for building and running the ASGI compatible web application (e.g. FastAPI, Django, Flask, Falcon, Quart, BlackSheep, etc.) and the associated deployment service for the pipeline deployment. It also acts as an adaptation layer between the REST API interface and the deployment service, preserving the following separation of concerns between the two components:

  • the ASGI application is responsible for handling the HTTP requests and responses to the user
  • the deployment service is responsible for handling the business logic

The deployment service code should be free of any ASGI application-specific code and concerns, and vice versa. This allows the two components to be extended independently and swapped easily.

Implementations of this class must use the deployment and its settings to configure and run the web application (e.g. FastAPI, Flask, Falcon, Quart, BlackSheep, etc.) that wraps the deployment service according to the user's specifications, particularly concerning the following:

  • exposed endpoints (URL paths, methods, input/output models)
  • middleware (CORS, authentication, logging, etc.)
  • error handling
  • lifecycle management (startup, shutdown)
  • custom hooks (startup, shutdown)
  • app configuration (workers, host, port, thread pool size, etc.)

The following methods must be provided by implementations of this class:

  • flavor: Return the flavor class associated with this deployment application runner.
  • build: Build and return an ASGI compatible web application (i.e. an ASGIApplication object that can be run with uvicorn); most Python ASGI frameworks provide such an object. This method must also register all the endpoints, middleware and extensions that are either required internally or supplied to it, and it must either configure the startup and shutdown methods to run as part of the ASGI application's lifespan or, alternatively, override the _run_asgi_app method to handle startup and shutdown itself.
  • _get_dashboard_endpoints: Gets the dashboard endpoints specs from the deployment configuration. Only required if the dashboard files path is set in the deployment configuration and the app runner supports serving a dashboard alongside the API.
  • _build_cors_middleware: Builds the CORS middleware from the CORS settings in the deployment configuration.

Initialize the deployment app.

Parameters:

Name Type Description Default
deployment Union[str, UUID, DeploymentResponse]

The deployment to run.

required
**kwargs Any

Additional keyword arguments for the deployment app runner.

{}
Source code in src/zenml/deployers/server/app.py
def __init__(
    self, deployment: Union[str, UUID, "DeploymentResponse"], **kwargs: Any
):
    """Initialize the deployment app.

    Args:
        deployment: The deployment to run.
        **kwargs: Additional keyword arguments for the deployment app runner.
    """
    self.deployment = self.load_deployment(deployment)
    assert self.deployment.snapshot is not None
    self.snapshot = self.deployment.snapshot

    self.settings = (
        self.snapshot.pipeline_configuration.deployment_settings
    )

    self.service = self.load_deployment_service()

    # Create framework-specific adapters
    self.endpoint_adapter = self._create_endpoint_adapter()
    self.middleware_adapter = self._create_middleware_adapter()
    self._asgi_app: Optional[ASGIApplication] = None

    self.endpoints: List[EndpointSpec] = []
    self.middlewares: List[MiddlewareSpec] = []
    self.extensions: List[AppExtensionSpec] = []
Attributes
asgi_app: ASGIApplication property

Get the ASGI application.

Returns:

Type Description
ASGIApplication

The ASGI application.

Raises:

Type Description
RuntimeError

If the ASGI application is not built yet.

flavor: BaseDeploymentAppRunnerFlavor abstractmethod property

Return the flavor associated with this deployment application runner.

Returns:

Type Description
BaseDeploymentAppRunnerFlavor

The flavor associated with this deployment application runner.

Functions
build(middlewares: List[MiddlewareSpec], endpoints: List[EndpointSpec], extensions: List[AppExtensionSpec]) -> ASGIApplication abstractmethod

Build the ASGI compatible web application.

Parameters:

Name Type Description Default
middlewares List[MiddlewareSpec]

The middleware to register.

required
endpoints List[EndpointSpec]

The endpoints to register.

required
extensions List[AppExtensionSpec]

The extensions to install.

required

Returns:

Type Description
ASGIApplication

The ASGI compatible web application.

Source code in src/zenml/deployers/server/app.py
@abstractmethod
def build(
    self,
    middlewares: List[MiddlewareSpec],
    endpoints: List[EndpointSpec],
    extensions: List[AppExtensionSpec],
) -> ASGIApplication:
    """Build the ASGI compatible web application.

    Args:
        middlewares: The middleware to register.
        endpoints: The endpoints to register.
        extensions: The extensions to install.

    Returns:
        The ASGI compatible web application.
    """
dashboard_files_path() -> Optional[str]

Get the absolute path of the dashboard files directory.

Returns:

Type Description
Optional[str]

Absolute path.

Raises:

Type Description
ValueError

If the dashboard files path is absolute.

RuntimeError

If the dashboard files path does not exist.

Source code in src/zenml/deployers/server/app.py
def dashboard_files_path(self) -> Optional[str]:
    """Get the absolute path of the dashboard files directory.

    Returns:
        Absolute path.

    Raises:
        ValueError: If the dashboard files path is absolute.
        RuntimeError: If the dashboard files path does not exist.
    """
    # If an absolute path is provided, use it
    dashboard_files_path = self.settings.dashboard_files_path
    if not dashboard_files_path:
        import zenml

        return os.path.join(
            zenml.__path__[0], "deployers", "server", "dashboard"
        )

    if os.path.isabs(dashboard_files_path):
        raise ValueError(
            f"Dashboard files path '{dashboard_files_path}' must be "
            "relative to the source root, not absolute."
        )

    # Otherwise, assume this is a path relative to the source root
    source_root = source_utils.get_source_root()
    dashboard_path = os.path.join(source_root, dashboard_files_path)
    if not os.path.exists(dashboard_path):
        raise RuntimeError(
            f"Dashboard files path '{dashboard_path}' does not exist. "
            f"Please check that the path exists and that the source root "
            f"is set correctly. Hint: run `zenml init` in your local source "
            f"directory to initialize the source root path."
        )
    return dashboard_path
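The path resolution above follows three rules: no configured path falls back to the package's bundled dashboard, an absolute path is rejected, and a relative path is joined onto the source root and verified. A minimal standalone sketch of that logic (the `resolve_dashboard_path` helper and `package_default` parameter are hypothetical, introduced here only to make the sketch self-contained):

```python
import os
from typing import Optional


def resolve_dashboard_path(
    configured: Optional[str], package_default: str, source_root: str
) -> str:
    """Sketch of the resolution rules above.

    - No configured path: fall back to the package's bundled dashboard.
    - Absolute configured path: rejected; paths must be source-root relative.
    - Relative configured path: joined onto the source root and verified.
    """
    if not configured:
        return package_default
    if os.path.isabs(configured):
        raise ValueError(
            f"Dashboard files path '{configured}' must be relative, not absolute"
        )
    resolved = os.path.join(source_root, configured)
    if not os.path.exists(resolved):
        raise RuntimeError(f"Dashboard files path '{resolved}' does not exist")
    return resolved
```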
install_extensions(*extension_specs: AppExtensionSpec) -> None

Install the given app extensions.

Parameters:

  extension_specs (AppExtensionSpec, default ()): The app extensions to install.

Raises:

  ValueError: If the extension is not a subclass of BaseAppExtension.
  RuntimeError: If the extension cannot be initialized.

Source code in src/zenml/deployers/server/app.py
def install_extensions(self, *extension_specs: AppExtensionSpec) -> None:
    """Install the given app extensions.

    Args:
        extension_specs: The app extensions to install.

    Raises:
        ValueError: If the extension is not a subclass of BaseAppExtension.
        RuntimeError: If the extension cannot be initialized.
    """
    for ext_spec in extension_specs:
        # Load extension
        ext_spec.load_sources()
        extension_obj = ext_spec.resolve_extension_handler()

        # Handle callable vs class-based extensions
        if isinstance(extension_obj, type):
            if not issubclass(extension_obj, BaseAppExtension):
                raise ValueError(
                    f"Extension type {extension_obj} is not a subclass of "
                    "BaseAppExtension"
                )

            try:
                extension_instance = extension_obj(
                    **ext_spec.extension_kwargs
                )
            except Exception as e:
                raise RuntimeError(
                    f"Failed to initialize extension class {extension_obj}: {e}"
                ) from e

            extension_instance.install(self)
        else:
            # Simple callable extension
            extension_obj(
                app_runner=self,
                **ext_spec.extension_kwargs,
            )
        self.extensions.append(ext_spec)
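The class-vs-callable dispatch above can be sketched standalone. `BaseExtension` and `install_extension` below are hypothetical stand-ins for `BaseAppExtension` and the loop body, assuming only what the code above shows: classes are validated, instantiated with the extension kwargs, and installed; plain callables are invoked directly with the runner:

```python
from typing import Any, Callable, Union


class BaseExtension:
    """Stand-in for BaseAppExtension: subclasses implement install()."""

    def install(self, runner: Any) -> None:
        raise NotImplementedError


def install_extension(
    runner: Any, ext: Union[type, Callable[..., None]], **kwargs: Any
) -> None:
    """Mirror the dispatch above: classes are instantiated and installed,
    plain callables are invoked directly with the runner."""
    if isinstance(ext, type):
        if not issubclass(ext, BaseExtension):
            raise ValueError(f"{ext} is not a subclass of BaseExtension")
        ext(**kwargs).install(runner)
    else:
        ext(app_runner=runner, **kwargs)
```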
load_app_runner(deployment: Union[str, UUID, DeploymentResponse]) -> BaseDeploymentAppRunner classmethod

Load the app runner for the deployment.

Parameters:

  deployment (Union[str, UUID, DeploymentResponse], required): The deployment to load the app runner for.

Returns:

  BaseDeploymentAppRunner: The app runner for the deployment.

Raises:

  RuntimeError: If the deployment app runner cannot be loaded.

Source code in src/zenml/deployers/server/app.py
@classmethod
def load_app_runner(
    cls, deployment: Union[str, UUID, "DeploymentResponse"]
) -> "BaseDeploymentAppRunner":
    """Load the app runner for the deployment.

    Args:
        deployment: The deployment to load the app runner for.

    Returns:
        The app runner for the deployment.

    Raises:
        RuntimeError: If the deployment app runner cannot be loaded.
    """
    deployment = cls.load_deployment(deployment)
    assert deployment.snapshot is not None

    settings = (
        deployment.snapshot.pipeline_configuration.deployment_settings
    )

    app_runner_flavor = (
        BaseDeploymentAppRunnerFlavor.load_app_runner_flavor(settings)
    )

    app_runner_cls = app_runner_flavor.implementation_class

    logger.info(
        f"Instantiating deployment app runner class '{app_runner_cls}' for "
        f"deployment {deployment.id}"
    )

    try:
        return app_runner_cls(
            deployment, **settings.deployment_app_runner_kwargs
        )
    except Exception as e:
        raise RuntimeError(
            f"Failed to instantiate deployment app runner class "
            f"'{app_runner_cls}' for deployment {deployment.id}: {e}"
        ) from e
load_deployment(deployment: Union[str, UUID, DeploymentResponse]) -> DeploymentResponse classmethod

Load the deployment.

Parameters:

  deployment (Union[str, UUID, DeploymentResponse], required): The deployment to load.

Returns:

  DeploymentResponse: The deployment.

Raises:

  RuntimeError: If the deployment or its snapshot cannot be loaded.

Source code in src/zenml/deployers/server/app.py
@classmethod
def load_deployment(
    cls, deployment: Union[str, UUID, "DeploymentResponse"]
) -> DeploymentResponse:
    """Load the deployment.

    Args:
        deployment: The deployment to load.

    Returns:
        The deployment.

    Raises:
        RuntimeError: If the deployment or its snapshot cannot be loaded.
    """
    if isinstance(deployment, str):
        deployment = UUID(deployment)

    if isinstance(deployment, UUID):
        try:
            deployment = Client().zen_store.get_deployment(
                deployment_id=deployment
            )
        except Exception as e:
            raise RuntimeError(
                f"Failed to load deployment {deployment}: {e}"
            ) from e
    else:
        assert isinstance(deployment, DeploymentResponse)

    if deployment.snapshot is None:
        raise RuntimeError(f"Deployment {deployment.id} has no snapshot")

    return deployment
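The normalization above accepts three input forms and reduces them all to a validated response object. A self-contained sketch (the `Deployment` class and injected `fetch` callable are hypothetical stand-ins for `DeploymentResponse` and the `Client().zen_store.get_deployment` lookup):

```python
from typing import Callable, Union
from uuid import UUID, uuid4


class Deployment:
    """Minimal stand-in for DeploymentResponse with a snapshot attribute."""

    def __init__(self, id: UUID, snapshot: object = None) -> None:
        self.id = id
        self.snapshot = snapshot


def load_deployment(
    deployment: Union[str, UUID, Deployment],
    fetch: Callable[[UUID], Deployment],
) -> Deployment:
    """Same normalization as above: strings become UUIDs, UUIDs are fetched
    from the store (here an injected `fetch` callable), and the loaded
    deployment must carry a snapshot."""
    if isinstance(deployment, str):
        deployment = UUID(deployment)
    if isinstance(deployment, UUID):
        try:
            deployment = fetch(deployment)
        except Exception as e:
            raise RuntimeError(f"Failed to load deployment: {e}") from e
    if deployment.snapshot is None:
        raise RuntimeError(f"Deployment {deployment.id} has no snapshot")
    return deployment


# Usage: a string ID is normalized to a UUID, then fetched from the store.
dep_id = uuid4()
store = {dep_id: Deployment(dep_id, snapshot=object())}
loaded = load_deployment(str(dep_id), store.__getitem__)
```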
load_deployment_service() -> BasePipelineDeploymentService

Load the service for the deployment.

Returns:

  BasePipelineDeploymentService: The deployment service for the deployment.

Raises:

  RuntimeError: If the deployment service cannot be loaded.

Source code in src/zenml/deployers/server/app.py
def load_deployment_service(self) -> BasePipelineDeploymentService:
    """Load the service for the deployment.

    Returns:
        The deployment service for the deployment.

    Raises:
        RuntimeError: If the deployment service cannot be loaded.
    """
    settings = self.snapshot.pipeline_configuration.deployment_settings
    if settings.deployment_service_class is None:
        service_cls: Type[BasePipelineDeploymentService] = (
            PipelineDeploymentService
        )
    else:
        assert isinstance(
            settings.deployment_service_class, SourceOrObject
        )
        try:
            loaded_service_cls = settings.deployment_service_class.load()
        except Exception as e:
            raise RuntimeError(
                f"Failed to load deployment service from source "
                f"{settings.deployment_service_class}: {e}\n"
                "Please check that the source is valid and that the "
                "deployment service class is importable from the source "
                "root directory. Hint: run `zenml init` in your local "
                "source directory to initialize the source root path."
            ) from e

        if not isinstance(loaded_service_cls, type) or not issubclass(
            loaded_service_cls, BasePipelineDeploymentService
        ):
            raise RuntimeError(
                f"Deployment service class '{loaded_service_cls}' is not a "
                "subclass of 'BasePipelineDeploymentService'"
            )
        service_cls = loaded_service_cls

    logger.info(
        f"Instantiating deployment service class '{service_cls}' for "
        f"deployment {self.deployment.id}"
    )

    try:
        return service_cls(self, **settings.deployment_service_kwargs)
    except Exception as e:
        raise RuntimeError(
            f"Failed to instantiate deployment service class "
            f"'{service_cls}' for deployment {self.deployment.id}: {e}"
        ) from e
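The dynamic-load-and-validate step above reduces to: import the configured object, then insist it is a class and a subclass of the expected base. A standalone sketch, with ZenML's source loading simplified to a plain dotted-path import (`BaseService`, `load_service_class`, and the `my_services` module are hypothetical, created here only for demonstration):

```python
import importlib
import sys
import types
from typing import Type


class BaseService:
    """Stand-in for BasePipelineDeploymentService."""


def load_service_class(dotted_path: str) -> Type[BaseService]:
    """Same validation as above: the loaded object must be a type and a
    BaseService subclass, otherwise loading fails loudly."""
    module_name, _, attr = dotted_path.rpartition(".")
    obj = getattr(importlib.import_module(module_name), attr)
    if not isinstance(obj, type) or not issubclass(obj, BaseService):
        raise RuntimeError(f"{obj!r} is not a subclass of BaseService")
    return obj


# Hypothetical user module registered in-process for demonstration only.
mod = types.ModuleType("my_services")


class GoodService(BaseService):
    pass


mod.GoodService = GoodService
mod.NotAService = int
sys.modules["my_services"] = mod
```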
register_endpoints(*endpoint_specs: EndpointSpec) -> None

Register the given endpoints.

Parameters:

  endpoint_specs (EndpointSpec, default ()): The endpoints to register.
Source code in src/zenml/deployers/server/app.py
def register_endpoints(self, *endpoint_specs: EndpointSpec) -> None:
    """Register the given endpoints.

    Args:
        endpoint_specs: The endpoints to register.
    """
    for endpoint_spec in endpoint_specs:
        endpoint_spec.load_sources()
        self.endpoint_adapter.register_endpoint(self, endpoint_spec)
        self.endpoints.append(endpoint_spec)
register_middlewares(*middleware_specs: MiddlewareSpec) -> None

Register the given middleware.

Parameters:

  middleware_specs (MiddlewareSpec, default ()): The middleware to register.
Source code in src/zenml/deployers/server/app.py
def register_middlewares(self, *middleware_specs: MiddlewareSpec) -> None:
    """Register the given middleware.

    Args:
        middleware_specs: The middleware to register.
    """
    for middleware_spec in middleware_specs:
        middleware_spec.load_sources()
        self.middleware_adapter.register_middleware(self, middleware_spec)
        self.middlewares.append(middleware_spec)
run() -> None

Run the deployment app.

Source code in src/zenml/deployers/server/app.py
def run(self) -> None:
    """Run the deployment app."""
    if self._asgi_app is None:
        self._build_asgi_app()

    self._run_asgi_app(self.asgi_app)
shutdown() -> None

Shutdown the deployment app.

Raises:

  Exception: If the service cleanup fails.

Source code in src/zenml/deployers/server/app.py
def shutdown(self) -> None:
    """Shutdown the deployment app.

    Raises:
        Exception: If the service cleanup fails.
    """
    self._run_shutdown_hook()

    logger.info("🛑 Cleaning up the pipeline deployment service...")
    try:
        self.service.cleanup()
        logger.info(
            "✅ The pipeline deployment service was cleaned up successfully"
        )
    except Exception as e:
        logger.error(
            f"❌ Failed to clean up the pipeline deployment service: {e}"
        )
        raise
startup() -> None

Startup the deployment app.

Raises:

  Exception: If the service initialization fails.

Source code in src/zenml/deployers/server/app.py
def startup(self) -> None:
    """Startup the deployment app.

    Raises:
        Exception: If the service initialization fails.
    """
    logger.info("🚀 Initializing the pipeline deployment service...")

    try:
        self.service.initialize()
        logger.info(
            "✅ Pipeline deployment service initialized successfully"
        )
    except Exception as e:
        logger.error(
            f"❌ Failed to initialize the pipeline deployment service: {e}"
        )
        raise

    self._run_startup_hook()
EndpointAdapter

Bases: ABC

Converts framework-agnostic endpoint specs to framework endpoints.

Functions
register_endpoint(app_runner: BaseDeploymentAppRunner, spec: EndpointSpec) -> None abstractmethod

Register an endpoint on the app.

Parameters:

  app_runner (BaseDeploymentAppRunner, required): Deployment app runner instance.
  spec (EndpointSpec, required): Framework-agnostic endpoint specification.

Raises:

  RuntimeError: If endpoint registration fails.

Source code in src/zenml/deployers/server/adapters.py
@abstractmethod
def register_endpoint(
    self,
    app_runner: "BaseDeploymentAppRunner",
    spec: "EndpointSpec",
) -> None:
    """Register an endpoint on the app.

    Args:
        app_runner: Deployment app runner instance.
        spec: Framework-agnostic endpoint specification.

    Raises:
        RuntimeError: If endpoint registration fails.
    """
resolve_endpoint_handler(app_runner: BaseDeploymentAppRunner, endpoint_spec: EndpointSpec) -> Any

Resolve an endpoint handler from its specification.

This method handles three types of handlers as defined in EndpointSpec:

  1. Direct endpoint function - returned as-is
  2. Endpoint builder class - instantiated with app_runner, app, and init_kwargs
  3. Endpoint builder function - called with app_runner, app, and init_kwargs to obtain the actual endpoint

Parameters:

  app_runner (BaseDeploymentAppRunner, required): Deployment app runner instance.
  endpoint_spec (EndpointSpec, required): The endpoint specification to resolve the handler from.

Returns:

  Any: The actual endpoint callable ready to be registered.

Raises:

  ValueError: If the handler is not callable or a builder returns a non-callable.
  RuntimeError: If handler resolution fails.

Source code in src/zenml/deployers/server/adapters.py
def resolve_endpoint_handler(
    self,
    app_runner: "BaseDeploymentAppRunner",
    endpoint_spec: "EndpointSpec",
) -> Any:
    """Resolve an endpoint handler from its specification.

    This method handles three types of handlers as defined in EndpointSpec:
    1. Direct endpoint function - returned as-is
    2. Endpoint builder class - instantiated with app_runner, app, and
    init_kwargs
    3. Endpoint builder function - called with app_runner, app, and
    init_kwargs to obtain the actual endpoint

    Args:
        app_runner: Deployment app runner instance.
        endpoint_spec: The endpoint specification to resolve the handler
            from.

    Returns:
        The actual endpoint callable ready to be registered.

    Raises:
        ValueError: If handler is not callable or builder returns
            non-callable.
        RuntimeError: If handler resolution fails.
    """
    import inspect

    assert isinstance(endpoint_spec.handler, SourceOrObject)
    handler = endpoint_spec.handler.load()

    if endpoint_spec.native:
        return handler

    # Type 2: Endpoint builder class
    if isinstance(handler, type):
        if not hasattr(handler, "__call__"):
            raise ValueError(
                f"Handler class {handler.__name__} must implement "
                "__call__ method"
            )
        try:
            inner_handler = handler(
                app_runner=app_runner,
                **endpoint_spec.init_kwargs,
            )
        except TypeError as e:
            raise RuntimeError(
                f"Failed to instantiate handler class "
                f"{handler.__name__}: {e}"
            ) from e

        if not callable(inner_handler):
            raise ValueError(
                f"The __call__ method of the handler class "
                f"{handler.__name__} must return a callable"
            )

        return inner_handler

    if not callable(handler):
        raise ValueError(f"Handler {handler} is not callable")

    # Determine if it's Type 3 (builder function) or Type 1 (direct)
    try:
        sig = inspect.signature(handler)
        params = set(sig.parameters.keys())

        # Type 3: Builder function (has app_runner parameter)
        if "app_runner" in params:
            try:
                inner_handler = handler(
                    app_runner=app_runner,
                    **endpoint_spec.init_kwargs,
                )
                if not callable(inner_handler):
                    raise ValueError(
                        f"Builder function {handler.__name__} must "
                        f"return a callable, got {type(inner_handler)}"
                    )
                return inner_handler
            except TypeError as e:
                raise RuntimeError(
                    f"Failed to call builder function "
                    f"{handler.__name__}: {e}"
                ) from e

        # Type 1: Direct endpoint function
        return handler

    except ValueError:
        # inspect.signature failed, assume it's a direct endpoint
        return handler
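The three handler shapes and the resolution order above can be illustrated with a condensed standalone resolver (`resolve_handler` and the example handlers are hypothetical sketches of the logic, not the actual API; the native and error-handling branches are omitted for brevity):

```python
import inspect
from typing import Any, Callable


def resolve_handler(
    handler: Any, app_runner: Any, **init_kwargs: Any
) -> Callable[..., Any]:
    """Condensed version of the resolution above: classes are instantiated,
    functions taking `app_runner` are treated as builders, anything else is
    returned as the endpoint itself."""
    if isinstance(handler, type):
        return handler(app_runner=app_runner, **init_kwargs)
    if "app_runner" in inspect.signature(handler).parameters:
        return handler(app_runner=app_runner, **init_kwargs)
    return handler


# The three supported shapes:
def direct_endpoint() -> str:  # Type 1: used as-is
    return "direct"


class EndpointBuilder:  # Type 2: instance must itself be callable
    def __init__(self, app_runner: Any, suffix: str = "") -> None:
        self.tag = f"{app_runner}{suffix}"

    def __call__(self) -> str:
        return self.tag


def builder(app_runner: Any) -> Callable[[], str]:  # Type 3: returns endpoint
    return lambda: f"built for {app_runner}"
```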
MiddlewareAdapter

Bases: ABC

Converts framework-agnostic middleware specs to framework middleware.

Functions
register_middleware(app_runner: BaseDeploymentAppRunner, spec: MiddlewareSpec) -> None abstractmethod

Register middleware on the app.

Parameters:

  app_runner (BaseDeploymentAppRunner, required): Deployment app runner instance.
  spec (MiddlewareSpec, required): Framework-agnostic middleware specification.

Raises:

  ValueError: If middleware scope requires missing parameters.
  RuntimeError: If middleware registration fails.

Source code in src/zenml/deployers/server/adapters.py
@abstractmethod
def register_middleware(
    self,
    app_runner: "BaseDeploymentAppRunner",
    spec: "MiddlewareSpec",
) -> None:
    """Register middleware on the app.

    Args:
        app_runner: Deployment app runner instance.
        spec: Framework-agnostic middleware specification.

    Raises:
        ValueError: If middleware scope requires missing parameters.
        RuntimeError: If middleware registration fails.
    """
resolve_middleware_handler(app_runner: BaseDeploymentAppRunner, middleware_spec: MiddlewareSpec) -> Any

Resolve a middleware handler from its specification.

This method handles three types of middleware as defined in MiddlewareSpec:

  1. Middleware callable class
  2. Middleware callable function
  3. Native middleware object

Parameters:

  app_runner (BaseDeploymentAppRunner, required): Deployment app runner instance.
  middleware_spec (MiddlewareSpec, required): The middleware specification to resolve the handler from.

Returns:

  Any: The actual middleware callable ready to be registered.

Raises:

  ValueError: If the middleware is not callable or a builder returns a non-callable.

Source code in src/zenml/deployers/server/adapters.py
def resolve_middleware_handler(
    self,
    app_runner: "BaseDeploymentAppRunner",
    middleware_spec: "MiddlewareSpec",
) -> Any:
    """Resolve a middleware handler from its specification.

    This method handles three types of middleware as defined in MiddlewareSpec:
    1. Middleware callable class
    2. Middleware callable function
    3. Native middleware object

    Args:
        app_runner: Deployment app runner instance.
        middleware_spec: The middleware specification to resolve the handler
            from.

    Returns:
        The actual middleware callable ready to be registered.

    Raises:
        ValueError: If middleware is not callable or builder returns
            non-callable.
    """
    import inspect

    assert isinstance(middleware_spec.middleware, SourceOrObject)
    middleware = middleware_spec.middleware.load()

    if middleware_spec.native:
        return middleware

    # Type 1: Middleware class
    if isinstance(middleware, type):
        return middleware

    if not callable(middleware):
        raise ValueError(f"Middleware {middleware} is not callable")

    # Wrap the middleware function in a middleware class
    class _MiddlewareAdapter:
        def __init__(self, app: ASGIApplication, **kwargs: Any) -> None:
            self.app = app
            self.kwargs = kwargs

        async def __call__(
            self,
            scope: Scope,
            receive: ASGIReceiveCallable,
            send: ASGISendCallable,
        ) -> None:
            callable_middleware = cast(Callable[..., Any], middleware)
            if inspect.iscoroutinefunction(callable_middleware):
                await callable_middleware(
                    app=self.app,
                    scope=scope,
                    receive=receive,
                    send=send,
                    **self.kwargs,
                )
            else:
                callable_middleware(
                    app=self.app,
                    scope=scope,
                    receive=receive,
                    send=send,
                    **self.kwargs,
                )

    return _MiddlewareAdapter
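The wrapping idea of `_MiddlewareAdapter` can be demonstrated end-to-end with a minimal ASGI-style chain (`echo_app`, `logging_middleware`, and `MiddlewareWrapper` below are hypothetical stand-ins, assuming only the ASGI calling convention of `(scope, receive, send)`):

```python
import asyncio
from typing import Any, Callable, Dict, List

events: List[Dict[str, Any]] = []


async def echo_app(scope: Dict[str, Any], receive: Any, send: Callable) -> None:
    """Minimal inner ASGI app that records that it was reached."""
    await send({"type": "reached.app", "path": scope["path"]})


async def logging_middleware(
    app: Callable, scope: Dict[str, Any], receive: Any, send: Callable,
    tag: str = "mw",
) -> None:
    """Function-style middleware: record the call, then delegate to the app."""
    await send({"type": "reached.middleware", "tag": tag})
    await app(scope, receive, send)


class MiddlewareWrapper:
    """Same wrapping idea as _MiddlewareAdapter above: turn the middleware
    function into an ASGI middleware class holding the inner app and kwargs."""

    def __init__(self, app: Callable, **kwargs: Any) -> None:
        self.app, self.kwargs = app, kwargs

    async def __call__(self, scope: Dict[str, Any], receive: Any, send: Callable) -> None:
        await logging_middleware(self.app, scope, receive, send, **self.kwargs)


async def _send(message: Dict[str, Any]) -> None:
    events.append(message)


# The wrapped middleware runs first, then forwards to the inner app.
asyncio.run(MiddlewareWrapper(echo_app, tag="demo")({"path": "/invoke"}, None, _send))
```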
Modules
adapters

Framework adapter interfaces.

Classes

EndpointAdapter and MiddlewareAdapter: see the full class documentation above.
app

Base deployment app runner.

Classes
BaseDeploymentAppRunner(deployment: Union[str, UUID, DeploymentResponse], **kwargs: Any)

Bases: ABC

Base class for deployment app runners.

This class is responsible for building and running the ASGI compatible web application (e.g. FastAPI, Django, Flask, Falcon, Quart, BlackSheep, etc.) and the associated deployment service for the pipeline deployment. It also acts as an adaptation layer between the REST API interface and the deployment service, preserving the following separation of concerns between the two components:

  • the ASGI application is responsible for handling the HTTP requests and responses to the user
  • the deployment service is responsible for handling the business logic

The deployment service code should be free of any ASGI application specific code and concerns and vice-versa. This allows them to be independently extendable and easily swappable.

Implementations of this class must use the deployment and its settings to configure and run the web application (e.g. FastAPI, Flask, Falcon, Quart, BlackSheep, etc.) that wraps the deployment service according to the user's specifications, particularly concerning the following:

  • exposed endpoints (URL paths, methods, input/output models)
  • middleware (CORS, authentication, logging, etc.)
  • error handling
  • lifecycle management (startup, shutdown)
  • custom hooks (startup, shutdown)
  • app configuration (workers, host, port, thread pool size, etc.)

The following methods must be provided by implementations of this class:

  • flavor: Return the flavor class associated with this deployment application runner.
  • build: Build and return an ASGI compatible web application (i.e. an ASGIApplication object that can be run with uvicorn). Most Python ASGI frameworks provide an ASGIApplication object. This method also has to register all the endpoints, middleware and extensions that are either required internally or supplied to it. It must also configure the startup and shutdown methods to be run as part of the ASGI application's lifespan or overload the _run_asgi_app method to handle the startup and shutdown as an alternative.
  • _get_dashboard_endpoints: Gets the dashboard endpoints specs from the deployment configuration. Only required if the dashboard files path is set in the deployment configuration and the app runner supports serving a dashboard alongside the API.
  • _build_cors_middleware: Builds the CORS middleware from the CORS settings in the deployment configuration.
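The core of the `build` contract can be shown with a toy runner (illustrative only; the class and response are hypothetical, not part of the ZenML API): `build` must return something an ASGI server like uvicorn could run, i.e. an async callable taking `scope`, `receive`, and `send`.

```python
import asyncio


class TinyAppRunner:
    # Toy stand-in for a BaseDeploymentAppRunner implementation.
    def build(self, middlewares, endpoints, extensions):
        async def app(scope, receive, send):
            # A bare ASGI callable: emit a response start and body message.
            await send({"type": "http.response.start", "status": 200, "headers": []})
            await send({"type": "http.response.body", "body": b"ok"})

        return app


async def drive():
    sent = []

    async def send(message):
        sent.append(message)

    app = TinyAppRunner().build([], [], [])
    await app({"type": "http"}, None, send)
    return sent


messages = asyncio.run(drive())
```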

Initialize the deployment app.

Parameters:

Name Type Description Default
deployment Union[str, UUID, DeploymentResponse]

The deployment to run.

required
**kwargs Any

Additional keyword arguments for the deployment app runner.

{}
Source code in src/zenml/deployers/server/app.py
def __init__(
    self, deployment: Union[str, UUID, "DeploymentResponse"], **kwargs: Any
):
    """Initialize the deployment app.

    Args:
        deployment: The deployment to run.
        **kwargs: Additional keyword arguments for the deployment app runner.
    """
    self.deployment = self.load_deployment(deployment)
    assert self.deployment.snapshot is not None
    self.snapshot = self.deployment.snapshot

    self.settings = (
        self.snapshot.pipeline_configuration.deployment_settings
    )

    self.service = self.load_deployment_service()

    # Create framework-specific adapters
    self.endpoint_adapter = self._create_endpoint_adapter()
    self.middleware_adapter = self._create_middleware_adapter()
    self._asgi_app: Optional[ASGIApplication] = None

    self.endpoints: List[EndpointSpec] = []
    self.middlewares: List[MiddlewareSpec] = []
    self.extensions: List[AppExtensionSpec] = []
Attributes
asgi_app: ASGIApplication property

Get the ASGI application.

Returns:

Type Description
ASGIApplication

The ASGI application.

Raises:

Type Description
RuntimeError

If the ASGI application is not built yet.

flavor: BaseDeploymentAppRunnerFlavor abstractmethod property

Return the flavor associated with this deployment application runner.

Returns:

Type Description
BaseDeploymentAppRunnerFlavor

The flavor associated with this deployment application runner.

Functions
build(middlewares: List[MiddlewareSpec], endpoints: List[EndpointSpec], extensions: List[AppExtensionSpec]) -> ASGIApplication abstractmethod

Build the ASGI compatible web application.

Parameters:

Name Type Description Default
middlewares List[MiddlewareSpec]

The middleware to register.

required
endpoints List[EndpointSpec]

The endpoints to register.

required
extensions List[AppExtensionSpec]

The extensions to install.

required

Returns:

Type Description
ASGIApplication

The ASGI compatible web application.

Source code in src/zenml/deployers/server/app.py
@abstractmethod
def build(
    self,
    middlewares: List[MiddlewareSpec],
    endpoints: List[EndpointSpec],
    extensions: List[AppExtensionSpec],
) -> ASGIApplication:
    """Build the ASGI compatible web application.

    Args:
        middlewares: The middleware to register.
        endpoints: The endpoints to register.
        extensions: The extensions to install.

    Returns:
        The ASGI compatible web application.
    """
dashboard_files_path() -> Optional[str]

Get the absolute path of the dashboard files directory.

Returns:

Type Description
Optional[str]

Absolute path.

Raises:

Type Description
ValueError

If the dashboard files path is absolute.

RuntimeError

If the dashboard files path does not exist.

Source code in src/zenml/deployers/server/app.py
def dashboard_files_path(self) -> Optional[str]:
    """Get the absolute path of the dashboard files directory.

    Returns:
        Absolute path.

    Raises:
        ValueError: If the dashboard files path is absolute.
        RuntimeError: If the dashboard files path does not exist.
    """
    # If an absolute path is provided, use it
    dashboard_files_path = self.settings.dashboard_files_path
    if not dashboard_files_path:
        import zenml

        return os.path.join(
            zenml.__path__[0], "deployers", "server", "dashboard"
        )

    if os.path.isabs(dashboard_files_path):
        raise ValueError(
            f"Dashboard files path '{dashboard_files_path}' must be "
            "relative to the source root, not absolute."
        )

    # Otherwise, assume this is a path relative to the source root
    source_root = source_utils.get_source_root()
    dashboard_path = os.path.join(source_root, dashboard_files_path)
    if not os.path.exists(dashboard_path):
        raise RuntimeError(
            f"Dashboard files path '{dashboard_path}' does not exist. "
            f"Please check that the path exists and that the source root "
            f"is set correctly. Hint: run `zenml init` in your local source "
            f"directory to initialize the source root path."
        )
    return dashboard_path
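The resolution rule above can be sketched standalone: absolute paths are rejected, relative paths are joined to the source root, and a missing directory is an error. Here `source_root` is a plain argument standing in for `source_utils.get_source_root()`.

```python
import os
import tempfile


def resolve_dashboard_path(path: str, source_root: str) -> str:
    # Absolute paths are refused: the path must be relative to the source root.
    if os.path.isabs(path):
        raise ValueError(f"Dashboard files path '{path}' must be relative")
    full = os.path.join(source_root, path)
    if not os.path.exists(full):
        raise RuntimeError(f"Dashboard files path '{full}' does not exist")
    return full


root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "dashboard"))
print(resolve_dashboard_path("dashboard", root))
```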
install_extensions(*extension_specs: AppExtensionSpec) -> None

Install the given app extensions.

Parameters:

Name Type Description Default
extension_specs AppExtensionSpec

The app extensions to install.

()

Raises:

Type Description
ValueError

If the extension is not a subclass of BaseAppExtension.

RuntimeError

If the extension cannot be initialized.

Source code in src/zenml/deployers/server/app.py
def install_extensions(self, *extension_specs: AppExtensionSpec) -> None:
    """Install the given app extensions.

    Args:
        extension_specs: The app extensions to install.

    Raises:
        ValueError: If the extension is not a subclass of BaseAppExtension.
        RuntimeError: If the extension cannot be initialized.
    """
    for ext_spec in extension_specs:
        # Load extension
        ext_spec.load_sources()
        extension_obj = ext_spec.resolve_extension_handler()

        # Handle callable vs class-based extensions
        if isinstance(extension_obj, type):
            if not issubclass(extension_obj, BaseAppExtension):
                raise ValueError(
                    f"Extension type {extension_obj} is not a subclass of "
                    "BaseAppExtension"
                )

            try:
                extension_instance = extension_obj(
                    **ext_spec.extension_kwargs
                )
            except Exception as e:
                raise RuntimeError(
                    f"Failed to initialize extension class {extension_obj}: {e}"
                ) from e

            extension_instance.install(self)
        else:
            # Simple callable extension
            extension_obj(
                app_runner=self,
                **ext_spec.extension_kwargs,
            )
        self.extensions.append(ext_spec)
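The class-vs-callable dispatch used above, reduced to a self-contained sketch (all extension names are illustrative): a type must subclass the base and is instantiated then installed, while anything else is treated as a plain callable.

```python
from abc import ABC, abstractmethod


class BaseExtension(ABC):
    @abstractmethod
    def install(self, runner) -> None: ...


class MetricsExtension(BaseExtension):
    def install(self, runner) -> None:
        runner.append("metrics")


def tracing_extension(runner) -> None:
    # Callable-style extension: invoked directly with the runner.
    runner.append("tracing")


def install(extension, runner) -> None:
    if isinstance(extension, type):
        if not issubclass(extension, BaseExtension):
            raise ValueError(f"{extension} is not a BaseExtension subclass")
        extension().install(runner)
    else:
        extension(runner)


installed = []
install(MetricsExtension, installed)
install(tracing_extension, installed)
```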
load_app_runner(deployment: Union[str, UUID, DeploymentResponse]) -> BaseDeploymentAppRunner classmethod

Load the app runner for the deployment.

Parameters:

Name Type Description Default
deployment Union[str, UUID, DeploymentResponse]

The deployment to load the app runner for.

required

Returns:

Type Description
BaseDeploymentAppRunner

The app runner for the deployment.

Raises:

Type Description
RuntimeError

If the deployment app runner cannot be loaded.

Source code in src/zenml/deployers/server/app.py
@classmethod
def load_app_runner(
    cls, deployment: Union[str, UUID, "DeploymentResponse"]
) -> "BaseDeploymentAppRunner":
    """Load the app runner for the deployment.

    Args:
        deployment: The deployment to load the app runner for.

    Returns:
        The app runner for the deployment.

    Raises:
        RuntimeError: If the deployment app runner cannot be loaded.
    """
    deployment = cls.load_deployment(deployment)
    assert deployment.snapshot is not None

    settings = (
        deployment.snapshot.pipeline_configuration.deployment_settings
    )

    app_runner_flavor = (
        BaseDeploymentAppRunnerFlavor.load_app_runner_flavor(settings)
    )

    app_runner_cls = app_runner_flavor.implementation_class

    logger.info(
        f"Instantiating deployment app runner class '{app_runner_cls}' for "
        f"deployment {deployment.id}"
    )

    try:
        return app_runner_cls(
            deployment, **settings.deployment_app_runner_kwargs
        )
    except Exception as e:
        raise RuntimeError(
            f"Failed to instantiate deployment app runner class "
            f"'{app_runner_cls}' for deployment {deployment.id}: {e}"
        ) from e
load_deployment(deployment: Union[str, UUID, DeploymentResponse]) -> DeploymentResponse classmethod

Load the deployment.

Parameters:

Name Type Description Default
deployment Union[str, UUID, DeploymentResponse]

The deployment to load.

required

Returns:

Type Description
DeploymentResponse

The deployment.

Raises:

Type Description
RuntimeError

If the deployment or its snapshot cannot be loaded.

Source code in src/zenml/deployers/server/app.py
@classmethod
def load_deployment(
    cls, deployment: Union[str, UUID, "DeploymentResponse"]
) -> DeploymentResponse:
    """Load the deployment.

    Args:
        deployment: The deployment to load.

    Returns:
        The deployment.

    Raises:
        RuntimeError: If the deployment or its snapshot cannot be loaded.
    """
    if isinstance(deployment, str):
        deployment = UUID(deployment)

    if isinstance(deployment, UUID):
        try:
            deployment = Client().zen_store.get_deployment(
                deployment_id=deployment
            )
        except Exception as e:
            raise RuntimeError(
                f"Failed to load deployment {deployment}: {e}"
            ) from e
    else:
        assert isinstance(deployment, DeploymentResponse)

    if deployment.snapshot is None:
        raise RuntimeError(f"Deployment {deployment.id} has no snapshot")

    return deployment
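The identifier-normalization step above, in isolation: string inputs are parsed into UUIDs before any lookup, so callers can pass either form. The store lookup itself is out of scope here; this sketch only covers the coercion.

```python
from uuid import UUID


def normalize_deployment_id(deployment):
    # Accept either a UUID or its string form; reject anything else.
    if isinstance(deployment, str):
        deployment = UUID(deployment)  # raises ValueError on malformed input
    if not isinstance(deployment, UUID):
        raise TypeError(f"Unsupported deployment reference: {deployment!r}")
    return deployment


uid = normalize_deployment_id("12345678-1234-5678-1234-567812345678")
```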
load_deployment_service() -> BasePipelineDeploymentService

Load the service for the deployment.

Returns:

Type Description
BasePipelineDeploymentService

The deployment service for the deployment.

Raises:

Type Description
RuntimeError

If the deployment service cannot be loaded.

Source code in src/zenml/deployers/server/app.py
def load_deployment_service(self) -> BasePipelineDeploymentService:
    """Load the service for the deployment.

    Returns:
        The deployment service for the deployment.

    Raises:
        RuntimeError: If the deployment service cannot be loaded.
    """
    settings = self.snapshot.pipeline_configuration.deployment_settings
    if settings.deployment_service_class is None:
        service_cls: Type[BasePipelineDeploymentService] = (
            PipelineDeploymentService
        )
    else:
        assert isinstance(
            settings.deployment_service_class, SourceOrObject
        )
        try:
            loaded_service_cls = settings.deployment_service_class.load()
        except Exception as e:
            raise RuntimeError(
                f"Failed to load deployment service from source "
                f"{settings.deployment_service_class}: {e}\n"
                "Please check that the source is valid and that the "
                "deployment service class is importable from the source "
                "root directory. Hint: run `zenml init` in your local "
                "source directory to initialize the source root path."
            ) from e

        if not isinstance(loaded_service_cls, type) or not issubclass(
            loaded_service_cls, BasePipelineDeploymentService
        ):
            raise RuntimeError(
                f"Deployment service class '{loaded_service_cls}' is not a "
                "subclass of 'BasePipelineDeploymentService'"
            )
        service_cls = loaded_service_cls

    logger.info(
        f"Instantiating deployment service class '{service_cls}' for "
        f"deployment {self.deployment.id}"
    )

    try:
        return service_cls(self, **settings.deployment_service_kwargs)
    except Exception as e:
        raise RuntimeError(
            f"Failed to instantiate deployment service class "
            f"'{service_cls}' for deployment {self.deployment.id}: {e}"
        ) from e
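The load-then-validate pattern above can be shown on its own: whatever a source reference resolves to must be a class, and a subclass of the expected base, before it is instantiated. The classes below are illustrative stand-ins for the ZenML types.

```python
class BaseService:
    pass


class CustomService(BaseService):
    pass


def validate_service_class(obj) -> type:
    # Both checks matter: `obj` may be a non-class object, or a class with
    # the wrong ancestry, and issubclass() on a non-class raises TypeError.
    if not isinstance(obj, type) or not issubclass(obj, BaseService):
        raise RuntimeError(f"{obj!r} is not a subclass of BaseService")
    return obj


svc_cls = validate_service_class(CustomService)
```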
register_endpoints(*endpoint_specs: EndpointSpec) -> None

Register the given endpoints.

Parameters:

Name Type Description Default
endpoint_specs EndpointSpec

The endpoints to register.

()
Source code in src/zenml/deployers/server/app.py
def register_endpoints(self, *endpoint_specs: EndpointSpec) -> None:
    """Register the given endpoints.

    Args:
        endpoint_specs: The endpoints to register.
    """
    for endpoint_spec in endpoint_specs:
        endpoint_spec.load_sources()
        self.endpoint_adapter.register_endpoint(self, endpoint_spec)
        self.endpoints.append(endpoint_spec)
register_middlewares(*middleware_specs: MiddlewareSpec) -> None

Register the given middleware.

Parameters:

Name Type Description Default
middleware_specs MiddlewareSpec

The middleware to register.

()
Source code in src/zenml/deployers/server/app.py
def register_middlewares(self, *middleware_specs: MiddlewareSpec) -> None:
    """Register the given middleware.

    Args:
        middleware_specs: The middleware to register.
    """
    for middleware_spec in middleware_specs:
        middleware_spec.load_sources()
        self.middleware_adapter.register_middleware(self, middleware_spec)
        self.middlewares.append(middleware_spec)
run() -> None

Run the deployment app.

Source code in src/zenml/deployers/server/app.py
def run(self) -> None:
    """Run the deployment app."""
    if self._asgi_app is None:
        self._build_asgi_app()

    self._run_asgi_app(self.asgi_app)
shutdown() -> None

Shutdown the deployment app.

Raises:

Type Description
Exception

If the service cleanup fails.

Source code in src/zenml/deployers/server/app.py
def shutdown(self) -> None:
    """Shutdown the deployment app.

    Raises:
        Exception: If the service cleanup fails.
    """
    self._run_shutdown_hook()

    logger.info("🛑 Cleaning up the pipeline deployment service...")
    try:
        self.service.cleanup()
        logger.info(
            "✅ The pipeline deployment service was cleaned up successfully"
        )
    except Exception as e:
        logger.error(
            f"❌ Failed to clean up the pipeline deployment service: {e}"
        )
        raise
startup() -> None

Startup the deployment app.

Raises:

Type Description
Exception

If the service initialization fails.

Source code in src/zenml/deployers/server/app.py
def startup(self) -> None:
    """Startup the deployment app.

    Raises:
        Exception: If the service initialization fails.
    """
    logger.info("🚀 Initializing the pipeline deployment service...")

    try:
        self.service.initialize()
        logger.info(
            "✅ Pipeline deployment service initialized successfully"
        )
    except Exception as e:
        logger.error(
            f"❌ Failed to initialize the pipeline deployment service: {e}"
        )
        raise

    self._run_startup_hook()
BaseDeploymentAppRunnerFlavor

Bases: ABC

Base class for deployment app runner flavors.

BaseDeploymentAppRunner implementations must also provide implementations for this class. The flavor class implementation should be kept separate from the implementation class to allow it to be imported without importing the implementation class and all its dependencies.

Attributes
implementation_class: Type[BaseDeploymentAppRunner] abstractmethod property

The class that implements the deployment app runner.

Returns:

Type Description
Type[BaseDeploymentAppRunner]

The implementation class for the deployment app runner.

name: str abstractmethod property

The name of the deployment app runner flavor.

Returns:

Type Description
str

The name of the deployment app runner flavor.

requirements: List[str] property

The software requirements for the deployment app runner.

Returns:

Type Description
List[str]

The software requirements for the deployment app runner.

Functions
load_app_runner_flavor(settings: DeploymentSettings) -> BaseDeploymentAppRunnerFlavor classmethod

Load the app runner flavor for the deployment settings.

Parameters:

Name Type Description Default
settings DeploymentSettings

The deployment settings to load the app runner flavor for.

required

Returns:

Type Description
BaseDeploymentAppRunnerFlavor

The app runner flavor for the deployment.

Raises:

Type Description
RuntimeError

If the deployment app runner flavor cannot be loaded.

Source code in src/zenml/deployers/server/app.py
@classmethod
def load_app_runner_flavor(
    cls, settings: DeploymentSettings
) -> "BaseDeploymentAppRunnerFlavor":
    """Load the app runner flavor for the deployment settings.

    Args:
        settings: The deployment settings to load the app runner flavor for.

    Returns:
        The app runner flavor for the deployment.

    Raises:
        RuntimeError: If the deployment app runner flavor cannot be loaded.
    """
    from zenml.deployers.server.fastapi import (
        FastAPIDeploymentAppRunnerFlavor,
    )

    if settings.deployment_app_runner_flavor is None:
        app_runner_flavor_class: Type[BaseDeploymentAppRunnerFlavor] = (
            FastAPIDeploymentAppRunnerFlavor
        )
    else:
        assert isinstance(
            settings.deployment_app_runner_flavor, SourceOrObject
        )
        try:
            loaded_app_runner_flavor_class = (
                settings.deployment_app_runner_flavor.load()
            )
        except Exception as e:
            raise RuntimeError(
                f"Failed to load deployment app runner flavor from source "
                f"{settings.deployment_app_runner_flavor}: {e}\n"
                "Please check that the source is valid and that the "
                "deployment app runner flavor class is importable from the "
                "source root directory. Hint: run `zenml init` in your "
                "local source directory to initialize the source root path."
            ) from e

        if not isinstance(
            loaded_app_runner_flavor_class, type
        ) or not issubclass(
            loaded_app_runner_flavor_class, BaseDeploymentAppRunnerFlavor
        ):
            raise RuntimeError(
                f"The object '{loaded_app_runner_flavor_class}' is not a "
                "subclass of 'BaseDeploymentAppRunnerFlavor'"
            )

        app_runner_flavor_class = loaded_app_runner_flavor_class

    try:
        app_runner_flavor = app_runner_flavor_class()
    except Exception as e:
        raise RuntimeError(
            f"Failed to instantiate deployment app runner flavor "
            f"'{app_runner_flavor_class}': {e}"
        ) from e

    return app_runner_flavor
Functions
start_deployment_app(deployment_id: UUID, pid_file: Optional[str] = None, log_file: Optional[str] = None, host: Optional[str] = None, port: Optional[int] = None) -> None

Start the deployment app.

Parameters:

Name Type Description Default
deployment_id UUID

The deployment ID.

required
pid_file Optional[str]

The PID file to use for the deployment.

None
log_file Optional[str]

The log file to use for the deployment.

None
host Optional[str]

The custom host to use for the deployment.

None
port Optional[int]

The custom port to use for the deployment.

None
Source code in src/zenml/deployers/server/app.py
def start_deployment_app(
    deployment_id: UUID,
    pid_file: Optional[str] = None,
    log_file: Optional[str] = None,
    host: Optional[str] = None,
    port: Optional[int] = None,
) -> None:
    """Start the deployment app.

    Args:
        deployment_id: The deployment ID.
        pid_file: The PID file to use for the deployment.
        log_file: The log file to use for the deployment.
        host: The custom host to use for the deployment.
        port: The custom port to use for the deployment.
    """
    if pid_file or log_file:
        # create parent directory if necessary
        for f in (pid_file, log_file):
            if f:
                os.makedirs(os.path.dirname(f), exist_ok=True)

        setup_daemon(pid_file, log_file)

    logger.info(
        f"Starting deployment application server for deployment "
        f"{deployment_id}"
    )

    app_runner = BaseDeploymentAppRunner.load_app_runner(deployment_id)

    # Allow host/port overrides coming from the CLI.
    if host:
        app_runner.settings.uvicorn_host = host
    if port:
        app_runner.settings.uvicorn_port = int(port)
    app_runner.run()
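The pre-daemonization step above boils down to one pattern: parent directories for the PID and log files are created before the process detaches, so the daemon never fails on a missing directory. The paths below are temporary examples.

```python
import os
import tempfile

base = tempfile.mkdtemp()
pid_file = os.path.join(base, "run", "app.pid")
log_file = os.path.join(base, "logs", "app.log")

# Create each parent directory if necessary, tolerating pre-existing ones.
for f in (pid_file, log_file):
    if f:
        os.makedirs(os.path.dirname(f), exist_ok=True)
```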
Modules
entrypoint_configuration

ZenML Pipeline Deployment Entrypoint Configuration.

Classes
DeploymentEntrypointConfiguration(arguments: List[str])

Bases: BaseEntrypointConfiguration

Entrypoint configuration for ZenML Pipeline Deployment.

This entrypoint configuration handles the startup and configuration of the ZenML pipeline deployment FastAPI application.

Source code in src/zenml/entrypoints/base_entrypoint_configuration.py
def __init__(self, arguments: List[str]):
    """Initializes the entrypoint configuration.

    Args:
        arguments: Command line arguments to configure this object.
    """
    self.entrypoint_args = self._parse_arguments(arguments)
Functions
get_entrypoint_arguments(**kwargs: Any) -> List[str] classmethod

Gets arguments for the deployment entrypoint command.

Parameters:

Name Type Description Default
**kwargs Any

Keyword arguments containing deployment configuration

{}

Returns:

Type Description
List[str]

List of command-line arguments

Raises:

Type Description
ValueError

If the deployment ID is not a valid UUID.

Source code in src/zenml/deployers/server/entrypoint_configuration.py
@classmethod
def get_entrypoint_arguments(cls, **kwargs: Any) -> List[str]:
    """Gets arguments for the deployment entrypoint command.

    Args:
        **kwargs: Keyword arguments containing deployment configuration

    Returns:
        List of command-line arguments

    Raises:
        ValueError: If the deployment ID is not a valid UUID.
    """
    # Get base arguments (snapshot_id, etc.)
    base_args = super().get_entrypoint_arguments(**kwargs)

    deployment_id = kwargs.get(DEPLOYMENT_ID_OPTION)
    if not uuid_utils.is_valid_uuid(deployment_id):
        raise ValueError(
            f"Missing or invalid deployment ID as argument for entrypoint "
            f"configuration. Please make sure to pass a valid UUID to "
            f"`{cls.__name__}.{cls.get_entrypoint_arguments.__name__}"
            f"({DEPLOYMENT_ID_OPTION}=<UUID>)`."
        )

    # Add deployment-specific arguments with defaults
    deployment_args = [
        f"--{DEPLOYMENT_ID_OPTION}",
        str(kwargs.get(DEPLOYMENT_ID_OPTION, "")),
    ]

    return base_args + deployment_args
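A standalone sketch of the argument assembly above: a valid UUID is required, then appended as a `--deployment_id` flag after the base arguments. The option constant mirrors the snippet, but the base arguments here are made up for illustration.

```python
from uuid import UUID

DEPLOYMENT_ID_OPTION = "deployment_id"


def build_entrypoint_arguments(base_args, **kwargs):
    deployment_id = kwargs.get(DEPLOYMENT_ID_OPTION)
    try:
        UUID(str(deployment_id))  # also rejects None via UUID("None")
    except (ValueError, TypeError):
        raise ValueError("Missing or invalid deployment ID")
    return base_args + [f"--{DEPLOYMENT_ID_OPTION}", str(deployment_id)]


args = build_entrypoint_arguments(
    ["--snapshot_id", "abc"],
    deployment_id="12345678-1234-5678-1234-567812345678",
)
```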
get_entrypoint_options() -> Set[str] classmethod

Gets all options required for the deployment entrypoint.

Returns:

Type Description
Set[str]

Set of required option names

Source code in src/zenml/deployers/server/entrypoint_configuration.py
@classmethod
def get_entrypoint_options(cls) -> Set[str]:
    """Gets all options required for the deployment entrypoint.

    Returns:
        Set of required option names
    """
    return {
        DEPLOYMENT_ID_OPTION,
    }
load_deployment() -> DeploymentResponse

Loads the deployment.

Returns:

Type Description
DeploymentResponse

The deployment.

Source code in src/zenml/deployers/server/entrypoint_configuration.py
def load_deployment(self) -> "DeploymentResponse":
    """Loads the deployment.

    Returns:
        The deployment.
    """
    deployment_id = UUID(self.entrypoint_args[DEPLOYMENT_ID_OPTION])
    deployment = Client().zen_store.get_deployment(
        deployment_id=deployment_id
    )
    return deployment
run() -> None

Run the ZenML pipeline deployment application.

This method starts the FastAPI server with the configured parameters and the specified pipeline deployment.

Raises:

Type Description
RuntimeError

If the deployment has no snapshot.

Source code in src/zenml/deployers/server/entrypoint_configuration.py
def run(self) -> None:
    """Run the ZenML pipeline deployment application.

    This method starts the FastAPI server with the configured parameters
    and the specified pipeline deployment.

    Raises:
        RuntimeError: If the deployment has no snapshot.
    """
    from zenml.deployers.server.app import BaseDeploymentAppRunner

    # Activate integrations to ensure all components are available
    integration_registry.activate_integrations()

    deployment = self.load_deployment()
    if not deployment.snapshot:
        raise RuntimeError(f"Deployment {deployment.id} has no snapshot")

    # Download code if necessary (for remote execution environments)
    self.download_code_if_necessary(snapshot=deployment.snapshot)

    app_runner = BaseDeploymentAppRunner.load_app_runner(deployment)
    app_runner.run()
Functions
Modules
extensions

Base app extension interface.

Classes
BaseAppExtension

Bases: ABC

Abstract base for app extensions.

Extensions provide advanced framework-specific capabilities like:

  • Custom authentication/authorization
  • Observability (logging, tracing, metrics)
  • Complex routers with framework-specific features
  • OpenAPI customizations
  • Advanced middleware patterns

Subclasses must implement install() to modify the app.

Functions
install(app_runner: BaseDeploymentAppRunner) -> None abstractmethod

Install extension into the application.

Parameters:

Name Type Description Default
app_runner BaseDeploymentAppRunner

The deployment app runner instance being used to build and run the web application.

required

Raises:

Type Description
RuntimeError

If installation fails.

Source code in src/zenml/deployers/server/extensions.py
@abstractmethod
def install(
    self,
    app_runner: "BaseDeploymentAppRunner",
) -> None:
    """Install extension into the application.

    Args:
        app_runner: The deployment app runner instance being used to build
            and run the web application.

    Raises:
        RuntimeError: If installation fails.
    """
fastapi

FastAPI implementation of the deployment app factory and adapters.

Classes
FastAPIDeploymentAppRunnerFlavor

Bases: BaseDeploymentAppRunnerFlavor

FastAPI deployment app runner flavor.

Attributes
implementation_class: Type[BaseDeploymentAppRunner] property

The class that implements the deployment app runner.

Returns:

Type Description
Type[BaseDeploymentAppRunner]

The implementation class for the deployment app runner.

name: str property

The name of the deployment app runner flavor.

Returns:

Type Description
str

The name of the deployment app runner flavor.

requirements: List[str] property

The software requirements for the deployment app runner.

Returns:

Type Description
List[str]

The software requirements for the deployment app runner.

Modules
adapters

FastAPI adapter implementations.

Classes
FastAPIEndpointAdapter

Bases: EndpointAdapter

FastAPI implementation of endpoint adapter.

Functions
register_endpoint(app_runner: BaseDeploymentAppRunner, spec: EndpointSpec) -> None

Register endpoint with FastAPI.

Parameters:

Name Type Description Default
app_runner BaseDeploymentAppRunner

Deployment app runner instance.

required
spec EndpointSpec

Framework-agnostic endpoint specification.

required

Raises:

Type Description
RuntimeError

If the adapter is not used with a FastAPI application.

Source code in src/zenml/deployers/server/fastapi/adapters.py
def register_endpoint(
    self,
    app_runner: BaseDeploymentAppRunner,
    spec: EndpointSpec,
) -> None:
    """Register endpoint with FastAPI.

    Args:
        app_runner: Deployment app runner instance.
        spec: Framework-agnostic endpoint specification.

    Raises:
        RuntimeError: If the adapter is not used with a FastAPI application.
    """
    app = app_runner.asgi_app

    if not isinstance(app, FastAPI):
        raise RuntimeError(
            f"The {self.__class__.__name__} adapter must be used with a "
            "FastAPI application"
        )

    # Ensure handler is loaded
    handler = self.resolve_endpoint_handler(app_runner, spec)

    # Apply auth dependency if required
    dependencies = []
    if spec.auth_required and app_runner.deployment.auth_key:
        auth_dependency = self._build_auth_dependency(
            app_runner.deployment.auth_key
        )
        dependencies.append(Depends(auth_dependency))

    if spec.native:
        if isinstance(handler, APIRouter):
            app.include_router(
                handler, prefix=spec.path, **spec.extra_kwargs
            )
            return

    # Register with appropriate HTTP method
    route_kwargs: Dict[str, Any] = {"dependencies": dependencies}
    route_kwargs.update(spec.extra_kwargs)

    if spec.method == EndpointMethod.GET:
        app.get(spec.path, **route_kwargs)(handler)
    elif spec.method == EndpointMethod.POST:
        app.post(spec.path, **route_kwargs)(handler)
    elif spec.method == EndpointMethod.PUT:
        app.put(spec.path, **route_kwargs)(handler)
    elif spec.method == EndpointMethod.PATCH:
        app.patch(spec.path, **route_kwargs)(handler)
    elif spec.method == EndpointMethod.DELETE:
        app.delete(spec.path, **route_kwargs)(handler)
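The verb dispatch at the end of register_endpoint can equivalently be written as an attribute lookup, since FastAPI exposes one decorator factory per HTTP verb. A minimal stdlib sketch of that pattern — DummyApp here is a hypothetical stand-in for FastAPI, not ZenML code:

```python
class DummyApp:
    """Hypothetical stand-in for FastAPI: one decorator factory per verb."""

    def __init__(self):
        self.routes = {}

    def _decorator(self, method, path):
        def register(handler):
            self.routes[(method, path)] = handler
            return handler

        return register

    def get(self, path, **route_kwargs):
        return self._decorator("GET", path)

    def post(self, path, **route_kwargs):
        return self._decorator("POST", path)


def register_route(app, method, path, handler, **route_kwargs):
    # Pick app.get / app.post / ... by lower-cased verb name, then apply it
    # as a decorator, mirroring app.post(spec.path, **route_kwargs)(handler).
    registrar = getattr(app, method.lower())
    registrar(path, **route_kwargs)(handler)


app = DummyApp()
register_route(app, "POST", "/invoke", lambda: "ok")
```

The explicit if/elif chain in the source trades this brevity for static type-checkability of each FastAPI call.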
FastAPIMiddlewareAdapter

Bases: MiddlewareAdapter

FastAPI implementation of middleware adapter.

We support two types of native middleware:

  • A middleware class that receives the ASGIApp object in its constructor and implements the __call__ method to dispatch the middleware, e.g.:
from starlette.types import ASGIApp, Receive, Scope, Send

class MyMiddleware:
    def __init__(self, app: ASGIApp) -> None:
        self.app = app

    async def __call__(self, scope: Scope, receive: Receive, send: Send) -> None:
        ...
        await self.app(scope, receive, send)
  • A middleware function that takes request and next callable and returns a response, e.g.:
from fastapi import Request, Response

async def my_middleware(request: Request, call_next: Callable[[Request], Awaitable[Response]]) -> Response:
    ...
    return await call_next(request)
Functions
register_middleware(app_runner: BaseDeploymentAppRunner, spec: MiddlewareSpec) -> None

Register middleware with FastAPI.

Parameters:

Name Type Description Default
app_runner BaseDeploymentAppRunner

Deployment app runner instance.

required
spec MiddlewareSpec

Framework-agnostic middleware specification.

required

Raises:

Type Description
RuntimeError

If the adapter is not used with a FastAPI application.

Source code in src/zenml/deployers/server/fastapi/adapters.py
def register_middleware(
    self,
    app_runner: BaseDeploymentAppRunner,
    spec: MiddlewareSpec,
) -> None:
    """Register middleware with FastAPI.

    Args:
        app_runner: Deployment app runner instance.
        spec: Framework-agnostic middleware specification.

    Raises:
        RuntimeError: If the adapter is not used with a FastAPI application.
    """
    app = app_runner.asgi_app

    if not isinstance(app, FastAPI):
        raise RuntimeError(
            f"The {self.__class__.__name__} adapter must be used with a "
            "FastAPI application"
        )

    middleware = self.resolve_middleware_handler(app_runner, spec)

    if spec.native:
        if isinstance(middleware, type):
            app.add_middleware(
                middleware,  # type: ignore[arg-type]
                **spec.extra_kwargs,
            )
            return

        app.add_middleware(
            BaseHTTPMiddleware,
            dispatch=middleware,
            **spec.extra_kwargs,
        )
        return  # avoid falling through and registering the middleware twice

    if isinstance(middleware, type):
        app.add_middleware(
            middleware,  # type: ignore[arg-type]
            **spec.extra_kwargs,
        )
        return

    # Convert the unified middleware to a FastAPI middleware class
    app.add_middleware(
        BaseHTTPMiddleware,
        dispatch=middleware,
        **spec.extra_kwargs,
    )
app

FastAPI application for running ZenML pipeline deployments.

Classes
FastAPIDeploymentAppRunner(deployment: Union[str, UUID, DeploymentResponse], **kwargs: Any)

Bases: BaseDeploymentAppRunner

FastAPI deployment app runner.

Source code in src/zenml/deployers/server/app.py
def __init__(
    self, deployment: Union[str, UUID, "DeploymentResponse"], **kwargs: Any
):
    """Initialize the deployment app.

    Args:
        deployment: The deployment to run.
        **kwargs: Additional keyword arguments for the deployment app runner.
    """
    self.deployment = self.load_deployment(deployment)
    assert self.deployment.snapshot is not None
    self.snapshot = self.deployment.snapshot

    self.settings = (
        self.snapshot.pipeline_configuration.deployment_settings
    )

    self.service = self.load_deployment_service()

    # Create framework-specific adapters
    self.endpoint_adapter = self._create_endpoint_adapter()
    self.middleware_adapter = self._create_middleware_adapter()
    self._asgi_app: Optional[ASGIApplication] = None

    self.endpoints: List[EndpointSpec] = []
    self.middlewares: List[MiddlewareSpec] = []
    self.extensions: List[AppExtensionSpec] = []
Attributes
flavor: BaseDeploymentAppRunnerFlavor property

Return the flavor associated with this deployment application runner.

Returns:

Type Description
BaseDeploymentAppRunnerFlavor

The flavor associated with this deployment application runner.

Functions
build(middlewares: List[MiddlewareSpec], endpoints: List[EndpointSpec], extensions: List[AppExtensionSpec]) -> ASGIApplication

Build the FastAPI app for the deployment.

Parameters:

Name Type Description Default
middlewares List[MiddlewareSpec]

The middleware to register.

required
endpoints List[EndpointSpec]

The endpoints to register.

required
extensions List[AppExtensionSpec]

The extensions to install.

required

Returns:

Type Description
ASGIApplication

The configured FastAPI application instance.

Source code in src/zenml/deployers/server/fastapi/app.py
def build(
    self,
    middlewares: List[MiddlewareSpec],
    endpoints: List[EndpointSpec],
    extensions: List[AppExtensionSpec],
) -> ASGIApplication:
    """Build the FastAPI app for the deployment.

    Args:
        middlewares: The middleware to register.
        endpoints: The endpoints to register.
        extensions: The extensions to install.

    Returns:
        The configured FastAPI application instance.
    """
    title = (
        self.settings.app_title
        or f"ZenML Pipeline Deployment {self.deployment.name}"
    )
    description = (
        self.settings.app_description
        or f"ZenML pipeline deployment server for the "
        f"{self.deployment.name} deployment"
    )
    docs_url_path: Optional[str] = None
    redoc_url_path: Optional[str] = None
    if self.settings.endpoint_enabled(DeploymentDefaultEndpoints.DOCS):
        docs_url_path = self.settings.docs_url_path
    if self.settings.endpoint_enabled(DeploymentDefaultEndpoints.REDOC):
        redoc_url_path = self.settings.redoc_url_path

    fastapi_kwargs: Dict[str, Any] = dict(
        title=title,
        description=description,
        version=self.settings.app_version
        if self.settings.app_version is not None
        else zenml_version,
        root_path=self.settings.root_url_path,
        docs_url=docs_url_path,
        redoc_url=redoc_url_path,
        lifespan=self.lifespan,
    )
    fastapi_kwargs.update(self.settings.app_kwargs)

    asgi_app = FastAPI(**fastapi_kwargs)

    # Save this so it's available for the middleware, endpoint adapters and
    # extensions
    self._asgi_app = cast(ASGIApplication, asgi_app)

    # Bind the app runner to the app state
    asgi_app.state.app_runner = self
    asgi_app.exception_handler(Exception)(self.error_handler)

    self.register_middlewares(*middlewares)
    self.register_endpoints(*endpoints)
    self.install_extensions(*extensions)

    return self._asgi_app
error_handler(request: Request, exc: ValueError) -> JSONResponse

FastAPI error handler.

Parameters:

Name Type Description Default
request Request

The request.

required
exc ValueError

The exception.

required

Returns:

Type Description
JSONResponse

The error response.

Source code in src/zenml/deployers/server/fastapi/app.py
def error_handler(self, request: Request, exc: ValueError) -> JSONResponse:
    """FastAPI error handler.

    Args:
        request: The request.
        exc: The exception.

    Returns:
        The error response.
    """
    logger.error("Error in request: %s", exc)
    return JSONResponse(status_code=500, content={"detail": str(exc)})
lifespan(app: FastAPI) -> AsyncGenerator[None, None] async

Manage the deployment application lifespan.

Parameters:

Name Type Description Default
app FastAPI

The FastAPI application instance being deployed.

required

Yields:

Name Type Description
None AsyncGenerator[None, None]

Control is handed back to FastAPI once initialization completes.

Source code in src/zenml/deployers/server/fastapi/app.py
@asynccontextmanager
async def lifespan(self, app: FastAPI) -> AsyncGenerator[None, None]:
    """Manage the deployment application lifespan.

    Args:
        app: The FastAPI application instance being deployed.

    Yields:
        None: Control is handed back to FastAPI once initialization completes.
    """
    # Set the maximum number of worker threads
    to_thread.current_default_thread_limiter().total_tokens = (
        self.settings.thread_pool_size
    )

    self.startup()

    yield

    self.shutdown()
Functions
models

FastAPI application models.

Classes
AppInfo

Bases: BaseModel

App info model.

BaseDeploymentInvocationRequest

Bases: BaseModel

Base pipeline invoke request model.

BaseDeploymentInvocationResponse

Bases: BaseModel

Base pipeline invoke response model.

DeploymentInfo

Bases: BaseModel

Deployment info model.

DeploymentInvocationResponseMetadata

Bases: BaseModel

Pipeline invoke response metadata model.

ExecutionMetrics

Bases: BaseModel

Execution metrics model.

PipelineInfo

Bases: BaseModel

Pipeline info model.

ServiceInfo

Bases: BaseModel

Service info model.

SnapshotInfo

Bases: BaseModel

Snapshot info model.

Functions
runtime

Thread-safe runtime context for deployments.

This module provides request-scoped state for deployment invocations using contextvars to ensure thread safety and proper request isolation. Each deployment request gets its own isolated context that doesn't interfere with concurrent requests.

It also provides parameter override functionality for the orchestrator to access deployment parameters without tight coupling.
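The request isolation described above can be sketched with the standard library alone. The names below (_state, begin_request, record_output, handle) are illustrative, not the module's actual internals:

```python
import contextvars

# Each request context sees its own copy of this variable; concurrent
# requests (threads or async tasks) cannot observe each other's state.
_state: contextvars.ContextVar[dict] = contextvars.ContextVar("deployment_state")


def begin_request(request_id: str, parameters: dict) -> None:
    _state.set({"request_id": request_id, "parameters": parameters, "outputs": {}})


def record_output(step: str, value) -> None:
    _state.get()["outputs"][step] = value


def handle(request_id: str, parameters: dict) -> dict:
    begin_request(request_id, parameters)
    record_output("trainer", parameters["x"] * 2)
    return _state.get()["outputs"]


# Run each "request" in its own Context, as an ASGI server effectively does.
out_a = contextvars.copy_context().run(handle, "req-a", {"x": 1})
out_b = contextvars.copy_context().run(handle, "req-b", {"x": 10})
```

Because each handler runs in a copied Context, the state set for req-a never leaks into req-b, which is the property the runtime module relies on.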

Classes Functions
get_in_memory_data(uri: str) -> Any

Get data from memory for the given URI.

Parameters:

Name Type Description Default
uri str

The artifact URI to retrieve data for.

required

Returns:

Type Description
Any

The stored data, or None if not found.

Source code in src/zenml/deployers/server/runtime.py
def get_in_memory_data(uri: str) -> Any:
    """Get data from memory for the given URI.

    Args:
        uri: The artifact URI to retrieve data for.

    Returns:
        The stored data, or None if not found.
    """
    if is_active():
        state = _get_context()
        return state.in_memory_data[uri]
    return None
get_outputs() -> Dict[str, Dict[str, Any]]

Return the outputs for all steps in the current context.

Returns:

Type Description
Dict[str, Dict[str, Any]]

A dictionary of outputs for all steps.

Source code in src/zenml/deployers/server/runtime.py
def get_outputs() -> Dict[str, Dict[str, Any]]:
    """Return the outputs for all steps in the current context.

    Returns:
        A dictionary of outputs for all steps.
    """
    return dict(_get_context().outputs)
is_active() -> bool

Return whether deployment state is active in the current context.

Returns:

Type Description
bool

True if the deployment state is active in the current context, False otherwise.

Source code in src/zenml/deployers/server/runtime.py
def is_active() -> bool:
    """Return whether deployment state is active in the current context.

    Returns:
        True if the deployment state is active in the current context, False otherwise.
    """
    return _get_context().active
put_in_memory_data(uri: str, data: Any) -> None

Store data in memory for the given URI.

Parameters:

Name Type Description Default
uri str

The artifact URI to store data for.

required
data Any

The data to store in memory.

required
Source code in src/zenml/deployers/server/runtime.py
def put_in_memory_data(uri: str, data: Any) -> None:
    """Store data in memory for the given URI.

    Args:
        uri: The artifact URI to store data for.
        data: The data to store in memory.
    """
    if is_active():
        state = _get_context()
        state.in_memory_data[uri] = data
record_step_outputs(step_name: str, outputs: Dict[str, Any]) -> None

Record raw outputs for a step by invocation id.

Parameters:

Name Type Description Default
step_name str

The name of the step to record the outputs for.

required
outputs Dict[str, Any]

A dictionary of outputs to record.

required
Source code in src/zenml/deployers/server/runtime.py
def record_step_outputs(step_name: str, outputs: Dict[str, Any]) -> None:
    """Record raw outputs for a step by invocation id.

    Args:
        step_name: The name of the step to record the outputs for.
        outputs: A dictionary of outputs to record.
    """
    state = _get_context()
    if not state.active:
        return
    if not outputs:
        return
    state.outputs.setdefault(step_name, {}).update(outputs)
should_skip_artifact_materialization() -> bool

Check if the current request should skip artifact materialization.

Returns:

Type Description
bool

True if artifact materialization is skipped for this request.

Source code in src/zenml/deployers/server/runtime.py
def should_skip_artifact_materialization() -> bool:
    """Check if the current request should skip artifact materialization.

    Returns:
        True if artifact materialization is skipped for this request.
    """
    if is_active():
        state = _get_context()
        return state.skip_artifact_materialization
    return False
start(request_id: str, snapshot: PipelineSnapshotResponse, parameters: Dict[str, Any], skip_artifact_materialization: bool = False) -> None

Initialize deployment state for the current request context.

Parameters:

Name Type Description Default
request_id str

The ID of the request.

required
snapshot PipelineSnapshotResponse

The snapshot to deploy.

required
parameters Dict[str, Any]

The parameters to deploy.

required
skip_artifact_materialization bool

Whether to skip artifact materialization.

False
Source code in src/zenml/deployers/server/runtime.py
def start(
    request_id: str,
    snapshot: PipelineSnapshotResponse,
    parameters: Dict[str, Any],
    skip_artifact_materialization: bool = False,
) -> None:
    """Initialize deployment state for the current request context.

    Args:
        request_id: The ID of the request.
        snapshot: The snapshot to deploy.
        parameters: The parameters to deploy.
        skip_artifact_materialization: Whether to skip artifact materialization.
    """
    state = _DeploymentState()
    state.active = True
    state.request_id = request_id
    state.snapshot_id = str(snapshot.id)
    state.pipeline_parameters = parameters
    state.outputs = {}
    state.skip_artifact_materialization = skip_artifact_materialization
    _deployment_context.set(state)
stop() -> None

Clear the deployment state for the current request context.

Source code in src/zenml/deployers/server/runtime.py
def stop() -> None:
    """Clear the deployment state for the current request context."""
    state = _get_context()
    state.reset()
service

Pipeline deployment service.

Classes
BasePipelineDeploymentService(app_runner: BaseDeploymentAppRunner, **kwargs: Any)

Bases: ABC

Abstract base class for pipeline deployment services.

Subclasses must implement lifecycle management, execution, health, and schema accessors. This contract enables swapping implementations via import-source configuration without modifying the FastAPI app wiring code.
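Swapping implementations via import-source configuration generally comes down to resolving a dotted path string to a class at startup. A hedged, generic sketch of that mechanism — load_class is illustrative, not ZenML's actual resolver:

```python
import importlib


def load_class(import_source: str) -> type:
    """Resolve a 'package.module.ClassName' string to the class object."""
    module_path, _, class_name = import_source.rpartition(".")
    module = importlib.import_module(module_path)
    return getattr(module, class_name)


# Resolving a stdlib class the same way a configured service class would be:
cls = load_class("collections.OrderedDict")
```

A deployment could then instantiate `load_class(configured_source)(app_runner)` without the FastAPI wiring code ever importing the concrete service class directly.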

Initialize the deployment service.

Parameters:

Name Type Description Default
app_runner BaseDeploymentAppRunner

The deployment application runner used with this service.

required
**kwargs Any

Additional keyword arguments for the deployment service.

{}

Raises:

Type Description
RuntimeError

If snapshot cannot be loaded.

Source code in src/zenml/deployers/server/service.py
def __init__(
    self, app_runner: "BaseDeploymentAppRunner", **kwargs: Any
) -> None:
    """Initialize the deployment service.

    Args:
        app_runner: The deployment application runner used with this service.
        **kwargs: Additional keyword arguments for the deployment service.

    Raises:
        RuntimeError: If snapshot cannot be loaded.
    """
    self.app_runner = app_runner
    self.deployment = app_runner.deployment
    if self.deployment.snapshot is None:
        raise RuntimeError("Deployment has no snapshot")
    self.snapshot = self.deployment.snapshot
Attributes
input_model: Type[BaseModel] property

Construct a Pydantic model representing pipeline input parameters.

Load the pipeline class from pipeline_spec.source and derive the entrypoint signature types to create a dynamic Pydantic model (extra='forbid') to use for parameter validation.

Returns:

Type Description
Type[BaseModel]

A Pydantic BaseModel subclass that validates the pipeline input

Type[BaseModel]

parameters.

Raises:

Type Description
RuntimeError

If the pipeline class cannot be loaded or if no parameters model can be constructed for the pipeline.
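Deriving a validation model from an entrypoint signature, as input_model does, can be sketched with pydantic.create_model. This is a simplified illustration assuming Pydantic v2 (my_pipeline is a made-up entrypoint; real resolution of the pipeline source and its defaults is more involved):

```python
import inspect

from pydantic import ConfigDict, create_model


def my_pipeline(epochs: int, lr: float = 0.001) -> None:
    ...


# Build (type, default) field specs from the entrypoint signature;
# Ellipsis marks a parameter without a default as required.
fields = {}
for name, param in inspect.signature(my_pipeline).parameters.items():
    default = ... if param.default is inspect.Parameter.empty else param.default
    fields[name] = (param.annotation, default)

# extra='forbid' rejects parameters the pipeline does not declare.
InputModel = create_model(
    "MyPipelineInput", __config__=ConfigDict(extra="forbid"), **fields
)
```

InputModel(epochs=3) then validates and fills in lr=0.001, while an undeclared keyword raises a ValidationError.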

input_schema: Dict[str, Any] property

Return the JSON schema for pipeline input parameters.

Returns:

Type Description
Dict[str, Any]

The JSON schema for pipeline parameters.

Raises:

Type Description
RuntimeError

If the pipeline input schema is not available.

output_schema: Dict[str, Any] property

Return the JSON schema for the pipeline outputs.

Returns:

Type Description
Dict[str, Any]

The JSON schema for the pipeline outputs.

Raises:

Type Description
RuntimeError

If the pipeline output schema is not available.

Functions
cleanup() -> None abstractmethod

Cleanup service resources and run cleanup hooks.

Source code in src/zenml/deployers/server/service.py
@abstractmethod
def cleanup(self) -> None:
    """Cleanup service resources and run cleanup hooks."""
execute_pipeline(request: BaseDeploymentInvocationRequest) -> BaseDeploymentInvocationResponse abstractmethod

Execute the deployment with the given parameters.

Parameters:

Name Type Description Default
request BaseDeploymentInvocationRequest

Runtime parameters supplied by the caller.

required

Returns:

Type Description
BaseDeploymentInvocationResponse

A BaseDeploymentInvocationResponse describing the execution result.

Source code in src/zenml/deployers/server/service.py
@abstractmethod
def execute_pipeline(
    self, request: BaseDeploymentInvocationRequest
) -> BaseDeploymentInvocationResponse:
    """Execute the deployment with the given parameters.

    Args:
        request: Runtime parameters supplied by the caller.

    Returns:
        A BaseDeploymentInvocationResponse describing the execution result.
    """
get_execution_metrics() -> ExecutionMetrics abstractmethod

Return lightweight execution metrics for observability.

Returns:

Type Description
ExecutionMetrics

A dictionary containing execution metrics.

Source code in src/zenml/deployers/server/service.py
@abstractmethod
def get_execution_metrics(self) -> ExecutionMetrics:
    """Return lightweight execution metrics for observability.

    Returns:
        A dictionary containing execution metrics.
    """
get_pipeline_invoke_models() -> Tuple[Type[BaseModel], Type[BaseModel]]

Generate the request and response models for the pipeline invoke endpoint.

Returns:

Type Description
Tuple[Type[BaseModel], Type[BaseModel]]

A tuple containing the request and response models.

Source code in src/zenml/deployers/server/service.py
def get_pipeline_invoke_models(
    self,
) -> Tuple[Type[BaseModel], Type[BaseModel]]:
    """Generate the request and response models for the pipeline invoke endpoint.

    Returns:
        A tuple containing the request and response models.
    """
    if TYPE_CHECKING:
        # mypy has a difficult time with dynamic models, so we return something
        # static for mypy to use
        return BaseModel, BaseModel

    else:

        class PipelineInvokeRequest(BaseDeploymentInvocationRequest):
            parameters: Annotated[
                self.input_model,
                WithJsonSchema(self.input_schema, mode="validation"),
            ]

        class PipelineInvokeResponse(BaseDeploymentInvocationResponse):
            outputs: Annotated[
                Optional[Dict[str, Any]],
                WithJsonSchema(self.output_schema, mode="serialization"),
            ]

        return PipelineInvokeRequest, PipelineInvokeResponse
get_service_info() -> ServiceInfo abstractmethod

Get service information.

Returns:

Type Description
ServiceInfo

A dictionary containing service information.

Source code in src/zenml/deployers/server/service.py
@abstractmethod
def get_service_info(self) -> ServiceInfo:
    """Get service information.

    Returns:
        A dictionary containing service information.
    """
health_check() -> None abstractmethod

Check service health.

Raises:

Type Description
RuntimeError

If the service is not healthy.

Source code in src/zenml/deployers/server/service.py
@abstractmethod
def health_check(self) -> None:
    """Check service health.

    Raises:
        RuntimeError: If the service is not healthy.
    """
initialize() -> None abstractmethod

Initialize service resources and run init hooks.

Raises:

Type Description
Exception

If the service cannot be initialized.

Source code in src/zenml/deployers/server/service.py
@abstractmethod
def initialize(self) -> None:
    """Initialize service resources and run init hooks.

    Raises:
        Exception: If the service cannot be initialized.
    """
PipelineDeploymentService(app_runner: BaseDeploymentAppRunner, **kwargs: Any)

Bases: BasePipelineDeploymentService

Default pipeline deployment service implementation.

Source code in src/zenml/deployers/server/service.py
def __init__(
    self, app_runner: "BaseDeploymentAppRunner", **kwargs: Any
) -> None:
    """Initialize the deployment service.

    Args:
        app_runner: The deployment application runner used with this service.
        **kwargs: Additional keyword arguments for the deployment service.

    Raises:
        RuntimeError: If snapshot cannot be loaded.
    """
    self.app_runner = app_runner
    self.deployment = app_runner.deployment
    if self.deployment.snapshot is None:
        raise RuntimeError("Deployment has no snapshot")
    self.snapshot = self.deployment.snapshot
Functions
cleanup() -> None

Execute cleanup hook if present.

Source code in src/zenml/deployers/server/service.py
def cleanup(self) -> None:
    """Execute cleanup hook if present."""
    BaseOrchestrator.run_cleanup_hook(self.snapshot)
execute_pipeline(request: BaseDeploymentInvocationRequest) -> BaseDeploymentInvocationResponse

Execute the deployment with the given parameters.

Parameters:

Name Type Description Default
request BaseDeploymentInvocationRequest

Runtime parameters supplied by the caller.

required

Returns:

Type Description
BaseDeploymentInvocationResponse

A BaseDeploymentInvocationResponse describing the execution result.

Source code in src/zenml/deployers/server/service.py
def execute_pipeline(
    self,
    request: BaseDeploymentInvocationRequest,
) -> BaseDeploymentInvocationResponse:
    """Execute the deployment with the given parameters.

    Args:
        request: Runtime parameters supplied by the caller.

    Returns:
        A BaseDeploymentInvocationResponse describing the execution result.
    """
    # Unused parameters for future implementation
    _ = request.run_name, request.timeout
    parameters = request.parameters.model_dump()
    start_time = time.time()
    logger.info("Starting pipeline execution")

    placeholder_run: Optional[PipelineRunResponse] = None
    try:
        # Create a placeholder run separately from the actual execution,
        # so that we have a run ID to include in the response even if the
        # pipeline execution fails.
        placeholder_run, deployment_snapshot = (
            self._prepare_execute_with_orchestrator(
                resolved_params=parameters,
            )
        )

        captured_outputs = self._execute_with_orchestrator(
            placeholder_run=placeholder_run,
            deployment_snapshot=deployment_snapshot,
            resolved_params=parameters,
            skip_artifact_materialization=request.skip_artifact_materialization,
        )

        # Map outputs using fast (in-memory) or slow (artifact) path
        mapped_outputs = self._map_outputs(captured_outputs)

        return self._build_response(
            placeholder_run=placeholder_run,
            mapped_outputs=mapped_outputs,
            start_time=start_time,
            resolved_params=parameters,
        )

    except Exception as e:
        logger.error(f"❌ Pipeline execution failed: {e}")
        return self._build_response(
            placeholder_run=placeholder_run,
            mapped_outputs=None,
            start_time=start_time,
            resolved_params=parameters,
            error=e,
        )
get_execution_metrics() -> ExecutionMetrics

Return lightweight execution metrics for observability.

Returns:

Type Description
ExecutionMetrics

Aggregated execution metrics.

Source code in src/zenml/deployers/server/service.py
def get_execution_metrics(self) -> ExecutionMetrics:
    """Return lightweight execution metrics for observability.

    Returns:
        Aggregated execution metrics.
    """
    return ExecutionMetrics(
        total_executions=self.total_executions,
        last_execution_time=self.last_execution_time,
    )
get_service_info() -> ServiceInfo

Get service information.

Returns:

Type Description
ServiceInfo

A dictionary containing service information.

Source code in src/zenml/deployers/server/service.py
def get_service_info(self) -> ServiceInfo:
    """Get service information.

    Returns:
        A dictionary containing service information.
    """
    uptime = time.time() - self.service_start_time
    settings = self.app_runner.settings
    api_urlpath = f"{self.app_runner.settings.root_url_path}{self.app_runner.settings.api_url_path}"
    return ServiceInfo(
        deployment=DeploymentInfo(
            id=self.deployment.id,
            name=self.deployment.name,
            auth_enabled=self.deployment.auth_key is not None,
        ),
        snapshot=SnapshotInfo(
            id=self.snapshot.id,
            name=self.snapshot.name,
        ),
        pipeline=PipelineInfo(
            name=self.snapshot.pipeline_configuration.name,
            parameters=self.snapshot.pipeline_spec.parameters
            if self.snapshot.pipeline_spec
            else None,
            input_schema=self.input_schema,
            output_schema=self.output_schema,
        ),
        app=AppInfo(
            app_runner_flavor=self.app_runner.flavor.name,
            docs_url_path=settings.docs_url_path,
            redoc_url_path=settings.redoc_url_path,
            invoke_url_path=api_urlpath + settings.invoke_url_path,
            health_url_path=api_urlpath + settings.health_url_path,
            info_url_path=api_urlpath + settings.info_url_path,
            metrics_url_path=api_urlpath + settings.metrics_url_path,
        ),
        total_executions=self.total_executions,
        last_execution_time=self.last_execution_time,
        status="healthy",
        uptime=uptime,
    )
health_check() -> None

Check service health.

Source code in src/zenml/deployers/server/service.py
def health_check(self) -> None:
    """Check service health."""
    pass
initialize() -> None

Initialize service with proper error handling.

Raises:

Type Description
Exception

If the service cannot be initialized.

Source code in src/zenml/deployers/server/service.py
def initialize(self) -> None:
    """Initialize service with proper error handling.

    Raises:
        Exception: If the service cannot be initialized.
    """
    self._client = Client()

    # Execution tracking
    self.service_start_time = time.time()
    self.last_execution_time: Optional[datetime] = None
    self.total_executions = 0

    # Cache a local orchestrator instance to avoid per-request construction
    self._orchestrator = SharedLocalOrchestrator(
        name="deployment-local",
        id=uuid4(),
        config=LocalOrchestratorConfig(),
        flavor="local",
        type=StackComponentType.ORCHESTRATOR,
        user=uuid4(),
        created=datetime.now(),
        updated=datetime.now(),
    )

    try:
        # Execute init hook
        BaseOrchestrator.run_init_hook(self.snapshot)

        # Log success
        self._log_initialization_success()

    except Exception as e:
        logger.error(f"❌ Failed to initialize service: {e}")
        logger.error(f"   Traceback: {traceback.format_exc()}")
        raise
SharedLocalOrchestrator(name: str, id: UUID, config: StackComponentConfig, flavor: str, type: StackComponentType, user: Optional[UUID], created: datetime, updated: datetime, environment: Optional[Dict[str, str]] = None, secrets: Optional[List[UUID]] = None, labels: Optional[Dict[str, Any]] = None, connector_requirements: Optional[ServiceConnectorRequirements] = None, connector: Optional[UUID] = None, connector_resource_id: Optional[str] = None, *args: Any, **kwargs: Any)

Bases: LocalOrchestrator

Local orchestrator tweaked for deployments.

This is a slight modification of the LocalOrchestrator that:

  • uses request-scoped orchestrator run ids by storing them in contextvars
  • bypasses the init/cleanup hook execution, because these hooks are run globally by the deployment service

Source code in src/zenml/stack/stack_component.py
def __init__(
    self,
    name: str,
    id: UUID,
    config: StackComponentConfig,
    flavor: str,
    type: StackComponentType,
    user: Optional[UUID],
    created: datetime,
    updated: datetime,
    environment: Optional[Dict[str, str]] = None,
    secrets: Optional[List[UUID]] = None,
    labels: Optional[Dict[str, Any]] = None,
    connector_requirements: Optional[ServiceConnectorRequirements] = None,
    connector: Optional[UUID] = None,
    connector_resource_id: Optional[str] = None,
    *args: Any,
    **kwargs: Any,
):
    """Initializes a StackComponent.

    Args:
        name: The name of the component.
        id: The unique ID of the component.
        config: The config of the component.
        flavor: The flavor of the component.
        type: The type of the component.
        user: The ID of the user who created the component.
        created: The creation time of the component.
        updated: The last update time of the component.
        environment: Environment variables to set when running on this
            component.
        secrets: Secrets to set as environment variables when running on
            this component.
        labels: The labels of the component.
        connector_requirements: The requirements for the connector.
        connector: The ID of a connector linked to the component.
        connector_resource_id: The custom resource ID to access through
            the connector.
        *args: Additional positional arguments.
        **kwargs: Additional keyword arguments.

    Raises:
        ValueError: If a secret reference is passed as name.
    """
    if secret_utils.is_secret_reference(name):
        raise ValueError(
            "Passing the `name` attribute of a stack component as a "
            "secret reference is not allowed."
        )

    self.id = id
    self.name = name
    self._config = config
    self.flavor = flavor
    self.type = type
    self.user = user
    self.created = created
    self.updated = updated
    self.labels = labels
    self.environment = environment or {}
    self.secrets = secrets or []
    self.connector_requirements = connector_requirements
    self.connector = connector
    self.connector_resource_id = connector_resource_id
    self._connector_instance: Optional[ServiceConnector] = None
Functions
get_orchestrator_run_id() -> str

Get the orchestrator run id.

Returns:

Type Description
str

The orchestrator run id.

Source code in src/zenml/deployers/server/service.py
def get_orchestrator_run_id(self) -> str:
    """Get the orchestrator run id.

    Returns:
        The orchestrator run id.
    """
    run_id = self._shared_orchestrator_run_id.get()
    if run_id is None:
        run_id = str(uuid4())
        self._shared_orchestrator_run_id.set(run_id)
    return run_id
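The contextvars-backed pattern above can be demonstrated in isolation. The following is a minimal standalone sketch of the same idea (not the ZenML class itself): each execution context lazily creates its own run id, so concurrent requests never share one.

```python
import uuid
from contextvars import ContextVar
from typing import Optional

# Request-scoped storage: each copied context sees its own value.
_run_id: ContextVar[Optional[str]] = ContextVar("run_id", default=None)


def get_run_id() -> str:
    """Return the run id for the current context, creating one if needed."""
    run_id = _run_id.get()
    if run_id is None:
        run_id = str(uuid.uuid4())
        _run_id.set(run_id)
    return run_id
```

Within one context repeated calls return the same id, while a fresh `contextvars.Context()` (as created per request by async frameworks) yields a new one.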
run_cleanup_hook(snapshot: PipelineSnapshotResponse) -> None classmethod

Runs the cleanup hook.

Parameters:

Name Type Description Default
snapshot PipelineSnapshotResponse

The snapshot to run the cleanup hook for.

required
Source code in src/zenml/deployers/server/service.py
@classmethod
def run_cleanup_hook(cls, snapshot: "PipelineSnapshotResponse") -> None:
    """Runs the cleanup hook.

    Args:
        snapshot: The snapshot to run the cleanup hook for.
    """
    # Bypass the cleanup hook execution because it is run globally by
    # the deployment service
    pass
run_init_hook(snapshot: PipelineSnapshotResponse) -> None classmethod

Runs the init hook.

Parameters:

Name Type Description Default
snapshot PipelineSnapshotResponse

The snapshot to run the init hook for.

required
Source code in src/zenml/deployers/server/service.py
@classmethod
def run_init_hook(cls, snapshot: "PipelineSnapshotResponse") -> None:
    """Runs the init hook.

    Args:
        snapshot: The snapshot to run the init hook for.
    """
    # Bypass the init hook execution because it is run globally by
    # the deployment service
    pass
Functions

Modules

utils

ZenML deployers utilities.

Classes
Functions
deployment_snapshot_request_from_source_snapshot(source_snapshot: PipelineSnapshotResponse, deployment_parameters: Dict[str, Any]) -> PipelineSnapshotRequest

Generate a snapshot request for deployment execution.

Parameters:

Name Type Description Default
source_snapshot PipelineSnapshotResponse

The source snapshot from which to create the snapshot request.

required
deployment_parameters Dict[str, Any]

Parameters to override for deployment execution.

required

Raises:

Type Description
RuntimeError

If the source snapshot does not have an associated stack.

Returns:

Type Description
PipelineSnapshotRequest

The generated snapshot request.

Source code in src/zenml/deployers/utils.py
def deployment_snapshot_request_from_source_snapshot(
    source_snapshot: PipelineSnapshotResponse,
    deployment_parameters: Dict[str, Any],
) -> PipelineSnapshotRequest:
    """Generate a snapshot request for deployment execution.

    Args:
        source_snapshot: The source snapshot from which to create the
            snapshot request.
        deployment_parameters: Parameters to override for deployment execution.

    Raises:
        RuntimeError: If the source snapshot does not have an associated stack.

    Returns:
        The generated snapshot request.
    """
    if source_snapshot.stack is None:
        raise RuntimeError("Missing source snapshot stack")

    pipeline_configuration = pydantic_utils.update_model(
        source_snapshot.pipeline_configuration, {"enable_cache": False}
    )

    steps = {}
    for invocation_id, step in source_snapshot.step_configurations.items():
        updated_step_parameters = step.config.parameters.copy()

        for param_name in step.config.parameters:
            if param_name in deployment_parameters:
                updated_step_parameters[param_name] = deployment_parameters[
                    param_name
                ]

        # Deployment-specific step overrides
        step_update = {
            "enable_cache": False,  # Disable caching for all steps
            "step_operator": None,  # Remove step operators for deployments
            "retry": None,  # Remove retry configuration
            "parameters": updated_step_parameters,
        }

        step_config = pydantic_utils.update_model(
            step.step_config_overrides, step_update
        )
        merged_step_config = step_config.apply_pipeline_configuration(
            pipeline_configuration
        )

        steps[invocation_id] = Step(
            spec=step.spec,
            config=merged_step_config,
            step_config_overrides=step_config,
        )

    code_reference_request = None
    if source_snapshot.code_reference:
        code_reference_request = CodeReferenceRequest(
            commit=source_snapshot.code_reference.commit,
            subdirectory=source_snapshot.code_reference.subdirectory,
            code_repository=source_snapshot.code_reference.code_repository.id,
        )

    zenml_version = Client().zen_store.get_store_info().version

    # Compute the source snapshot ID:
    # - If the source snapshot has a name, we use it as the source snapshot.
    #   That way, all runs will be associated with this snapshot.
    # - If the source snapshot is based on another snapshot (which therefore
    #   has a name), we use that one instead.
    # - If the source snapshot does not have a name and is not based on another
    #   snapshot, we don't set a source snapshot.
    #
    # With this, we ensure that all runs are associated with the closest named
    # source snapshot.
    source_snapshot_id = None
    if source_snapshot.name:
        source_snapshot_id = source_snapshot.id
    elif source_snapshot.source_snapshot_id:
        source_snapshot_id = source_snapshot.source_snapshot_id

    updated_pipeline_spec = source_snapshot.pipeline_spec
    if (
        source_snapshot.pipeline_spec
        and source_snapshot.pipeline_spec.parameters is not None
    ):
        original_params: Dict[str, Any] = dict(
            source_snapshot.pipeline_spec.parameters
        )
        merged_params: Dict[str, Any] = original_params.copy()
        for k, v in deployment_parameters.items():
            if k in original_params:
                merged_params[k] = v
        updated_pipeline_spec = pydantic_utils.update_model(
            source_snapshot.pipeline_spec, {"parameters": merged_params}
        )

    return PipelineSnapshotRequest(
        project=source_snapshot.project_id,
        run_name_template=source_snapshot.run_name_template,
        pipeline_configuration=pipeline_configuration,
        step_configurations=steps,
        client_environment={},
        client_version=zenml_version,
        server_version=zenml_version,
        stack=source_snapshot.stack.id,
        pipeline=source_snapshot.pipeline.id,
        schedule=None,
        code_reference=code_reference_request,
        code_path=source_snapshot.code_path,
        build=source_snapshot.build.id if source_snapshot.build else None,
        source_snapshot=source_snapshot_id,
        pipeline_version_hash=source_snapshot.pipeline_version_hash,
        pipeline_spec=updated_pipeline_spec,
    )
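The parameter-override rule used above is worth spelling out: a deployment parameter only replaces a step parameter whose name already exists in that step's configuration; unknown names are silently ignored. A minimal sketch of just that merge rule, under hypothetical names:

```python
from typing import Any, Dict


def merge_step_parameters(
    step_params: Dict[str, Any], deployment_params: Dict[str, Any]
) -> Dict[str, Any]:
    """Override only the keys already present in the step's parameters."""
    merged = step_params.copy()
    for name in step_params:
        if name in deployment_params:
            merged[name] = deployment_params[name]
    return merged
```

So `merge_step_parameters({"a": 1, "b": 2}, {"b": 9, "c": 7})` keeps `a`, overrides `b`, and drops the unknown `c`.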
get_deployment_input_schema(deployment: DeploymentResponse) -> Dict[str, Any]

Get the schema for a deployment's input parameters.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment for which to get the schema.

required

Returns:

Type Description
Dict[str, Any]

The schema for the deployment's input parameters.

Raises:

Type Description
RuntimeError

If the deployment has no associated input schema.

Source code in src/zenml/deployers/utils.py
def get_deployment_input_schema(
    deployment: DeploymentResponse,
) -> Dict[str, Any]:
    """Get the schema for a deployment's input parameters.

    Args:
        deployment: The deployment for which to get the schema.

    Returns:
        The schema for the deployment's input parameters.

    Raises:
        RuntimeError: If the deployment has no associated input schema.
    """
    if (
        deployment.snapshot
        and deployment.snapshot.pipeline_spec
        and deployment.snapshot.pipeline_spec.input_schema
    ):
        return deployment.snapshot.pipeline_spec.input_schema

    raise RuntimeError(
        f"Deployment {deployment.name} has no associated input schema."
    )
get_deployment_invocation_example(deployment: DeploymentResponse) -> Dict[str, Any]

Generate an example invocation command for a deployment.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment for which to generate an example invocation.

required

Returns:

Type Description
Dict[str, Any]

A dictionary containing the example invocation parameters.

Source code in src/zenml/deployers/utils.py
def get_deployment_invocation_example(
    deployment: DeploymentResponse,
) -> Dict[str, Any]:
    """Generate an example invocation command for a deployment.

    Args:
        deployment: The deployment for which to generate an example invocation.

    Returns:
        A dictionary containing the example invocation parameters.
    """
    parameters_schema = get_deployment_input_schema(deployment)

    properties = parameters_schema.get("properties", {})

    if not properties:
        return {}

    parameters = {}

    for attr_name, attr_schema in properties.items():
        parameters[attr_name] = "<value>"
        if not isinstance(attr_schema, dict):
            continue

        default_value = None

        if "default" in attr_schema:
            default_value = attr_schema["default"]
        elif "const" in attr_schema:
            default_value = attr_schema["const"]

        parameters[attr_name] = default_value or "<value>"

    return parameters
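The fallback logic above can be condensed into a small standalone sketch (hypothetical helper, not the ZenML function): every property starts as the `"<value>"` placeholder, and a `default` (or, failing that, a `const`) from the JSON schema replaces it. Note that, like the source, a falsy default such as `0` also falls back to the placeholder.

```python
from typing import Any, Dict


def build_invocation_example(schema: Dict[str, Any]) -> Dict[str, Any]:
    """Build example parameters from a JSON schema's properties."""
    parameters: Dict[str, Any] = {}
    for name, attr in schema.get("properties", {}).items():
        parameters[name] = "<value>"
        if not isinstance(attr, dict):
            continue
        # Prefer an explicit default, then a const, then the placeholder.
        default = attr.get("default", attr.get("const"))
        parameters[name] = default or "<value>"
    return parameters
```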
get_deployment_output_schema(deployment: DeploymentResponse) -> Dict[str, Any]

Get the schema for a deployment's output parameters.

Parameters:

Name Type Description Default
deployment DeploymentResponse

The deployment for which to get the schema.

required

Returns:

Type Description
Dict[str, Any]

The schema for the deployment's output parameters.

Raises:

Type Description
RuntimeError

If the deployment has no associated output schema.

Source code in src/zenml/deployers/utils.py
def get_deployment_output_schema(
    deployment: DeploymentResponse,
) -> Dict[str, Any]:
    """Get the schema for a deployment's output parameters.

    Args:
        deployment: The deployment for which to get the schema.

    Returns:
        The schema for the deployment's output parameters.

    Raises:
        RuntimeError: If the deployment has no associated output schema.
    """
    if (
        deployment.snapshot
        and deployment.snapshot.pipeline_spec
        and deployment.snapshot.pipeline_spec.output_schema
    ):
        return deployment.snapshot.pipeline_spec.output_schema

    raise RuntimeError(
        f"Deployment {deployment.name} has no associated output schema."
    )
invoke_deployment(deployment_name_or_id: Union[str, UUID], project: Optional[UUID] = None, timeout: int = 300, **kwargs: Any) -> Any

Call a deployment and return the result.

Parameters:

Name Type Description Default
deployment_name_or_id Union[str, UUID]

The name or ID of the deployment to call.

required
project Optional[UUID]

The project ID of the deployment to call.

None
timeout int

The timeout for the HTTP request to the deployment.

300
**kwargs Any

Keyword arguments to pass to the deployment.

{}

Returns:

Type Description
Any

The response from the deployment, parsed as JSON if possible, otherwise returned as text.

Raises:

Type Description
DeploymentNotFoundError

If the deployment is not found.

DeploymentProvisionError

If the deployment is not running or has no URL.

DeploymentHTTPError

If the HTTP request to the endpoint fails.

Source code in src/zenml/deployers/utils.py
def invoke_deployment(
    deployment_name_or_id: Union[str, UUID],
    project: Optional[UUID] = None,
    timeout: int = 300,  # 5 minute timeout
    **kwargs: Any,
) -> Any:
    """Call a deployment and return the result.

    Args:
        deployment_name_or_id: The name or ID of the deployment to call.
        project: The project ID of the deployment to call.
        timeout: The timeout for the HTTP request to the deployment.
        **kwargs: Keyword arguments to pass to the deployment.

    Returns:
        The response from the deployment, parsed as JSON if possible,
        otherwise returned as text.

    Raises:
        DeploymentNotFoundError: If the deployment is not found.
        DeploymentProvisionError: If the deployment is not running
            or has no URL.
        DeploymentHTTPError: If the HTTP request to the endpoint fails.
    """
    client = Client()
    try:
        deployment = client.get_deployment(
            deployment_name_or_id, project=project
        )
    except KeyError:
        raise DeploymentNotFoundError(
            f"Deployment with name or ID '{deployment_name_or_id}' not found"
        )

    if deployment.status != DeploymentStatus.RUNNING:
        raise DeploymentProvisionError(
            f"Deployment {deployment_name_or_id} is not running. Please "
            "refresh or re-deploy the deployment or check its logs for "
            "more details."
        )

    if not deployment.url:
        raise DeploymentProvisionError(
            f"Deployment {deployment_name_or_id} has no URL. Please "
            "refresh the deployment or check its logs for more "
            "details."
        )

    input_schema = None
    if deployment.snapshot and deployment.snapshot.pipeline_spec:
        input_schema = deployment.snapshot.pipeline_spec.input_schema

    if input_schema:
        # Resolve the references in the schema first, otherwise we won't be able
        # to access the data types for object-typed parameters.
        input_schema = jsonref.replace_refs(input_schema)
        assert isinstance(input_schema, dict)

        properties = input_schema.get("properties", {})

        # Some kwargs having one of the collection data types (list, dict) in
        # the schema may be supplied as a JSON string. We need to unpack
        # them before we construct the final JSON payload.
        #
        # We ignore all errors here because they will be better handled by the
        # deployment itself server side.
        for key in kwargs.keys():
            if key not in properties:
                continue
            value = kwargs[key]
            if not isinstance(value, str):
                continue
            attr_schema = properties[key]
            try:
                if attr_schema.get("type") == "object":
                    value = json.loads(value)
                    if isinstance(value, dict):
                        kwargs[key] = value
                elif attr_schema.get("type") == "array":
                    value = json.loads(value)
                    if isinstance(value, list):
                        kwargs[key] = value
            except (json.JSONDecodeError, ValueError):
                pass

    # Serialize kwargs to JSON
    params = dict(parameters=kwargs)
    try:
        payload = json.dumps(params, default=pydantic_encoder)
    except (TypeError, ValueError) as e:
        raise DeploymentHTTPError(
            f"Failed to serialize request data to JSON: {e}"
        )

    invoke_url_path = DEFAULT_DEPLOYMENT_APP_INVOKE_URL_PATH
    if deployment.snapshot:
        deployment_settings = (
            deployment.snapshot.pipeline_configuration.deployment_settings
        )
        invoke_url_path = f"{deployment_settings.root_url_path}{deployment_settings.api_url_path}{deployment_settings.invoke_url_path}"

    # Construct the invoke endpoint URL
    invoke_url = deployment.url.rstrip("/") + invoke_url_path

    # Prepare headers
    headers = {
        "Content-Type": "application/json",
        "Accept": "application/json",
    }

    # Add authorization header if auth_key is present
    if deployment.auth_key:
        headers["Authorization"] = f"Bearer {deployment.auth_key}"

    try:
        step_context = get_step_context()
    except RuntimeError:
        step_context = None

    if step_context:
        # Include these so that the deployment can identify the step
        # and pipeline run that called it, if called from a step.
        headers["ZenML-Step-Name"] = step_context.step_name
        headers["ZenML-Pipeline-Name"] = step_context.pipeline.name
        headers["ZenML-Pipeline-Run-ID"] = str(step_context.pipeline_run.id)
        headers["ZenML-Pipeline-Run-Name"] = step_context.pipeline_run.name

    # Make the HTTP request
    try:
        response = requests.post(
            invoke_url,
            data=payload,
            headers=headers,
            timeout=timeout,
        )
        response.raise_for_status()

        # Try to parse JSON response, fallback to text if not JSON
        try:
            return response.json()
        except ValueError:
            return response.text

    except requests.exceptions.HTTPError as e:
        raise DeploymentHTTPError(
            f"HTTP {e.response.status_code} error calling deployment "
            f"{deployment_name_or_id}: {e.response.text}"
        )
    except requests.exceptions.ConnectionError as e:
        raise DeploymentHTTPError(
            f"Failed to connect to deployment {deployment_name_or_id}: {e}"
        )
    except requests.exceptions.Timeout as e:
        raise DeploymentHTTPError(
            f"Timeout calling deployment {deployment_name_or_id}: {e}"
        )
    except requests.exceptions.RequestException as e:
        raise DeploymentHTTPError(
            f"Request failed for deployment {deployment_name_or_id}: {e}"
        )
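The JSON-string coercion step in the function above (unpacking stringified lists and dicts before building the payload) can be isolated into a testable sketch. This is a hypothetical standalone helper mirroring that logic, with the same deliberate choice to swallow parse errors so the deployment can report them server side:

```python
import json
from typing import Any, Dict


def coerce_collection_kwargs(
    kwargs: Dict[str, Any], properties: Dict[str, Any]
) -> Dict[str, Any]:
    """Parse string values for object/array-typed parameters as JSON."""
    out = dict(kwargs)
    for key, value in kwargs.items():
        schema = properties.get(key)
        if not isinstance(schema, dict) or not isinstance(value, str):
            continue
        expected = {"object": dict, "array": list}.get(schema.get("type"))
        if expected is None:
            continue  # scalar-typed parameter: leave the string as-is
        try:
            parsed = json.loads(value)
        except (json.JSONDecodeError, ValueError):
            continue  # let the server produce the proper validation error
        if isinstance(parsed, expected):
            out[key] = parsed
    return out
```

With this, a CLI-supplied `--items '[1, 2]'` for an array-typed parameter becomes a real list in the payload, while string-typed parameters pass through untouched.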
load_deployment_requirements(deployment_settings: DeploymentSettings) -> List[str]

Load the software requirements for a deployment.

Parameters:

Name Type Description Default
deployment_settings DeploymentSettings

The deployment settings for which to load the software requirements.

required

Returns:

Type Description
List[str]

The software requirements for the deployment.

Raises:

Type Description
RuntimeError

If the deployment app runner flavor cannot be loaded.

Source code in src/zenml/deployers/utils.py
def load_deployment_requirements(
    deployment_settings: DeploymentSettings,
) -> List[str]:
    """Load the software requirements for a deployment.

    Args:
        deployment_settings: The deployment settings for which to load the
            software requirements.

    Returns:
        The software requirements for the deployment.

    Raises:
        RuntimeError: If the deployment app runner flavor cannot be loaded.
    """
    from zenml.deployers.server.app import BaseDeploymentAppRunnerFlavor

    try:
        deployment_app_runner_flavor = (
            BaseDeploymentAppRunnerFlavor.load_app_runner_flavor(
                deployment_settings
            )
        )
    except Exception as e:
        raise RuntimeError(
            f"Failed to load deployment app runner flavor from deployment "
            f"settings: {e}"
        ) from e

    return deployment_app_runner_flavor.requirements
Modules