Welcome to the ZenML API Docs
Console
Logger
Step Operators
Runtime Configuration
Enums
Container Registries
Container Registry
A container registry is a store for (Docker) containers. A ZenML workflow involving a container registry automatically containerizes your code so it can be transported to stacks running remotely. As part of the deployment to the cluster, the ZenML base image is downloaded (from a cloud container registry) and used as the basis for the deployed 'run'.
For instance, when you are running a local container-based stack, you would have a local container registry that stores the container images you create to bundle up your pipeline code. In a more production-oriented setting, you could instead use a remote container registry such as the Elastic Container Registry (ECR) on AWS.
Integrations
The ZenML integrations module contains sub-modules for each integration that we support. This includes orchestrators like Apache Airflow, visualization tools like the facets library, as well as deep learning libraries like PyTorch.
Services
Artifact Stores
Artifact Stores
In ZenML, the inputs and outputs that pass through any step are treated as artifacts, and as its name suggests, an ArtifactStore is a place where these artifacts get stored.
Out of the box, ZenML comes with the BaseArtifactStore and LocalArtifactStore implementations. While the BaseArtifactStore establishes an interface for people who want to extend it to their needs, the LocalArtifactStore is a simple implementation for a local setup.
Moreover, additional artifact stores can be found in specific integrations modules, such as the GCPArtifactStore in the gcp integration and the AzureArtifactStore in the azure integration.
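As a purely illustrative sketch, a local artifact store could be configured along these lines. The import path and the path keyword are assumptions that vary considerably between ZenML versions, so treat this only as an outline:
# Hedged sketch: import path and constructor signature are assumptions.
from zenml.artifact_stores import LocalArtifactStore

# Point the store at a local directory; artifacts produced by steps would be
# written somewhere under this path.
artifact_store = LocalArtifactStore(path="/tmp/zenml/local_store")
print(artifact_store.path)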
Materializers
Materializers are used to convert a ZenML artifact into a specific format. They are most often used to handle the input or output of ZenML steps, and can be extended by building on the BaseMaterializer class.
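As an illustrative sketch of such an extension, a custom materializer for a small user-defined object might look like the following. The import path, the ASSOCIATED_TYPES attribute, the handle_input/handle_return method names and the self.artifact.uri attribute reflect one older ZenML interface and are assumptions; your version may use different names:
# Hedged sketch: method and attribute names are assumptions for older releases.
import os
from zenml.materializers.base_materializer import BaseMaterializer

class MyObj:
    def __init__(self, text: str):
        self.text = text

class MyObjMaterializer(BaseMaterializer):
    # Artifact types this materializer knows how to read and write.
    ASSOCIATED_TYPES = [MyObj]

    def handle_input(self, data_type) -> MyObj:
        # Read the artifact back from its location in the artifact store.
        with open(os.path.join(self.artifact.uri, "data.txt"), "r") as f:
            return MyObj(f.read())

    def handle_return(self, my_obj: MyObj) -> None:
        # Write the artifact into its location in the artifact store.
        with open(os.path.join(self.artifact.uri, "data.txt"), "w") as f:
            f.write(my_obj.text)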
Artifacts
Artifacts are the data that power your experimentation and model training. It is actually steps that produce artifacts, which are then stored in the artifact store. Artifacts are written in the signature of a step like so:
import torch

def my_step(first_artifact: int, second_artifact: torch.nn.Module) -> int:
    # first_artifact is an integer
    # second_artifact is a torch.nn.Module
    return 1
Artifacts can be serialized and deserialized (i.e. written to and read from the Artifact Store) in various ways, like TFRecords or saved model pickles, depending on what the step produces. The serialization and deserialization logic of artifacts is defined by the appropriate Materializer.
Utils
The utils module contains utility functions handling analytics, reading and writing YAML data, as well as other general purpose functions.
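As a small sketch of the YAML helpers (the yaml_utils module name and the read_yaml/write_yaml function names follow older ZenML releases and are assumptions):
# Hedged sketch: function names are assumptions and may differ per version.
from zenml.utils import yaml_utils

yaml_utils.write_yaml("config.yaml", {"epochs": 10, "lr": 0.001})
config = yaml_utils.read_yaml("config.yaml")
print(config["epochs"])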
Constants
Metadata Stores
Metadata Store
The configuration of each pipeline, step, and backend, as well as the artifacts they produce, are all tracked within the metadata store. The metadata store is an SQL database, and can be sqlite or mysql.
Metadata are the pieces of information tracked about the pipelines, experiments and configurations that you are running with ZenML. Metadata are stored inside the metadata store.
Exceptions
Config
The config module contains classes and functions that manage user-specific configuration. ZenML's configuration is stored in a file called .zenglobal.json, located in the user's directory for configuration files. (The exact location differs from operating system to operating system.)
The GlobalConfig class is the main class in this module. It provides a Pydantic configuration object that is used to store and retrieve configuration. This GlobalConfig object handles the serialization and deserialization of the configuration options that are stored in the file in order to persist the configuration across sessions.
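A minimal sketch of reading from this object (the import path and the user_id and analytics_opt_in field names are assumptions; attribute names vary between ZenML versions):
# Hedged sketch: import path and attribute names are assumptions.
from zenml.config.global_config import GlobalConfig

config = GlobalConfig()
# Pydantic fields such as the anonymous user id or the analytics opt-in flag
# are read from (and persisted back to) .zenglobal.json.
print(config.user_id)
print(config.analytics_opt_in)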
Environment
Io
The io module handles file operations for the ZenML package. It offers a standard interface for reading, writing and manipulating files and directories. It is heavily influenced and inspired by the io module of tfx.
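As an illustration, file operations through this interface might look like the sketch below. The fileio submodule and these function names are assumptions modelled on the tfx-style interface and may differ in your version:
# Hedged sketch: the fileio submodule and function names are assumptions.
from zenml.io import fileio

if not fileio.exists("/tmp/zenml_demo"):
    fileio.makedirs("/tmp/zenml_demo")

with fileio.open("/tmp/zenml_demo/hello.txt", "w") as f:
    f.write("hello from zenml")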
Post Execution
After executing a pipeline, the user needs to be able to fetch it from history and perform certain tasks. The post_execution submodule provides a set of interfaces through which the user can interact with the post-run pipeline object, its steps, and the artifacts they produced.
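As a sketch of such an inspection workflow (the Repository import path, the runs property, the get_step method and the output.read() call are drawn from older ZenML releases and are assumptions):
# Hedged sketch: class and method names are assumptions for older releases.
from zenml.repository import Repository

repo = Repository()
pipeline = repo.get_pipeline("my_pipeline")   # fetch a pipeline from history
run = pipeline.runs[-1]                       # pick the latest run
step = run.get_step("my_step")                # inspect an individual step
print(step.output.read())                     # read the artifact the step produced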
Visualizers
The visualizers module offers a way of constructing and displaying visualizations of steps and pipeline results. The BaseVisualizer class is at the root of all the other visualizers, which include options to view the results of pipeline runs, steps, and pipelines themselves.
Orchestrators
Orchestrator
An orchestrator is a special kind of backend that manages the running of each step of the pipeline. Orchestrators administer the actual pipeline runs. You can think of it as the 'root' of any pipeline job that you run during your experimentation.
ZenML supports a local orchestrator out of the box which allows you to run your pipelines in a local environment. We also support using Apache Airflow as the orchestrator to handle the steps of your pipeline.
Stack
Steps
A step is a single piece or stage of a ZenML pipeline. Think of each step as being one of the nodes of a Directed Acyclic Graph (or DAG). Steps are responsible for one aspect of processing or interacting with the data / artifacts in the pipeline.
ZenML currently implements a basic step interface, but there will be other more customized interfaces (layered in a hierarchy) for specialized implementations. Conceptually, a Step is a discrete and independent part of a pipeline that is responsible for one particular aspect of data manipulation inside a ZenML pipeline.
Steps can be subclassed from the BaseStep class, or used via our @step decorator.
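For instance, a step built with the decorator could look like the following sketch (assuming the step decorator is importable from zenml.steps, as in older releases):
# Hedged sketch: the import path is an assumption based on older ZenML releases.
from zenml.steps import step

@step
def add_one(number: int) -> int:
    # A single, self-contained stage of the pipeline DAG.
    return number + 1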
Repository
Pipelines
A ZenML pipeline is a sequence of tasks that execute in a specific order and yield artifacts. The artifacts are stored within the artifact store and indexed via the metadata store. Each individual task within a pipeline is known as a step. The standard pipelines within ZenML are designed to have easy interfaces to add pre-decided steps, with the order also pre-decided. Other sorts of pipelines can also be created from scratch, building on the BasePipeline class.
Pipelines can be written as simple functions. They are created by using decorators appropriate to the specific use case you have. The moment it is run, a pipeline is compiled and passed directly to the orchestrator.
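As an end-to-end sketch of this (the zenml.pipelines and zenml.steps import paths and the run() call follow older ZenML releases and are assumptions):
# Hedged sketch: import paths and the run() call are assumptions for older releases.
from zenml.pipelines import pipeline
from zenml.steps import step

@step
def produce_number() -> int:
    return 41

@step
def add_one(number: int) -> int:
    return number + 1

@pipeline
def my_pipeline(first_step, second_step):
    # Wire the steps together; the output of one becomes the input of the next.
    number = first_step()
    second_step(number)

# Instantiate the pipeline with concrete steps and hand it to the orchestrator.
my_pipeline(first_step=produce_number(), second_step=add_one()).run()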