Delta Live Tables

These dataclasses are used in the SDK to represent API requests and responses for services in the databricks.sdk.service.pipelines module.

class databricks.sdk.service.pipelines.CreatePipeline
allow_duplicate_names: bool | None = None

If false, deployment will fail if the name conflicts with that of another pipeline.

catalog: str | None = None

A catalog in Unity Catalog to publish data from this pipeline to. If target is specified, tables in this pipeline are published to a target schema inside catalog (for example, catalog.`target`.`table`). If target is not specified, no data is published to Unity Catalog.

channel: str | None = None

DLT Release Channel that specifies which version to use.

clusters: List[PipelineCluster] | None = None

Cluster settings for this pipeline deployment.

configuration: Dict[str, str] | None = None

String-String configuration for this pipeline execution.

continuous: bool | None = None

Whether the pipeline is continuous or triggered. This replaces trigger.

deployment: PipelineDeployment | None = None

Deployment type of this pipeline.

development: bool | None = None

Whether the pipeline is in Development mode. Defaults to false.

dry_run: bool | None = None
edition: str | None = None

Pipeline product edition.

filters: Filters | None = None

Filters on which Pipeline packages to include in the deployed graph.

id: str | None = None

Unique identifier for this pipeline.

ingestion_definition: ManagedIngestionPipelineDefinition | None = None

The configuration for a managed ingestion pipeline. These settings cannot be used with the 'libraries', 'target', or 'catalog' settings.

libraries: List[PipelineLibrary] | None = None

Libraries or code needed by this deployment.

name: str | None = None

Friendly identifier for this pipeline.

notifications: List[Notifications] | None = None

List of notification settings for this pipeline.

photon: bool | None = None

Whether Photon is enabled for this pipeline.

serverless: bool | None = None

Whether serverless compute is enabled for this pipeline.

storage: str | None = None

DBFS root directory for storing checkpoints and tables.

target: str | None = None

Target schema (database) to add tables in this pipeline to. If not specified, no data is published to the Hive metastore or Unity Catalog. To publish to Unity Catalog, also specify catalog.

trigger: PipelineTrigger | None = None

Which pipeline trigger to use. Deprecated: Use continuous instead.

as_dict() dict

Serializes the CreatePipeline into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) CreatePipeline

Deserializes the CreatePipeline from a dictionary.
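
A minimal sketch of how these dataclasses are typically used (the pipeline name and notebook path below are hypothetical): build the request, optionally round-trip it through as_dict/from_dict, and pass the same fields as keyword arguments to WorkspaceClient.pipelines.create:

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import pipelines

    spec = pipelines.CreatePipeline(
        name="sales_etl",                      # hypothetical name
        catalog="main",                        # publish to Unity Catalog
        target="sales",                        # ...into this schema
        continuous=False,
        libraries=[pipelines.PipelineLibrary(
            notebook=pipelines.NotebookLibrary(path="/Repos/etl/dlt_sales"))],
    )
    body = spec.as_dict()                      # JSON-ready request body
    assert pipelines.CreatePipeline.from_dict(body).name == spec.name  # round trip

    w = WorkspaceClient()
    resp = w.pipelines.create(name=spec.name, catalog=spec.catalog,
                              target=spec.target, libraries=spec.libraries)
    print(resp.pipeline_id)                    # set because dry_run was not used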

class databricks.sdk.service.pipelines.CreatePipelineResponse
effective_settings: PipelineSpec | None = None

Only returned when dry_run is true.

pipeline_id: str | None = None

The unique identifier for the newly created pipeline. Only returned when dry_run is false.

as_dict() dict

Serializes the CreatePipelineResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) CreatePipelineResponse

Deserializes the CreatePipelineResponse from a dictionary.

class databricks.sdk.service.pipelines.CronTrigger
quartz_cron_schedule: str | None = None
timezone_id: str | None = None
as_dict() dict

Serializes the CronTrigger into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) CronTrigger

Deserializes the CronTrigger from a dictionary.
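
A short sketch of a cron-based trigger (the schedule string is illustrative Quartz syntax). Note that the trigger field itself is deprecated in favor of continuous:

    from databricks.sdk.service import pipelines

    trigger = pipelines.PipelineTrigger(
        cron=pipelines.CronTrigger(
            quartz_cron_schedule="0 0 6 * * ?",  # daily at 06:00
            timezone_id="UTC",
        )
    )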

class databricks.sdk.service.pipelines.DataPlaneId
instance: str | None = None

The instance name of the data plane emitting an event.

seq_no: int | None = None

A sequence number, unique and increasing within the data plane instance.

as_dict() dict

Serializes the DataPlaneId into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) DataPlaneId

Deserializes the DataPlaneId from a dictionary.

class databricks.sdk.service.pipelines.DeletePipelineResponse
as_dict() dict

Serializes the DeletePipelineResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) DeletePipelineResponse

Deserializes the DeletePipelineResponse from a dictionary.

class databricks.sdk.service.pipelines.DeploymentKind

The deployment method that manages the pipeline. BUNDLE: the pipeline is managed by a Databricks Asset Bundle.

BUNDLE = "BUNDLE"
class databricks.sdk.service.pipelines.EditPipeline
allow_duplicate_names: bool | None = None

If false, deployment will fail if the name has changed and conflicts with the name of another pipeline.

catalog: str | None = None

A catalog in Unity Catalog to publish data from this pipeline to. If target is specified, tables in this pipeline are published to a target schema inside catalog (for example, catalog.`target`.`table`). If target is not specified, no data is published to Unity Catalog.

channel: str | None = None

DLT Release Channel that specifies which version to use.

clusters: List[PipelineCluster] | None = None

Cluster settings for this pipeline deployment.

configuration: Dict[str, str] | None = None

String-String configuration for this pipeline execution.

continuous: bool | None = None

Whether the pipeline is continuous or triggered. This replaces trigger.

deployment: PipelineDeployment | None = None

Deployment type of this pipeline.

development: bool | None = None

Whether the pipeline is in Development mode. Defaults to false.

edition: str | None = None

Pipeline product edition.

expected_last_modified: int | None = None

If present, the last-modified time of the pipeline settings before the edit. If the settings were modified after that time, then the request will fail with a conflict.

filters: Filters | None = None

Filters on which Pipeline packages to include in the deployed graph.

id: str | None = None

Unique identifier for this pipeline.

ingestion_definition: ManagedIngestionPipelineDefinition | None = None

The configuration for a managed ingestion pipeline. These settings cannot be used with the 'libraries', 'target', or 'catalog' settings.

libraries: List[PipelineLibrary] | None = None

Libraries or code needed by this deployment.

name: str | None = None

Friendly identifier for this pipeline.

notifications: List[Notifications] | None = None

List of notification settings for this pipeline.

photon: bool | None = None

Whether Photon is enabled for this pipeline.

pipeline_id: str | None = None

Unique identifier for this pipeline.

serverless: bool | None = None

Whether serverless compute is enabled for this pipeline.

storage: str | None = None

DBFS root directory for storing checkpoints and tables.

target: str | None = None

Target schema (database) to add tables in this pipeline to. If not specified, no data is published to the Hive metastore or Unity Catalog. To publish to Unity Catalog, also specify catalog.

trigger: PipelineTrigger | None = None

Which pipeline trigger to use. Deprecated: Use continuous instead.

as_dict() dict

Serializes the EditPipeline into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) EditPipeline

Deserializes the EditPipeline from a dictionary.
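
One way to use expected_last_modified for optimistic concurrency is sketched below, assuming an existing pipeline ID. Since an edit replaces the pipeline settings, fields that should be kept must be sent again:

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()
    pipeline_id = "..."  # placeholder for an existing pipeline ID

    current = w.pipelines.get(pipeline_id=pipeline_id)
    w.pipelines.update(
        pipeline_id=pipeline_id,
        name=current.spec.name,
        catalog=current.spec.catalog,
        target=current.spec.target,
        libraries=current.spec.libraries,
        photon=True,  # the one field being changed
        # Fails with a conflict if the settings changed after last_modified:
        expected_last_modified=current.last_modified,
    )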

class databricks.sdk.service.pipelines.EditPipelineResponse
as_dict() dict

Serializes the EditPipelineResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) EditPipelineResponse

Deserializes the EditPipelineResponse from a dictionary.

class databricks.sdk.service.pipelines.ErrorDetail
exceptions: List[SerializedException] | None = None

The exception thrown for this error, with its chain of cause.

fatal: bool | None = None

Whether this error is considered fatal, that is, unrecoverable.

as_dict() dict

Serializes the ErrorDetail into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) ErrorDetail

Deserializes the ErrorDetail from a dictionary.

class databricks.sdk.service.pipelines.EventLevel

The severity level of the event.

ERROR = "ERROR"
INFO = "INFO"
METRICS = "METRICS"
WARN = "WARN"
class databricks.sdk.service.pipelines.FileLibrary
path: str | None = None

The absolute path of the file.

as_dict() dict

Serializes the FileLibrary into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) FileLibrary

Deserializes the FileLibrary from a dictionary.

class databricks.sdk.service.pipelines.Filters
exclude: List[str] | None = None

Paths to exclude.

include: List[str] | None = None

Paths to include.

as_dict() dict

Serializes the Filters into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) Filters

Deserializes the Filters from a dictionary.

class databricks.sdk.service.pipelines.GetPipelinePermissionLevelsResponse
permission_levels: List[PipelinePermissionsDescription] | None = None

Specific permission levels

as_dict() dict

Serializes the GetPipelinePermissionLevelsResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) GetPipelinePermissionLevelsResponse

Deserializes the GetPipelinePermissionLevelsResponse from a dictionary.

class databricks.sdk.service.pipelines.GetPipelineResponse
cause: str | None = None

An optional message detailing the cause of the pipeline state.

cluster_id: str | None = None

The ID of the cluster that the pipeline is running on.

creator_user_name: str | None = None

The username of the pipeline creator.

health: GetPipelineResponseHealth | None = None

The health of a pipeline.

last_modified: int | None = None

The last time the pipeline settings were modified or created.

latest_updates: List[UpdateStateInfo] | None = None

Status of the latest updates for the pipeline. Ordered with the newest update first.

name: str | None = None

A human friendly identifier for the pipeline, taken from the spec.

pipeline_id: str | None = None

The ID of the pipeline.

run_as_user_name: str | None = None

Username of the user that the pipeline will run on behalf of.

spec: PipelineSpec | None = None

The pipeline specification. This field is not returned when called by ListPipelines.

state: PipelineState | None = None

The pipeline state.

as_dict() dict

Serializes the GetPipelineResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) GetPipelineResponse

Deserializes the GetPipelineResponse from a dictionary.
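
A sketch of fetching a pipeline and inspecting its state (the pipeline ID is a placeholder):

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import pipelines

    w = WorkspaceClient()
    p = w.pipelines.get(pipeline_id="...")  # placeholder ID
    print(p.name, p.state)
    if p.state == pipelines.PipelineState.FAILED:
        print("cause:", p.cause)
    for u in p.latest_updates or []:  # newest update first
        print(u.update_id, u.state)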

class databricks.sdk.service.pipelines.GetPipelineResponseHealth

The health of a pipeline.

HEALTHY = "HEALTHY"
UNHEALTHY = "UNHEALTHY"
class databricks.sdk.service.pipelines.GetUpdateResponse
update: UpdateInfo | None = None

The current update info.

as_dict() dict

Serializes the GetUpdateResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) GetUpdateResponse

Deserializes the GetUpdateResponse from a dictionary.

class databricks.sdk.service.pipelines.IngestionConfig
schema: SchemaSpec | None = None

Select tables from a specific source schema.

table: TableSpec | None = None

Select tables from a specific source table.

as_dict() dict

Serializes the IngestionConfig into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) IngestionConfig

Deserializes the IngestionConfig from a dictionary.

class databricks.sdk.service.pipelines.ListPipelineEventsResponse
events: List[PipelineEvent] | None = None

The list of events matching the request criteria.

next_page_token: str | None = None

If present, a token to fetch the next page of events.

prev_page_token: str | None = None

If present, a token to fetch the previous page of events.

as_dict() dict

Serializes the ListPipelineEventsResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) ListPipelineEventsResponse

Deserializes the ListPipelineEventsResponse from a dictionary.
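
The SDK's list_pipeline_events method pages through these responses transparently using next_page_token, so callers usually just iterate (the pipeline ID is a placeholder):

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import pipelines

    w = WorkspaceClient()
    for event in w.pipelines.list_pipeline_events(pipeline_id="..."):
        if event.level == pipelines.EventLevel.ERROR:
            print(event.timestamp, event.message)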

class databricks.sdk.service.pipelines.ListPipelinesResponse
next_page_token: str | None = None

If present, a token to fetch the next page of pipelines.

statuses: List[PipelineStateInfo] | None = None

The list of pipeline states matching the request criteria.

as_dict() dict

Serializes the ListPipelinesResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) ListPipelinesResponse

Deserializes the ListPipelinesResponse from a dictionary.
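
Listing pipelines auto-paginates the same way; the filter string below is an illustrative use of the API's LIKE syntax:

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()
    for status in w.pipelines.list_pipelines(filter="name LIKE '%etl%'"):
        print(status.pipeline_id, status.name, status.state)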

class databricks.sdk.service.pipelines.ListUpdatesResponse
next_page_token: str | None = None

If present, then there are more results, and this is a token to be used in a subsequent request to fetch the next page.

prev_page_token: str | None = None

If present, then this token can be used in a subsequent request to fetch the previous page.

updates: List[UpdateInfo] | None = None
as_dict() dict

Serializes the ListUpdatesResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) ListUpdatesResponse

Deserializes the ListUpdatesResponse from a dictionary.

class databricks.sdk.service.pipelines.ManagedIngestionPipelineDefinition
connection_name: str | None = None

Immutable. The Unity Catalog connection this ingestion pipeline uses to communicate with the source. Specify either ingestion_gateway_id or connection_name.

ingestion_gateway_id: str | None = None

Immutable. Identifier for the ingestion gateway used by this ingestion pipeline to communicate with the source. Specify either ingestion_gateway_id or connection_name.

objects: List[IngestionConfig] | None = None

Required. Settings specifying tables to replicate and the destination for the replicated tables.

as_dict() dict

Serializes the ManagedIngestionPipelineDefinition into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) ManagedIngestionPipelineDefinition

Deserializes the ManagedIngestionPipelineDefinition from a dictionary.
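
A sketch of an ingestion definition that replicates one source schema through a Unity Catalog connection (connection, catalog, and schema names are hypothetical):

    from databricks.sdk.service import pipelines

    ingestion = pipelines.ManagedIngestionPipelineDefinition(
        connection_name="mysql_conn",  # hypothetical UC connection
        objects=[pipelines.IngestionConfig(
            schema=pipelines.SchemaSpec(
                source_schema="sales",
                destination_catalog="main",
                destination_schema="sales_raw",
            ))],
    )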

class databricks.sdk.service.pipelines.ManualTrigger
as_dict() dict

Serializes the ManualTrigger into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) ManualTrigger

Deserializes the ManualTrigger from a dictionary.

class databricks.sdk.service.pipelines.MaturityLevel

Maturity level for EventDetails.

DEPRECATED = "DEPRECATED"
EVOLVING = "EVOLVING"
STABLE = "STABLE"
class databricks.sdk.service.pipelines.NotebookLibrary
path: str | None = None

The absolute path of the notebook.

as_dict() dict

Serializes the NotebookLibrary into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) NotebookLibrary

Deserializes the NotebookLibrary from a dictionary.

class databricks.sdk.service.pipelines.Notifications
alerts: List[str] | None = None

A list of alerts that trigger the sending of notifications to the configured destinations. The supported alerts are:

  • on-update-success: A pipeline update completes successfully.
  • on-update-failure: Each time a pipeline update fails.
  • on-update-fatal-failure: A pipeline update fails with a non-retryable (fatal) error.
  • on-flow-failure: A single data flow fails.

email_recipients: List[str] | None = None

A list of email addresses notified when a configured alert is triggered.

as_dict() dict

Serializes the Notifications into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) Notifications

Deserializes the Notifications from a dictionary.
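
For example, to email a team only on failures (the address is a placeholder):

    from databricks.sdk.service import pipelines

    notifications = [pipelines.Notifications(
        alerts=["on-update-failure", "on-update-fatal-failure", "on-flow-failure"],
        email_recipients=["data-eng@example.com"],
    )]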

class databricks.sdk.service.pipelines.Origin
batch_id: int | None = None

The id of a batch. Unique within a flow.

cloud: str | None = None

The cloud provider, e.g., AWS or Azure.

cluster_id: str | None = None

The id of the cluster where an execution happens. Unique within a region.

dataset_name: str | None = None

The name of a dataset. Unique within a pipeline.

flow_id: str | None = None

The id of the flow. Globally unique. Incremental queries will generally reuse the same id while complete queries will have a new id per update.

flow_name: str | None = None

The name of the flow. Not unique.

host: str | None = None

The optional host name where the event was triggered.

maintenance_id: str | None = None

The id of a maintenance run. Globally unique.

materialization_name: str | None = None

Materialization name.

org_id: int | None = None

The org id of the user. Unique within a cloud.

pipeline_id: str | None = None

The id of the pipeline. Globally unique.

pipeline_name: str | None = None

The name of the pipeline. Not unique.

region: str | None = None

The cloud region.

request_id: str | None = None

The id of the request that caused an update.

table_id: str | None = None

The id of a (delta) table. Globally unique.

uc_resource_id: str | None = None

The Unity Catalog id of the MV or ST being updated.

update_id: str | None = None

The id of an execution. Globally unique.

as_dict() dict

Serializes the Origin into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) Origin

Deserializes the Origin from a dictionary.

class databricks.sdk.service.pipelines.PipelineAccessControlRequest
group_name: str | None = None

name of the group

permission_level: PipelinePermissionLevel | None = None

Permission level

service_principal_name: str | None = None

application ID of a service principal

user_name: str | None = None

name of the user

as_dict() dict

Serializes the PipelineAccessControlRequest into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelineAccessControlRequest

Deserializes the PipelineAccessControlRequest from a dictionary.

class databricks.sdk.service.pipelines.PipelineAccessControlResponse
all_permissions: List[PipelinePermission] | None = None

All permissions.

display_name: str | None = None

Display name of the user or service principal.

group_name: str | None = None

name of the group

service_principal_name: str | None = None

Name of the service principal.

user_name: str | None = None

name of the user

as_dict() dict

Serializes the PipelineAccessControlResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelineAccessControlResponse

Deserializes the PipelineAccessControlResponse from a dictionary.

class databricks.sdk.service.pipelines.PipelineCluster
apply_policy_default_values: bool | None = None

Note: This field won’t be persisted. Only API users will check this field.

autoscale: PipelineClusterAutoscale | None = None

Parameters needed in order to automatically scale clusters up and down based on load. Note: autoscaling works best with DB runtime versions 3.0 or later.

aws_attributes: AwsAttributes | None = None

Attributes related to clusters running on Amazon Web Services. If not specified at cluster creation, a set of default values will be used.

azure_attributes: AzureAttributes | None = None

Attributes related to clusters running on Microsoft Azure. If not specified at cluster creation, a set of default values will be used.

cluster_log_conf: ClusterLogConf | None = None

The configuration for delivering Spark logs to a long-term storage destination. Only DBFS destinations are supported. Only one destination can be specified per cluster. If the conf is given, the logs will be delivered to the destination every 5 minutes. The destination of driver logs is $destination/$clusterId/driver, while the destination of executor logs is $destination/$clusterId/executor.

custom_tags: Dict[str, str] | None = None

Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS instances and EBS volumes) with these tags in addition to default_tags. Notes:

  • Currently, Databricks allows at most 45 custom tags.
  • Clusters can only reuse cloud resources if the resources' tags are a subset of the cluster tags.

driver_instance_pool_id: str | None = None

The optional ID of the instance pool to which the driver of the cluster belongs. If the driver pool is not assigned, the driver uses the instance pool with ID instance_pool_id.

driver_node_type_id: str | None = None

The node type of the Spark driver. Note that this field is optional; if unset, the driver node type will be set as the same value as node_type_id defined above.

gcp_attributes: GcpAttributes | None = None

Attributes related to clusters running on Google Cloud Platform. If not specified at cluster creation, a set of default values will be used.

init_scripts: List[InitScriptInfo] | None = None

The configuration for storing init scripts. Any number of destinations can be specified. The scripts are executed sequentially in the order provided. If cluster_log_conf is specified, init script logs are sent to <destination>/<cluster-ID>/init_scripts.

instance_pool_id: str | None = None

The optional ID of the instance pool to which the cluster belongs.

label: str | None = None

A label for the cluster specification, either default to configure the default cluster, or maintenance to configure the maintenance cluster. This field is optional. The default value is default.

node_type_id: str | None = None

This field encodes, through a single value, the resources available to each of the Spark nodes in this cluster. For example, the Spark nodes can be provisioned and optimized for memory or compute intensive workloads. A list of available node types can be retrieved by using the :method:clusters/listNodeTypes API call.

num_workers: int | None = None

Number of worker nodes that this cluster should have. A cluster has one Spark Driver and num_workers Executors for a total of num_workers + 1 Spark nodes.

Note: When reading the properties of a cluster, this field reflects the desired number of workers rather than the actual current number of workers. For instance, if a cluster is resized from 5 to 10 workers, this field will immediately be updated to reflect the target size of 10 workers, whereas the workers listed in spark_info will gradually increase from 5 to 10 as the new nodes are provisioned.

policy_id: str | None = None

The ID of the cluster policy used to create the cluster if applicable.

spark_conf: Dict[str, str] | None = None

An object containing a set of optional, user-specified Spark configuration key-value pairs. See :method:clusters/create for more details.

spark_env_vars: Dict[str, str] | None = None

An object containing a set of optional, user-specified environment variable key-value pairs. Please note that a key-value pair of the form (X,Y) will be exported as is (i.e., export X='Y') while launching the driver and workers.

In order to specify an additional set of SPARK_DAEMON_JAVA_OPTS, we recommend appending them to $SPARK_DAEMON_JAVA_OPTS as shown in the example below. This ensures that all default Databricks-managed environment variables are included as well.

Example Spark environment variables: {"SPARK_WORKER_MEMORY": "28000m", "SPARK_LOCAL_DIRS": "/local_disk0"} or {"SPARK_DAEMON_JAVA_OPTS": "$SPARK_DAEMON_JAVA_OPTS -Dspark.shuffle.service.enabled=true"}

ssh_public_keys: List[str] | None = None

SSH public key contents that will be added to each Spark node in this cluster. The corresponding private keys can be used to login with the user name ubuntu on port 2200. Up to 10 keys can be specified.

as_dict() dict

Serializes the PipelineCluster into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelineCluster

Deserializes the PipelineCluster from a dictionary.

class databricks.sdk.service.pipelines.PipelineClusterAutoscale
min_workers: int

The minimum number of workers the cluster can scale down to when underutilized. It is also the initial number of workers the cluster will have after creation.

max_workers: int

The maximum number of workers to which the cluster can scale up when overloaded. max_workers must be strictly greater than min_workers.

mode: PipelineClusterAutoscaleMode | None = None

Databricks Enhanced Autoscaling optimizes cluster utilization by automatically allocating cluster resources based on workload volume, with minimal impact to the data processing latency of your pipelines. Enhanced Autoscaling is available for updates clusters only. The legacy autoscaling feature is used for maintenance clusters.

as_dict() dict

Serializes the PipelineClusterAutoscale into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelineClusterAutoscale

Deserializes the PipelineClusterAutoscale from a dictionary.
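
A sketch of a default cluster using Enhanced Autoscaling within the documented constraint that max_workers is strictly greater than min_workers:

    from databricks.sdk.service import pipelines

    cluster = pipelines.PipelineCluster(
        label="default",
        autoscale=pipelines.PipelineClusterAutoscale(
            min_workers=1,  # also the initial cluster size
            max_workers=5,  # must be strictly greater than min_workers
            mode=pipelines.PipelineClusterAutoscaleMode.ENHANCED,
        ),
    )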

class databricks.sdk.service.pipelines.PipelineClusterAutoscaleMode

Databricks Enhanced Autoscaling optimizes cluster utilization by automatically allocating cluster resources based on workload volume, with minimal impact to the data processing latency of your pipelines. Enhanced Autoscaling is available for updates clusters only. The legacy autoscaling feature is used for maintenance clusters.

ENHANCED = "ENHANCED"
LEGACY = "LEGACY"
class databricks.sdk.service.pipelines.PipelineDeployment
kind: DeploymentKind | None = None

The deployment method that manages the pipeline.

metadata_file_path: str | None = None

The path to the file containing metadata about the deployment.

as_dict() dict

Serializes the PipelineDeployment into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelineDeployment

Deserializes the PipelineDeployment from a dictionary.

class databricks.sdk.service.pipelines.PipelineEvent
error: ErrorDetail | None = None

Information about an error captured by the event.

event_type: str | None = None

The event type. Should always correspond to the details of the event.

id: str | None = None

A time-based, globally unique id.

level: EventLevel | None = None

The severity level of the event.

maturity_level: MaturityLevel | None = None

Maturity level for event_type.

message: str | None = None

The display message associated with the event.

origin: Origin | None = None

Describes where the event originates from.

sequence: Sequencing | None = None

A sequencing object to identify and order events.

timestamp: str | None = None

The time of the event.

as_dict() dict

Serializes the PipelineEvent into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelineEvent

Deserializes the PipelineEvent from a dictionary.

class databricks.sdk.service.pipelines.PipelineLibrary
file: FileLibrary | None = None

The path to a file that defines a pipeline and is stored in Databricks Repos.

jar: str | None = None

URI of the jar to be installed. Currently only DBFS is supported.

maven: MavenLibrary | None = None

Specification of a maven library to be installed.

notebook: NotebookLibrary | None = None

The path to a notebook that defines a pipeline and is stored in the Databricks workspace.

as_dict() dict

Serializes the PipelineLibrary into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelineLibrary

Deserializes the PipelineLibrary from a dictionary.
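
Each entry typically sets exactly one source; a sketch mixing a notebook and a file (paths are hypothetical):

    from databricks.sdk.service import pipelines

    libraries = [
        pipelines.PipelineLibrary(
            notebook=pipelines.NotebookLibrary(path="/Repos/etl/bronze")),
        pipelines.PipelineLibrary(
            file=pipelines.FileLibrary(path="/Repos/etl/silver.py")),
    ]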

class databricks.sdk.service.pipelines.PipelinePermission
inherited: bool | None = None
inherited_from_object: List[str] | None = None
permission_level: PipelinePermissionLevel | None = None

Permission level

as_dict() dict

Serializes the PipelinePermission into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelinePermission

Deserializes the PipelinePermission from a dictionary.

class databricks.sdk.service.pipelines.PipelinePermissionLevel

Permission level

CAN_MANAGE = "CAN_MANAGE"
CAN_RUN = "CAN_RUN"
CAN_VIEW = "CAN_VIEW"
IS_OWNER = "IS_OWNER"
class databricks.sdk.service.pipelines.PipelinePermissions
access_control_list: List[PipelineAccessControlResponse] | None = None
object_id: str | None = None
object_type: str | None = None
as_dict() dict

Serializes the PipelinePermissions into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelinePermissions

Deserializes the PipelinePermissions from a dictionary.

class databricks.sdk.service.pipelines.PipelinePermissionsDescription
description: str | None = None
permission_level: PipelinePermissionLevel | None = None

Permission level

as_dict() dict

Serializes the PipelinePermissionsDescription into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelinePermissionsDescription

Deserializes the PipelinePermissionsDescription from a dictionary.

class databricks.sdk.service.pipelines.PipelinePermissionsRequest
access_control_list: List[PipelineAccessControlRequest] | None = None
pipeline_id: str | None = None

The pipeline for which to get or manage permissions.

as_dict() dict

Serializes the PipelinePermissionsRequest into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelinePermissionsRequest

Deserializes the PipelinePermissionsRequest from a dictionary.
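
A sketch granting a group run access (the group name and pipeline ID are placeholders):

    from databricks.sdk import WorkspaceClient
    from databricks.sdk.service import pipelines

    w = WorkspaceClient()
    w.pipelines.set_permissions(
        pipeline_id="...",  # placeholder ID
        access_control_list=[pipelines.PipelineAccessControlRequest(
            group_name="data-engineers",
            permission_level=pipelines.PipelinePermissionLevel.CAN_RUN,
        )],
    )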

class databricks.sdk.service.pipelines.PipelineSpec
catalog: str | None = None

A catalog in Unity Catalog to publish data from this pipeline to. If target is specified, tables in this pipeline are published to a target schema inside catalog (for example, catalog.`target`.`table`). If target is not specified, no data is published to Unity Catalog.

channel: str | None = None

DLT Release Channel that specifies which version to use.

clusters: List[PipelineCluster] | None = None

Cluster settings for this pipeline deployment.

configuration: Dict[str, str] | None = None

String-String configuration for this pipeline execution.

continuous: bool | None = None

Whether the pipeline is continuous or triggered. This replaces trigger.

deployment: PipelineDeployment | None = None

Deployment type of this pipeline.

development: bool | None = None

Whether the pipeline is in Development mode. Defaults to false.

edition: str | None = None

Pipeline product edition.

filters: Filters | None = None

Filters on which Pipeline packages to include in the deployed graph.

id: str | None = None

Unique identifier for this pipeline.

ingestion_definition: ManagedIngestionPipelineDefinition | None = None

The configuration for a managed ingestion pipeline. These settings cannot be used with the 'libraries', 'target', or 'catalog' settings.

libraries: List[PipelineLibrary] | None = None

Libraries or code needed by this deployment.

name: str | None = None

Friendly identifier for this pipeline.

notifications: List[Notifications] | None = None

List of notification settings for this pipeline.

photon: bool | None = None

Whether Photon is enabled for this pipeline.

serverless: bool | None = None

Whether serverless compute is enabled for this pipeline.

storage: str | None = None

DBFS root directory for storing checkpoints and tables.

target: str | None = None

Target schema (database) to add tables in this pipeline to. If not specified, no data is published to the Hive metastore or Unity Catalog. To publish to Unity Catalog, also specify catalog.

trigger: PipelineTrigger | None = None

Which pipeline trigger to use. Deprecated: Use continuous instead.

as_dict() dict

Serializes the PipelineSpec into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelineSpec

Deserializes the PipelineSpec from a dictionary.

class databricks.sdk.service.pipelines.PipelineState

The pipeline state.

DELETED = "DELETED"
DEPLOYING = "DEPLOYING"
FAILED = "FAILED"
IDLE = "IDLE"
RECOVERING = "RECOVERING"
RESETTING = "RESETTING"
RUNNING = "RUNNING"
STARTING = "STARTING"
STOPPING = "STOPPING"
class databricks.sdk.service.pipelines.PipelineStateInfo
cluster_id: str | None = None

The unique identifier of the cluster running the pipeline.

creator_user_name: str | None = None

The username of the pipeline creator.

latest_updates: List[UpdateStateInfo] | None = None

Status of the latest updates for the pipeline. Ordered with the newest update first.

name: str | None = None

The user-friendly name of the pipeline.

pipeline_id: str | None = None

The unique identifier of the pipeline.

run_as_user_name: str | None = None

The username that the pipeline runs as. This is a read-only value derived from the pipeline owner.

state: PipelineState | None = None

The pipeline state.

as_dict() dict

Serializes the PipelineStateInfo into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelineStateInfo

Deserializes the PipelineStateInfo from a dictionary.

class databricks.sdk.service.pipelines.PipelineTrigger
cron: CronTrigger | None = None
manual: ManualTrigger | None = None
as_dict() dict

Serializes the PipelineTrigger into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) PipelineTrigger

Deserializes the PipelineTrigger from a dictionary.

class databricks.sdk.service.pipelines.SchemaSpec
destination_catalog: str | None = None

Required. Destination catalog to store tables.

destination_schema: str | None = None

Required. Destination schema to store tables in. Tables with the same name as the source tables are created in this destination schema. The pipeline fails if a table with the same name already exists.

source_catalog: str | None = None

The source catalog name. Might be optional depending on the type of source.

source_schema: str | None = None

Required. Schema name in the source database.

as_dict() dict

Serializes the SchemaSpec into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) SchemaSpec

Deserializes the SchemaSpec from a dictionary.

class databricks.sdk.service.pipelines.Sequencing
control_plane_seq_no: int | None = None

A sequence number, unique and increasing within the control plane.

data_plane_id: DataPlaneId | None = None

The ID assigned by the data plane.

as_dict() dict

Serializes the Sequencing into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) Sequencing

Deserializes the Sequencing from a dictionary.

class databricks.sdk.service.pipelines.SerializedException
class_name: str | None = None

Runtime class of the exception

message: str | None = None

Exception message

stack: List[StackFrame] | None = None

Stack trace consisting of a list of stack frames

as_dict() dict

Serializes the SerializedException into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) SerializedException

Deserializes the SerializedException from a dictionary.

class databricks.sdk.service.pipelines.StackFrame
declaring_class: str | None = None

Class from which the method call originated

file_name: str | None = None

File where the method is defined

line_number: int | None = None

Line from which the method was called

method_name: str | None = None

Name of the method which was called

as_dict() dict

Serializes the StackFrame into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) StackFrame

Deserializes the StackFrame from a dictionary.

class databricks.sdk.service.pipelines.StartUpdate
cause: StartUpdateCause | None = None
full_refresh: bool | None = None

If true, this update will reset all tables before running.

full_refresh_selection: List[str] | None = None

A list of tables to update with fullRefresh. If both refresh_selection and full_refresh_selection are empty, this is a full graph update. Full Refresh on a table means that the state of the table will be reset before the refresh.

pipeline_id: str | None = None
refresh_selection: List[str] | None = None

A list of tables to update without fullRefresh. If both refresh_selection and full_refresh_selection are empty, this is a full graph update. Full Refresh on a table means that the state of the table will be reset before the refresh.

validate_only: bool | None = None

If true, this update only validates the correctness of pipeline source code but does not materialize or publish any datasets.

as_dict() dict

Serializes the StartUpdate into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) StartUpdate

Deserializes the StartUpdate from a dictionary.
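
A sketch of two kinds of update: a validation-only run, then an incremental refresh of selected tables (IDs and table names are placeholders):

    from databricks.sdk import WorkspaceClient

    w = WorkspaceClient()
    pid = "..."  # placeholder pipeline ID

    # Validate pipeline source code without materializing any datasets.
    w.pipelines.start_update(pipeline_id=pid, validate_only=True)

    # Incrementally refresh two tables; other tables are left untouched.
    resp = w.pipelines.start_update(
        pipeline_id=pid,
        refresh_selection=["orders", "customers"],
    )
    print(resp.update_id)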

class databricks.sdk.service.pipelines.StartUpdateCause
API_CALL = "API_CALL"
JOB_TASK = "JOB_TASK"
RETRY_ON_FAILURE = "RETRY_ON_FAILURE"
SCHEMA_CHANGE = "SCHEMA_CHANGE"
SERVICE_UPGRADE = "SERVICE_UPGRADE"
USER_ACTION = "USER_ACTION"
class databricks.sdk.service.pipelines.StartUpdateResponse
update_id: str | None = None
as_dict() dict

Serializes the StartUpdateResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) StartUpdateResponse

Deserializes the StartUpdateResponse from a dictionary.

class databricks.sdk.service.pipelines.StopPipelineResponse
as_dict() dict

Serializes the StopPipelineResponse into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) StopPipelineResponse

Deserializes the StopPipelineResponse from a dictionary.

class databricks.sdk.service.pipelines.TableSpec
destination_catalog: str | None = None

Required. Destination catalog to store table.

destination_schema: str | None = None

Required. Destination schema to store table.

destination_table: str | None = None

Optional. Destination table name. The pipeline fails if a table with that name already exists. If not set, the source table name is used.

source_catalog: str | None = None

Source catalog name. Might be optional depending on the type of source.

source_schema: str | None = None

Schema name in the source database. Might be optional depending on the type of source.

source_table: str | None = None

Required. Table name in the source database.

as_dict() dict

Serializes the TableSpec into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) TableSpec

Deserializes the TableSpec from a dictionary.

class databricks.sdk.service.pipelines.UpdateInfo
cause: UpdateInfoCause | None = None

What triggered this update.

cluster_id: str | None = None

The ID of the cluster that the update is running on.

config: PipelineSpec | None = None

The pipeline configuration with system defaults applied where unspecified by the user. Not returned by ListUpdates.

creation_time: int | None = None

The time when this update was created.

full_refresh: bool | None = None

If true, this update will reset all tables before running.

full_refresh_selection: List[str] | None = None

A list of tables to update with fullRefresh. If both refresh_selection and full_refresh_selection are empty, this is a full graph update. Full Refresh on a table means that the state of the table will be reset before the refresh.

pipeline_id: str | None = None

The ID of the pipeline.

refresh_selection: List[str] | None = None

A list of tables to update without fullRefresh. If both refresh_selection and full_refresh_selection are empty, this is a full graph update. Full Refresh on a table means that the state of the table will be reset before the refresh.

state: UpdateInfoState | None = None

The update state.

update_id: str | None = None

The ID of this update.

validate_only: bool | None = None

If true, this update only validates the correctness of pipeline source code but does not materialize or publish any datasets.

as_dict() dict

Serializes the UpdateInfo into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) UpdateInfo

Deserializes the UpdateInfo from a dictionary.

class databricks.sdk.service.pipelines.UpdateInfoCause

What triggered this update.

API_CALL = "API_CALL"
JOB_TASK = "JOB_TASK"
RETRY_ON_FAILURE = "RETRY_ON_FAILURE"
SCHEMA_CHANGE = "SCHEMA_CHANGE"
SERVICE_UPGRADE = "SERVICE_UPGRADE"
USER_ACTION = "USER_ACTION"
class databricks.sdk.service.pipelines.UpdateInfoState

The update state.

CANCELED = "CANCELED"
COMPLETED = "COMPLETED"
CREATED = "CREATED"
FAILED = "FAILED"
INITIALIZING = "INITIALIZING"
QUEUED = "QUEUED"
RESETTING = "RESETTING"
RUNNING = "RUNNING"
SETTING_UP_TABLES = "SETTING_UP_TABLES"
STOPPING = "STOPPING"
WAITING_FOR_RESOURCES = "WAITING_FOR_RESOURCES"
class databricks.sdk.service.pipelines.UpdateStateInfo
creation_time: str | None = None
state: UpdateStateInfoState | None = None
update_id: str | None = None
as_dict() dict

Serializes the UpdateStateInfo into a dictionary suitable for use as a JSON request body.

classmethod from_dict(d: Dict[str, any]) UpdateStateInfo

Deserializes the UpdateStateInfo from a dictionary.

class databricks.sdk.service.pipelines.UpdateStateInfoState
CANCELED = "CANCELED"
COMPLETED = "COMPLETED"
CREATED = "CREATED"
FAILED = "FAILED"
INITIALIZING = "INITIALIZING"
QUEUED = "QUEUED"
RESETTING = "RESETTING"
RUNNING = "RUNNING"
SETTING_UP_TABLES = "SETTING_UP_TABLES"
STOPPING = "STOPPING"
WAITING_FOR_RESOURCES = "WAITING_FOR_RESOURCES"