
Transport

Tip: See the current list of all supported transports.

HTTP

Allows sending events to an HTTP endpoint, using ApacheHTTPClient.

Configuration

  • type - string, must be "http". Required.
  • url - string, base URL for HTTP requests. Required.
  • endpoint - string specifying the endpoint to which events are sent, appended to url. Optional, default: /api/v1/lineage.
  • urlParams - dictionary specifying query parameters sent in HTTP requests. Optional.
  • timeoutInMillis - integer specifying the timeout (in milliseconds) used when connecting to the server. Optional, default: 5000.
  • auth - dictionary specifying authentication options. Optional; by default no authentication is used. If set, requires the type property.
    • type - string, either "api_key" or the fully qualified class name of your custom TokenProvider (see the sketch after this list). Required if auth is provided.
    • apiKey - string sent as a Bearer token in the Authorization HTTP header. Required if type is api_key.
  • headers - dictionary specifying HTTP request headers. Optional.
  • compression - string, name of the algorithm used by the HTTP client to compress the request body. Optional, default value null, allowed values: gzip. Added in v1.13.0.
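
For custom authentication, auth.type can be set to the fully qualified class name of your own token provider. A minimal sketch, assuming the Java client's io.openlineage.client.transports.TokenProvider interface with a single getToken() method; the class name com.example.MyTokenProvider and the token lookup are hypothetical placeholders:

import io.openlineage.client.transports.TokenProvider;

// Referenced from the YAML config as:
//   auth:
//     type: com.example.MyTokenProvider
public class MyTokenProvider implements TokenProvider {
  @Override
  public String getToken() {
    // Assumption: the returned string is used verbatim as the value of the
    // Authorization header, so include the "Bearer" prefix yourself.
    return "Bearer " + fetchTokenFromSecretStore();
  }

  // Hypothetical helper: fetch or refresh a token from your secret store.
  private String fetchTokenFromSecretStore() {
    return "my-secret-token";
  }
}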

Behavior

Events are serialized to JSON and then sent as an HTTP POST request with Content-Type: application/json.

Examples

Anonymous connection:

transport:
  type: http
  url: http://localhost:5000

With authorization:

transport:
  type: http
  url: http://localhost:5000
  auth:
    type: api_key
    apiKey: f38d2189-c603-4b46-bdea-e573a3b5a7d5

Full example:

transport:
  type: http
  url: http://localhost:5000
  endpoint: /api/v1/lineage
  urlParams:
    param0: value0
    param1: value1
  timeoutInMillis: 5000
  auth:
    type: api_key
    apiKey: f38d2189-c603-4b46-bdea-e573a3b5a7d5
  headers:
    X-Some-Extra-Header: abc
  compression: gzip
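
The transport can also be built programmatically with the Java client. A minimal sketch, assuming the HttpTransport builder from openlineage-java, whose uri and apiKey methods mirror the YAML options above:

import io.openlineage.client.OpenLineageClient;
import io.openlineage.client.transports.HttpTransport;

// Builds a client that POSTs events to http://localhost:5000/api/v1/lineage,
// sending the API key as a Bearer token.
OpenLineageClient client = OpenLineageClient.builder()
    .transport(
        HttpTransport.builder()
            .uri("http://localhost:5000")
            .apiKey("f38d2189-c603-4b46-bdea-e573a3b5a7d5")
            .build())
    .build();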

Kafka

If the transport type is set to kafka, the parameters below are read and used to build a KafkaProducer. This transport requires the artifact org.apache.kafka:kafka-clients:3.1.0 (or compatible) on your classpath.

Configuration

  • type - string, must be "kafka". Required.

  • topicName - string specifying the topic to which events are sent. Required.

  • properties - a dictionary of Kafka producer properties, as described in the Kafka producer config documentation. Required.

  • localServerId - deprecated, renamed to messageKey since v1.13.0.

  • messageKey - string, key for all Kafka messages produced by transport. Optional, default value described below. Added in v1.13.0.

    Default values for messageKey are:

    • run:{parentJob.namespace}/{parentJob.name} - for RunEvent with parent facet
    • run:{job.namespace}/{job.name} - for RunEvent
    • job:{job.namespace}/{job.name} - for JobEvent
    • dataset:{dataset.namespace}/{dataset.name} - for DatasetEvent

Behavior

Events are serialized to JSON, and then dispatched to the Kafka topic.

Notes

It is recommended to provide messageKey if a job hierarchy is used. It can be any string, but it should be the same for all jobs in the hierarchy, e.g. Airflow task -> Spark application -> Spark task runs.

Examples

transport:
  type: kafka
  topicName: openlineage.events
  properties:
    bootstrap.servers: localhost:9092,another.host:9092
    acks: all
    retries: 3
    key.serializer: org.apache.kafka.common.serialization.StringSerializer
    value.serializer: org.apache.kafka.common.serialization.StringSerializer
  messageKey: some-value
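
The same transport can be configured programmatically. A minimal sketch, assuming KafkaConfig setters (setTopicName, setProperties, setMessageKey) mirroring the YAML options above and a KafkaTransport constructor that accepts the config:

import java.util.Properties;

import io.openlineage.client.OpenLineageClient;
import io.openlineage.client.transports.KafkaConfig;
import io.openlineage.client.transports.KafkaTransport;

// Standard Kafka producer properties, passed through to KafkaProducer.
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "localhost:9092,another.host:9092");
properties.setProperty("acks", "all");
properties.setProperty("retries", "3");
properties.setProperty("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
properties.setProperty("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

KafkaConfig kafkaConfig = new KafkaConfig();
kafkaConfig.setTopicName("openlineage.events");
kafkaConfig.setProperties(properties);
kafkaConfig.setMessageKey("some-value"); // optional; defaults described above

OpenLineageClient client = OpenLineageClient.builder()
    .transport(new KafkaTransport(kafkaConfig))
    .build();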


Kinesis

If the transport type is set to kinesis, the parameters below are read and used to build a KinesisProducer. The KinesisTransport requires the artifact com.amazonaws:amazon-kinesis-producer:0.14.0 (or compatible) on your classpath.

Configuration

  • type - string, must be "kinesis". Required.
  • streamName - the streamName of the Kinesis. Required.
  • region - the region of the Kinesis. Required.
  • roleArn - the roleArn which is allowed to read/write to Kinesis stream. Optional.
  • properties - a dictionary that contains a Kinesis allowed properties. Optional.

Behavior

  • Events are serialized to JSON, and then dispatched to the Kinesis stream.
  • The partition key is generated as {jobNamespace}:{jobName}.
  • Two constructors are available: one accepting both KinesisProducer and KinesisConfig and another solely accepting KinesisConfig.

Examples

transport:
  type: kinesis
  streamName: your_kinesis_stream_name
  region: your_aws_region
  roleArn: arn:aws:iam::account-id:role/role-name
  properties:
    VerifyCertificate: true
    ConnectTimeout: 6000
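
The transport can also be built programmatically. A minimal sketch, assuming KinesisConfig setters mirroring the options above and the KinesisConfig-only constructor mentioned under Behavior:

import java.util.Properties;

import io.openlineage.client.OpenLineageClient;
import io.openlineage.client.transports.KinesisConfig;
import io.openlineage.client.transports.KinesisTransport;

// Properties are passed through to the underlying KinesisProducer.
Properties properties = new Properties();
properties.setProperty("VerifyCertificate", "true");
properties.setProperty("ConnectTimeout", "6000");

KinesisConfig kinesisConfig = new KinesisConfig();
kinesisConfig.setStreamName("your_kinesis_stream_name");
kinesisConfig.setRegion("your_aws_region");
kinesisConfig.setRoleArn("arn:aws:iam::account-id:role/role-name");
kinesisConfig.setProperties(properties);

OpenLineageClient client = OpenLineageClient.builder()
    .transport(new KinesisTransport(kinesisConfig)) // KinesisConfig-only constructor
    .build();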

Console

This straightforward transport emits OpenLineage events directly to the console through a logger. No additional configuration is required.

Behavior

Events are serialized to JSON. Each event is then logged at INFO level to a logger named ConsoleTransport.

Notes

Be cautious when using the DEBUG log level, as it might result in double-logging because the OpenLineageClient itself also logs events.

Configuration

  • type - string, must be "console". Required.

Examples

transport:
  type: console
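
Programmatically, the console transport needs no configuration at all. A minimal sketch, assuming the no-argument ConsoleTransport constructor from openlineage-java:

import io.openlineage.client.OpenLineageClient;
import io.openlineage.client.transports.ConsoleTransport;

// Every emitted event is serialized to JSON and logged at INFO level.
OpenLineageClient client = OpenLineageClient.builder()
    .transport(new ConsoleTransport())
    .build();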

File

Designed mainly for integration testing, the FileTransport emits OpenLineage events to a given file.

Configuration

  • type - string, must be "file". Required.
  • location - string specifying the path of the file. Required.

Behavior

  • If the target file is absent, it's created.
  • Events are serialized to JSON, and then appended to a file, separated by newlines.
  • Intrinsic newline characters within the event JSON are eliminated to ensure one-line events.

Notes for Yarn/Kubernetes

This transport type is of little use for Spark/Flink applications deployed to a Yarn or Kubernetes cluster:

  • Each executor writes to a file on the local filesystem of its Yarn container/K8s pod, so the resulting file is removed when that container/pod is destroyed.
  • Kubernetes persistent volumes do survive pod removal, but then all executors write to the same network disk in parallel, producing a corrupted file.

Examples

transport:
  type: file
  location: /path/to/your/file
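
A minimal programmatic sketch, assuming a FileConfig that carries the target location (the single-argument constructor is an assumption) and a FileTransport constructor accepting it:

import io.openlineage.client.OpenLineageClient;
import io.openlineage.client.transports.FileConfig;
import io.openlineage.client.transports.FileTransport;

// Assumption: FileConfig is created directly with the target file path.
FileConfig fileConfig = new FileConfig("/path/to/your/file");

// Events are appended to the file as single-line JSON documents.
OpenLineageClient client = OpenLineageClient.builder()
    .transport(new FileTransport(fileConfig))
    .build();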

Composite

The CompositeTransport is designed to aggregate multiple transports, allowing event emission to several destinations sequentially. This is useful when events need to be sent to multiple targets, such as a logging system and an API endpoint, one after another in a defined order.

Configuration

  • type - string, must be "composite". Required.
  • transports - may be a list or a map of transport configurations. Required.
  • continueOnFailure - boolean flag, determines if the process should continue even when one of the transports fails. Default is false.

Behavior

  • The configured transports will be initialized and used in sequence to emit OpenLineage events.
  • If continueOnFailure is set to false, a failure in one transport will stop the event emission process, and an exception will be raised.
  • If continueOnFailure is true, the failure will be logged, but the remaining transports will still attempt to send the event.

Notes for Multiple Transports

This transport can include a variety of other transport types (e.g., HttpTransport, KafkaTransport, etc.), allowing flexibility in event distribution. Ideal for scenarios where OpenLineage events need to reach multiple destinations for redundancy or different types of processing.

The transports configuration can be provided in two formats:

  1. A list of transport configurations, where each transport may optionally include a name field.
  2. A map of transport configurations, where the key acts as the name for each transport (see the second example below). The map format is particularly useful for configurations set via environment variables or Java properties, providing a more convenient and flexible setup.

Examples

transport:
  type: composite
  continueOnFailure: true
  transports:
    - type: http
      url: http://example.com/api
      name: my_http
    - type: kafka
      topicName: openlineage.events
      properties:
        bootstrap.servers: localhost:9092,another.host:9092
        acks: all
        retries: 3
        key.serializer: org.apache.kafka.common.serialization.StringSerializer
        value.serializer: org.apache.kafka.common.serialization.StringSerializer
      messageKey: some-value
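
The same configuration can also be written in the map format described above, with the map keys acting as the transport names (my_http from the example above; my_kafka is an illustrative name):

transport:
  type: composite
  continueOnFailure: true
  transports:
    my_http:
      type: http
      url: http://example.com/api
    my_kafka:
      type: kafka
      topicName: openlineage.events
      properties:
        bootstrap.servers: localhost:9092,another.host:9092
        acks: all
        retries: 3
        key.serializer: org.apache.kafka.common.serialization.StringSerializer
        value.serializer: org.apache.kafka.common.serialization.StringSerializer
      messageKey: some-value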

Dataplex

To use this transport in your project, you need to include the io.openlineage:transports-dataplex artifact in your build configuration. This is particularly important for environments like Spark, where this transport must be on the classpath for lineage events to be emitted correctly.

Configuration

  • type - string, must be "dataplex". Required.
  • endpoint - string, specifies the endpoint to which events are sent, default value is datalineage.googleapis.com:443. Optional.
  • projectId - string, the project quota identifier. If not provided, it is determined based on user credentials. Optional.
  • location - string, Dataplex location. Optional, default: "us".
  • credentialsFile - string, path to the Service Account credentials JSON file. Optional, if not provided Application Default Credentials are used
  • mode - enum that specifies the type of client used for publishing OpenLineage events to Dataplex. Possible values: sync (synchronous) or async (asynchronous). Optional, default: async.

Behavior

  • Events are serialized to JSON, included as part of a gRPC request, and then dispatched to the Dataplex endpoint.
  • Depending on the mode chosen, requests are sent using either a synchronous or asynchronous client.

Examples

transport:
  type: dataplex
  projectId: your_gcp_project_id
  location: us
  mode: sync
  credentialsFile: path/to/credentials.json