* **Client: Add JWT authentication support** (#4313) @jakub-moravec
  *Add a JWT authenticator for the Java and Python clients, enabling token-based authentication without requiring a custom authenticator implementation.*
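To illustrate the token-based flow the entry above enables, here is a minimal, hedged sketch of a JWT authenticator in Python. The class and method names (`JwtAuthenticator`, `auth_header`, `is_expired`) are hypothetical, not the actual client API; only the general JWT mechanics (Bearer header, base64url-encoded payload with an `exp` claim) are standard.

```python
import base64
import json
import time

class JwtAuthenticator:
    """Illustrative sketch of a token-based authenticator (not the real client API)."""

    def __init__(self, token: str):
        self._token = token

    def auth_header(self) -> str:
        # A client would attach this value as the Authorization header.
        return f"Bearer {self._token}"

    def is_expired(self) -> bool:
        # JWTs carry an 'exp' claim in the base64url-encoded payload segment.
        payload_b64 = self._token.split(".")[1]
        payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped padding
        payload = json.loads(base64.urlsafe_b64decode(payload_b64))
        return payload.get("exp", 0) < time.time()
```

The point of a built-in authenticator like this is that callers only supply a token (or a token source); header construction and expiry checks stop being per-application boilerplate.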
* **Spark: Extract input symlinks from DataSourceRDD** (#4283) @kchledowski
  *Enable extraction of input dataset symlinks from DataSourceRDD, providing richer lineage information for RDD-based Iceberg operations.*
* **Spark: Disable column-level lineage for LogicalRDD plans** (#4329) @kchledowski
  *Disable column-level lineage extraction for LogicalRDD plans to prevent incorrect lineage caused by lost schema and transformation context.*
* **Spark: Disable input schema extraction for LogicalRDD and add schema extraction for Iceberg DataSourceRDD** (#4331) @kchledowski
  *Disable unreliable input schema extraction from LogicalRDD and instead extract schemas from Iceberg table metadata when reading via DataSourceRDD.*
* **dbt: Align schema definition for dbt facets** (#4285) @LegendPawel-Marut
  *Align schema definitions for dbt-run-run-facet and dbt-version-run-facet to fix validation inconsistencies.*
* **dbt: Better error handling for finding the directory of profiles.yml** (#4320) @ah12068
  *Handle a missing profiles_dir key in run_results.json gracefully, falling back to default profile directory resolution.*
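The fallback logic described in the profiles.yml entry above can be sketched as follows. This is an illustrative shape, not the integration's actual code: the `resolve_profiles_dir` helper is hypothetical, while `DBT_PROFILES_DIR` and the `~/.dbt` default are dbt's own documented conventions.

```python
import os
from pathlib import Path

# dbt's documented default location for profiles.yml.
DEFAULT_PROFILES_DIR = Path.home() / ".dbt"

def resolve_profiles_dir(run_results: dict) -> Path:
    """Illustrative sketch: tolerate a missing profiles_dir in run_results.json."""
    args = run_results.get("args", {})
    profiles_dir = args.get("profiles_dir")  # may be absent in some dbt versions
    if profiles_dir:
        return Path(profiles_dir)
    # Fall back: honor DBT_PROFILES_DIR if set, else dbt's default directory.
    return Path(os.environ.get("DBT_PROFILES_DIR", DEFAULT_PROFILES_DIR))
```

The key point of the fix is the graceful path: a missing key yields the default resolution instead of a KeyError-style failure.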
* **dbt: Fix --target-path CLI flag being ignored** (#4298) @gaurav-atlan
  *Fix the --target-path CLI argument not being parsed and passed to artifact processors, which caused the default target path to always be used.*
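The shape of the --target-path fix above can be sketched with a small, hedged example: parse the flag and forward it, falling back to dbt's default `target` directory only when it is absent. The parser and `resolve_target_path` helper here are illustrative, not the integration's real wrapper; only the flag name matches dbt's CLI.

```python
import argparse
from pathlib import Path

# Illustrative parser: the bug was that this flag was never parsed/forwarded.
parser = argparse.ArgumentParser()
parser.add_argument("--target-path", dest="target_path", default=None)

def resolve_target_path(argv) -> Path:
    """Sketch: use the user-supplied path, else dbt's default 'target' dir."""
    args, _unknown = parser.parse_known_args(argv)
    return Path(args.target_path) if args.target_path else Path("target")
```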
* **Flink: Use Flink 2.x-only class for version detection** (#4312) @mobuchowski
  *Fix false Flink 2.x detection when modern V2-based connectors are used with Flink 1.x by using JobStatusChangedListenerFactory for version detection.*
* **Java: Send Content-Encoding header for compressed requests** (#4282) @Lukas-Riedel
  *Send the Content-Encoding header when request body compression is enabled in the Java client, consistent with the Python client's behavior.*
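The Content-Encoding entry above boils down to a small HTTP rule: a gzip-compressed body must be advertised with a `Content-Encoding: gzip` header or the server cannot know to inflate it. A minimal Python sketch (function name illustrative, not client code):

```python
import gzip

def build_request(body: bytes, compress: bool = False):
    """Sketch: pair body compression with the matching Content-Encoding header."""
    headers = {"Content-Type": "application/json"}
    if compress:
        body = gzip.compress(body)
        # This header was the missing piece: without it the server receives
        # opaque gzip bytes it does not know how to decode.
        headers["Content-Encoding"] = "gzip"
    return headers, body
```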
* **Spark: Early return in DatabricksEventFilter for non-Databricks platforms** (#4315) @mobuchowski
  *Add a fast environment-detection check to skip Databricks-specific event filtering on non-Databricks platforms, reducing overhead.*
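The early-return pattern in the entry above can be sketched briefly. The environment variable `DATABRICKS_RUNTIME_VERSION` is genuinely set on Databricks clusters; everything else here (function names, the placeholder filtering condition) is an illustrative stand-in for the real filter logic.

```python
import os

def is_databricks_runtime() -> bool:
    # Databricks clusters set this variable; other platforms do not.
    return "DATABRICKS_RUNTIME_VERSION" in os.environ

def should_filter_event(event: dict) -> bool:
    """Sketch: bail out cheaply before any Databricks-specific inspection."""
    if not is_databricks_runtime():
        return False  # early return: no per-event work on non-Databricks platforms
    # Placeholder for the real Databricks-specific checks, which only run here.
    return event.get("jobName", "").startswith("databricks_")
```

The design point is ordering: one cheap environment check up front replaces repeated per-event inspection on platforms where it can never match.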
* **Spark: Handle null Glue ARN in IcebergHandler** (#4311) @mobuchowski
  *Fix a NullPointerException when processing Iceberg datasets with the AWS Glue catalog by safely handling null Glue ARN values.*
* **Spark: Handle ResolvedIdentifier in DropTableVisitor** (#4316) @mobuchowski
  *Fix a ClassCastException on DROP TABLE commands in Databricks Runtime 14.2+ by handling ResolvedIdentifier alongside ResolvedTable.*
* **Python: Remove unneeded build runtime dependency** (#4344) @mobuchowski
  *Remove the build package from runtime dependencies, as it is only needed at build time and is already handled by the build system configuration.*
* **Spark: Resolve parent job name in ParentRunFacet for AWS Glue** (#4340) @tstrilka
  *Fix an incorrect parent job name in the ParentRunFacet for child events (SQL_JOB, RDD_JOB) on AWS Glue, where the raw spark.app.name was used instead of the resolved application name from platform-specific name resolvers.*
* **Spark: Skip Iceberg RDD integration tests on Java 8 with Spark 3.5** @mobuchowski
  *Skip testAppendWithRDDTransformations and testAppendWithRDDProcessing on Java 8 + Spark 3.5, where the Iceberg vendor module is not compiled because Iceberg 1.7 requires Java 11+.*