Unravel for Azure Databricks

A single deployment of Unravel for Azure Databricks can monitor all your clusters across all your Databricks instances and workspaces.

Unravel for Azure Databricks provides:

  • A single pane of glass, unified across all your Databricks instances and workspaces, to help you understand your resources, infrastructure, applications, and users.

  • Usage breakdown and trending.

  • Insights into and recommendations for your applications.

  • Visibility into data usage, including which tables are accessed by which users and applications, and how heavily they are accessed (hot, warm, and cool tables).

Unravel for Azure Databricks can help you:

  • Visualize and understand your resources, applications, and users across all Databricks instances and workspaces.

  • Speed up applications.

  • Improve the efficiency of your resource utilization.

  • Identify the root cause of application problems and resolve them.

By speeding up applications and improving resource utilization, Unravel for Azure Databricks can help reduce your overall Databricks costs.

Changes and required configurations for Unravel for Azure Databricks

Spark jobs

The Spark APM Program tab is loaded only for spark-submit job types.

The following configurations must be added when you configure the spark-submit job; a combined sketch follows the list.

  • "--files"

    Example: "--files","dbfs:/test/main/scala/spark/sql/benchmarks/PartGroupBy.scala"

  • "--conf","spark.unravel.program.dir="

    Example: "--conf","spark.unravel.program.dir=/dbfs/test/main/scala/spark/sql/benchmarks"