v4.5.4.2 Release notes

Software version

Release Date: 11/18/2019

See v4.5.4.2 for download information.

Software support
  • Upgrade from 4.5.x. All that is required is an RPM upgrade; a rough sketch of the command is shown below.
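
    A minimal sketch of the upgrade command, assuming the release RPM has already been downloaded (the package file name below is a placeholder, not the actual file name; use the RPM provided for this release):

      rpm -Uvh unravel-<version>.x86_64.rpm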

Sensor upgrade
  • You must install the new sensor.

Certified platforms
Updates to Unravel's configuration properties
  • None

Unsupported
  • Migration Planning reports aren't supported for MapR.

New features
  • None

Improvements
  • None

Bug fixes
  • Spark:

    • Spark apps no longer hang or fail to exit after Unravel Sensor is installed. (CUSTOMER-1201)

    • Unravel Spark sensor no longer causes java.lang.ClassCircularityError errors when firing against Data Profiler Agent Spark jobs. (EAR-18)

    • java.lang.ClassCircularityError: javax/crypto/BadPaddingException no longer occurs in the loadExternalClass method in BTrace during a Spark app run. (USPARK-434)

Known issues
  • AutoActions

    • Badge is not displayed for an MR app killed by an AutoAction. (AA-91)

    • AutoActions violation badge functionality is not working for Impala queries (Running, Killed). (AA-191)

    • A Create/Update UI bug prevents users from adding individual apps by app name (only and except modes) in the long-running Impala query template. (UIX-1984)

  • Cluster optimization report is empty when a one-day interval is selected. (REPORT-565)

  • Data insights page

    • Hive table details page parameters are inconsistently null. (Datapage-189, UIX-1874)

    • Created and Accessed Partition details are missing in Overview. (DATAPAGE-109)

    • After an upgrade from 4.5.0.x to 4.5.1.x, the Table KPI (total size of all tables) data is missing. (DATAPAGE-133)

      Workaround

      1. Access the database command line: /usr/local/unravel/install_bin/db_access.sh

      2. Delete the dkt rows from the dashboard_summaries table: mysql> delete from dashboard_summaries where entity_type='dkt';

      3. Restart the Unravel tw daemon: /etc/init.d/unravel_tw restart
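
      The same steps combined into a single session (this sketch assumes db_access.sh opens an interactive MySQL prompt connected to the Unravel database; prompts are shown for illustration):

        /usr/local/unravel/install_bin/db_access.sh
        mysql> delete from dashboard_summaries where entity_type='dkt';
        mysql> quit
        /etc/init.d/unravel_tw restart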

    • When switching the view by label (All, Warm, Cold, and Hot), the page can become unresponsive. (UIX-1878)

      Workaround

      • The only available workaround is to relaunch the crashed browser and then toggle only once, from the default All view to one of the other labels (Hot, Warm, or Cold).

  • HBase: Namespace is missing in the HBase table name. (UIX-1736)

  • Kafka isn't working when the clusterID contains a space. (UKAFKA-69)

  • Migration planning

    • Migration planning reports are not supported for MapR.

    • The following Cloud product/service configurations are not supported:

      • Cloud Product/Service: Azure (IaaS), Azure (HDI)

        Storage Type: Object

        Region: Germany Northeast, South Africa North, Germany Central, US DoD East, US DoD Central, South Africa West, US Gov Virginia, US Gov Iowa.

    • UI displays the last successfully generated report regardless of whether there were subsequent failed attempts to generate a report. There is no indication of the time the displayed report was generated. (REPORT-667)

    • Cluster Discovery Report

      • You can only pick the date, not the time, for the report. The time automatically chosen for this report does not match the time automatically chosen for the Chargeback reports.

        Therefore, the data in the By App Type, By User, and By Queue charts on the Cluster Discovery report doesn't match the data on the Chargeback reports. (REPORT-684)

      • OS value is not shown for source distribution CDH 6.3. (REPORT-930)

      • There is a slight mismatch in the selected Date and Time ranges between the Applications page and the Cluster Discovery report when the chosen interval is the last 7, 30, or 90 days. (UIX-2085)

    • Services and Version Compatibility:

      • Not supported on CDH versions 6.1.1 and 6.0.1. (REPORT-889)

      • The latest target versions for EMR, HDI, and Dataproc may not be included. (REPORT-1094)

      • Some of the component versions shown may differ from those present in the actual source environment, because the versions shown are based on the distribution stack's version rather than captured from the environment itself.

    • Cloud Mapping per Host

      • The New Report modal hangs when you don't provide the Unravel cluster type property (CDH/HDP), com.unraveldata.cluster.type; see the sketch at the end of this list. (REPORT-731)

      • Lift and Shift: DFS Size Recommendation is different between Local Storage and Object Storage options. (REPORT-1117)

      • When HDP is the source system, Object Storage size, Local Attached Storage size, and cost aren't accurate because the per-host logical DFS size isn't captured correctly for the HDP source distribution. (REPORT-990, REPORT-945)

      • Unravel assumes one core on-premises is equivalent to one cloud vCPU and finds the best possible match without considering any performance differences. (REPORT-581)

      • Compatibility between selected VM type and the storage tier/class isn't verified at this time. (REPORT-940)
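
      A minimal sketch of setting the cluster type property mentioned above for REPORT-731 (the properties file path is an assumption based on a default Unravel install; set the value to CDH or HDP to match your source cluster):

        # /usr/local/unravel/etc/unravel.properties  (path assumed)
        com.unraveldata.cluster.type=CDH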

  • MR:

    • The Move to Queue option in Actions (MR APM) does not work. (PLATFORM-1195)

    • Cluster ID missing for MR apps. (PLATFORM-1434)

    • Timestamp is missing for Failed MR apps. (UIX-1873)

  • Platform

    • appstatus.AppInfoAccessor: Unable to update ES document. (CUSTOMER-658)

    • Spark worker is missing Azure libraries, which causes failures when loading event logs and executor logs from Azure.

    • RM polling is delayed when YARN kill and move actions are enabled in AutoActions. (PLATFORM-1479)

    • Setting impalad as the Impala data source raises errors for an invalid HTTP address. (PLATFORM-1444)

  • Queue Analysis: Graph zooming and resetting are not working properly in Edge. (UIX-158)

  • Sessions

    • Limitation in HDP clusters: When applying recommendations to a Spark session, an error like the following sometimes occurs: “Could not run the Spark application on the cluster: java.lang.IllegalStateException: hdp.version is not set while running Spark under HDP, please set through HDP_VERSION in spark-env.sh or add a java-opts file in conf with -Dhdp.version=xxx”. The exact cause of this bug has not yet been determined.

      Workaround

      1. Open spark-env.sh on the client where Unravel is running.

      2. Add a line similar to export HDP_VERSION=2.6.5.0-292, substituting the HDP version that is actually installed on the cluster.
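
      A sketch of the resulting spark-env.sh entry, plus the alternative the error message itself suggests (the version string and the conf path are examples; substitute your cluster's values):

        # spark-env.sh on the client where Unravel runs
        export HDP_VERSION=2.6.5.0-292

        # or, per the error message, a java-opts file in the Spark conf directory
        echo "-Dhdp.version=2.6.5.0-292" > /etc/spark/conf/java-opts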

    • In a MapR cluster, Spark apps fail after tuning recommendations are applied. (SESS-250)

    • Hive: Newly tuned app in MapR is not shown after applying recommendations. (SESS-251)

  • Spark query plan and query text do not match in the Spark APM. (DT-186)

  • UIX

    • Global search and filter by app name are not working. (UIX-1998)

    • The Download CSV and XLS options are not working for the Tez and Spark APMs. (UIX-1999)

  • Workflow: name search is case-sensitive. (UIX-1904)

For support issues, contact Unravel Support.