Compose Release Notes
Notes
Skipping versions: Customers who are not upgrading directly from the previous version are
strongly encouraged to review the release notes for all versions higher than their currently
installed version.
For more information about a particular feature, please refer to the Compose Help at help.qlik.com.
1 Migration and upgrade
For information on the procedure for upgrading from Compose for Data Warehouses to
Compose February 2021, see the February 2021 release notes.
Upgrading from Compose for Data Warehouses 6.6.1 (September 2020) or 7.0
(November 2020):
a. Upgrade to Compose February 2021.
b. Upgrade to Compose May 2021.
c. Upgrade to Compose May 2022.
l Customers with Data Warehouse projects should regenerate all task ETLs either by selecting the
task and clicking the Generate button in the Manage Tasks and Manage Data Marts windows, or
by running the generate_project CLI as described in the Compose online help.
l Customers with Data Lake projects should regenerate all task ETLs by selecting the task and
clicking the Generate button in the Manage Storage Tasks window, or by running the generate_
project CLI as described in the Compose online help.
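For reference, regenerating a project from the CLI might look like the following sketch. The project name is hypothetical and only the basic parameter is shown; see the Compose online help for the full generate_project syntax.
ComposeCli.exe generate_project --project MyComposeProject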
Upgrade scripts
After upgrading, depending on the version from which you upgraded, you might need to generate upgrade
scripts and run them in your databases.
Upgrade script 1
Should be run only if upgrading from versions earlier than Compose August 2021.
Various performance enhancements require modifications to the internal Compose tables in certain data warehouses. If you have Data Warehouse projects configured to use any of the affected databases, you need to generate an upgrade script and then run it in each of the relevant databases.
Running the script in Google Cloud BigQuery and Amazon Redshift databases will delete
historical monitoring metadata.
Upgrade script 2
Should be run only if upgrading from versions earlier than Compose August 2021 Service
Release 02.
This upgrade script must be run after upgrading, as the database structure has been slightly modified to
correctly report the error mart for each source (as part of the Uniform source consolidation (page 9)
feature).
Upgrade script 3
Should be run only if upgrading from versions earlier than Compose August 2021 SP 12, and only if you have projects with a Microsoft Azure Synapse Analytics data warehouse (or intend to create such projects in the future).
For each of your projects, the CLI output will tell you the name of the script and its location. Each script has a different name, consisting of the script identifier, the project name, and a timestamp.
3. Access each of your databases using SQL Workbench or a similar tool and run the script(s).
4. When the script(s) complete successfully, generate and run your tasks in Compose.
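The steps above assume the upgrade script(s) have already been generated with the Compose CLI. For illustration only, such a command might look like the following; the command name matches the generate_upgrade_script command referenced later in these notes, while the project name and exact parameter syntax are assumptions, so consult the Compose online help for the authoritative usage.
ComposeCli.exe generate_upgrade_script --project MyComposeProject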
1.3 Licensing
Existing Compose for Data Warehouses customers who want to create and manage Data Warehouse
projects only in Qlik Compose can use their existing license. Similarly, existing Compose for Data Lakes
customers who want to create and manage Data Lake projects only in Qlik Compose can use their existing
license.
Customers migrating from Qlik Compose for Data Warehouses or Qlik Compose for Data Lakes, and who
want to create and manage both Data Warehouse projects and Data Lakes projects in Qlik Compose, will
need to obtain a new license. Customers upgrading from Compose February 2021 can continue using their
existing license.
It should be noted that the license is enforced only when trying to generate, run, or schedule a task (via the UI or API). Other operations such as Test Connection may also fail if you do not have an appropriate license.
Migration can be performed from Compose for Data Lakes 6.6 only.
Relevant to Compose May 2022 SR1 only. Requires Replicate November 2022 or later.
From Compose May 2022 SR1, if you use Replicate November 2022 or later to land data in Databricks,
only the Replicate Databricks (Cloud Storage) target endpoint can be used. If you are using Replicate May 2022, you can continue using the existing Databricks target endpoints.
l Qlik Replicate - Qlik Compose is compatible with Replicate November 2021 latest service release
and Replicate May 2022.
l Enterprise Manager - Qlik Compose is compatible with Enterprise Manager May 2022.
l Qlik Replicate: Qlik Compose is compatible with Replicate May 2022 and Replicate November
2022.
l Enterprise Manager: Qlik Compose is compatible with Enterprise Manager November 2022.
l Qlik Replicate: Qlik Compose is compatible with Replicate May 2022, Replicate November 2022,
and Replicate November 2022 SR1.
l Enterprise Manager: Qlik Compose is compatible with Enterprise Manager November 2022 SR1.
2 What's new?
The following section describes the enhancements and new features introduced in Qlik Compose May
2022.
The "What's new?" is cumulative, meaning that it also describes features that were already
released as part of Compose August 2021 service/patch releases. This is because customers
upgrading from initial release versions might not be aware of features that were released in
interim service releases.
When you select the Keep in Change Tables option, the changes are kept in the Change Tables after they
are applied (instead of being deleted or archived). This is useful as it allows you to:
l Use the changes in multiple Compose projects that share the same landing
l Leverage Change Table data across multiple mappings and/or tasks in the same project
l Preserve the Replicate data for auditing purposes or reprocessing in case of error
l Reduce cloud data warehouse costs by eliminating the need to delete changes after every ETL
execution
Referenced dimensions
This version introduces support for referencing dimensions. To facilitate this new functionality, a new Reference selected dimensions option has been added to the Import Dimensions dialog, which, together with the toolbar button, has been renamed to Import and Reference Dimensions.
The ability to reference dimensions improves data mart design efficiency and execution flexibility by
facilitating the reuse of data sets. Reuse of dimension tables across data marts allows you to break up fact
tables into smaller units of work for both design and data loading, while ensuring consistency of data for
analytics.
l The automatic data mart adjust feature has been extended to include DROP COLUMN and
ADD COLUMN support.
l In previous versions, adding a dimension which did not relate to any fact would require the data mart to be dropped and recreated. From this version, such dimensions can be added using auto-adjust, including Date and Time dimensions.
l The generate_project CLI now supports automatic data mart adjust for specific objects. In
previous versions, Compose would adjust the data marts by dropping and recreating the tables,
regardless of the required change. This would sometimes take a lot of time to complete. From this
version, only the changes will be adjusted. For example, if a new column was added to a dimension,
only that specific column will be added to the data mart tables. To support this new functionality, the --stopIfDatamartsNeedRecreation parameter must be included in the command (see the example after this list). If this parameter is omitted and the data mart needs to be adjusted, Compose will drop and recreate the data mart tables as it did in previous versions.
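As an example of the item above, a generate_project invocation that adjusts data marts in place might look like the following. This is a sketch only: the project name is hypothetical and any other parameters are omitted.
ComposeCli.exe generate_project --project MyComposeProject --stopIfDatamartsNeedRecreation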
To facilitate this, a new mark_reload_datamart_on_next_run CLI has been developed. The new CLI
allows users to mark dimensions and facts to be reloaded on the next data mart run. These can either be
specific dimensions and facts or multiple dimensions and facts (either from the same data mart or different
data marts) using a CSV file.
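A sketch of such an invocation is shown below. The parameter names (--project, --objects_csv) and values are illustrative assumptions only, not confirmed syntax; refer to the Compose online help for the actual parameters of mark_reload_datamart_on_next_run.
ComposeCli.exe mark_reload_datamart_on_next_run --project MyComposeProject --objects_csv C:\Compose\reload_objects.csv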
Hubs: CMPS_HubIns
Satellites: CMPS_SatIns
To enable uniform source consolidation configuration, a new Consolidation tab has been added to the
data warehouse task settings.
When the Consolidate uniform sources option is enabled, Compose will read from the selected data
sources and write the data to one consolidated entity. This is especially useful if your source data is
managed across several databases with the same structure, as instead of having to define multiple data
warehouse tasks (one for each source), you only need to define a single task that consolidates the data
from the selected data sources.
Environment variables
Environment variables allow developers to build more portable expressions, custom ETLs, and Compose
configurations, which is especially useful when working with several environments such as DTAP
(Development, Testing, Acceptance and Production). Different environments (for example, development
and production) often have environment-specific settings such as database names, schema names, and
Replicate task names. Variables allow you to easily move projects between different environments without
needing to manually configure the settings for each environment. This is especially useful if many settings
are different between environments. For each project, you can use the predefined environment variables or
create your own environment variables.
Support for data profiling and data quality rules when using Google
Cloud BigQuery
You can now configure data profiling and data quality rules when using Google Cloud BigQuery as a data
warehouse.
Performance improvements
This version provides the following performance improvements:
l Validating a model with self-referencing entities is now significantly faster than in previous versions.
For instance, it now takes less than a minute (instead of up to two hours) to validate a model with
5500 entities.
l The time it takes to "Adjust" the data warehouse has been significantly reduced. For instance, it now
takes less than three minutes (instead of up to two hours) to adjust a data warehouse with 5500
entities.
l Optimized queries, resulting in significantly improved data warehouse loading and CDC
performance.
l Significantly improved the loading speed of data mart Type 2 dimensions with more than two entities. In order to benefit from this improvement, customers upgrading with existing data marts need to regenerate their data mart ETLs.
l Improved performance of data warehouse loading, by reducing statements executed when there is
no data to process. This change impacts cloud data warehouses such as Snowflake, Amazon
Redshift, Google BigQuery, and so on.
Customers who want to leverage this support need to create Redshift Spectrum external tables and
discover them. Additionally, when running a CDC task, the new Keep in Change Tables option described
above needs to be turned on.
l Exclude the corresponding record from the ODS views - This is the default option as records
marked as deleted should not usually be included in ODS views.
l Include the corresponding record in the ODS views - Although not common, in some cases you might want to include records marked as deleted in the ODS views in order to analyze the number of deleted records and investigate the reason for their deletion. Also, regulatory compliance might require you to be able to retrieve the past record status (which requires change history as well).
As this was the default behavior in previous versions, you might need to select this
option to maintain backward compatibility.
In previous versions, HDS resolution was one second. This was problematic at times as multiple changes
to a Primary Key within a second resulted in only the last change appearing in the HDS. To view all the
history, customers were forced to review the landing.
From this version, all changes (history) will be shown in the HDS, facilitating better support for auditing.
Databricks projects
New Databricks versions
l Databricks 9.1 LTS is now supported on all cloud providers (AWS, Azure, and Google Cloud
Platform).
l Databricks 10.4 LTS is now supported on all cloud providers (AWS, Azure, and Google Cloud
Platform).
Databricks 10.4 LTS is supported from Compose May 2022 SR1 only.
Compose May 2022 SR1 introduces support for SQL Warehouse compute. To benefit from this support,
customers need to use the new Replicate Databricks (Cloud Storage) target endpoint, which is available
from Replicate November 2022. SQL Warehouse compute offers a lower cost alternative to clusters while
also allowing Parquet file format to be used in the Landing Zone.
The following image shows the banner with both an Environment title and a Project title:
The banner text is shown without the Environment title and Project title console labels. This provides greater flexibility as it allows you to add any banner text you like, regardless of the actual label name. For example, specifying Project owner: Mike Smith in the Project title field will display that text in the banner.
Security Hardening
For security reasons, command tasks are now blocked by default. To be able to run command tasks, a
Compose administrator needs to turn on this capability using the Compose CLI. For more information, see
the Compose online help.
This functionality only applies to command tasks created after a clean installation. If you
upgrade to this version, command tasks will continue to work as previously.
You can set and update user and group roles using the Compose CLI. You can also remove users and
groups from a role in one of the available scopes (for example, Admin in All Projects). This is especially
useful if you need to automate project deployment.
End of support for Databricks 7.3 is applicable to Compose May 2022 SR1 only.
4 Resolved issues
This section lists the resolved issues for the Compose May 2022 initial release and subsequent service releases.
Type: Issue
Description: After the data mart database name was applied as an environment variable, Compose would
not clear the cache automatically, resulting in the old cache object not being reset.
Type: Issue
Component/Process: UI
Description: Selecting a Replicate task would not be possible when using a Hortonworks Data Platform
endpoint in a Cloudera Data Platform Compose project.
Type: Enhancement
Description: An option has been added to remove environment information when exporting projects (CLI)
or creating deployment packages.
To facilitate this functionality, the --without_environment_specifics parameter was added to the CLI and a Replace environment specifics with defaults option was added to the Create Deployment Package window.
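For example, when exporting a project via the CLI, the parameter might be appended as shown below. This sketch assumes the parameter applies to the export_project_repository command; the project name and output path are hypothetical.
ComposeCli.exe export_project_repository --project MyComposeProject --outfile C:\Compose\MyComposeProject.json --without_environment_specifics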
Type: Issue
Description: The following error would sometimes be encountered when deploying a project:
Invalid Configuration file the database <name> Landing does not exist
Type: Enhancement
Description: A new Project title field has been added to the project settings' General tab. The value of the field will be included in the project deployment.
Type: Issue
Description: When the schema name was *, testing the connection for the landing database would return
the following error:
Type: Issue
Component/Process: Lineage
Description: When importing data marts using the ComposeCli import_csv command, the "Show lineage" option for corresponding domain attributes would be disabled.
Type: Issue
Description: When a landing connection was removed from the target project, project deployment would
fail with the following error:
Type: Issue
Description: Hub tables would sometimes be updated unnecessarily which would result in unnecessary
updates of the related dimensions.
Type: Issue
Type: Issue
Type: Issue
Description: When a data mart contained an entity with multiple satellites, the query would sometimes be
generated incorrectly.
Type: Issue
Type: Issue
Description: The Compare CSV CLI would sometimes not complete successfully.
Type: Issue
Description: An error would sometimes occur when opening the Expression Editor.
Type: Issue
Description: Records in the data warehouse would not be updated with a NULL value, even though the
data warehouse task was set to "Set the target value to null".
Type: Issue
Description: Validating the metadata would fail with an error that "ID" is a reserved word.
Type: Issue
Description: In the generated project documentation, the domain name would be shown in the attribute
name field.
Type: Issue
Component/Process: Databricks
Description: After upgrading to 2021.08 SP08, Databricks connection issues would be encountered when
a token was revoked.
Type: Issue
Description: The following Oracle syntax error would be encountered during the initial load task command:
Type: Issue
Component/Process: Facts
Description: State oriented facts would not reflect changes that were made to the Type 2 relation or
changes that were made to the dimension table.
Type: Issue
Description: Users with the "Designer" role were not able to deploy project deployment packages.
Type: Issue
Description: After running the import_csv CLI command to import tasks, the generated task statements
would contain a syntax error.
Type: Issue
Description: When working with large models, it would not be possible to edit a dimension or fact.
Type: Issue
Description: Importing a CSV file to a project with a Microsoft Azure Synapse Analytics data warehouse
would fail if the CSV contained an NVARCHAR attribute.
Type: Issue
Component/Process: Security
Type: Issue
Description: Running the generate_upgrade_script command would fail after upgrading to 2021.8.0.425.
Type: Issue
Type: Issue
SQL compilation error: Object does not exist, or operation cannot be performed.
Type: Issue
Description: Creating a denormalized new dimension would create the root dimension only.
Type: Issue
Component/Process: Workflows
Description: In rare cases, it would not be possible to create, edit, or duplicate workflows.
Type: Issue
Component/Process: Upgrade
Description: After migrating to 2021.5, projects containing two domain attributes with the same name but a
different case (e.g. abc and Abc) would fail to load with the following error:
SYS,GENERAL_EXCEPTION, An item with the same key has already been added.
Type: Issue
Description: It would not be possible to open a project after deployment if one schema was missing.
Type: Issue
Description: Fact tables would contain obsolete VIDs from dimensions, resulting in orphaned records.
Type: Issue
Description: Data mart loading tasks would sometimes fail with the following error:
Cannot write value for process parameter twice: 1265: Duplicate write to param DimCnt_Tot
Type: Issue
Component/Process: Loading data mart dimensions into Snowflake and Microsoft Azure Synapse
Analytics
Description: When a data mart ETL task failed, the next task would sometimes load duplicate rows into
dimensions.
Type: Issue
Description: Adding data mart dimensions would sometimes fail without a clear error.
Type: Issue
Description: The following error would occur when validating the data warehouse:
Index was out of range. Must be non-negative and less than the size of the collection
Type: Issue
Component/Process: Snowflake
Description: The data warehouse ETL would fail to create a transient table with a "already exists" error.
Type: Issue
Component/Process: CLI
Description: Importing a project repository to a new project that does not exist would fail with the following error:
Type: Issue
Component/Process: Backdating
Description: Backdated data in the Data Warehouse would not get updated in the Data Mart.
Type: Issue
Component/Process: Backdating
Description: Migrating a project from an older version would disable the backdating options. The issue was
resolved by adding a new CLI command line that sets the "Add actual data row and a precursor row" option
for all entities as well as in the project settings.
After running the command, refresh the browser to see the changes.
Type: Issue
Description: When a landing table had a foreign key, discovering the table would result in the following
error (excerpt):
Specified argument was out of the range of valid values.
Type: Issue
Description: Validation of Databricks storage and Snowflake data warehouse would be excessively long.
The slow Databricks validation would also impact schema evolution.
Type: Issue
Description: In Google BigQuery projects, the data mart pivot table would display a "no data" error even when there was data in the tables.
Type: Issue
Description: In Google BigQuery projects, the following error would be encountered when using the data
profiler: "SYS,GENERAL_EXCEPTION,Sequence contains no elements"
Type: Issue
Description: The OID and VID column names would include the entire path from the fact source to the
dimension instead of just the dimension name.
Type: Issue
Description: When setting up a MySQL source connection, testing the connection would return the
following error: "Object reference not set to an instance of an object".
Type: Issue
Description: After deleting an entity, export of projects using the CLI would sometimes fail.
Type: Issue
Description: When a dimension contained more than 10 entities, loading of the data mart would fail with
the following error: "Case expressions may only be nested to level 10.Operation cancelled by user"
Type: Issue
Description: Data mart task generation would fail when attributes of the same entity were assigned to
different satellite tables.
Type: Issue
Description: Generating Bulk Operations would not include the last data mart in the list.
Type: Issue
Type: Issue
Component/Process: CLI
Type: Issue
Type: Issue
Description: Running the export_csv command would cause ETL Set generation to fail for lookups with the
following error:
Type: Issue
Description: Data Mart creation would sometimes fail with the following error "Sequence contains no
matching element".
Type: Issue
Description: An error would sometimes be encountered when trying to delete a star schema.
Type: Issue
Component/Process: ETLs
Description: The ETL for handling data mart dimensions would use the non-optimized approach for one of
the statements.
Type: Issue
Component/Process: Snowflake
Description: After four hours of inactivity, a "Snowflake Authentication token has expired" error would be
shown.
Type: Issue
Component/Process: ETLs
Description: Verification of unused and/or outdated column mapping expressions would lead to redundant
errors.
Type: Issue
Description: Validation of Type 2 dimensions would sometimes fail with an error that no Type 2 columns
were detected (and that the dimension should be created as Type 1), even though Type 2 relationships
existed in the dimension.
Type: Issue
Component/Process: Security
Type: Issue
Component/Process: UI
Description: Editing a data mart entity after creating the data mart would result in all of the fields being
reordered alphabetically.
Type: Issue
Description: Enabling the Write metadata to the TDWM tables in the data warehouse option in the
project settings would have no effect.
Type: Issue
Description: The source schema connection would not be updated after deploying a deployment package.
Type: Issue
Description: Data mart creation would fail when there were more than 500 relationships.
Type: Issue
Description: An error would occur when trying to connect to Amazon Redshift using SSL.
Type: Issue
Description: When there was a 3-tier relationship - for example, Entity_A→Entity_B→Entity_C - and the
Fact table contained columns from Entity_A and Entity_C, changes in the relationship values in Entity_B
(which should have updated columns from Entity_C in the Fact) would not be updated in the Fact table.
Type: Issue
Description: Reading from live views would take an excessively long time.
Type: Issue
Description: Columns with numeric(n,n) data types would not be retrieved from the Landing Zone.
Type: Issue
Component/Process: Import
Description: The following error would sometimes be encountered when importing a data mart:
Type: Issue
Description: Generating the project would truncate the data mart tables when running the following
command:
After generating the project, you need to clear the cache by running the following command:
Type: Issue
Description: When loading dimensions, a column would sometimes be used twice, causing the data mart
task to fail.
Type: Issue
Description: A runtime parameter ("MutCnt_8323" or similar) was incorrectly initialized, causing the data
mart task to fail.
Type: Enhancement
Description: Performance was improved by adding indexes to Transactional and State Oriented fact
tables.
Type: Enhancement
Description: Performance was improved by creating the TEMP table as a HEAP table instead of a HASH
table.
Type: Enhancement
Description: Performance was improved by updating the statistics after each incremental load of the
dimensions.
Type: Enhancement
Description: Performance was improved for data mart ETL tasks by adding indexes (over columns used
for join clauses) to intermediate tables.
Type: Issue
Component/Process: Diagnostics
Description: Diagnostic packages would contain the server name of the customer environment, which
would sometimes result in users being locked out when the package was deployed in our internal testing
environment. Now, the diagnostic packages will be generated without the server name.
Type: Issue
Description: The project documentation for Multi-Table ETLs and Post-Loading ETLs was generated
without contents.
Type: Enhancement
Description: A session expired error would sometimes occur when running CLI commands that took a long time to complete (e.g. import_csv). To resolve such timeouts, users can now add the "--timeout seconds" parameter to the command. Setting "--timeout -1" will run the command without it timing out.
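For example, a long-running import might be run without a timeout as shown below. This is a sketch only; the project name is hypothetical and the other import_csv parameters are omitted.
ComposeCli.exe import_csv --project MyComposeProject --timeout -1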
Type: Issue
Description: Errors in Post-ETL stored procedures run on Microsoft Azure Synapse Analytics would not be
reported.
Type: Issue
Description: While working with Snowflake via the private link configuration, the engine task would
sometimes stop unexpectedly.
Type: Issue
Description: When dropping a relationship to a lookup-table in the Model, adjusting the data mart would
fail with the following error:
Type: Issue
Description: The following error would sometimes be encountered when generating ETLs after data mart
validation:
Type: Issue
Description: Data mart tasks would sometimes fail with the following error:
Type: Enhancement
Description: Subquery HIVE errors would sometimes be encountered when creating and reading from the real-time view. The issue was resolved by updating the latest applied partition during runtime.
Type: Issue
Description: Performance issues would sometimes be encountered when loading data warehouse satellite tables.
Type: Issue
Description: When generating project documentation, the following error would sometimes occur:
System.OutOfMemoryException
Type: Issue
Description: Adding a dimension without the "dummy" row would result in incomplete loading on the next
task run.
Type: Issue
Description: The following error would sometimes be encountered when trying to generate a data mart:
Type: Issue
Description: An "invalid column name" error would sometimes be encountered when running data mart
tasks.
Type: Issue
Description: When regenerating tasks, Compose would automatically adjust all fact tables by removing the Date dimension OID.
Type: Issue
Description: The following error would sometimes occur in an Aggregated Star Schema with a Date
dimension:
Type: Issue
Description: A "Sequence contains no elements" error would sometimes occur when generating data mart
tasks.
Type: Issue
Description: A "Value Cannot be Null" error would sometimes occur when generating data mart tasks.
Type: Issue
Description: An "Invalid identifier" error would sometimes occur when running data mart tasks.
Type: Enhancement
Component/Process: Databricks
Type: Issue
Description: When opening a Custom ETL, code with more than 11 lines would not load completely.
Type: Issue
Description: When running the import_project_repository command, a "Data mart is not valid" error would
sometimes be encountered.
Type: Issue
Component/Process: Logging
Description: When a data mart task instance failed, the data mart log information would sometimes be
inaccurate.
Type: Issue
Type: Issue
Description: Data warehouse ETL generation would fail when query-based mapping included custom
environment variables.
Type: Issue
Description: Expressions would be ignored when defined on existing fact table attributes.
Type: Enhancement
Description: The header__batch_modified column will now be cast as varchar(32) for the outbound
Apache Impala views. To leverage this enhancement, you need to set an environment variable.
Type: Issue
Description: CDC tasks using AWS Glue would sometimes fail with the following error:
Type: Issue
Description: When using Hive 3.1.3, the following error would sometimes be encountered:
Type: Issue
Description: Upgrading from Compose for Data Lakes 6.6 would cause the column prefix to change from
header to hdr. To leverage this fix, you need to set an environment variable.
Type: Issue
Component/Process: Discovery
Type: Issue
Description: The compare_csv CLI option would not work properly when project items contained line
breaks.
Type: Issue
Description: The following error would sometimes occur when generating a data mart:
Type: Issue
Component/Process: Tasks
Description: When a project had two full load tasks reading from the same landing zone database, the
CDC task would start from the default partition.
Type: Issue
Description: Performance issues would be encountered when updating the fact table.
Type: Enhancement
Description: Revised the ELT statements to reduce the number of statements and improve performance when running against Synapse.
Type: Issue
Description: Data mart loading would sometimes fail at the "Merging changes into dimension" stage.
Type: Issue
Description: Applying a predefined variable using the CLI would result in an "Object reference not set to an
instance of an object" error.
Type: Issue
Component/Process: Security
Description: Updated the ojdbc component (ojdbc7-12.1.0.2) to the newest version, which fixes security
vulnerability CVE-2016-3506.
Type: Issue
Component/Process: Security
Description: Updated the PostgreSQL component (postgresql-42.2.25) to the newest version, which fixes
security vulnerability CVE-2022-21724.
Type: Issue
Description: If the connection to the database was aborted, the task would not recover.
Type: Issue
Description: Changing dimensions from Type 2 to Type 1 would sometimes result in the following errors
when recreating the associated tables:
Type: Issue
Description: When validating a data mart with Date and Time columns, the validation would incorrectly
report the following message:
The data mart tables in the database are different from the data mart
definition
Type: Issue
Description: After importing a project from a diagnostic package, editing the connection settings would
result in a 'SYS,DESERIALIZE_TO_TYPE' incompatibility error.
Type: Issue
Description: Due to an issue with handling relationships, the data mart task would sometimes fail with the
following error:
Type: Issue
Component/Process: Filters
Description: The fact table would not use the filter of the dimension table it was related to.
Type: Issue
Description: Relationship prefixes would be ignored when adding dimensions to existing facts.
Type: Issue
Description: Loading the data mart would sometimes fail with an "Invalid column name" error.
Type: Issue
Description: The SQL Server TempDB system database would reach capacity during Data Mart task
execution.
Type: Issue
Description: Data mart tasks would take an excessively long time to complete.
Type: Issue
Description: When there were multiple relationships to the same table, issues would be encountered when
generating the data mart task.
Type: Issue
Description: The UPDATE STATS command would only update the stats on some of the fact tables,
instead of all of them.
Type: Issue
Description: When running Full Load ETL statements, records would be loaded directly into the indexed
data mart table using CTE (Common Table Expression). These inserts would take an excessively long
time to complete.
Type: Issue
Description: When an entity had a self-referencing relationship, data mismatches would sometimes occur
between the data warehouse and data mart hierarchies.
Type: Issue
Description: The OBSOLETE__INDICATION = 0 rows indicator would be temporarily missing from the
data mart while the task was running.
Type: Issue
Description: A task with five or more relationships would take an excessively long time to complete.
Type: Issue
Description: When defining a multi-column filter condition on a data mart dimension, where one column
was from a Satellite table and the other column was from a Hub table, the condition would not be
processed correctly.
Type: Issue
Description: The following error would sometimes occur after running the data mart task:
Type: Issue
Description: The OPTION(FORCE ORDER) hint would not be added for state-oriented fact tables.
Type: Issue
Description: An "ambiguous column" error would occur in the data mart after upgrading from Compose for
Data Warehouses 7.0 (November 2020).
Type: Issue
Description: A join clause would be used for INSERT/UPDATE operations, even when flags were set.
Type: Issue
Description: Previously deleted records would still be shown as deleted after the source was reloaded.
Type: Issue
Description: ETL tasks would try to connect to localhost instead of the configured DSN, and fail.
Type: Feature
Description: Added the ability to manage user and group roles using the Compose CLI.
Type: Issue
Type: Issue
Description: The following error would occur when using the JDBC 4.2 driver:
Type: Issue
Component/Process: Databricks
Description: The following error would occur when attempting to connect using the latest Databricks JDBC
driver:
Type: Enhancement
Description: Added support for the new "Databricks (Cloud Storage)" Replicate endpoint.
Type: Issue
Component/Process: Snowflake
Description: "Header" columns would be case-sensitive in task statements. The issue was resolved by
setting the "setIgnoreCaseFlag" flag.
Type: Issue
Description: When using the Drop and Recreate > Tables Data Warehouse option, data would not be
populated into the Date and Time hub tables.
Type: Issue
Description: Updating "ghost" references in the data warehouse would not add the records to the
dimension.
Type: Issue
Description: It would not be possible to run multiple instances of the Compose CLI. Therefore, it would not
be possible to run multiple project workflows in parallel using the Compose CLI.
Type: Issue
Description: MIN/MAX custom date functions in the data mart task statements would be dropped
prematurely.
Type: Issue
Description: When generating ETLs after data mart validation, the following errors would sometimes
occur:
-OR-
Type: Issue
Type: Issue
Component/Process: Installation
Description: Some of the HTML files were missing after the installation.
Type: Enhancement
Component/Process: Views
Description: CDP view creation was modified for Apache Impala compatibility.
Type: Issue
Component/Process: Upgrade
Description: After upgrading from Compose November 2021 to Compose May 2022, the following error
would occur:
Type: Issue
Description: Data mart tasks would sometimes fail with the following error:
5 Known issues
This section describes the known issues for this release.
Description: When using Replicate to move source data to Compose, both the Full Load and Store
Changes replication options must be enabled. This means that when Replicate captures a new column, it
is added to the Replicate Change Table only. In other words, the column is stored without being added to
the actual target table (which in terms of Compose is the table containing the Full Load data only i.e. the
landing table).
For example, let's assume the Employees source table contains the columns First Name and Last Name.
Later, the column Middle Name is added to the source table as well. The Change Table will contain the
new column while the Replicate Full Load target table (the Compose Landing table) will not.
In older versions of Compose for Data Warehouses, mappings relied on the Full Load tables (the Compose
Landing tables), meaning that users were not able to see any new columns (i.e. Middle Name in the above
example) until they were created in the Full Load tables via a reload.
From Compose May 2021, the Compose Discover and Mappings windows show changes to new columns
that exist in both the Change Tables and the Replicate Full Load target tables. This allows Schema
Evolution to suggest adding columns that exist in either of them.
Although this is a much better implementation, it may create another issue. If a Full Load or Reload occurs
in Compose before the Replicate reload, Compose will try to read from columns that have not yet been
propagated to the Landing tables (assuming they exist in the Change Tables only). In this case, the
Compose task will fail with an error indicating that the columns are missing.
Should you encounter such a scenario, either execute a reload in Replicate or create an additional
mapping without the new columns to allow Compose to perform a Full Load from the Landing tables.
Description: If a dimension being referenced is dropped and created, or reloaded for any reason (for
example, the source data mart is fully rebuilt on each load), any facts to which the referenced dimension
was added should be reloaded too. Compose does not handle this automatically.
Workaround:
Description: When generating the data warehouse task, if any attribute with the JSON data type is defined
as Type 2, the following error will occur: