Streaming analysis nodes
The following new streaming nodes have been added:
- Streaming Rename - Renames data store fields from a streaming input. For more information, see the "Streaming Rename" node help topic.
- Streaming Compute - Uses an expression to create new computed fields or to change the values of existing fields. For more information, see the "Streaming Compute" node help topic.
- Streaming Filter - Applies filter expressions to data store fields to create a new field containing only values that meet the specified conditions. For more information, see the "Streaming Filter" node help topic.
- Streaming Select - Allows you to select fields from one node to pass on to another node. For more information, see the "Streaming Select" node help topic.
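The following minimal Python sketch is purely illustrative of what each of the four nodes does to a stream of records; it does not use the product's expression syntax (see the node help topics for that), and all field names are invented:

```python
# Conceptual illustration only: these generator functions mimic what the four
# streaming nodes do to a stream of records. Field names are invented.

def streaming_rename(records, mapping):
    """Rename fields, e.g. {"cust_id": "customer_id"}."""
    for rec in records:
        yield {mapping.get(k, k): v for k, v in rec.items()}

def streaming_compute(records, field, expression):
    """Add or overwrite a field with a computed value."""
    for rec in records:
        rec[field] = expression(rec)
        yield rec

def streaming_filter(records, predicate):
    """Pass through only records that meet the specified condition."""
    for rec in records:
        if predicate(rec):
            yield rec

def streaming_select(records, fields):
    """Keep only the selected fields."""
    for rec in records:
        yield {k: rec[k] for k in fields}

# Example pipeline over a small in-memory "stream":
stream = iter([{"cust_id": 1, "amount": 250.0}, {"cust_id": 2, "amount": 40.0}])
stream = streaming_rename(stream, {"cust_id": "customer_id"})
stream = streaming_compute(stream, "amount_usd", lambda r: round(r["amount"], 2))
stream = streaming_filter(stream, lambda r: r["amount_usd"] > 100)
stream = streaming_select(stream, ["customer_id", "amount_usd"])
print(list(stream))  # [{'customer_id': 1, 'amount_usd': 250.0}]
```

Chaining the functions in this way mirrors how the nodes pass records from one to the next within an analysis.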
Script rules
You can now incorporate existing script rules from rule libraries into case stores and process models. This allows you to reuse code for complex processing.
Delta Lake data store
A new Delta Lake store repository type is now supported for internal data stores on Cloud editions of the product. The Delta Lake store repository type can handle large volumes of data and allows records to be updated via an analysis after initial loading.
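The product does not expose its internal Delta Lake storage as code, but the update-after-load capability it builds on can be sketched with the open-source Delta Lake API on Spark; the table path, schema and Spark configuration below are assumptions for illustration only:

```python
# Illustrates the Delta Lake capability the new store type builds on: a Delta
# table supports updating existing records after the initial load. Requires
# the delta-spark package; paths and column names are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# Initial load: write records once.
spark.createDataFrame(
    [(1, "open"), (2, "open")], ["record_id", "status"]
).write.format("delta").mode("overwrite").save("/tmp/example_store")

# Later update: merge corrected values into the already-loaded table.
updates = spark.createDataFrame([(2, "closed")], ["record_id", "status"])
target = DeltaTable.forPath(spark, "/tmp/example_store")
(target.alias("t")
    .merge(updates.alias("u"), "t.record_id = u.record_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```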
User interface enhancements
The following enhancements have been made across the application:
- The user interface of the Admin > Environments page has been enhanced.
- The icons used by dashboards, data stores and case stores have been updated in the top navigation menu bar for consistency with other areas of the application.
Terminology change
Throughout the application, the term "Super Group" has been superseded by the term "Environment Group".
Microsoft Excel data stores
Data type detection has been improved for Microsoft Excel data stores. Previously, when creating a data store from an Excel file, all fields were generated as String fields. Now, other data types are also detected, such as dates and number-formatted fields.
The Skip Empty Columns option has been removed from Microsoft Excel data stores. When working with new data stores, or when regenerating fields on an existing data store, there is no longer an option to skip empty columns. Existing data stores will continue to work as before if the Generate fields option is not used.
API enhancements
You can now query case store and data store audit logs via the Data360 DQ+ API.
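As a hypothetical sketch only (the endpoint path, parameter names and authentication scheme below are invented; consult the Data360 DQ+ API documentation for the actual contract), an audit log query might look like this:

```python
# Hypothetical sketch: endpoint path, parameters and auth are assumptions.
import requests

BASE_URL = "https://dqplus.example.com/api"   # placeholder host
TOKEN = "..."                                  # placeholder credential

response = requests.get(
    f"{BASE_URL}/dataStores/my-store/auditLogs",  # invented path
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"from": "2021-01-01T00:00:00Z", "to": "2021-02-01T00:00:00Z"},
    timeout=30,
)
response.raise_for_status()
for entry in response.json():
    print(entry)
```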
Data Store node
When the Data Load Range property is set to Based on Date Parameters, Based on Work ID Parameter or Based on File Path Pattern Parameter, execution now fails if the specified Parameter Name does not exist. Previously, if a parameter was not found, the node would read all new data that had been loaded since the last execution.
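A minimal sketch of the behavior change, with all names invented:

```python
# Sketch of the change: execution now fails fast when the configured parameter
# name is not defined, instead of silently falling back to reading all new
# data loaded since the last execution.
def resolve_load_range(parameter_name, parameters):
    if parameter_name not in parameters:
        # New behavior: hard failure.
        raise ValueError(
            f"Data Load Range parameter '{parameter_name}' does not exist"
        )
        # Old behavior (no longer applied): read all new data since last run.
    return parameters[parameter_name]
```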
Custom Completion Status node
The limit on the amount of data that can be output when using the Custom Completion Status node in an analysis has been increased; at least 10,000 characters are now allowed.
The Custom Completion Status node is used to pass information to a process model, or to an application in the case of API invocation.
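As a minimal sketch (the 10,000-character figure comes from this note; the function itself is invented), a caller can guard its payload size before handing it to the node:

```python
# Sketch only: 10,000 characters is the documented minimum allowance, so it is
# used here as a conservative bound. The function name is invented.
MAX_STATUS_CHARS = 10_000

def build_completion_status(details: str) -> str:
    if len(details) > MAX_STATUS_CHARS:
        raise ValueError(
            f"Completion status exceeds {MAX_STATUS_CHARS} characters"
        )
    return details
```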
Vertica
The version of Vertica used with the product has been updated to Vertica 10.
Azure deployments
If you have a Microsoft Azure deployment and are upgrading from 4.3.x to 5.1, remove the line beginning data_appregistrationobjectid from the dqplus.tfvars file. This step only applies to existing installations.
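For reference, the line to remove will look something like the following in dqplus.tfvars; the GUID value shown here is a placeholder, assuming the standard name = value tfvars format:

```
# Delete this line from dqplus.tfvars before upgrading from 4.3.x to 5.1.
# The value shown is a placeholder, not a real object ID.
data_appregistrationobjectid = "00000000-0000-0000-0000-000000000000"
```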