ATTENTION: To continue receiving product update notices, please Sign In and select “Follow” on the Data360 Analyze announcements page here.
We are excited to announce the Data360 Analyze 3.6.0 release is now available.
Product downloads are available within our Data360 Analyze Download section.
New in this release:
This Generally Available Long Term Support (LTS) release provides the following enhancements that are now available for the first time in any Data360 Analyze release:
- Streamlined and improved functionality when importing data from delimited files
- A new and improved node to write data to delimited files
- Additional flexibility when configuring network connections to Amazon S3
- Additional capabilities to query database metadata and enable data lineage
- The ability to annotate your data flows using boxes added to the canvas
- Additional flexibility and capabilities when defining the Run Properties of scheduled jobs
- The ability to pop-out Data Viewer tabs into new windows and compare data sets side-by-side
- Support for conditional execution of scheduled data flows, with support for failure notifications by email.
See below in this article for further details of these enhancements.
In addition to the 'net new' functionality listed above, the Data360 Analyze 3.6.0 release also includes functionality that was previously made available in the Data3Sixty Analyze 3.5.x Feature releases but was not available to customers using the 3.4.0 LTS release or one of the 3.4.x LTS maintenance releases.
Enhancements previously released in 3.5.2:
The enhancements delivered in the 3.5.2 release include:
- Improved data profiling and data type conversion capabilities in the Modify Fields node
- The JDBC nodes now streamline access to data stored in a cloud data warehouse and simplify the configuration of the nodes when they are used to process batches of records
- The ability to search for items within a data flow
- Centralized access to details of schedules and scheduled runs on the Analyze instance.
See the 3.5.2 release announcement for further details of these enhancements.
Enhancements previously released in 3.5.1:
The enhancements delivered in the 3.5.1 release include:
- The Reorder Fields node - which enables you to select the fields, and set the order in which fields appear in the output data set
- The JDBC Nodes now allow you to specify key=value pairs on separate lines
- The Data360 Update for Salesforce node provides the option to continue processing input records when a Salesforce update transaction has failed. The Salesforce nodes have also been updated to use v46.0 of the Salesforce REST API
- The Encrypt Fields node now supports the use of the AES 256 encryption algorithm
- The BRD File node now supports the ability to pass through input fields to its output
- The Fixed Format File node now allows you to optionally skip lines at the start of a file
- The application now supports Connection Points, which allow you to selectively propagate one of multiple data sets to downstream nodes, and bundle/unbundle connections for multiple data sets to reduce visual clutter on the canvas
- The ability to navigate to the source of data at a node's input pin
- The application now ships with the JDBC driver for the IBM DB2 database
- New Run properties are available which provide 'path safe' options for substituting run date/time values.
See the 3.5.1 release announcement for further details of these enhancements.
Enhancements previously released in 3.5.0:
The enhancements delivered in the 3.5.0 release include:
- The Data Profiler node, which enables you to profile an input data set to identify field semantic types, generate metadata and produce a range of statistics
- The JDBC nodes can now be configured to obtain the SqlQuery property value from an input field
- The Trim Fields node now allows you to remove occurrences of multiple individual characters at the start/end of a field value, or to remove a prefix/suffix string containing multiple characters
- You can now navigate to the node associated with an error in the Errors panel
- Composite nodes now have unique node identifiers that can be accessed using property substitutions.
See the 3.5.0 release announcement for further details of these enhancements.
Overview of net new enhancements in 3.6.0
Output CSV/Delimited Node
The Output CSV/Delimited node is a new node that is intended to replace the (now superseded) Output Delimited node.
The new node provides a range of additional capabilities similar to those available for the CSV/Delimited Input node.
The output Filename can now be derived from an input field - enabling a single node to be used to write records to multiple files.
You can optimize the performance and resource utilization of the node when writing data to many files by pre-sorting the input data on the filename field and setting the ‘InputSorted’ property to True.
The filename field must not contain Null values.
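As a rough sketch of this pattern (illustrative Python, not the node's implementation; the field names and sample rows are invented), pre-sorted input lets each output file be written once and closed before the next begins:

```python
import csv
from itertools import groupby

# Invented sample input; "file" plays the role of the field supplying the filename.
rows = [
    {"file": "west.csv", "region": "West", "sales": 200},
    {"file": "east.csv", "region": "East", "sales": 100},
    {"file": "east.csv", "region": "East", "sales": 150},
]

# Pre-sorting on the filename field (the effect of InputSorted=True) means
# only one output file needs to be open at any time.
rows.sort(key=lambda r: r["file"])

for filename, group in groupby(rows, key=lambda r: r["file"]):
    with open(filename, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["region", "sales"])
        writer.writeheader()
        for r in group:
            writer.writerow({k: r[k] for k in ("region", "sales")})
```

Without the pre-sort, an implementation would either have to keep every target file open simultaneously or repeatedly reopen files in append mode, which is why sorted input improves resource utilization.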
The ‘Field Order’ property enables you to specify which of the input fields are output, and the order in which the fields are written in the output file(s).
The field search functionality allows you to quickly find matching fields. Clicking the Add fields (+) button opens a two-pane dialog that lists the selected and available fields.
The file character set and file format can now be specified.
The default file character set is UTF-8 and the default format is RFC4180.
When format properties are explicitly defined in the Format section of the node’s property panel, these values take precedence over the values derived from the ‘Format’ property.
The HeaderMode property enables you to specify whether:
- No header record is written
- The header comprises the field name (default)
- The header comprises <fieldName>:<fieldType>
You can now specify the formats to be used when Date, Time and Datetime type fields are written. The format can be selected from the listed options or you can enter a custom format.
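The underlying idea is pattern-based formatting of date/time values, as in this Python sketch (the patterns shown are examples, not the node's own option list):

```python
from datetime import datetime

dt = datetime(2021, 3, 15, 9, 30, 0)

# An ISO-style datetime pattern and a custom day/month/year pattern.
iso_style = dt.strftime("%Y-%m-%d %H:%M:%S")
custom = dt.strftime("%d/%m/%Y")

print(iso_style)  # 2021-03-15 09:30:00
print(custom)     # 15/03/2021
```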
The NullMode property allows you to specify how Null values are handled:
- Write two consecutive delimiters (default)
- Write an empty value surrounded by quotes
- Write a custom value
- Write a custom value surrounded by quotes
The NullString property defines the custom value that will replace a Null value in the output file. For example, this could be used to write an “NA” string when the output data is to be used with the R language.
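Conceptually, the four NullMode behaviours reduce to simple substitution rules. The following is a hand-rolled Python illustration (the render_null helper and the sample row are invented, not part of the product):

```python
def render_null(mode, custom="NA"):
    """Return the text written in place of a Null field value under each NullMode."""
    if mode == "empty":          # two consecutive delimiters (default)
        return ""
    if mode == "quoted_empty":   # an empty value surrounded by quotes
        return '""'
    if mode == "custom":         # a custom value taken from NullString
        return custom
    if mode == "quoted_custom":  # the custom value surrounded by quotes
        return f'"{custom}"'
    raise ValueError(mode)

# A row where the second field is Null, rendered with the "custom" mode:
row = ["1", None, "x"]
line = ",".join(render_null("custom") if v is None else v for v in row)
print(line)  # 1,NA,x
```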
The node provides additional controls to handle exception processing:
By default the node will not overwrite an existing file – this action can be changed using the FileExistsBehavior property.
You can specify the action taken when a character cannot be encoded to the character set used by the file:
- Generate an error (default)
- Ignore characters that cannot be encoded
- Replace characters that cannot be encoded with a Replacement character
The BadEncodingReplacementCharacter property defines the replacement character (‘?’ by default).
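These three behaviours mirror the standard character-encoding error modes, as this quick Python illustration shows (the sample text is invented):

```python
text = "café résumé"

# Generate an error (the default behaviour):
try:
    text.encode("ascii")
except UnicodeEncodeError:
    print("cannot encode to ASCII")

# Ignore characters that cannot be encoded:
print(text.encode("ascii", errors="ignore"))   # b'caf rsum'

# Replace characters that cannot be encoded with a replacement character:
print(text.encode("ascii", errors="replace"))  # b'caf? r?sum?'
```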
S3 Nodes
The S3 Get, S3 List, S3 Put and S3 Delete nodes now allow the connection to Amazon S3 to be routed via a proxy server.
The nodes support authentication using credentials for the proxy user. By default, a node uses the system proxy when no proxy properties have been defined. The IgnoreSystemProxy property can be set to configure the node to ignore the system proxy.
Fixed Format File Node
The Fixed Format File node now continues importing data when it encounters an empty line within the data.
Database Metadata and DeepSQL Nodes
The Database Metadata node was previously Experimental. It is now a fully supported node.
The DeepSQL node was previously Experimental. It is now a fully supported node.
The node now allows input fields to be passed through to its output.
Text Boxes
You can now annotate your data flows using text boxes on the canvas. A text box can be added to the canvas using the ‘T’ button on the menu bar.
You can also right-click on the canvas and select ‘Insert’ -> ‘Text Box’ in the context menu.
You can utilize a range of text format options when creating/editing a text box.
When editing a text box, the box is displayed in the foreground.
You can change the transparency to review the placement of text by clicking the ‘Toggle temporary transparency’ button.
Clicking on the canvas exits the edit session for the text box. When a text box is not being edited it is displayed on the canvas behind other artifacts (nodes, connections).
You can layer boxes on top of other boxes and change the background color of a box.
The right-click context menu displays available actions for the selected box.
You can resize and move the position of a text box on the canvas.
To prevent unintended changes to the size and position of a box you can lock it, and later unlock it to make changes.
Properties Panel
The Properties panel now comprises three tabs:
The Properties tab enables you to view and edit the properties of nodes, as normal.
You can now click on the Data Flow tab to manage Data Flow properties from any point in the data flow without having to navigate to the top level of the data flow.
Similarly, you can access Run properties from any point in the data flow by clicking on the Run tab.
Corresponding keyboard shortcuts allow you to switch between the different tabs:
- Ctrl + 5 -> Properties
- Ctrl + 6 -> Data Flow
- Ctrl + 7 -> Run
Run Property Sets
You can now create and maintain sets of run properties. Run Property Sets are reusable objects that can be referenced by different data flows within the Analyze instance.
New Run properties can be created and added to a Run Property Set, imported from a data flow, or inherited from properties in another Run Property Set.
Available Run Property Sets are visible in, and can be managed from, the Directory view.
You can configure a data flow to use a Run Property Set.
Run properties can be imported into a Run Property Set from a data flow.
The new Run Property Set contains the selected properties.
Run properties can be imported from a legacy run property file (.brg).
When the run property file contains multiple run property sets (e.g. defining properties for different environments), each Run Property Set is imported and is available in the directory. The names of the Run Property Sets comprise <fileName>-<runName>.
The property set that is marked as the current run is also populated in the data flow’s run properties.
Data Viewer Enhancements
You can now pop out a Data Viewer tab so that it can be viewed as an independent window. The new Data Viewer window can be displayed on the same monitor or moved to a different monitor.
When the original data set for the popped-out window is still available (i.e. the node has not been re-run since the window was popped out) you can:
- Use the Data Viewer’s sorting and filtering capabilities
- Add a configured Filter/Split node to the canvas
- Pop-in the Data Viewer window
If required, you can re-run the node when an associated Data Viewer tab has been popped out. You can then open the Data Viewer again to view the new data from the node. This allows you, for instance, to see the data before and after you have made a change to the node’s configuration.
However, when the node has been re-run, the original data is no longer available and a warning is displayed.
You cannot then use filtering in the popped-out window, add a configured Filter/Split node to the canvas or pop-in the window to the main Data Viewer.
You can now align selected nodes either horizontally or vertically.
Conditional Execution of Scheduled Data Flows
When scheduling a data flow, you can now configure a second data flow to be run on successful completion of the first scheduled data flow.
You can optionally configure the action to be taken if the scheduled data flow fails to complete successfully. You can either run a specified data flow or send a failure notification email.
Users with the Administrator role can configure the data flow that is used by the Analyze system to send the email. The default data flow used to send the email notification is located in the ‘All Folders’ -> ‘Data360 Services’ directory.
Alternatively, a custom data flow can be selected to send the notifications. This is configured in the system settings, see: ‘Settings’ -> ‘Scheduling’.
The audit log now contains information about which system processes are used to run nodes.
The Simple Scheduled Tasks Ad-hoc Run API now returns the execution-plan-state locator when the request is posted.
System Backup Settings
For Data360 Analyze Desktop editions, the default time the system backup is taken is now 12 noon. The default time for the system backup for Server editions is unchanged (2 a.m.).
A new system property has been introduced to specify the number of errors that can occur before the system backup operation is aborted. See the ‘System administration’ > ‘Editing backup settings’ Help topic for further details.
Compatibility with Legacy Data Flows
The Lavastorm Analytic Engine (LAE) product allowed the use of deprecated operators (‘Expert’ functions). Analyze previously did not allow the use of these deprecated operators.
To improve compatibility with legacy data flows, Analyze now allows the use of the deprecated operators.
A node reports a warning when a deprecated operator is recognized.
Data360 Analyze Desktop
- Windows 7 is no longer a supported platform for Data360 Analyze Desktop editions.
Data360 Analyze Server
- Windows Server 2019 is now a supported platform for Data360 Analyze Server edition.
- Internet Explorer 11 is no longer a supported browser. Certain functionality (e.g. editing boxes) is unavailable when Internet Explorer 11 is used to access the application user interface.
- Microsoft Edge (Chromium-based) is now a supported browser.
Superseded Nodes
The Output Delimited node now has a status of 'Superseded' and the node's name has been modified to reflect its status.
The node remains available in the node library but is now only visible when the 'Show superseded' option is checked in the node library's context menu.
Data flows using the Output Delimited node will continue to run successfully. Infogix recommends using the Output CSV/Delimited node for new projects.
The following issues have been resolved in this release:
LAE-21430, LAE-21783, LAE-22003, LAE-22464, LAE-22886
LAE-22939, LAE-22942, LAE-22948, LAE-22995, LAE-23023
LAE-23034, LAE-23059, LAE-23135, LAE-23144, LAE-23193
LAE-23198, LAE-23200, LAE-23201, LAE-23238, LAE-23239
See the release notes for details of the resolved issues.