ATTENTION: In order to continue receiving product update notices, please Sign In and select “Follow” on the Data3Sixty Analyze announcements page here.
Infogix is excited to announce the availability of Data3Sixty Analyze 3.5.2, which includes several new features to explore.
Product downloads are available within our Data3Sixty Analyze Download section.
What’s New?
Enhancements in this release include the following:
- Improved data profiling and data type conversion capabilities in the Modify Fields node.
- Streamlined JDBC node access to data stored in a cloud data warehouse, and simplified configuration when the nodes are used to process batches of records.
- The ability to search for items within a data flow.
- Centralized access to details of schedules and scheduled runs on the Analyze instance.
The release also previews a number of new Experimental nodes: the Output CSV/Delimited node, the Database Metadata node, and the DeepSQL node.
Node Enhancements
Modify Fields node
The node now supports the conversion of numeric (long, int) values to date, time and datetime types.
The node now provides a ‘DetectionThreshold’ property that sets the percentage of sampled records that must match a data type for the field to be detected as that type.
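As a rough illustration of these two capabilities, the Python sketch below shows threshold-based type detection and a long-to-datetime conversion; the detect_type helper, its sample size, and the 0.9 default threshold are hypothetical stand-ins, not the node's actual implementation.

```python
from datetime import datetime, timezone

# Hypothetical threshold-based detection: a field is detected as a type only
# if enough sampled values match it (cf. the 'DetectionThreshold' property).
def detect_type(values, candidate_check, threshold=0.9):
    sample = values[:1000]  # sample a bounded number of records
    if not sample:
        return False
    matches = sum(1 for v in sample if candidate_check(v))
    return matches / len(sample) >= threshold

print(detect_type(["2020-01-01", "2020-02-01", "n/a"],
                  lambda v: len(v.split("-")) == 3, threshold=0.6))  # True

# Converting a numeric (long) value such as a Unix epoch to a datetime,
# analogous to the node's new long/int-to-date/time/datetime conversion.
epoch_seconds = 1577836800
print(datetime.fromtimestamp(epoch_seconds, tz=timezone.utc))  # 2020-01-01 00:00:00+00:00
```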
The Modify Fields node’s profiling capabilities have been enhanced to recognize field string values comprising numbers with a trailing minus sign, e.g. ‘3000-’.
The resulting output field type is a long or double, depending on the values in the input field.
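As a conceptual sketch only (the node's internal parsing is not exposed), a hypothetical parse_trailing_minus helper in Python might handle such values and choose between a long and a double like this:

```python
def parse_trailing_minus(text):
    """Parse strings like '3000-' as negative numbers, returning int or float."""
    negative = text.endswith("-")
    digits = text[:-1] if negative else text
    value = float(digits) if "." in digits else int(digits)  # double vs. long
    return -value if negative else value

print(parse_trailing_minus("3000-"))  # -3000 (long)
print(parse_trailing_minus("12.5-"))  # -12.5 (double)
```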
JDBC Nodes
The JDBC Query, JDBC Execute and JDBC Store nodes now list Snowflake in the ‘DbType’ property of the node - simplifying the configuration of the nodes when working with data stored in a Snowflake cloud data warehouse.
The JDBC Execute node and JDBC Store node now provide a ‘BatchMode’ property to leverage batch processing in the RDBMS. Batching is used by default. Note that the ‘BatchMode’ property replaces the ‘LoadMethod’ property, which has been removed from these nodes.
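To illustrate why batching matters (this is a generic sketch, not the nodes' JDBC internals), the Python example below uses the DB-API's executemany against an in-memory SQLite database; a JDBC driver achieves the same effect with addBatch/executeBatch, submitting many parameter sets per round trip instead of one statement at a time.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")

# Batched insert: 1000 rows submitted in one call rather than 1000 round trips.
rows = [(i, i * 10.0) for i in range(1000)]
conn.executemany("INSERT INTO orders (id, amount) VALUES (?, ?)", rows)
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])  # 1000
```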
Application Enhancements
Search Data Flow
You can now search for items in a data flow using the new Search tab in the left navigation panel.
The Search functionality allows you to find matches in:
- The names of nodes
- The types of nodes
- The contents of node properties and user-defined scripts
You have the option to find matches where:
- The term is contained in the attribute
- The match is a whole word
Matches are case-insensitive. The scope of a search depends on the current navigation 'level' within the data flow - if you are within a Composite node, matches are restricted to the nodes configured within that Composite.
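As a hypothetical sketch of the two match modes (the application's actual search implementation is not exposed), contains versus whole-word, case-insensitive matching could be expressed like this:

```python
import re

def matches(term, text, whole_word=False):
    """Case-insensitive search: substring by default, whole word on request."""
    if whole_word:
        return re.search(rf"\b{re.escape(term)}\b", text, re.IGNORECASE) is not None
    return term.lower() in text.lower()

node_names = ["Load Customer", "customer_filter"]
print([n for n in node_names if matches("customer", n)])                   # both match
print([n for n in node_names if matches("customer", n, whole_word=True)])  # only 'Load Customer'
```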
Matches are displayed in the Search Results list. When an item in the list is selected, the canvas navigation changes to display the selected node and highlight the matching name text. When searching within properties and scripts, the results indicate the node and the corresponding property. When a match is selected, the node’s properties panel displays, and highlights, the corresponding property value; when the match is within a script, the matching text is highlighted in the script property.
Run Management
You can now view a centralized collection of schedules and scheduled runs in the Analyze Directory view - helping you understand which schedules are enabled, disabled, or currently executing on your system.
The functionality is available to users provisioned with the Scheduler role. The schedules and runs displayed depend on your permissions and the permissions on the schedule.
The Schedules view lists the schedules you have access to; disabled schedules are grayed out.
To help you focus on items of interest, you can use the Search functionality to filter the list of schedules and sort the list by clicking one of the column headers.
The properties panel displays details of the selected schedule, and the schedule's context menu provides quick access to the actions you can perform, such as enabling or disabling the schedule. You can also access the runs for the schedule by clicking the ‘Runs’ button.
The Runs view lists the scheduled runs you have access to.
In addition to the ‘All’ runs view, you can use the ‘In Progress’ or ‘Failed’ buttons to quickly filter the runs that are displayed.
The properties panel provides details for the selected run, and the context menu allows you to open or delete the run.
The Search function enables you to filter the runs by name. You can also sort the list of runs by clicking on a column header.
Users with the administrator role can, by default, view the schedules of other users on the Analyze instance. Similarly, admin users can view the scheduled runs of all users, enabling them to identify long-running or failed runs.
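As a purely conceptual sketch (none of this reflects the application's internals), filtering and sorting a list of runs the way the status buttons and column headers behave could look like:

```python
runs = [
    {"name": "daily_recon",   "status": "Failed",      "started": "2020-01-06T02:00"},
    {"name": "hourly_load",   "status": "In Progress", "started": "2020-01-06T09:00"},
    {"name": "weekly_rollup", "status": "Completed",   "started": "2020-01-05T23:00"},
]

failed = [r for r in runs if r["status"] == "Failed"]                  # 'Failed' button
newest_first = sorted(runs, key=lambda r: r["started"], reverse=True)  # column sort

print([r["name"] for r in failed])        # ['daily_recon']
print([r["name"] for r in newest_first])  # ['hourly_load', 'daily_recon', 'weekly_rollup']
```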
See the Scheduling topic in the Help documentation for additional information on the new run management features.
Database Drivers
Snowflake
The application now includes the JDBC driver for the Snowflake cloud data warehouse, streamlining the configuration of Analyze when working with cloud-based data sources.
Amazon Redshift
The Amazon Redshift JDBC driver has been updated to version 1.2.36.1060.
MariaDB Driver
The MariaDB JDBC driver has been updated to version 2.5.2.
Looping Execution Performance
We have improved the performance of Analyze when running large data flows that contain loops.
Experimental nodes
Note: Experimental nodes may be subject to change in a future release.
Output CSV/Delimited (Experimental) Node
The Output CSV/Delimited (Experimental) node will, in a future release, replace the existing Output Delimited node. In addition to providing capabilities similar to the Output Delimited node, the new node delivers the following additional capabilities:
Field Order Selection
You can specify which input fields are written to the output file(s) and the order in which they appear.
Writing to multiple files
The node can write input data records to multiple output files, with the name of each destination file derived from the filename specified in an input field. You can optionally indicate that the data records are sorted by the filename field to increase the efficiency of writing the data to multiple files.
File Character Set
You can specify the file character set to be used when writing data to the file(s). This is particularly useful when working with Unicode data. The default is UTF-8.
File Format
You can optionally specify the format of the file to be written. The default is RFC4180.
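Taken together, these capabilities resemble the following Python sketch (an illustration under assumed sample data, not the node itself); note that Python's default csv dialect is already close to RFC 4180:

```python
import csv

records = [
    {"file": "east.csv", "id": 1, "region": "East", "amount": 10.0},
    {"file": "west.csv", "id": 2, "region": "West", "amount": 20.0},
]
field_order = ["id", "amount", "region"]  # field order selection

writers = {}
for rec in records:
    path = rec["file"]  # destination file name taken from an input field
    if path not in writers:
        f = open(path, "w", newline="", encoding="utf-8")  # file character set
        w = csv.DictWriter(f, fieldnames=field_order, extrasaction="ignore")
        w.writeheader()
        writers[path] = (f, w)
    writers[path][1].writerow(rec)  # route each record to its own file

for f, _ in writers.values():
    f.close()
```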
Database Metadata (Experimental) node
The Database Metadata (Experimental) node uses third-party JDBC drivers to connect to and query the metadata of the specified database.
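A JDBC driver exposes this kind of information through java.sql.DatabaseMetaData; as a self-contained stand-in (using SQLite's catalog rather than JDBC), querying table and column metadata looks like:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")

# List the tables, then the columns and declared types of each table.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'")]
for table in tables:
    columns = conn.execute(f"PRAGMA table_info({table})").fetchall()
    print(table, [(col[1], col[2]) for col in columns])
    # -> customers [('id', 'INTEGER'), ('name', 'TEXT')]
```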
DeepSQL (Experimental) node
The DeepSQL (Experimental) node analyzes the supplied SQL and produces a set of ‘nodes’ and ‘edges’ to enable data lineage. The node can analyze SQL code retrieved by the Database Metadata (Experimental) node. You can then publish the metadata to Data3Sixty Govern to enrich your data governance glossaries.
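As a toy illustration of the nodes-and-edges idea (the DeepSQL node's analysis is far more complete than this), lineage for a simple INSERT ... SELECT statement can be derived as follows:

```python
import re

sql = "INSERT INTO sales_summary SELECT region, SUM(amount) FROM sales GROUP BY region"

# Tables become lineage 'nodes'; data movement becomes 'edges'.
target = re.search(r"INSERT\s+INTO\s+(\w+)", sql, re.IGNORECASE).group(1)
sources = re.findall(r"FROM\s+(\w+)", sql, re.IGNORECASE)

nodes = sorted({target, *sources})
edges = [(src, target) for src in sources]
print("nodes:", nodes)  # ['sales', 'sales_summary']
print("edges:", edges)  # [('sales', 'sales_summary')]
```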
Resolved Issues
The following issues have been resolved in this release:
LAE-9858, LAE-21327, LAE-21832, LAE-21860, LAE-21882
LAE-22015, LAE-22102, LAE-22121, LAE-22131, LAE-22376
LAE-22411, LAE-22413, LAE-22465, LAE-22476, LAE-22513
LAE-22533, LAE-22535, LAE-22590, LAE-22609, LAE-22616
LAE-22625, LAE-22685, LAE-22771, LAE-22792, LAE-22795
LAE-22853
See the release notes for details of the resolved issues.
For more about this release, view the release notes. And don't forget to check out the Community to get your questions answered or to give us feedback. We love to hear from you.