This update focuses on adding improved organizational flexibility to projects, workflows, and tables in Analyze. More importantly, it coincides with our major infrastructure upgrade, which moves PlaidCloud onto a container-based Kubernetes cluster (translation: more safety, scalability, and resilience). PlaidCloud will also have a new home on Google Cloud, which requires moving large amounts of data. Even so, no changes are required on your end; you will connect and operate just as you do today.

For current Analyze users, there will be some updates to the user interfaces: we have made operations easier and added user-based organization of items. More information and documentation on the changes will be coming next week.

These changes are the result of a vastly improved process for tracking the history of everything in Analyze. We plan to roll out new features soon that take advantage of this history capability, from simple things like undo to more complex things like copying items from specific points in time.

What to expect:

  • Improved ability to organize and find projects, workflows, tables, steps, and user-defined functions (UDFs) through hierarchies and labels
  • Increased speed for workflow processing
  • Improved integration with Dashboards
  • New data editors allowing for data entry and update directly in PlaidCloud
  • Instant table dependency tracing
  • Reuse of steps in multiple workflows
  • Instant tracing of step usage in workflows
  • PlaidCloud systems are accessible via JSON-RPC from local systems, UDFs, Jupyter notebooks, the command line, or other services
  • Improved logging allows quick searching across steps, workflows, and projects
  • One click change of external connection domains to quickly switch between Dev, Test, QA, and Production
  • Ability to lock projects from changes or set to a Read-Only status
  • Direct data sharing among projects improves performance by reducing the need for file transfers
  • Improvements to UDF tools to ensure UDF logic runs exactly the same on your local system, within a workflow, or within a Jupyter notebook
  • Easier OAuth registration and access for using JSON-RPC
  • Move to a microservices setup using an underlying technology called Kubernetes. This technology allows us to perform upgrades without slowing production.
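As a rough illustration of the JSON-RPC access mentioned above, the sketch below shows what a call from a local Python script might look like. This is a minimal, hypothetical example: the endpoint URL, method name, and bearer-token header are illustrative assumptions, not the documented PlaidCloud API (use the credentials obtained through your OAuth registration).

```python
# Hypothetical sketch of calling a PlaidCloud-style JSON-RPC 2.0 endpoint.
# The URL, method name, and auth scheme are assumptions for illustration only.
import json
import urllib.request


def build_rpc_payload(method, params, request_id=1):
    """Assemble a JSON-RPC 2.0 request object."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": method,
        "params": params,
    }


def rpc_call(url, method, params, token, request_id=1):
    """POST a JSON-RPC request and return the 'result' field, raising on error."""
    payload = json.dumps(build_rpc_payload(method, params, request_id)).encode("utf-8")
    req = urllib.request.Request(
        url,
        data=payload,
        headers={
            "Content-Type": "application/json",
            # Token from the OAuth registration step (assumed bearer scheme)
            "Authorization": f"Bearer {token}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read().decode("utf-8"))
    if "error" in reply:
        raise RuntimeError(reply["error"])
    return reply["result"]
```

Because the same payload structure works from a UDF, a Jupyter notebook, or the command line, a helper like this can be reused unchanged across all of those environments.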