This update is focused on adding significantly more flexible organization of projects, workflows, and tables in Analyze. It also coincides with our major infrastructure upgrade, which moves PlaidCloud onto a container-based Kubernetes cluster; in short, more reliability, scalability, and resilience. PlaidCloud will also have a new home on Google Cloud, which requires moving large amounts of data. No changes are required on your end as we move to Google Cloud. You will connect and operate just as you do today.

For current Analyze users, there will be some updates to the user interface, as we have simplified the organization of operations and now allow user-defined hierarchical organization of items. More information and documentation on the changes will be coming next week.

Underpinning all of these changes is a vastly improved process for retaining the full history of changes to everything in Analyze. We plan to roll out features in the near future that take advantage of this new history capability, from simple things like undo to more complex things like copying items from specific points in time.

What to expect:

  • Significant improvements in the ability to organize and find projects, workflows, tables, steps, and user-defined functions (UDFs) through hierarchies and labels
  • Significant speed improvements for workflow execution
  • Improved integration with Dashboards
  • New data editors allowing for data entry and update directly in PlaidCloud
  • Instant table dependency tracing
  • Reuse of steps in multiple workflows
  • Instant tracing of step usage in workflows
  • PlaidCloud operations are accessible using JSON-RPC from local systems, UDFs, Jupyter notebooks, the command line, or other services (see the example sketch after this list)
  • Improved logging allows quick searching across steps, workflows, and projects
  • One-click switching of external connection environments to move quickly between Dev, Test, QA, and Production
  • Ability to lock projects from changes or set to a Read-Only status
  • Direct data sharing among projects improves performance by reducing the need for file transfers
  • Significant improvements to UDF tools to ensure UDF logic runs exactly the same on your local system, within a workflow, or within a Jupyter notebook
  • Simplified OAuth registration and access for using JSON-RPC
  • Move to a microservices architecture built on Kubernetes, a container orchestration technology that allows us to perform upgrades without production downtime
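
For those who want to script against the JSON-RPC access mentioned above, the sketch below shows the general shape of a JSON-RPC 2.0 call made from Python with the requests library. The endpoint URL, method name, parameters, and bearer-token header are placeholders for illustration only, not the actual PlaidCloud API; consult the PlaidCloud documentation for real method names and OAuth details.

    # Minimal sketch of a JSON-RPC 2.0 call to PlaidCloud from a local Python
    # environment. The URL, method name, parameters, and token are placeholders.
    import requests

    RPC_URL = "https://plaidcloud.example/json-rpc/"   # hypothetical endpoint
    OAUTH_TOKEN = "your-oauth-access-token"            # obtained via OAuth registration

    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "analyze/table/list",                # hypothetical method name
        "params": {"project": "Demand Planning"},      # hypothetical parameters
    }

    response = requests.post(
        RPC_URL,
        json=payload,
        headers={"Authorization": f"Bearer {OAUTH_TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    print(response.json().get("result"))

The same request pattern works from a UDF, a Jupyter notebook, or a command-line script, which is what makes the JSON-RPC interface convenient for automation.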