Table Extract

| Parameter | Value |
| --- | --- |
| Category | Table |
| Operation | table_extract |
| Workflow Icon | Icon |
| Input Type | PlaidCloud Table |
| Output Type | PlaidCloud Table |

Description #

Use this transform to extract data from an existing Analyze data table into another data table. Examples include, but are not limited to, the following:

  • Sort
  • Group
  • Summarization
  • Filter/Subset Rows
  • Drop Extra Columns
  • Math Operations
  • String Operations

Note: There is no actual function exclusive to this transform. All sorting, grouping, filtering, etc. can be performed in any other transform with the Table Data Selection and Data Filters tabs.

Extract Parameters #

Source and Target #

To establish the source and target, first select the data table to be exported from the Source Table dropdown menu. Next, select the target file path from PlaidCloud Document: use the dropdown menu to select the appropriate account, then navigate to the desired directory in the section immediately below. Finally, give the target file a descriptive name.

../../../_images/common_export_source_and_target3.png

Note

Providing a file extension is advised, but not required by Analyze. The data table will be exported into the appropriate file format with or without an extension.

Table Data Selection #

The Table Data Selection tab is used to map columns from the source data table to the target data table. All source columns on the left side of the window are automatically mapped to the target data table depicted on the right side of the window. The Inspect Source menu button provides a few additional ways to map columns from source to target:

  • Populate Both Mapping Tables: Propagates all values from the source data table into the target data table. This is done by default.
  • Populate Source Mapping Table Only: Maps all values in the source data table only. This is helpful when modifying an existing workflow whose source column structure has changed.
  • Populate Target Mapping Table Only: Propagates all values into the target data table only.

Each of these options also offers the ability to preview the source data.

If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:

  • Propagate All will insert all source columns into the target data table, whether they already existed or not.
  • Propagate Selected will insert selected source column(s) only.
  • Right click on target side and select Insert Row to insert a row immediately above the currently selected row.
  • Right click on target side and select Append Row to insert a row at the bottom (far right) of the target data table.

Warning

Selecting Propagate All may effectively create a duplicate of every column. Analyze does not check to see if the columns are already mapped. Make sure duplicate column names do not exist.

To delete columns from the target data table, select the desired column(s), then right click and select Delete.

To rearrange columns in the target data table, select the desired column(s), then right click and select Move to Top, Move Up, Move Down, or Move to Bottom.

To return only distinct results, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check the box next to the corresponding column to return distinct results only.

Warning

When the target data table contains only a subset of the source data table, only select the check box next to the columns which are to be included in the target data table. Selecting all checkboxes could provide output that does not appear to be distinct.
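
Conceptually, checking Distinct on a subset of columns behaves like a drop-duplicates operation applied to those columns only. The following sketch is a rough illustration in Python with pandas (the column names and data are hypothetical, and pandas is not what Analyze uses internally):

```python
import pandas as pd

# Hypothetical source data; only the "Conditions" column has its Distinct box checked
df = pd.DataFrame({
    "Time": ["9:00", "10:00", "11:00", "12:00"],
    "Conditions": ["Clear", "Clear", "Cloudy", "Clear"],
})

# Keeping only the checked column and dropping duplicates mirrors the Distinct behavior
distinct_conditions = df[["Conditions"]].drop_duplicates()
print(distinct_conditions)  # Clear, Cloudy
```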

To aggregate results, select the Summarize menu option. This will toggle a set of drop down boxes for each column in the target data table. The following summarization options are available:

  • Group by (set as default)
  • Sum
  • Min
  • Max
  • First
  • Last
  • Count
  • Mean
  • Median
  • Mode
  • Std Dev
  • Variance
  • Product
  • Absolute Val
  • Quantile
  • Skew
  • Kurtosis
  • Mean Abs Dev
  • Cumulative Sum
  • Cumulative Min
  • Cumulative Max
  • Cumulative Product
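
For readers who think in code, the Summarize option is conceptually a group-by aggregation over the target columns. A minimal sketch in Python with pandas (hypothetical column names; not the engine Analyze uses):

```python
import pandas as pd

# Hypothetical data: "Group by" on Team, "Sum" on Points
df = pd.DataFrame({
    "Team": ["A", "A", "B"],
    "Points": [10, 5, 7],
})

# Equivalent of setting Team to "Group by" and Points to "Sum" in the Summarize dropdowns
summary = df.groupby("Team", as_index=False)["Points"].sum()
print(summary)
#   Team  Points
# 0    A      15
# 1    B       7
```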

Note

For more aggregation details, see the Analyze overview page [here](/docs/analyze/#aggregation).

Data Filters #

To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.

Select Subset of Source Data #

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.

../../../_images/common_data_filters_subset_source_data3.png

Note

Compound filters must have individual elements wrapped in parentheses. For example, if filtering for Temperature and Humidity, a valid filter would look like this:
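
The sketch below illustrates the idea; the column names and threshold values are purely illustrative, not taken from a specific dataset:

```python
# Each condition is wrapped in its own parentheses before being combined
(row['TemperatureF'] >= 75) and (row['Humidity'] < 80)
```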





Duplicates #

To report duplicates, select the Report Duplicates in Table checkbox and then specify an output table which will contain all of the duplicate records.

../../../_images/common_data_filters_duplicates3.png

Caution

This will not remove the duplicate items from the target data table. To remove duplicate items, use the Distinct menu options as specified in the [Table Data Selection](../transforms/common_features#table-data-selection) section.

Source Table Slicing (Limit) #

To limit the data, check the Apply Row Slicer box and then specify the following:

  • Initial Rows to Skip: Rows of data to skip (column header row is not included in count)
  • End at Row: Last row of data to include. Note that this is different from simply counting rows at the end to drop

../../../_images/common_data_filters_source_table_slicing3.png
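
As a rough mental model only (not how Analyze implements it), the two settings behave like a Python slice over the data rows:

```python
# Hypothetical data rows (the header row is excluded from the count)
rows = ["row 1", "row 2", "row 3", "row 4", "row 5"]

initial_rows_to_skip = 1    # "Initial Rows to Skip"
end_at_row = 4              # "End at Row" (last data row to include)

kept = rows[initial_rows_to_skip:end_at_row]
print(kept)  # ['row 2', 'row 3', 'row 4']
```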

Select Subset of Final Data #

Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.
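
For instance, a final-data filter might look like the following (the column name and value are purely illustrative):

```python
# Keep only rows whose (hypothetical) Conditions value is "Clear"
row['Conditions'] == 'Clear'
```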






Final Data Table Slicing (Limit) #

To limit the data, simply check the Apply Row Slicer box and then specify the following:

  • Initial Rows to Skip: Rows of data to skip (column header row is not included in count)
  • End at Row: Last row of data to include. This is different from simply counting rows at the end to drop

../../../_images/common_data_filters_target_table_slicing3.png

Workflow Configuration Forms #

Table Extract

Examples #

Data Filter – Temperature #

In this example, the Source Table, Import Google Spreadsheet, is filtered to include only results in which the temperature was listed at 75 degrees Fahrenheit or above. As such, the Target Table is named Filter Results Temp 75+.

Table Extract_1

All columns are mapped from source to target. No grouping, sorting, or summarization options are specified.

Table Extract_2

In the Data Filters tab, the source data is subset with the following expression: row['TemperatureF'] >= 75. This expression only keeps rows which have a value in the TemperatureF column equal to 75 or higher.

Table Extract_3

As expected on an Ohio summer day, the temperature first climbs above 75 degrees around noon and then remains there until nearly 10 PM.

Table Extract Results

Table Data Selection – Unique Values #

In this example, the same Source Table, Import Google Spreadsheet, is used, but in this case it will be used to identify distinct conditions reported throughout the day. Accordingly, the Target Table is named Distinct Conditions.

Table Extract_4

In this case, only a single column from the source data table is mapped to the target data table. Additionally, the Make Distinct button has been selected and applied only to the Conditions column. This should return only distinct values found in the source data table. 

Table Extract_5

Important

When the target data table contains only a subset of the source data table, select the check box next to only the columns which are to be included in the target data table. Selecting all checkboxes could provide output that does not appear to be distinct.

Since this example is looking for distinct values, it may be helpful to also identify non-distinct (duplicate) values. As such, any values which exist as duplicates will be added to the duplicate values from raw data input data table.

Table Extract_6

There were 4 unique conditions reported throughout the day.

Table Extract_7

BCS Demo – Mathematical Expression #

For an example showing how to use a mathematical expression to populate a value in an additional column, please see the Calculate Harris Score section of the BCS Demo.

BCS Demo – Sort Multiple Columns #

For an example showing how to sort the target data table by multiple columns, please see the Sort Rankings by Team section of the BCS Demo.

BCS Demo – Conditional Expression #

For an example showing an if/then/else conditional expression in lambda-like syntax (single line), please see the Convert Rankings to Points section of the BCS Demo.

BCS Demo – Group and Summarization #

For an example showing how to group and summarize results, please see the Calculate Total Computer Points section of the BCS Demo.
