Import delimited text files from PlaidCloud Document. This includes, but is not limited to, the following delimiter types:
To establish the source and target, first select the data table to be exported from the Source Table dropdown menu. Next, select the target file path in PlaidCloud Document: use the dropdown menu to choose the appropriate account, then navigate to the desired directory in the section immediately below. Finally, give the target file a descriptive name.
Providing a file extension is advised, but not required by Analyze. The data table will be exported into the appropriate file format with or without an extension.
Analyze provides built-in functionality to preview source file data so users are not required to find the original file and open it to recall its contents. Simply select Inspect Source File and a new window will open with a data preview (file stats are also available in a separate tab). Since some files can be quite large, the default limit is set to preview only 300 rows, but this can be adjusted as necessary.
Inspecting the source file will also give Analyze a chance to determine the delimiter being used.
As mentioned above, Inspect Source File will attempt to determine the delimiter in the source file. If another delimiter is desired, specify it here: choose from a list of standard delimiters or enter a different value as needed.
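Delimiter detection of this kind can be sketched with Python's standard `csv.Sniffer`; this is an illustration of the technique, not PlaidCloud's internal implementation:

```python
import csv

def detect_delimiter(sample_text):
    """Guess the delimiter from a sample of the file, restricted to
    a set of common candidates."""
    dialect = csv.Sniffer().sniff(sample_text, delimiters=",;\t|")
    return dialect.delimiter

sample = "city;temp;humidity\nOslo;12;80\nRome;25;40\n"
print(detect_delimiter(sample))  # ;
```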
To specify a custom delimiter, select User Defined Separator -->, then Other -->, and type the custom delimiter into the text box.
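Parsing a file with a user-defined separator follows the same pattern as any standard delimiter; a minimal sketch using Python's `csv` module (the sample data is made up):

```python
import csv
import io

# A file using "|" as a user-defined separator.
raw = io.StringIO("name|dept|salary\nAda|Eng|120\nGrace|Eng|130\n")
rows = list(csv.reader(raw, delimiter="|"))
print(rows[1])  # ['Ada', 'Eng', '120']
```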
The Text Qualifier section allows users to specify how to handle the data with regards to quotation marks and escape characters. Choose from the following settings:
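How a text qualifier changes parsing can be seen with Python's `csv` module: a quoted field keeps an embedded delimiter intact (a sketch, with made-up data):

```python
import csv
import io

# The quotation mark acts as the text qualifier, so the comma inside
# the quoted field is data, not a delimiter.
raw = io.StringIO('id,comment\n1,"rainy, cold"\n')
rows = list(csv.reader(raw, quotechar='"'))
print(rows[1])  # ['1', 'rainy, cold']
```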
Dates and Numbers
For input files with extraneous records, you can specify any number of rows to ignore from the top and/or bottom of the input file. This is especially helpful for files with control sums at the bottom.
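Conceptually, skipping rows amounts to trimming lines before parsing; a minimal sketch (the helper name and row counts are illustrative):

```python
def strip_control_records(lines, skip_top=1, skip_bottom=1):
    """Drop extraneous rows from the top and bottom of an input file."""
    return lines[skip_top:len(lines) - skip_bottom]

# A header banner at the top and a control sum at the bottom.
lines = ["REPORT 2024", "a,1", "b,2", "TOTAL,3"]
print(strip_control_records(lines))  # ['a,1', 'b,2']
```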
Choose from any of the following options:
Selecting the Skip Quality Check box reduces overhead by removing an additional pass over the data during each import step. While the quality check is helpful in assessing the quality of source files, it can impact performance for larger files. Turn this setting ON to skip the check for large files whose structural integrity is already known to be sound.
File Encoding Conversion
Analyze gives boolean and null values special treatment. This section provides the ability to specify which values should be treated as booleans or nulls.
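The idea can be sketched in Python; the token lists below are assumptions for illustration, not Analyze's defaults:

```python
TRUE_VALUES = {"Y", "TRUE", "1"}    # treated as boolean True
FALSE_VALUES = {"N", "FALSE", "0"}  # treated as boolean False
NULL_VALUES = {"", "NULL", "N/A"}   # treated as null

def coerce(token):
    """Map special tokens to booleans or None; pass others through."""
    t = token.strip().upper()
    if t in TRUE_VALUES:
        return True
    if t in FALSE_VALUES:
        return False
    if t in NULL_VALUES:
        return None
    return token

print([coerce(v) for v in ["Y", "N/A", "42"]])  # [True, None, '42']
```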
The Table Data Selection tab is used to map columns from the source data table to the target data table. All source columns on the left side of the window are automatically mapped to the target data table depicted on the right side of the window. Using the Inspect Source menu button, there are a few additional ways to map columns from source to target:
In addition, each of these options offers the ability to preview the source data.
If the source and target column options aren’t enough, other columns can be added into the target data table in several different ways:
Selecting Propagate All may effectively create a duplicate of every column: Analyze does not check whether columns are already mapped. Make sure duplicate column names do not exist.
To delete columns from the target data table, select the desired column(s), then right click and select Delete.
To rearrange columns in the target data table, select the desired column(s), then right click and select Move to Top, Move Up, Move Down, or Move to Bottom.
To return only distinct options, select the Distinct menu option. This will toggle a set of checkboxes for each column in the source. Simply check any box next to the corresponding column to return only distinct results.
When the target data table should contain only a subset of the source data table, select the check box next to only those columns to be included in the target data table. Selecting all checkboxes could produce output that does not appear to be distinct.
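Why the choice of checked columns matters can be sketched in plain Python: taking distinct values over a single column collapses rows that differ elsewhere (made-up data):

```python
# Distinct over only the first column: the two Oslo rows collapse
# even though their temperatures differ.
rows = [("Oslo", 12), ("Oslo", 14), ("Rome", 25)]
seen, distinct = set(), []
for city, temp in rows:
    if city not in seen:
        seen.add(city)
        distinct.append(city)
print(distinct)  # ['Oslo', 'Rome']
```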
To aggregate results, select the Summarize menu option. This will toggle a set of dropdown boxes for each column in the target data table. The following summarization options are available:
For more aggregation details, see the Analyze overview page [here](/docs/analyze/#aggregation).
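A Sum aggregation over a grouping column can be sketched in plain Python (an illustration of the concept, not Analyze's implementation):

```python
from collections import defaultdict

# Group rows by region and sum the amounts.
rows = [("East", 10), ("West", 5), ("East", 7)]
totals = defaultdict(int)
for region, amount in rows:
    totals[region] += amount
print(dict(totals))  # {'East': 17, 'West': 5}
```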
To allow for maximum flexibility, data filters are available on the source data and the target data. For larger data sets, it can be especially beneficial to filter out rows on the source so the remaining operations are performed on a smaller data set.
Any valid Python expression is acceptable to subset the data. Please see Expressions for more details and examples.
Compound filters must have individual elements wrapped in parentheses. For example, if filtering for Temperature and Humidity, a valid filter would look like this:
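A minimal sketch of such a compound filter, evaluated here against a plain dictionary; the column names follow the example above, and the threshold values are illustrative:

```python
# Each condition is wrapped in its own parentheses, as compound
# filters require.
row = {"Temperature": 72, "Humidity": 45}
keep = (row["Temperature"] > 70) and (row["Humidity"] < 50)
print(keep)  # True
```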
To report duplicates, select the Report Duplicates in Table checkbox and then specify an output table which will contain all of the duplicate records.
This will not remove the duplicate items from the target data table. To remove duplicate items, use the Distinct menu options as specified in the [Table Data Selection](../transforms/common_features#table-data-selection) section.
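The split between the target table and the duplicates output can be sketched in plain Python (made-up records; not the actual mechanism):

```python
from collections import Counter

# Records appearing more than once go to the duplicates report;
# the target table itself is left untouched.
records = [("a", 1), ("b", 2), ("a", 1)]
counts = Counter(records)
duplicates = [rec for rec, n in counts.items() if n > 1]
print(duplicates)  # [('a', 1)]
```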
To limit the data, check the Apply Row Slicer box and then specify the following:
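Row slicing behaves like Python's start/stop/step slicing; a minimal sketch (the parameter names are assumptions):

```python
# Keep every second row from index 2 up to, but not including, 8.
rows = list(range(10))
start, stop, step = 2, 8, 2
print(rows[start:stop:step])  # [2, 4, 6]
```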
To perform basic Find/Replace operations, right click and select Insert Row or Append Row to add a new row prior to your selection or at the end of the list, respectively. Then, fill out the Find and Replace With fields. This will replace all instances found in the target data table, regardless of column position. Keep in mind that text replacement is case-sensitive, so searching for analyze is not the same as searching for Analyze.
Do not wrap replacement strings in quotation marks unless you are looking for quotation-mark-wrapped strings within the data. This is different from typical string expressions found elsewhere in Analyze, which do require strings to be wrapped in quotation marks.
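The case-sensitive behavior described above matches Python's `str.replace`, which can serve as a mental model (the sample text is made up):

```python
# Only the lowercase occurrence matches; "Analyze" is untouched.
text = "Analyze imports data; analyze is the lowercase form."
result = text.replace("analyze", "ANALYZE")
print(result)  # Analyze imports data; ANALYZE is the lowercase form.
```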
In this example, the text file, Export CSV comma delimited.csv, is imported from the Analyze Demo Output directory of PlaidCloud Document into a data table named Import CSV. The Inspect Source File button was used to correctly determine the CSV Dialect value of Excel CSV.
No changes are made to the default settings in the Import Parameters tab.
All columns are mapped from source to target as Float, String, or Datetime data types, for number data, string data, and date data, respectively. No additional operations are performed.
The modeler has optimistically chosen to replace a few instances of weather conditions. Mostly Cloudy becomes Partly Sunny, while Partly Cloudy becomes Mostly Sunny. Note that the text replacement strings are not wrapped in quotation marks.
For an example showing how to import a tab-delimited text file, please see the Import CSV section of the BCS Demo.