The PXF object store connectors support plain delimited and comma-separated value format text data. This section describes how to use PXF to access text data in an object store, including how to create, query, and insert data into an external table that references files in the object store.
Note: Accessing text data from an object store is very similar to accessing text data in HDFS.
Prerequisites
Ensure that you have met the PXF Object Store Prerequisites before you attempt to read data from or write data to an object store.
Reading Text Data
Use the <objstore>:text profile when you read plain text delimited data, and the <objstore>:csv profile when you read .csv data, from an object store where each row is a single record. PXF supports the following <objstore> profile prefixes:
Object Store | Profile Prefix |
---|---|
Azure Blob Storage | wasbs |
Azure Data Lake | adl |
Google Cloud Storage | gs |
MinIO | s3 |
S3 | s3 |
The following syntax creates a Greenplum Database readable external table that references a simple text file in an object store:
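The block below is a sketch of the general form; the keywords it uses are described in the table that follows.

```sql
CREATE EXTERNAL TABLE <table_name>
    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION ('pxf://<path-to-file>?PROFILE=<objstore>:text|csv[&SERVER=<server_name>][&IGNORE_MISSING_PATH=<boolean>][&SKIP_HEADER_COUNT=<numlines>]')
FORMAT '[TEXT|CSV]' (delimiter[=|<space>] [E]'<delim_value>');
```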
The specific keywords and values used in the Greenplum Database CREATE EXTERNAL TABLE command are described in the table below.
Keyword | Value |
---|---|
<path-to-file> | The path to the directory or file in the object store. When the <server_name> configuration includes a pxf.fs.basePath property setting, PXF considers <path-to-file> to be relative to the base path specified. Otherwise, PXF considers it to be an absolute path. <path-to-file> must not specify a relative path nor include the dollar sign ($) character. |
PROFILE=<objstore>:text PROFILE=<objstore>:csv | The PROFILE keyword must identify the specific object store. For example, s3:text. |
SERVER=<server_name> | The named server configuration that PXF uses to access the data. |
IGNORE_MISSING_PATH=<boolean> | Specify the action to take when <path-to-file> is missing or invalid. The default value is false; PXF returns an error in this situation. When the value is true, PXF ignores missing path errors and returns an empty fragment. |
SKIP_HEADER_COUNT=<numlines> | Specify the number of header lines that PXF should skip in the first split of each <file> before reading the data. The default value is 0 (do not skip any lines). |
FORMAT | Use FORMAT 'TEXT' when <path-to-file> references plain text delimited data. Use FORMAT 'CSV' when <path-to-file> references comma-separated value data. |
delimiter | The delimiter character in the data. For FORMAT 'CSV', the default <delim_value> is a comma (,). Preface the <delim_value> with an E when the value is an escape sequence. Examples: (delimiter=E'\t'), (delimiter ':'). |
Note: PXF does not support the (HEADER) formatter option in the CREATE EXTERNAL TABLE command. If your text file includes header line(s), use SKIP_HEADER_COUNT to specify the number of lines that PXF should skip at the beginning of the first split of each file.
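For example, a hypothetical external table over CSV files that each begin with one header line might add SKIP_HEADER_COUNT=1 to the LOCATION URI (the table name, path, and server name below are illustrative):

```sql
CREATE EXTERNAL TABLE sales_with_header
       (location text, month text, number_of_orders int, total_sales float8)
  LOCATION ('pxf://BUCKET/pxf_examples/sales_with_header.csv?PROFILE=s3:csv&SERVER=s3srvcfg&SKIP_HEADER_COUNT=1')
FORMAT 'CSV';
```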
If you are accessing an S3 object store:

- You can provide S3 credentials via custom options in the CREATE EXTERNAL TABLE command as described in Overriding the S3 Server Configuration with DDL.
- If you are reading CSV-format data from S3, you can direct PXF to use the Amazon S3 Select service to retrieve the data. Refer to Using the Amazon S3 Select Service for more information about the PXF custom option used for this purpose.
Example: Reading Text Data from S3
Perform the following procedure to create a sample text file, copy the file to S3, and use the s3:text and s3:csv profiles to create two PXF external tables to query the data. A consolidated command sketch for these steps appears after the procedure.
To run this example, you must:

- Have the AWS CLI tools installed on your system
- Know your AWS access ID and secret key
- Have write permission to an S3 bucket
1. Create a directory in S3 for PXF example data files. For example, if you have write access to an S3 bucket named BUCKET, create a pxf_examples directory in that bucket.
2. Locally create a delimited plain text data file named pxf_s3_simple.txt. Note the use of the comma (,) to separate the four data fields.
3. Copy the data file to the S3 directory you created in Step 1.
4. Verify that the file now resides in S3.
5. Start the psql subsystem.
6. Use the PXF s3:text profile to create a Greenplum Database external table that references the pxf_s3_simple.txt file that you just created and added to S3, specifying your server name (for example, s3srvcfg).
7. Query the external table.
8. Create a second external table that references pxf_s3_simple.txt, this time specifying the s3:csv PROFILE and the CSV FORMAT. When you specify FORMAT 'CSV' for comma-separated value data, no delimiter formatter option is required because comma is the default delimiter value.
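The commands below sketch one way to carry out these steps with the AWS CLI and psql. The bucket name BUCKET, the server name s3srvcfg, the local /tmp path, the sample rows, and the column names are placeholders; adapt them to your environment.

```shell
# Steps 1-3: create the sample file locally and copy it to the
# BUCKET/pxf_examples "directory" (the S3 prefix is created by the copy)
cat > /tmp/pxf_s3_simple.txt <<'EOF'
Prague,Jan,101,4875.33
Rome,Mar,87,1557.39
Bangalore,May,317,8936.99
Beijing,Jul,411,11600.67
EOF
aws s3 cp /tmp/pxf_s3_simple.txt s3://BUCKET/pxf_examples/

# Step 4: verify that the file now resides in S3
aws s3 ls s3://BUCKET/pxf_examples/pxf_s3_simple.txt

# Step 5: start the psql subsystem
psql -d postgres   # connect to the database of your choice
```

With psql connected, create and query the two external tables (Steps 6-8); the second table needs no delimiter option because comma is the CSV default:

```sql
-- Step 6: readable external table using the s3:text profile
CREATE EXTERNAL TABLE pxf_s3_textsimple
       (location text, month text, number_of_orders int, total_sales float8)
  LOCATION ('pxf://BUCKET/pxf_examples/pxf_s3_simple.txt?PROFILE=s3:text&SERVER=s3srvcfg')
FORMAT 'TEXT' (delimiter=E',');

-- Step 7: query the external table
SELECT * FROM pxf_s3_textsimple;

-- Step 8: a second table over the same file, this time with the s3:csv profile
CREATE EXTERNAL TABLE pxf_s3_textsimple_csv
       (location text, month text, number_of_orders int, total_sales float8)
  LOCATION ('pxf://BUCKET/pxf_examples/pxf_s3_simple.txt?PROFILE=s3:csv&SERVER=s3srvcfg')
FORMAT 'CSV';

SELECT * FROM pxf_s3_textsimple_csv;
```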
Reading Text Data with Quoted Linefeeds
Use the <objstore>:text:multi profile to read plain text data with delimited single- or multi-line records that include embedded (quoted) linefeed characters. The following syntax creates a Greenplum Database readable external table that references such a text file in an object store:
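The block below is a sketch of the general form; the keywords it uses are described in the table that follows.

```sql
CREATE EXTERNAL TABLE <table_name>
    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION ('pxf://<path-to-file>?PROFILE=<objstore>:text:multi[&SERVER=<server_name>][&IGNORE_MISSING_PATH=<boolean>][&SKIP_HEADER_COUNT=<numlines>]')
FORMAT '[TEXT|CSV]' (delimiter[=|<space>] [E]'<delim_value>');
```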
The specific keywords and values used in the CREATE EXTERNAL TABLE command are described in the table below.
Keyword | Value |
---|---|
<path-to-file> | The path to the directory or file in the data store. When the <server_name> configuration includes a pxf.fs.basePath property setting, PXF considers <path-to-file> to be relative to the base path specified. Otherwise, PXF considers it to be an absolute path. <path-to-file> must not specify a relative path nor include the dollar sign ($) character. |
PROFILE=<objstore>:text:multi | The PROFILE keyword must identify the specific object store. For example, s3:text:multi. |
SERVER=<server_name> | The named server configuration that PXF uses to access the data. |
IGNORE_MISSING_PATH=<boolean> | Specify the action to take when <path-to-file> is missing or invalid. The default value is false; PXF returns an error in this situation. When the value is true, PXF ignores missing path errors and returns an empty fragment. |
SKIP_HEADER_COUNT=<numlines> | Specify the number of header lines that PXF should skip in the first split of each <file> before reading the data. The default value is 0 (do not skip any lines). |
FORMAT | Use FORMAT 'TEXT' when <path-to-file> references plain text delimited data. Use FORMAT 'CSV' when <path-to-file> references comma-separated value data. |
delimiter | The delimiter character in the data. For FORMAT 'CSV', the default <delim_value> is a comma (,). Preface the <delim_value> with an E when the value is an escape sequence. Examples: (delimiter=E'\t'), (delimiter ':'). |
Note: PXF does not support the (HEADER) formatter option in the CREATE EXTERNAL TABLE command. If your text file includes header line(s), use SKIP_HEADER_COUNT to specify the number of lines that PXF should skip at the beginning of the first split of each file.

If you are accessing an S3 object store, you can provide S3 credentials via custom options in the CREATE EXTERNAL TABLE command as described in Overriding the S3 Server Configuration with DDL.
Example: Reading Multi-Line Text Data from S3
Perform the following steps to create a sample text file, copy the file to S3, and use the PXF s3:text:multi profile to create a Greenplum Database readable external table to query the data. A consolidated command sketch for these steps appears after the procedure.
To run this example, you must:

- Have the AWS CLI tools installed on your system
- Know your AWS access ID and secret key
- Have write permission to an S3 bucket
1. Create a second delimited plain text file named pxf_s3_multi.txt.
2. Copy/paste multi-line sample data into pxf_s3_multi.txt (illustrative data appears in the sketch after this list). Notice the use of the colon (:) to separate the three fields. Also notice the quotes around the first (address) field; this field includes an embedded line feed separating the street address from the city and state.
3. Copy the text file to S3.
4. Use the s3:text:multi profile to create an external table named pxf_s3_textmulti that references the pxf_s3_multi.txt S3 file, making sure to identify the colon (:) as the field separator. Specify your server name (for example, s3srvcfg), and notice the alternate syntax for specifying the delimiter.
5. Query the pxf_s3_textmulti table.
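A sketch of these steps follows. The multi-line rows are illustrative sample data, and BUCKET, s3srvcfg, and the non-address column names (month, year) are assumptions; adapt them to your environment.

```shell
# Steps 1-3: create the multi-line sample file locally and copy it to S3;
# each quoted address field contains an embedded line feed
cat > /tmp/pxf_s3_multi.txt <<'EOF'
"4627 Star Rd.
San Francisco, CA  94107":Sept:2017
"113 Moon St.
San Diego, CA  92093":Jan:2018
"51 Belt Ct.
Denver, CO  90123":Dec:2016
EOF
aws s3 cp /tmp/pxf_s3_multi.txt s3://BUCKET/pxf_examples/
```

```sql
-- Step 4: external table using the s3:text:multi profile; note the
-- alternate (space-separated) syntax for specifying the delimiter
CREATE EXTERNAL TABLE pxf_s3_textmulti
       (address text, month text, year int)
  LOCATION ('pxf://BUCKET/pxf_examples/pxf_s3_multi.txt?PROFILE=s3:text:multi&SERVER=s3srvcfg')
FORMAT 'CSV' (delimiter ':');

-- Step 5: query the table
SELECT * FROM pxf_s3_textmulti;
```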
Writing Text Data
The <objstore>:text|csv profiles support writing single line plain text data to an object store. When you create a writable external table with PXF, you specify the name of a directory. When you insert records into a writable external table, the block(s) of data that you insert are written to one or more files in the directory that you specified.

Note: External tables that you create with a writable profile can only be used for INSERT operations. If you want to query the data that you inserted, you must create a separate readable external table that references the directory.
Use the following syntax to create a Greenplum Database writable external table that references an object store directory:
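The block below is a sketch of the general form; the keywords it uses are described in the table that follows.

```sql
CREATE WRITABLE EXTERNAL TABLE <table_name>
    ( <column_name> <data_type> [, ...] | LIKE <other_table> )
LOCATION ('pxf://<path-to-dir>?PROFILE=<objstore>:text|csv[&SERVER=<server_name>][&<custom-option>=<value>[...]]')
FORMAT '[TEXT|CSV]' (delimiter[=|<space>] [E]'<delim_value>')
[DISTRIBUTED BY (<column_name> [, ...]) | DISTRIBUTED RANDOMLY];
```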
The specific keywords and values used in the CREATE EXTERNAL TABLE command are described in the table below.
Keyword | Value |
---|---|
<path-to-dir> | The path to the directory in the data store. When the <server_name> configuration includes a pxf.fs.basePath property setting, PXF considers <path-to-dir> to be relative to the base path specified. Otherwise, PXF considers it to be an absolute path. <path-to-dir> must not specify a relative path nor include the dollar sign ($) character. |
PROFILE=<objstore>:text PROFILE=<objstore>:csv | The PROFILE keyword must identify the specific object store. For example, s3:text. |
SERVER=<server_name> | The named server configuration that PXF uses to access the data. |
<custom-option>=<value> | <custom-option>s are described below. |
FORMAT | Use FORMAT 'TEXT' to write plain, delimited text to <path-to-dir>. Use FORMAT 'CSV' to write comma-separated value text to <path-to-dir>. |
delimiter | The delimiter character in the data. For FORMAT 'CSV', the default <delim_value> is a comma (,). Preface the <delim_value> with an E when the value is an escape sequence. Examples: (delimiter=E'\t'), (delimiter ':'). |
DISTRIBUTED BY | If you want to load data from an existing Greenplum Database table into the writable external table, consider specifying the same distribution policy or <column_name> on both tables. Doing so will avoid extra motion of data between segments on the load operation. |
Writable external tables that you create using an <objstore>:text|csv profile can optionally use record or block compression. You specify the compression type and codec via custom options in the CREATE EXTERNAL TABLE LOCATION clause. The <objstore>:text|csv profiles support the following custom write options:
Option | Value Description |
---|---|
COMPRESSION_CODEC | The compression codec alias. Supported compression codecs for writing text data include: default, bzip2, gzip, and uncompressed. If this option is not provided, Greenplum Database performs no data compression. |
COMPRESSION_TYPE | The compression type to employ; supported values are RECORD (the default) or BLOCK. |
If you are accessing an S3 object store, you can provide S3 credentials via custom options in the CREATE EXTERNAL TABLE command as described in Overriding the S3 Server Configuration with DDL.
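For example, a hypothetical writable table that writes gzip-compressed CSV data might specify the codec in the LOCATION clause (all object names below are illustrative):

```sql
CREATE WRITABLE EXTERNAL TABLE pxf_s3_write_gzip
       (location text, month text, number_of_orders int, total_sales float8)
  LOCATION ('pxf://BUCKET/pxf_examples/compressed_dir?PROFILE=s3:csv&SERVER=s3srvcfg&COMPRESSION_CODEC=gzip')
FORMAT 'CSV';
```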
Example: Writing Text Data to S3
This example utilizes the data schema introduced in Example: Reading Text Data from S3.
Column Name | Data Type |
---|---|
location | text |
month | text |
number_of_orders | int |
total_sales | float8 |
This example also optionally uses the Greenplum Database external table named pxf_s3_textsimple that you created in that exercise.
Procedure
Perform the following procedure to create Greenplum Database writable external tables utilizing the same data schema as described above, one of which will employ compression. You will use the PXF s3:text profile to write data to S3. You will also create a separate, readable external table to read the data that you wrote to S3. A consolidated SQL sketch for these steps appears after the procedure.
1. Create a Greenplum Database writable external table utilizing the data schema described above. Write to the S3 directory BUCKET/pxf_examples/pxfwrite_s3_textsimple1. Create the table (named pxf_s3_writetbl_1 in this exercise) specifying a comma (,) as the delimiter, and specify your server name (for example, s3srvcfg). You specify the FORMAT subclause delimiter value as the single ascii comma character (,).
2. Write a few individual records to the pxfwrite_s3_textsimple1 S3 directory by invoking the SQL INSERT command on pxf_s3_writetbl_1.
3. (Optional) Insert the data from the pxf_s3_textsimple table that you created in Example: Reading Text Data from S3 into pxf_s3_writetbl_1.
4. Greenplum Database does not support directly querying a writable external table. To query the data that you just added to S3, you must create a readable external Greenplum Database table that references the S3 directory. You specify the 'CSV' FORMAT when you create the readable external table because you created the writable table with a comma (,) as the delimiter character, the default delimiter for the 'CSV' FORMAT.
5. Query the readable external table. The pxf_s3_textsimple_r1 table includes the records you individually inserted, as well as the full contents of the pxf_s3_textsimple table if you performed the optional step.
6. Create a second Greenplum Database writable external table, this time using Gzip compression and employing a colon (:) as the delimiter.
7. Write a few records to the pxfwrite_s3_textsimple2 S3 directory by inserting directly into the pxf_s3_writetbl_2 table.
8. To query data from the newly-created S3 directory named pxfwrite_s3_textsimple2, you can create a readable external Greenplum Database table as described above that references this S3 directory and specifies FORMAT 'CSV' (delimiter=':').
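A SQL sketch of this procedure follows. The bucket name BUCKET and the inserted row values are placeholders; the table and directory names follow those used in the steps above.

```sql
-- Step 1: writable external table, comma delimiter, s3:text profile
CREATE WRITABLE EXTERNAL TABLE pxf_s3_writetbl_1
       (location text, month text, number_of_orders int, total_sales float8)
  LOCATION ('pxf://BUCKET/pxf_examples/pxfwrite_s3_textsimple1?PROFILE=s3:text&SERVER=s3srvcfg')
FORMAT 'TEXT' (delimiter=',');

-- Step 2: write a few individual records (sample values)
INSERT INTO pxf_s3_writetbl_1 VALUES ('Frankfurt', 'Mar', 777, 3956.98);
INSERT INTO pxf_s3_writetbl_1 VALUES ('Cleveland', 'Oct', 3812, 96645.37);

-- Step 3 (optional): copy in the rows from the readable table created earlier
INSERT INTO pxf_s3_writetbl_1 SELECT * FROM pxf_s3_textsimple;

-- Steps 4-5: readable table over the same S3 directory, then query it
CREATE EXTERNAL TABLE pxf_s3_textsimple_r1
       (location text, month text, number_of_orders int, total_sales float8)
  LOCATION ('pxf://BUCKET/pxf_examples/pxfwrite_s3_textsimple1?PROFILE=s3:csv&SERVER=s3srvcfg')
FORMAT 'CSV';
SELECT * FROM pxf_s3_textsimple_r1;

-- Steps 6-7: second writable table with gzip compression and a colon delimiter
CREATE WRITABLE EXTERNAL TABLE pxf_s3_writetbl_2
       (location text, month text, number_of_orders int, total_sales float8)
  LOCATION ('pxf://BUCKET/pxf_examples/pxfwrite_s3_textsimple2?PROFILE=s3:text&SERVER=s3srvcfg&COMPRESSION_CODEC=gzip')
FORMAT 'TEXT' (delimiter=':');
INSERT INTO pxf_s3_writetbl_2 VALUES ('Frankfurt', 'Mar', 777, 3956.98);
INSERT INTO pxf_s3_writetbl_2 VALUES ('Cleveland', 'Oct', 3812, 96645.37);
```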