Request data exports

Prerequisites and process for initiating ongoing data exports

Upon request, Lightup will begin regular exports of your data quality metrics to your cloud-based storage (currently, S3 or Azure Blob). You can then use the data exports for custom analysis, custom dashboard generation, and archiving purposes. If you use S3 for cloud storage, you can create a Redshift database and import the data there.

What data we export

When exporting is turned on, each day we export a summary of all metric data points collected that day, along with indicators for whether an incident was detected for each data point. The export is written to a new CSV file in your cloud storage (S3 bucket or Azure Blob container), with one row per data point and the following columns (a sample header row appears after the column list). You can then use these exports as needed.

Each item below uses the format columnName - description: data type.

  • eventTs - datapoint timestamp: seconds since the Unix epoch

  • slice - datapoint slice: JSON object

  • value - datapoint value (what it represents depends on the type of metric): floating point number

  • metricUuid - datapoint metric uuid in Lightup db: string

  • metricName - datapoint metric name: string

  • metricType - datapoint metric type (the value of config.aggregation.type): string

  • metricAggregationWindow - datapoint metric aggregation window (hourly, daily, etc.): string

  • metricTags - datapoint metric tags: Array[string]

  • sourceUuid - datapoint metric source uuid in Lightup db: string

  • sourceName - datapoint metric source name: string

  • schemaName - datapoint metric schema name: string

  • tableUuid - datapoint metric table uuid in Lightup db: string

  • tableName - datapoint metric table name in Lightup db: string

  • columnUuid - datapoint metric column uuid in Lightup db: string

  • columnName - datapoint metric column name: string

  • incidentCount - number of active incidents associated with the datapoint metric at eventTs time: integer

  • workspaceId - workspace id of the metric: uuid

  • monitorUuid - uuid of the monitor if datapoint was monitored: string

  • monitorName - name of the monitor if datapoint was monitored: string

  • monitorTags - monitor tags if datapoint was monitored: string

  • monitoredValue - processed value that is compared with monitor bounds: floating point number

  • monitorLowerBound - lower bound of the monitor if datapoint was monitored: floating point number

  • monitorUpperBound - upper bound of the monitor if datapoint was monitored: floating point number

  • dataExtractionComments - Description of any errors that were encountered during data extraction: string
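
For illustration, here is what the header row of a daily export file would look like, assuming the columns appear in the order listed above (a sketch, not verified output; the Redshift table definition below suggests files may also include incidentData and userDescription columns):

    eventTs,slice,value,metricUuid,metricName,metricType,metricAggregationWindow,metricTags,sourceUuid,sourceName,schemaName,tableUuid,tableName,columnUuid,columnName,incidentCount,workspaceId,monitorUuid,monitorName,monitorTags,monitoredValue,monitorLowerBound,monitorUpperBound,dataExtractionComments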

Start getting data exports

Email support@lightup.ai and provide the following information for your cloud storage account. There are currently two options: S3 and Azure Blob.

S3 exports - parameters

Include these details in your message:

  • S3 Bucket name where you want the export files (Required)

  • AWS access key ID (Optional)

  • AWS secret access key (Optional)

  • Region name (Optional)

Azure Blob exports - parameters

Include these details in your message:

  • Azure Blob Container name where you want the export files (Required)

  • Account name (Required)

  • Account key (Required)

Import S3 CSV files into a Redshift database

If you use S3 cloud storage for Lightup, you can create a Redshift database and import the data from your CSV files, unlocking numerous analytical options.

  1. Create a Redshift database. For syntax, see Amazon Redshift - CREATE DATABASE.
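
     For example, a minimal statement (the database name lightup_exports is just a placeholder):

    CREATE DATABASE lightup_exports;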

  2. Open the Redshift query editor in the AWS console.

  3. Make sure the Redshift IAM role is allowed to read from the source S3 bucket (a policy sketch follows). For help using IAM roles, see AWS Identity and Access Management - Using IAM Roles.
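
     A sketch of a minimal S3 read policy for the Redshift role, assuming the example bucket name from step 5 (adjust names to your setup):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:ListBucket"],
          "Resource": [
            "arn:aws:s3:::lightup-datapoints-dump",
            "arn:aws:s3:::lightup-datapoints-dump/*"
          ]
        }
      ]
    }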

  4. Create a table using the following SQL:

    CREATE TABLE datapointsexport (
        eventTs timestamp,              -- parsed from epoch seconds by COPY (see step 5)
        slice super,                    -- SUPER column for the JSON slice object
        value float,
        metricUuid varchar,
        metricName varchar,
        metricType varchar,
        metricAggregationWindow varchar,
        metricTags super,               -- SUPER column for the array of tag strings
        sourceUuid varchar,
        sourceName varchar,
        schemaName varchar,
        tableUuid varchar,
        tableName varchar,
        columnUuid varchar,
        columnName varchar,
        incidentCount int,
        workspaceId varchar,
        monitorUuid varchar,
        monitorName varchar,
        monitorTags varchar,
        monitoredValue float,
        monitorLowerBound float,
        monitorUpperBound float,
        incidentData varchar,
        userDescription varchar,
        dataExtractionComments varchar
    );

  5. To import CSV files, run the following SQL against the datapointsexport table created in step 4. Replace "lightup-datapoints-dump" with your S3 bucket name (optionally append a prefix or file name to load specific CSV files), and replace the iam_role value with the ARN of your own Redshift role:

    COPY datapointsexport
    FROM 's3://lightup-datapoints-dump'                      -- bucket (and optional prefix or file name)
    iam_role 'arn:aws:iam::231612517276:role/RedshiftCopy'   -- role with read access to the bucket
    csv                                                      -- source files are CSV
    IGNOREHEADER 1                                           -- skip the header row in each file
    DELIMITER ','
    EMPTYASNULL                                              -- load empty fields as NULL
    TIMEFORMAT 'epochsecs';                                  -- parse eventTs as seconds since the Unix epoch

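Once the files are loaded, standard SQL unlocks the analysis options mentioned above. For example, this query (a sketch against the table from step 4) counts each metric's data points per day and how many of them had active incidents:

    SELECT metricName,
           DATE_TRUNC('day', eventTs) AS day,
           COUNT(*) AS datapoints,
           SUM(CASE WHEN incidentCount > 0 THEN 1 ELSE 0 END) AS datapoints_with_incidents
    FROM datapointsexport
    GROUP BY 1, 2
    ORDER BY 2, 1;
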
For information about automating imports from S3 files into Redshift, see A Zero-Administration Amazon Redshift Database Loader.
