Request data exports
Prerequisites and process for initiating ongoing data exports
The process described on this page requires a Lightup deployment that uses S3 or Azure Blob storage, as detailed below.
Upon request, Lightup will begin regular exports of your data quality metrics to your cloud-based storage (currently, S3 or Azure Blob). You can then use the data exports for custom analysis, custom dashboard generation, and archiving purposes. If you use S3 for cloud storage, you can create a Redshift database and import the data there.
What data we export
When exporting is turned on, each day we export a summary of all metric data points collected that day, along with indicators of whether an incident was detected for each data point. The export is written to a new CSV file in your cloud storage (S3 bucket or Azure Blob container), with one row per data point and the following columns. You can then use these exports as needed.
eventTs - datapoint timestamp: seconds from Unix Epoch
slice - datapoint slice: JSON object
value - datapoint value (what it represents depends on the type of metric): floating point number
metricUuid - datapoint metric uuid in Lightup db: string
metricName - datapoint metric name: string
metricType - datapoint metric type (the value of config.aggregation.type): string
metricAggregationWindow - datapoint metric aggregation window (hourly, daily, etc.): string
metricTags - datapoint metric tags: Array[string]
sourceUuid - datapoint metric source uuid in Lightup db: string
sourceName - datapoint metric source name: string
schemaName - datapoint metric schema name: string
tableUuid - datapoint metric table uuid in Lightup db: string
tableName - datapoint metric table name in Lightup db: string
columnUuid - datapoint metric column uuid in Lightup db: string
columnName - datapoint metric column name: string
incidentCount - number of active incidents associated with the datapoint metric at eventTs time: integer
workspaceId - workspace id of the metric: uuid
monitorUuid - uuid of the monitor if datapoint was monitored: string
monitorName - name of the monitor if datapoint was monitored: string
monitorTags - monitor tags if datapoint was monitored: string
monitoredValue - processed value that is compared with monitor bounds: floating point number
monitorLowerBound - lower bound of the monitor if datapoint was monitored: floating point number
monitorUpperBound - upper bound of the monitor if datapoint was monitored: floating point number
dataExtractionComments - Description of any errors that were encountered during data extraction: string
Start getting data exports
To start receiving exports, send Lightup a request that includes the parameters for your storage type, as listed below.
S3 exports - parameters
Include these details in your message:
S3 Bucket name where you want the export files (Required)
AWS access key ID (Optional)
AWS secret access key (Optional)
Region name (Optional)
Azure Blob exports - parameters
Include these details in your message:
Azure Blob Container name where you want the export files (Required)
Account name (Required)
Account key (Required)
Import S3 CSV files into a Redshift database
If you use S3 cloud storage for Lightup, you can create a Redshift database and import the data from your CSV files, unlocking numerous analytical options.
The schema of the exports is subject to change. If you create a database per the following steps, account for this in your maintenance plans.
Open the Redshift query editor in the AWS console.
Create a table using the following SQL:
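The column list above maps to a table definition along these lines. This is a sketch, not the exact statement: the table name `datapoints_export` and the Redshift column types are assumptions, so adjust them to your needs (for example, `slice` and the tag arrays are stored here as raw text).

```sql
-- Hypothetical table for Lightup datapoint exports; name and types are assumptions.
CREATE TABLE datapoints_export (
    eventTs                  BIGINT,            -- seconds from Unix Epoch
    slice                    VARCHAR(MAX),      -- JSON object as text
    value                    DOUBLE PRECISION,
    metricUuid               VARCHAR(64),
    metricName               VARCHAR(256),
    metricType               VARCHAR(64),
    metricAggregationWindow  VARCHAR(32),
    metricTags               VARCHAR(MAX),      -- array serialized as text
    sourceUuid               VARCHAR(64),
    sourceName               VARCHAR(256),
    schemaName               VARCHAR(256),
    tableUuid                VARCHAR(64),
    tableName                VARCHAR(256),
    columnUuid               VARCHAR(64),
    columnName               VARCHAR(256),
    incidentCount            INTEGER,
    workspaceId              VARCHAR(64),
    monitorUuid              VARCHAR(64),
    monitorName              VARCHAR(256),
    monitorTags              VARCHAR(MAX),
    monitoredValue           DOUBLE PRECISION,
    monitorLowerBound        DOUBLE PRECISION,
    monitorUpperBound        DOUBLE PRECISION,
    dataExtractionComments   VARCHAR(MAX)
);
```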
To import a CSV file, run the following SQL, replacing "datapointsexport" with the name of your CSV file (without the .csv extension), "lightup-datapoints-dump" with your S3 bucket name, and the iam_role value with the ARN of an IAM role that can read your bucket:
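As a sketch, a COPY command along these lines loads one export file. It assumes a target table named `datapoints_export`; the bucket and file names come from the example above, and the IAM role ARN is a placeholder.

```sql
-- Hypothetical COPY command; replace the bucket, file name, and IAM role ARN with your own.
COPY datapoints_export
FROM 's3://lightup-datapoints-dump/datapointsexport.csv'
IAM_ROLE 'arn:aws:iam::111122223333:role/my-redshift-s3-role'
FORMAT AS CSV
IGNOREHEADER 1;  -- assumes the export file includes a header row
```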