Set up Log Level Data (LLD)

Springserve regularly writes data relevant to each request it receives into temporary tables.

The Log Level Data (LLD) table is structured so that every row represents an event that occurs between the time we receive a request and the time the creative is delivered in full. Each request can be identified by its auction id.

The event_type column in LLD indicates which event each row corresponds to.

The basic version records only impression events.
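
As an illustration, here is a minimal sketch of reconstructing the event sequence for each request from a delivered LLD file. The column names used (auction_id, event_type, timestamp) are assumptions for illustration only; the actual columns depend on what you agree with your account manager.

```python
import pandas as pd

# Read one delivered LLD file (gzipped CSV). The file name and the column
# names (auction_id, event_type, timestamp) are placeholders for illustration.
lld = pd.read_csv("lld_master_000.csv.gz", compression="gzip")

# Each auction_id identifies one request; grouping its rows shows the
# full event sequence from request to completed creative delivery.
events_per_request = (
    lld.sort_values("timestamp")
       .groupby("auction_id")["event_type"]
       .apply(list)
)
print(events_per_request.head())
```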

To turn on this feature, contact your Sales representative/AM for pricing. Prices vary according to the amount of data that needs to be sent, so work with your account manager to define a sample of the data you need.

The most common delivery methods are dropping CSV files into an S3 bucket or a GCS bucket, or setting up a Snowflake partition. Use the case-specific instructions below according to your destination setup.

Data retention

We hold L3 data for 10 days and L1 data for 28 days. After 28 days the data is completely erased from our database with no possibility of recovery.

For external deliveries, we deliver the files once every hour. We normally deliver a set of gzipped CSV files, but other file formats are available, such as JSON, Avro, ORC, Parquet, and XML.

The files will be delivered in a folder structure such as: springserve/yyyy/mm/dd/hh/lld_master 

The files are delayed by 3 hours because the Log Level table takes about two hours to be written; the delay ensures all the data has been written to the table before we transmit.
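
For example, here is a minimal sketch of listing the newest hour that should already be complete, given the 3-hour delay. It assumes S3 delivery, a placeholder bucket name, and the springserve/yyyy/mm/dd/hh/lld_master prefix shown above.

```python
from datetime import datetime, timedelta, timezone
import boto3

# Placeholder bucket name; replace with the bucket you receive deliveries in.
BUCKET = "your-lld-bucket"

# Files for a given hour arrive about 3 hours later, so the newest complete
# hour is roughly now minus 3 hours (UTC assumed here for illustration).
hour = datetime.now(timezone.utc) - timedelta(hours=3)
prefix = hour.strftime("springserve/%Y/%m/%d/%H/lld_master")

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=prefix)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```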

S3 Bucket Delivery

What we need:

  • Requesting account id/name
  • Data requested (i.e. table columns)
  • Deadline
  • Bucket name
  • Folder name (optional)
  • AWS key id
  • AWS secret key

Please use the following link for instructions on how to set it up:

https://docs.snowflake.com/en/user-guide/data-load-s3-config-aws-iam-user.html

Make sure to give us READ, WRITE, and DELETE permissions on the bucket. It is also possible to use an ARN role.
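
As a rough illustration of what READ, WRITE, and DELETE map to in an IAM policy, here is a sketch built as a Python dict. The bucket name is a placeholder, and the exact set of actions should be taken from the Snowflake documentation linked above rather than from this example.

```python
import json

# Placeholder bucket name; replace with the bucket you share with us.
BUCKET = "your-lld-bucket"

# READ/WRITE/DELETE on the bucket roughly correspond to these S3 actions.
# Treat this as a sketch; follow the linked Snowflake guide for the
# authoritative policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
    ],
}

print(json.dumps(policy, indent=2))
```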

GCS Delivery

What we need:

  • Requesting account id/name
  • Data requested (i.e. table columns)
  • Bucket name
  • Folder name (optional)

For GCS, please follow this procedure to set up the permissions:

https://docs.snowflake.com/en/user-guide/data-load-gcs-config.html#step-3-grant-the-service-account-permissions-to-access-bucket-objects

Please grant access to our Snowflake GCS service account:

xovuyerzcj@sfc-va-1-m2y.iam.gserviceaccount.com

We need these permissions:

  • storage.buckets.get (for calculating data transfer costs)
  • storage.objects.create
  • storage.objects.delete
  • storage.objects.get
  • storage.objects.list
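
Below is a minimal sketch of granting that service account access on the bucket with the google-cloud-storage client. The bucket name and the custom role bundling the five permissions above are placeholders; granting the role through the Cloud Console per the linked guide works just as well.

```python
from google.cloud import storage

# Placeholders: your bucket and a custom role that bundles the five
# permissions listed above (see the linked Snowflake guide).
BUCKET = "your-lld-bucket"
ROLE = "projects/your-project/roles/snowflake_lld_access"
SERVICE_ACCOUNT = "xovuyerzcj@sfc-va-1-m2y.iam.gserviceaccount.com"

client = storage.Client()
bucket = client.bucket(BUCKET)

# Add an IAM binding for the Snowflake service account on the bucket.
policy = bucket.get_iam_policy(requested_policy_version=3)
policy.bindings.append(
    {"role": ROLE, "members": {f"serviceAccount:{SERVICE_ACCOUNT}"}}
)
bucket.set_iam_policy(policy)
```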

Snowflake partition

If you already work with Snowflake, you can request a Snowflake partition. This means we give you access to the section of our Snowflake tables that is relevant to your account.

Snowflake data shares are currently only available in the Snowflake region AWS us-east-1.

What we need:

  • Requesting account id/name
  • Data requested (i.e. tables)
  • Deadline
  • Client's Snowflake organization name
  • Client's Snowflake account locator (can be found by running SELECT CURRENT_ACCOUNT();)

Because this is a data share, it is subject to the retention policy described above. Please remember to store any data that you do not want deleted.
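
For example, here is a minimal sketch of copying shared data into a table you own so it survives the retention window, using snowflake-connector-python. All names below (account, credentials, warehouse, databases, tables) are placeholders; use your own account locator and the database name of the share you are granted.

```python
import snowflake.connector

# Placeholder connection parameters; use your own account locator and credentials.
conn = snowflake.connector.connect(
    account="your_account_locator",
    user="your_user",
    password="your_password",
    warehouse="your_warehouse",
)
cur = conn.cursor()

# Copy the shared LLD data into a table you own, since rows older than the
# retention window are removed from the share itself.
cur.execute(
    "CREATE TABLE IF NOT EXISTS my_db.my_schema.lld_archive AS "
    "SELECT * FROM shared_lld_db.public.lld_master WHERE 1 = 0"
)
cur.execute(
    "INSERT INTO my_db.my_schema.lld_archive "
    "SELECT * FROM shared_lld_db.public.lld_master"
)

cur.close()
conn.close()
```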

Other reports

What we need:

  • Method of delivery (email, bucket, or SFTP)
  • Data requested
  • Frequency and time range
  • Deadline

For privacy reasons, please send the destination credentials via email.