BigQuery: Snowplow Schema

Prerequisites

You must grant bigquery.dataViewer access to Indicative's service account for your BigQuery project. To perform the following steps, you must have administrative access to both the BigQuery console and the dataset that holds your Snowplow data.

For this self-service integration, we also have some data requirements:

  1. All of your events must be unified into a single table, rather than split across separate tables per event type (see the sketch below for one way to do this).
  2. There can be at most one authenticated ID and one unauthenticated ID for aliasing.
  3. The event timestamp must be in UTC.
  4. All joins must be done beforehand.
  5. Sharded tables (BigQuery tables whose names end in a date suffix such as _MMDDYYYY) are not currently supported.

We can still support integrations that do not meet the above requirements, but you will need to get in touch with a product specialist. Likewise, if you require further enrichments, such as joining with user property tables or deriving custom user_ids, please contact us.
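If your events are currently split across per-event tables, one common approach is a scheduled query that unifies them before you connect Indicative. The sketch below assumes the google-cloud-bigquery Python client; the project, dataset, table, and column names are all placeholders for your own.

```python
# Sketch: materialize a single unified events table from per-event tables.
# All project/dataset/table/column names below are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

unify_sql = """
CREATE OR REPLACE TABLE `my-gcp-project.snowplow.events_unified` AS
SELECT event_id, user_id, derived_tstamp, se_action, event_name, event
FROM `my-gcp-project.snowplow.page_views`      -- placeholder per-event table
UNION ALL
SELECT event_id, user_id, derived_tstamp, se_action, event_name, event
FROM `my-gcp-project.snowplow.transactions`    -- placeholder per-event table
"""

client.query(unify_sql).result()  # waits until the unified table is built
```

Running something like this on a schedule (for example, via BigQuery scheduled queries) keeps the unified table current ahead of each extraction.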

Instructions

Adding a Data Source In Indicative

  1. In Indicative, click on Settings and select Data Sources

    mceclip0.png

  2. Click on New Data Source

    mceclip1.png

  3. Select Connect via Data Warehouse or Lake
    connectData.png
  4. Select BigQuery as your data connection and Snowplow as the connection schema, and click Connect
    BQSnowplow.png
  5. You should see this Google + Snowplow Overview screen. Click Next
    mceclip4.png

Connection Information

mceclip5.png

  1. Open the BigQuery console on Google Cloud Platform and select a project.
  2. Enter the GCP Project ID containing your Snowplow data.
    mceclip2.png

  3. Enter the Dataset Name
    mceclip4.png
  4. Enter the Table Name and click Next in Indicative.
    mceclip5.png
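If you are unsure of the exact names to enter, you can list the datasets and tables visible to your credentials. A minimal sketch, assuming the google-cloud-bigquery Python client and a placeholder project ID:

```python
# Sketch: list datasets and tables to find the names Indicative asks for.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

for dataset in client.list_datasets():
    print("dataset:", dataset.dataset_id)
    for table in client.list_tables(dataset.dataset_id):
        print("  table:", table.table_id)
```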

Grant Permissions
mceclip6.png

This integration works by sharing the dataset with Indicative's service account and requires only read-only access to that dataset. Indicative absorbs the query cost and caches this data in Indicative's proprietary analytics engine.

  1. Within the BigQuery Console, select your Project and your dataset from the previous section.
  2. Click on Share Dataset

  3. In the Dataset Permissions panel, in the Add Members field, enter the service account below.
    integrations@indicative-988.iam.gserviceaccount.com
  4. In the Select a Role dropdown, select BigQuery Data Viewer and click Add.
    mceclip5.png
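If you prefer to script the grant rather than use the console, the sketch below appends a dataset-level READER entry (the dataset-ACL counterpart of BigQuery Data Viewer) for Indicative's service account. The project and dataset names are placeholders.

```python
# Sketch: grant Indicative's service account read access to the dataset.
# Project and dataset names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")       # hypothetical project ID
dataset = client.get_dataset("my-gcp-project.snowplow")  # hypothetical dataset

entries = list(dataset.access_entries)
entries.append(
    bigquery.AccessEntry(
        role="READER",  # dataset-ACL counterpart of roles/bigquery.dataViewer
        entity_type="userByEmail",
        entity_id="integrations@indicative-988.iam.gserviceaccount.com",
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])  # pushes only the ACL change
```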

Data Loading

mceclip7.png

  1. Start Date
    Select the date from which Indicative should begin loading your data.

    If your data history exceeds 1 billion events, a Solutions Engineer will contact you to assist with the integration.

  2. Schedule Interval
    Select how frequently new data should be made available in Indicative.

  3. Processing Delay
    Select the time (in UTC) at which we should start extracting your data. Choose a time by which all of the previous day's data is fully available in your table for extraction (the sketch below shows one way to verify this).
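To sanity-check your chosen Processing Delay, you can confirm when the previous UTC day's data is actually complete in your table. A sketch, with placeholder table and column names:

```python
# Sketch: check that yesterday's (UTC) data is fully present in the table
# before the extraction time you choose. Names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

sql = """
SELECT
  COUNT(*) AS events_yesterday,
  MAX(derived_tstamp) AS latest_event_seen
FROM `my-gcp-project.snowplow.events`
WHERE DATE(derived_tstamp) = DATE_SUB(CURRENT_DATE('UTC'), INTERVAL 1 DAY)
"""

row = list(client.query(sql).result())[0]
print(row.events_yesterday, row.latest_event_seen)
```

If the count keeps growing when you re-run this during the morning (UTC), push the Processing Delay later. A plain COUNT(*) over the full table similarly tells you whether your history exceeds the 1 billion event threshold mentioned above.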

Event Modeling
mceclip8.png

 

  1. In the Structured Event Name section, select the field that should be used to derive Indicative event names. Most customers use the se_action field, but this depends entirely on your implementation.
    We will first look at this field's value to use as the event name in Indicative. If this value is null, we will use the event_name field; if that is also null, we will fall back to the event field.
    If you are not using Snowplow structured events, select none.

  2. For Timestamp, select the field that represents the time the event was performed. Indicative will use this field to run its queries. If unsure, leave it as derived_tstamp.
    1. collector_tstamp - Timestamp for the event recorded by the collector.
    2. dvce_created_tstamp - Timestamp for the event recorded on the client device.
    3. dvce_sent_tstamp - Timestamp for when the event was actually sent by the client device.
    4. etl_tstamp - Timestamp for when the event was validated and enriched. Note: the name is historical and does not mean that the event is loaded at this point (this is further downstream).
    5. derived_tstamp - Timestamp making allowance for inaccurate device clock.
    6. true_tstamp - User-set "true timestamp" for the event.

  3. For Vendor Name, enter the Snowplow vendor names you use so that we can simplify your event property names.
  4. Click Next

    After this step, we will perform a few checks on your data with the model that you provided (the sketch after this list shows how to approximate them yourself). The checks are:

    • Valid event field (Do at least 80% of your records have a value for the event field?)
    • Valid timestamp field (Do at least 80% of your records have a value for the timestamp field?)
    • Total number of unique events (we recommend between 20 and 300 unique events; the hard limit is 2,000).
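The sketch below approximates these checks, including the event-name fallback described in step 1 as a COALESCE over se_action, event_name, and event. The table name and field choices are placeholders; match them to the model you configured.

```python
# Sketch: approximate the event-modeling checks locally.
# Table name and field choices are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

sql = """
SELECT
  -- event-name fallback from step 1: se_action -> event_name -> event
  COUNTIF(COALESCE(se_action, event_name, event) IS NOT NULL) / COUNT(*)
    AS pct_with_event_name,   -- should be >= 0.8
  COUNTIF(derived_tstamp IS NOT NULL) / COUNT(*)
    AS pct_with_timestamp,    -- should be >= 0.8
  COUNT(DISTINCT COALESCE(se_action, event_name, event))
    AS unique_event_names     -- ideally 20-300, at most 2000
FROM `my-gcp-project.snowplow.events`
"""

row = list(client.query(sql).result())[0]
print(row.pct_with_event_name, row.pct_with_timestamp, row.unique_event_names)
```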

User Modeling
mceclip10.png

After some basic checks, we can define your users within your data. For more information on User Identification (Aliasing), please refer to this article.

  1. If you choose to enable Aliasing, click on Enabled:
  2. Type - Select the Snowplow field type
    1. Atomic - If the anonymous ID field is an atomic field, select this option.
      1. Field Name - Select the field that should be used to identify anonymous users
    2. Context - If your anonymous ID is contained within the Contexts field, choose this option
      1. Field Name - Select the context field that contains your anonymousID

  3. If you choose to disable Aliasing, click Disabled:
    1. Type - Select the Snowplow field type
      1. Atomic - If the anonymous ID field is an atomic field, select this option.
        1. Field Name - Select the field that should be used to identify anonymous users
      2. Context - If your anonymous ID is contained within the Contexts field, choose this option
        1. Field Name - Select the context field that contains your anonymousID

If you use a non-null placeholder value to represent null UserIDs, please click on the Show Advanced button and enter those values in the field provided.

After this step, we will perform additional checks on your data with the user model that you provided (approximated in the sketch after this list). The checks are:

  • User Hotspot (Is there a single UserID that represents over 40% of your records?)
  • Anti-Hotspot (Does your data have too many unique userIDs? A good events table contains multiple events per user)
  • Aliasing
    • Too many unauthenticated IDs for a single authenticated userID
    • Too many authenticated IDs for a single anonymous ID
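A rough local version of the hotspot and aliasing checks, assuming atomic user_id (authenticated) and domain_userid (anonymous) fields; substitute the fields you selected above:

```python
# Sketch: approximate the user-modeling checks locally.
# Assumes atomic user_id / domain_userid fields; substitute your own.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

hotspot_sql = """
WITH per_user AS (
  SELECT user_id, COUNT(*) AS events
  FROM `my-gcp-project.snowplow.events`
  GROUP BY user_id
)
SELECT
  MAX(events) / SUM(events) AS top_user_share,      -- flag if > 0.4
  SUM(events) / COUNT(*)    AS avg_events_per_user  -- flag if close to 1
FROM per_user
"""
print(list(client.query(hotspot_sql).result())[0])

aliasing_sql = """
-- anonymous IDs that map to many authenticated IDs (aliasing hotspots)
SELECT domain_userid, COUNT(DISTINCT user_id) AS authenticated_ids
FROM `my-gcp-project.snowplow.events`
WHERE user_id IS NOT NULL
GROUP BY domain_userid
ORDER BY authenticated_ids DESC
LIMIT 10
"""
for row in client.query(aliasing_sql).result():
    print(row.domain_userid, row.authenticated_ids)
```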

Assisted Modeling

mceclip11.png

You should see a summary of your data based on the last 7 days in three main blocks.

You should only be concerned if the discrepancy is significant; if so, please reach out to a product specialist.

  1. Events Summary
    You should see a daily breakdown of your Total Event Count, and the number of Unique Event Names. If there are certain events to exclude, please click on the Exclude checkbox for those events.

    If you would like to exclude any events by regex or property value, please contact a product specialist.

    If this section looks good, click Next

  2. Properties Summary
    Here you will see the number of Unique Event Property Names. If there are certain properties to exclude, please click on the Exclude checkbox for those properties.

    If you require more advanced configurations such as parsing out JSON fields, creating derived properties, or excluding properties based on regex, please contact a product specialist.

    If this section looks good, click Next

  3. Users Summary
    This section lists the number of Unique users seen. If the numbers do not look correct, please go back to the User Modeling section to confirm that the correct ID was chosen. Please note that the counts may not reflect exactly what gets loaded into Indicative due to aliasing and other event modeling configurations.

    If this section looks good, click Next
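If any of the summary numbers look off, a quick cross-check of the same 7-day window directly against your table can help before you contact a specialist. A sketch with placeholder names:

```python
# Sketch: cross-check the 7-day Assisted Modeling summary against the table.
# Table and field names are placeholders; match them to your model.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project ID

sql = """
SELECT
  DATE(derived_tstamp) AS day,
  COUNT(*) AS total_events,
  COUNT(DISTINCT COALESCE(se_action, event_name, event)) AS unique_event_names,
  COUNT(DISTINCT user_id) AS unique_users
FROM `my-gcp-project.snowplow.events`
WHERE derived_tstamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY day
ORDER BY day
"""

for row in client.query(sql).result():
    print(row.day, row.total_events, row.unique_event_names, row.unique_users)
```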

Waiting For Data
allset.png

If you see this screen, you're all done! You should see your data in Indicative within 48-72 hours, and you will be notified by email.
