
Instructions for Lab Members Performing Crawls


This page provides guidance for running and saving data from a crawl of our complete 11,708-site dataset. Performing a full crawl involves three main stages:

  • Crawling crawl-set-pt1.csv through crawl-set-pt8.csv (our crawl set divided into 8 batches)
  • Creating and crawling redo-sites.csv (which you'll generate based on the initial crawl results)
  • Parsing the crawled data and saving it to Google Drive

Below, we outline the steps involved in each stage.

Crawling the First 8 Batches

  1. Initial Steps:
    Before starting the crawl, ensure you've set up Docker and cloned the crawler repository.

    • Docker and Repository Installation
      If you haven't already, follow steps 1–4 in the README to install and initialize Docker and set up the gpc-web-crawler repository:

      • Install Docker Desktop.
      • Authenticate Docker.
      • Clone the repository.
  2. Navigate to the directory:
    In your Terminal, navigate to the gpc-web-crawler directory.

  3. Clean the directory:
    Remove previous crawl outputs if present:

    rm -rf crawl_results
  4. Check Docker stack status:
    Run the following command to verify that the Docker compose stack (gpc-web-crawler) isn't already running:

    make check-if-up
    • If the command prints true (the stack is running), run make stop to shut down the compose stack.
    • If it prints false (the stack is not running), proceed to the next step.
  5. Run batches 1 through 8:
    For each batch number n (where n is from 1 to 8), repeat these steps:

    1. Run:
      make start
    2. When prompted for a number between 1 and 8, enter your chosen batch number (n).
    3. Once the crawl is complete, shut down the compose stack:
      make stop
  6. Rename crawl_results to Crawl_Data_Mon_Year, where Mon and Year are the month and year, respectively, of the crawl (a scripted version of this step is sketched just after this list).

  7. Upload the renamed directory to the Web_Crawler folder in Google Drive.
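
If you prefer not to rename the folder by hand, the naming convention from step 6 can be scripted. This is a minimal sketch, not part of the official workflow; it assumes the crawl finished in the current month and that crawl_results sits in the current working directory.

    import shutil
    from datetime import date

    # Build the target name from the current month and year, e.g. Crawl_Data_April_2024.
    # Past crawl folders use both full and abbreviated month names (e.g. Crawl_Data_Dec_2023),
    # so match whichever convention the drive already uses.
    today = date.today()
    new_name = f"Crawl_Data_{today.strftime('%B')}_{today.year}"

    # Rename the crawl output; the renamed folder is what gets uploaded to Web_Crawler.
    shutil.move("crawl_results", new_name)
    print(f"Renamed crawl_results to {new_name}")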

Creating and crawling redo sites:

After completing the initial crawl of the 8 batches, the next step is to identify and crawl the redo sites. "Redo sites" are sites that produced an error and have a subdomain; for these sites, we attempt the crawl again without the subdomain.

  1. After the 8 batches are done, the redo sites need to be identified and run. The following colab will identify those sites (modify path1 to be the path to the new data). Two files will be created: redo-sites.csv and redo-original-sites.csv; the first contains the sites you will crawl again (i.e., without their subdomains), and the second contains the same sites with their subdomains (used to identify the exact sites later). Put both files into the equivalent of Crawl_Data_Dec_2023 in the drive. A sketch of the identification logic appears after this list.

  2. Crawl redo-sites.csv and save the usual data, substituting redo for pt1 in folder and file names. The redo data will override the initially collected data in the analysis colab.

  3. Run the well-known-collection.py script as described in the readme to collect the well-known data.

  4. Create a well-known folder in the equivalent of Crawl_Data_Dec_2023 in the drive. Put the well-known-data.csv and well-known-errors.json output files into this folder. Save the terminal output as Terminal Saved Output-wellknown.txt and put it into this folder.

    When in doubt, follow the file structure from previous crawls. All data and analysis code can be found here in the drive.
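
The colab linked in step 1 is the authoritative way to build the redo files, but the underlying logic is straightforward: keep every site that both logged an error and has a subdomain, then strip the subdomain for the re-crawl. The sketch below illustrates that logic under assumptions that may not match the colab exactly: it treats error-logging.json as a JSON object whose keys are the errored site URLs, it uses the tldextract package to detect subdomains, and it assumes crawl-set entries are https:// URLs.

    import csv
    import json

    import tldextract  # third-party package for splitting subdomain / domain / suffix

    # Assumption: error-logging.json maps errored site URLs to error details.
    with open("error-logging.json") as f:
        errored_sites = list(json.load(f).keys())

    redo_rows, original_rows = [], []
    for site in errored_sites:
        parts = tldextract.extract(site)
        if parts.subdomain:  # only redo sites that actually have a subdomain
            original_rows.append([site])  # the site as originally crawled
            redo_rows.append([f"https://{parts.domain}.{parts.suffix}"])  # same site without its subdomain

    with open("redo-sites.csv", "w", newline="") as f:
        csv.writer(f).writerows(redo_rows)
    with open("redo-original-sites.csv", "w", newline="") as f:
        csv.writer(f).writerows(original_rows)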

Parsing/analyzing crawl data:

After the full crawl is done and the data is saved in the correct format, parse the data using this colab. The parsed data will appear in this Google Sheet. Graphs for that month can be created by running this colab. Graphs comparing data from multiple crawls can be created using this colab. Figures are automatically saved to this folder. This colab serves as a library for the other colabs.

Google Drive Web_Crawler directories and files:

  • crawl-set-pt1.csv through crawl-set-pt8.csv: The crawl set divided into 8 batches.
  • Crawl_Data_Month_Year (e.g., Crawl_Data_April_2024): Folders with the results of our past crawls.
  • Crawl_Data: A file that compiles all the crawl data accumulated over the series of crawls (a compiled version of the Crawl_Data_Month_Year folders).
  • sites_with_GPP: A file that collates all the sites with GPP (as of December 2023); this analysis is now reflected in a figure in Processing_Analysis_Data.
  • Ad_Network_Analysis: A file with the results of the manual analysis of up to 70 ad networks' privacy policies.
  • Web_Crawl_Domains_2023: A file that collates detailed information about the sites in our crawl set (i.e., their ad networks, contact information, and Tranco ranks).
  • Collecting_Sites_To_Crawl: A folder with files that explain and justify our methodology and process for collecting the sites to crawl (ReadMe and Methodology).
  • similarweb: A folder with our analysis that processes the SimilarWeb data and determines which Tranco rank would have sufficient traffic to be subject to the CCPA.
  • GPC_Detection_Performance: A folder of ground truth data collected on validation sets of sites for verifying the USPS and GPP strings via the USPAPI value, OptanonConsent cookie, and GPP string value, each before and after sending a GPC signal.
  • Processing_Analysis_Data: A folder with all the colabs for parsing, processing, and analyzing the crawl results, along with the figures created from the analysis.

What to do if a crawl fails in the middle:

If a crawl fails in the middle of a batch (i.e., completely stops running), restart it from where it left off. To do so, change the following line in local-crawler.js from for (let site_id in sites) { to for (let site_id = x; site_id < sites.length; site_id++) { where x is the last crawled site_id + 1 (i.e., the first site_id you want to crawl); you can determine the last crawled site_id by looking in the analysis database. Before you start crawling again, rename the existing error-logging.json to any other file name; otherwise, you will lose all errors recorded up until the crawl failed, and these errors are necessary for parsing in the colabs (error-logging.json is overwritten with the error object stored in local-crawler.js every time there is an error). After the whole batch has successfully completed, manually merge the error JSON files so that all errors for the batch are in error-logging.json.
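
Merging the error files can be done by hand, but since each one is just a JSON object, a short script also works. A minimal sketch, assuming each file holds a flat JSON object of errors; error-logging-prefail.json stands in for whatever name you gave the renamed pre-failure file.

    import json

    # The renamed pre-failure log plus the log written after the restart.
    part_files = ["error-logging-prefail.json", "error-logging.json"]

    merged = {}
    for path in part_files:
        with open(path) as f:
            # Later files win on duplicate keys, which should not happen if the
            # restart began right after the last successfully crawled site.
            merged.update(json.load(f))

    # Write all errors for the batch back into error-logging.json.
    with open("error-logging.json", "w") as f:
        json.dump(merged, f, indent=4)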

General info about data analysis in the colabs:

GPP String decoding:

The GPP String encoding/decoding process is described by the IAB here. The IAB has a website to decode and encode GPP strings. This is helpful for spot checking and is the quickest way to encode/decode single GPP strings. They also have a JS library to encode and decode GPP strings on websites. Because we cannot directly use this library to decode GPP strings in Python, we converted the JS library to Python and use that for decoding (Python library found here). The Python library will need to be updated when the IAB adds more sections to the GPP string. More information on updating the Python library and why we use it can be found in issue 89. GPP strings are automatically decoded using the Python library in the colabs.
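
For quick sanity checks outside the colabs, the '~'-separated structure and the header's first two fields can be read with the standard library alone; decoding the full section payloads still requires the converted Python library. Below is a minimal sketch based on the IAB spec (the example string is illustrative only).

    import base64

    def gpp_header_info(gpp_string):
        """Split a GPP string on '~' and read the type and version fields
        (the first two 6-bit integers) from the base64url-encoded header."""
        sections = gpp_string.split("~")
        header = sections[0]
        raw = base64.urlsafe_b64decode(header + "=" * (-len(header) % 4))
        bits = "".join(f"{byte:08b}" for byte in raw)
        return {
            "type": int(bits[0:6], 2),          # always 3 for a GPP string
            "version": int(bits[6:12], 2),
            "num_sections": len(sections) - 1,  # payload sections after the header
        }

    # Example: a header followed by one (truncated) section payload.
    print(gpp_header_info("DBABMA~CPXxRfAPXxRfAAfKABENB"))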
