jloomisVCE edited this page Oct 22, 2020 · 18 revisions

Sample and Index against layers

Introduction

In this page we document how to:

  • Sample occurrences against layers in biocache (ie. add sampling data to Cassandra).
  • Index biocache sampling data in the biocache SOLR core.

This process depends on specific steps being applied in the correct order. It is easy to miss or bungle a step, and it can be hard to tell which one caused a problem, so testing the outcome of each step helps with troubleshooting. Some background on system configuration also gives the administrator an overview of the process, and will hopefully help in debugging issues as they arise. To that end, a synopsis:

  • Sampling takes each occurrence in biocache (the Cassandra DB) that has geospatial data (lat, lng) and intersects it with each properly-configured layer. The outcome of successful sampling for a single occurrence is a value stored in the 'cl_p' column of the 'occ' table in biocache. The value of cl_p will look something like this:

    cl_p | {"cl100001":"England", "cl100002":"North Yorkshire", "cl100003":"Beast Cliff", "cl100004":"OV 0000"}

    where JSON keys like "cl100001" are the layers' field IDs, which you identified when you configured layers in the spatial portal. View field IDs directly with e.g. https://spatial.l-a.site/ws/manageLayers/field/cl100001
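To illustrate, the cl_p value is a JSON object mapping layer field IDs to the values sampled at the occurrence's coordinates, and can be decoded as below (a minimal sketch; the field IDs and values are taken from the example above):

```python
import json

# Example cl_p value as stored in the 'occ' table (from the sample above)
cl_p = '{"cl100001":"England", "cl100002":"North Yorkshire", "cl100003":"Beast Cliff", "cl100004":"OV 0000"}'

# Each key is a layer field ID configured in the spatial portal;
# each value is the attribute sampled for this occurrence's lat/lng.
sampled = json.loads(cl_p)
for field_id, value in sampled.items():
    print(field_id, "->", value)
```

One sampled value per configured contextual layer ends up in this single column, which is why a missing layer configuration shows up as a missing key here.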

We'll sample contextual and raster data against some added layers.

  1. Check that the biocache-store configuration points to the sampling URL:
    1. $ cd /data/biocache/config
    2. $ more biocache-config.properties
  2. Look for spatial.layers.url=http://spatial.l-a.site/ws/fields
    1. You can also check this by connecting via ssh to the livingatlas-demo server you are using and issuing the command biocache config | grep -e ".*layers.*url"
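As a self-contained sketch of the check above (it writes a throwaway copy of the file; on a real server you would grep /data/biocache/config/biocache-config.properties or the output of biocache config directly):

```shell
# Create a throwaway properties file mimicking biocache-config.properties
# (illustrative only -- the real file lives in /data/biocache/config)
cat > /tmp/biocache-config.properties <<'EOF'
spatial.layers.url=http://spatial.l-a.site/ws/fields
EOF

# The same grep works against the real file or `biocache config` output
grep -e ".*layers.*url" /tmp/biocache-config.properties
# prints: spatial.layers.url=http://spatial.l-a.site/ws/fields
```

If the grep returns nothing, sampling will silently have no layers to work with, so fix this before loading data.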
  3. Load a DwCA into the collectory. If you are just testing, choose a small dataset (<50k records) for speed, preferably mammals (as this affects a later taxonomy-related step in the documentation).
    1. For IPT users:
      1. Start here: http://collections.l-a.site/admin/
      2. Create a data provider and point at IPT instance by setting website URL to IPT URL e.g. https://ipt.gbif.es
      3. Click “Update data resources” button
      4. Note: check the unique fields. Typical values are catalogNumber or occurrenceID. The default is catalogNumber
      5. Find a UID e.g. dr123 to load
    2. For Non-IPT users:
      1. Start here: http://collections.l-a.site/admin/
      2. Create a data resource
      3. Upload your DwCA
      4. Note: check the unique fields. Typical values are catalogNumber or occurrenceID. The default is catalogNumber
  4. Load the DwCA into the biocache using the command-line tool
    1. Use the command biocache load dr123
    2. Validate the data has been loaded using the Cassandra command-line tool cqlsh.
      1. Connect to the occ keyspace with use occ;
      2. Run a spot-check query such as select * from occ limit 10;
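The validation step can also be scripted by writing the queries to a file and feeding it to cqlsh with its -f flag (a sketch; the final cqlsh invocation is commented out since it assumes a reachable Cassandra node):

```shell
# Write the spot-check queries to a file. A LIMIT keeps the query
# cheap even once the occ table holds many records.
cat > /tmp/check-occ.cql <<'EOF'
USE occ;
SELECT * FROM occ LIMIT 10;
EOF

# On a server where cqlsh can reach Cassandra, run:
# cqlsh -f /tmp/check-occ.cql

# Show what would be executed
cat /tmp/check-occ.cql
```

Seeing your dr123 rows (and, after the sampling step, populated cl_p values) confirms the load worked.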
  5. Process the data resource - Use the command biocache process -dr dr123
  6. Sampling - Use the command biocache sample -dr dr123
  7. Indexing - Use the command biocache index -dr dr123
  8. Test the indexing was successful by:
    1. Viewing the SOLR admin console at http://index.l-a.site:8983 (see the solr admin interface page for tips on accessing this)
    2. View the results in biocache services
      1. http://biocache.l-a.site/occurrences/search?q=*:*
    3. Test with an Area Report in the Spatial Portal
      1. Search for Gazetteer Polygon e.g. “Queensland”
      2. Tools > Area Report - and follow wizard
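The check in step 8 can be done programmatically by querying the biocache web services and inspecting totalRecords (a sketch: the host follows the examples above, and both the URL and the record count shown here are illustrative, with the response assumed to be the standard biocache-service JSON carrying a top-level totalRecords field):

```python
import json

# On a live system you would fetch the response with urllib, e.g.:
#   from urllib.request import urlopen
#   body = urlopen("http://biocache.l-a.site/ws/occurrences/search?q=*:*&pageSize=0").read()
# Here we parse a hypothetical response to show the check itself.
body = '{"totalRecords": 48213, "occurrences": []}'

response = json.loads(body)
# A zero count after indexing means the index step did not take effect
assert response["totalRecords"] > 0, "index appears empty - re-run biocache index"
print("indexed records:", response["totalRecords"])
```

A non-zero totalRecords after running biocache index confirms the SOLR core is populated; comparing it with the row count seen in cqlsh is a quick sanity check that nothing was dropped.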