
Service Desk Search Application

A dockerized application with an Elasticsearch backend and a Zen-style Alpine.js frontend for fast querying of service desk tickets by context, owner, and agent.

Features

  • Fast Search: Full-text search across ticket titles and conversation messages
  • Filter by Owner/Agent: Dropdown filters for specific users
  • Context Search: Search within conversation content
  • Relevance Scoring: Results ranked by Elasticsearch relevance
  • Zen-style UI: Clean, minimalist interface with smooth animations
  • Real-time Stats: Displays total tickets, unique owners, and agents
  • Responsive Design: Works on desktop and mobile devices

Architecture

  • Backend: Flask API with Elasticsearch integration
  • Frontend: Alpine.js with vanilla CSS (no frameworks)
  • Database: Elasticsearch for fast full-text search
  • Containerization: Docker Compose for easy deployment

Quick Start

  1. Clone the repository and navigate into it:

    git clone https://github.com/wtsi-hgi/tickets-search.git
    cd tickets-search
  2. Start the application:

    make up

    Or manually:

    docker-compose up -d
  3. Access the application: Open your browser and go to http://localhost:3000

Using the Makefile

The project includes a simplified Makefile for essential operations:

# Show all available commands
make help

# Basic operations
make up          # Start the application
make down        # Stop the application
make restart     # Restart the application
make logs        # Show application logs

# Testing and monitoring
make test        # Test the application
make health      # Check application health

# Development
make dev         # Start development mode with auto-reload
make dev-stop    # Stop development mode

# Cleanup
make clean       # Stop and remove everything

Development Mode

For frontend development with auto-reload:

# Start development mode
make dev

# This will:
# - Start Elasticsearch in the background
# - Mount your local files for auto-reload
# - Start the web service with debug mode
# - Automatically restart when you change files

Data Ingestion

The application automatically ingests all JSON files from the data/ directory. Each file should contain a service desk ticket with the following structure:

{
  "ticket_id": "521549",
  "title": "Ticket Title",
  "owner": {
    "name": "Owner Name",
    "email": "owner@example.com"
  },
  "agent": {
    "name": "Agent Name", 
    "email": "agent@example.com"
  },
  "conversations": [
    {
      "user": "user@example.com",
      "date": "2016-03-18T14:16:58",
      "message": "Message content..."
    }
  ]
}
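
As a rough illustration, ingestion boils down to loading each JSON file and indexing it as one document. Below is a minimal sketch using the elasticsearch Python client, assuming an index named tickets; the project's own ingestion service is the authoritative implementation.

import json
from pathlib import Path

from elasticsearch import Elasticsearch

# Minimal ingestion sketch: index every JSON file in data/ as one document.
# The index name "tickets" is an assumption, not necessarily what the app uses.
es = Elasticsearch("http://localhost:9200")

for path in sorted(Path("data").glob("*.json")):
    ticket = json.loads(path.read_text())
    ticket["source_file"] = path.name  # origin annotation (see Jira Tickets)
    es.index(index="tickets", id=ticket["ticket_id"], document=ticket)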

API Endpoints

Search Tickets

GET /api/search?q=query&page=1&per_page=20&include_jira=1

Parameters:

  • q: Full-text search query (title and conversation messages)
  • page: Page number (1-indexed)
  • per_page: Results per page (default 20)
  • include_jira: 1 to include Jira tickets, 0 to exclude
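
For example, a search can be issued from Python with the requests library (host and port as in the Quick Start; the query string is illustrative):

import requests

# Query the search endpoint of a locally running instance.
resp = requests.get(
    "http://localhost:3000/api/search",
    params={"q": "quota increase", "page": 1, "per_page": 20, "include_jira": 1},
)
resp.raise_for_status()
print(resp.json())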

Get Statistics

GET /api/stats

Returns total tickets, unique owners, and unique agents.

Get Owners List

GET /api/owners

Returns list of all ticket owners.

Get Agents List

GET /api/agents

Returns list of all agents.

Jira Tickets

  • The dataset can include both RT and Jira tickets. Jira tickets can be included or excluded via the include_jira query parameter of /api/search.
  • To fetch Jira tickets for the HT project into data/, use:
    make jira-fetch
    This uses credentials from a local .env file (not committed). Expected variables include JIRA_USERNAME and JIRA_API_TOKEN.
  • During ingestion, each document is annotated with source_file to indicate its origin (e.g. files prefixed with jira_); a filtering sketch follows.
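
A sketch of how include_jira=0 could translate to an Elasticsearch filter on the source_file annotation. This is an assumption about the backend's internals, not its actual code; index and field names follow the sketches elsewhere in this README.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Exclude documents whose source_file starts with "jira_". Assumes source_file
# is indexed as a keyword field.
response = es.search(
    index="tickets",
    query={"bool": {"must_not": [{"prefix": {"source_file": "jira_"}}]}},
)
print(response["hits"]["total"])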

Support

  • For issues and feature requests, please open a GitHub issue on this repository.
  • Operational questions (data ingestion, Elasticsearch, or deployment): contact HGI Service Desk via your usual RT channel, or the HGI Slack #help-hgi channel.
  • Urgent incidents: escalate via the HGI on-call process.

Search Features

Full-Text Search

  • Searches across ticket titles and conversation messages
  • Uses the English analyzer for better relevance
  • Supports fuzzy matching for typos
  • Title matches are weighted higher than message matches (see the query sketch below)
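
The exact query is defined in the Flask backend; a query with the properties above could look roughly like this, where the index and field names are assumptions based on the ticket structure shown under Data Ingestion:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Sketch of a query with the listed properties: the title clause is boosted
# over conversation messages, and "fuzziness" tolerates typos. Conversations
# are queried through a nested clause, matching the nested mapping.
query = {
    "bool": {
        "should": [
            {"match": {"title": {"query": "disk quota", "boost": 2.0, "fuzziness": "AUTO"}}},
            {
                "nested": {
                    "path": "conversations",
                    "query": {
                        "match": {
                            "conversations.message": {"query": "disk quota", "fuzziness": "AUTO"}
                        }
                    },
                }
            },
        ]
    }
}
response = es.search(index="tickets", query=query)
for hit in response["hits"]["hits"]:
    print(hit["_score"], hit["_source"]["title"])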

Filtering

  • Filter by specific owner or agent
  • Combine filters with text search
  • Real-time dropdown population from indexed data

Relevance Scoring

  • Results ranked by Elasticsearch relevance score
  • Score displayed for each result
  • Higher scores indicate better matches

Development

Local Development

  1. Install dependencies:

    pip install -r requirements.txt
  2. Start Elasticsearch (requires Docker):

    docker run -d --name elasticsearch -p 9200:9200 -e "discovery.type=single-node" -e "xpack.security.enabled=false" docker.elastic.co/elasticsearch/elasticsearch:8.11.0
  3. Run the application:

    python app.py

Data Ingestion Only

To ingest data without running the web server:

docker-compose run data-ingestion

Rebuilding

To rebuild the application after changes:

docker-compose down
docker-compose build --no-cache
docker-compose up -d

Configuration

Environment Variables

  • ELASTICSEARCH_URL: Elasticsearch connection URL (default: http://localhost:9200)
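
In Python this is typically read with a fallback to the documented default; the actual app.py may structure this differently:

import os

# Read the connection URL, falling back to the documented default.
ELASTICSEARCH_URL = os.environ.get("ELASTICSEARCH_URL", "http://localhost:9200")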

Elasticsearch Settings

The application creates an index with the following optimizations (a mapping sketch follows the list):

  • Single shard: Optimized for small to medium datasets
  • English analyzer: Better text analysis for English content
  • Nested conversations: Proper handling of conversation arrays
  • Keyword fields: Exact matching for emails and IDs
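
Put together, index creation might look roughly like the sketch below, reconstructed from the points above; the real settings and mappings live in the application code and may differ in detail.

from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Reconstructed sketch of the settings/mappings implied by the list above.
es.indices.create(
    index="tickets",
    settings={"number_of_shards": 1},  # single shard for small/medium datasets
    mappings={
        "properties": {
            "ticket_id": {"type": "keyword"},  # exact matching for IDs
            "source_file": {"type": "keyword"},  # origin annotation
            "title": {"type": "text", "analyzer": "english"},
            "owner": {
                "properties": {
                    "name": {"type": "text"},
                    "email": {"type": "keyword"},  # exact matching for emails
                }
            },
            "agent": {
                "properties": {
                    "name": {"type": "text"},
                    "email": {"type": "keyword"},
                }
            },
            "conversations": {
                "type": "nested",  # proper handling of conversation arrays
                "properties": {
                    "user": {"type": "keyword"},
                    "date": {"type": "date"},
                    "message": {"type": "text", "analyzer": "english"},
                },
            },
        }
    },
)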

Troubleshooting

Elasticsearch Connection Issues

  1. Check if Elasticsearch is running:

    curl http://localhost:9200
  2. Check container logs:

    docker-compose logs elasticsearch

Data Ingestion Issues

  1. Check data format in JSON files
  2. Verify file permissions
  3. Check ingestion logs:
    docker-compose logs data-ingestion

Performance Optimization

For large datasets:

  1. Increase Elasticsearch memory:

    environment:
      - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
  2. Add more shards in index settings

  3. Use bulk indexing for faster ingestion (see the sketch below)
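
For bulk indexing, the elasticsearch.helpers.bulk helper batches documents instead of indexing them one at a time. A sketch, with index and field names as assumed earlier:

import json
from pathlib import Path

from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

def ticket_actions():
    # Yield one bulk action per JSON file in data/.
    for path in Path("data").glob("*.json"):
        ticket = json.loads(path.read_text())
        yield {"_index": "tickets", "_id": ticket["ticket_id"], "_source": ticket}

indexed, errors = bulk(es, ticket_actions())
print(f"Indexed {indexed} tickets")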

License

This project is open source and available under the MIT License.
