A Dockerized application with an Elasticsearch backend and a Zen-style Alpine.js frontend for fast querying of service desk tickets by context, owner, and agent.
- Fast Search: Full-text search across ticket titles and conversation messages
- Filter by Owner/Agent: Dropdown filters for specific users
- Context Search: Search within conversation content
- Relevance Scoring: Results ranked by Elasticsearch relevance
- Zen-style UI: Clean, minimalist interface with smooth animations
- Real-time Stats: Display total tickets, unique owners, and agents
- Responsive Design: Works on desktop and mobile devices
- Backend: Flask API with Elasticsearch integration
- Frontend: Alpine.js with vanilla CSS (no frameworks)
- Database: Elasticsearch for fast full-text search
- Containerization: Docker Compose for easy deployment
- Clone and navigate to the project:

  ```bash
  cd /path/to/project
  ```

- Start the application:

  ```bash
  make up
  ```

  Or manually:

  ```bash
  docker-compose up -d
  ```

- Access the application: open your browser and go to http://localhost:3000
The project includes a simplified Makefile for essential operations:
```bash
# Show all available commands
make help

# Basic operations
make up        # Start the application
make down      # Stop the application
make restart   # Restart the application
make logs      # Show application logs

# Testing and monitoring
make test      # Test the application
make health    # Check application health

# Development
make dev       # Start development mode with auto-reload
make dev-stop  # Stop development mode

# Cleanup
make clean     # Stop and remove everything
```
For frontend development with auto-reload:
```bash
# Start development mode
make dev
```

This will:

- Start Elasticsearch in the background
- Mount your local files for auto-reload
- Start the web service with debug mode enabled
- Automatically restart when you change files
The application automatically ingests all JSON files from the `data/` directory. Each file should contain a service desk ticket with the following structure:

```json
{
  "ticket_id": "521549",
  "title": "Ticket Title",
  "owner": {
    "name": "Owner Name",
    "email": "owner@example.com"
  },
  "agent": {
    "name": "Agent Name",
    "email": "agent@example.com"
  },
  "conversations": [
    {
      "user": "user@example.com",
      "date": "2016-03-18T14:16:58",
      "message": "Message content..."
    }
  ]
}
```
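As a rough sketch, ingesting files with this layout might look like the following (assuming the official `elasticsearch` Python client and an index named `tickets`; the real index name and ingestion script live in the project code):

```python
import json
import os
from pathlib import Path

from elasticsearch import Elasticsearch

# Connection URL is configurable via ELASTICSEARCH_URL (see Configuration).
es = Elasticsearch(os.environ.get("ELASTICSEARCH_URL", "http://localhost:9200"))

for path in Path("data").glob("*.json"):
    ticket = json.loads(path.read_text())
    # Each document is annotated with its origin file (used to recognise Jira tickets).
    ticket["source_file"] = path.name
    es.index(index="tickets", id=ticket["ticket_id"], document=ticket)
```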
`GET /api/search?q=query&page=1&per_page=20&include_jira=1`

Parameters:

- `q`: full-text search query (matches titles and conversation messages)
- `page`: page number (1-indexed)
- `per_page`: results per page (default 20)
- `include_jira`: `1` to include Jira tickets, `0` to exclude them
`GET /api/stats`

Returns total tickets, unique owners, and unique agents.

`GET /api/owners`

Returns a list of all ticket owners.

`GET /api/agents`

Returns a list of all agents.
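For example, from Python (the query text and host are illustrative; adjust to your deployment):

```python
import requests

BASE = "http://localhost:3000"

# Full-text search, first page, excluding Jira tickets.
resp = requests.get(
    f"{BASE}/api/search",
    params={"q": "password reset", "page": 1, "per_page": 20, "include_jira": 0},
)
resp.raise_for_status()
results = resp.json()

# Index-wide counts: total tickets, unique owners, unique agents.
stats = requests.get(f"{BASE}/api/stats").json()
```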
- The dataset can include both RT and Jira tickets. Jira tickets can be optionally included or excluded via the `include_jira` query parameter on `/api/search`.
- To fetch Jira tickets for the HT project into `data/`, use `make jira-fetch` (see the sketch below). This uses credentials from a local `.env` file (not committed); expected variables include `JIRA_USERNAME` and `JIRA_API_TOKEN`.
- During ingestion, each document is annotated with `source_file` to indicate its origin (e.g. files prefixed with `jira_`).
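For reference, `make jira-fetch` talks to the Jira REST API; a minimal sketch of that kind of fetch (the Jira base URL and output file naming here are assumptions, not the project's exact script):

```python
import json
import os

import requests

# Credentials come from the local .env file (not committed).
auth = (os.environ["JIRA_USERNAME"], os.environ["JIRA_API_TOKEN"])

# Hypothetical Jira base URL; the real instance is configured in the project.
resp = requests.get(
    "https://jira.example.com/rest/api/2/search",
    params={"jql": "project = HT", "maxResults": 100},
    auth=auth,
)
resp.raise_for_status()

for issue in resp.json()["issues"]:
    # Files prefixed with jira_ are recognised as Jira tickets at ingestion time.
    with open(f"data/jira_{issue['key']}.json", "w") as f:
        json.dump(issue, f)
```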
- For issues and feature requests, please open a GitHub issue on this repository.
- Operational questions (data ingestion, Elasticsearch, or deployment): contact HGI Service Desk via your usual RT channel, or the `#help-hgi` channel on the HGI Slack.
- Urgent incidents: escalate via the HGI on-call process.
- Full-text search: matches ticket titles and conversation messages, uses the English analyzer for better relevance, and supports fuzzy matching for typos; title matches are weighted higher than message matches.
- Filtering: filter by a specific owner or agent and combine filters with text search; dropdowns are populated in real time from the indexed data.
- Relevance scoring: results are ranked by Elasticsearch relevance score, which is displayed for each result; higher scores indicate better matches (see the query sketch below).
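Taken together, these behaviours correspond to a query along the following lines (a sketch in the Elasticsearch query DSL as a Python dict; the field names, boost value, and example text are assumptions, not the application's exact query):

```python
query = {
    "bool": {
        "should": [
            # Title matches are boosted; fuzziness tolerates typos.
            {"match": {"title": {"query": "printer jam", "boost": 2, "fuzziness": "AUTO"}}},
            # Conversation messages live in nested documents.
            {
                "nested": {
                    "path": "conversations",
                    "query": {
                        "match": {
                            "conversations.message": {
                                "query": "printer jam",
                                "fuzziness": "AUTO",
                            }
                        }
                    },
                }
            },
        ],
        "minimum_should_match": 1,
        # Optional owner/agent filters combine with the text search
        # without affecting the relevance score.
        "filter": [{"term": {"owner.email": "owner@example.com"}}],
    }
}
```

Each hit's `_score` is the relevance value displayed next to a result.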
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Start Elasticsearch (requires Docker):

  ```bash
  docker run -d --name elasticsearch -p 9200:9200 \
    -e "discovery.type=single-node" -e "xpack.security.enabled=false" \
    docker.elastic.co/elasticsearch/elasticsearch:8.11.0
  ```

- Run the application:

  ```bash
  python app.py
  ```
To ingest data without starting the web server:

```bash
docker-compose run data-ingestion
```

To rebuild the application after changes:

```bash
docker-compose down
docker-compose build --no-cache
docker-compose up -d
```
- `ELASTICSEARCH_URL`: Elasticsearch connection URL (default: `http://localhost:9200`)
The application creates an index with the following optimizations:
- Single shard: Optimized for small to medium datasets
- English analyzer: Better text analysis for English content
- Nested conversations: Proper handling of conversation arrays
- Keyword fields: Exact matching for emails and IDs
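A sketch of what such an index definition can look like (illustrative, assuming the `elasticsearch` Python client and an index named `tickets`, not the application's exact mapping):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

es.indices.create(
    index="tickets",
    settings={"number_of_shards": 1, "number_of_replicas": 0},
    mappings={
        "properties": {
            "ticket_id": {"type": "keyword"},  # exact matching for IDs
            "title": {"type": "text", "analyzer": "english"},
            "owner": {
                "properties": {
                    "name": {"type": "text"},
                    "email": {"type": "keyword"},  # exact matching for emails
                }
            },
            "agent": {
                "properties": {
                    "name": {"type": "text"},
                    "email": {"type": "keyword"},
                }
            },
            "conversations": {
                "type": "nested",  # proper handling of conversation arrays
                "properties": {
                    "user": {"type": "keyword"},
                    "date": {"type": "date"},
                    "message": {"type": "text", "analyzer": "english"},
                },
            },
        }
    },
)
```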
If Elasticsearch is not responding:

- Check whether it is running:

  ```bash
  curl http://localhost:9200
  ```

- Check the container logs:

  ```bash
  docker-compose logs elasticsearch
  ```

If data is not being ingested:

- Check the data format in your JSON files
- Verify file permissions
- Check the ingestion logs:

  ```bash
  docker-compose logs data-ingestion
  ```
For large datasets:

- Increase Elasticsearch memory in the Compose service definition:

  ```yaml
  environment:
    - "ES_JAVA_OPTS=-Xms1g -Xmx1g"
  ```

- Add more shards in the index settings
- Use bulk indexing for faster ingestion (see the sketch below)
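A sketch of bulk ingestion with the client's `helpers.bulk`, in contrast to indexing documents one at a time (index name and paths are illustrative):

```python
import json
from pathlib import Path

from elasticsearch import Elasticsearch, helpers

es = Elasticsearch("http://localhost:9200")

def actions():
    # Yield one bulk action per JSON file instead of one request per document.
    for path in Path("data").glob("*.json"):
        ticket = json.loads(path.read_text())
        ticket["source_file"] = path.name
        yield {"_index": "tickets", "_id": ticket["ticket_id"], "_source": ticket}

# A single bulk call batches many index operations into few HTTP requests.
helpers.bulk(es, actions())
```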
This project is open source and available under the MIT License.