If you are looking for a simple challenge configuration that you can replicate to create a challenge on EvalAI, then you are in the right place. Follow the instructions given below to get started.
```
.
├── README.md
├── annotations                             # Contains the annotations for dataset splits
│   ├── test_annotations_devsplit.json      # Annotations of dev split
│   └── test_annotations_testsplit.json     # Annotations for test split
├── challenge_data                          # Contains scripts to test the evaluation script locally
│   ├── challenge_1                         # Contains evaluation script for the challenge
│   │   ├── __init__.py                     # Imports the main.py file for evaluation
│   │   └── main.py                         # Challenge evaluation script
│   └── __init__.py                         # Imports the modules which involve evaluation script loading
├── challenge_config.yaml                   # Configuration file to define challenge setup
├── evaluation_script                       # Contains the evaluation script
│   ├── __init__.py                         # Imports the modules that involve annotations loading etc.
│   └── main.py                             # Contains the main `evaluate()` method
├── logo.jpg                                # Logo image of the challenge
├── submission.json                         # Sample submission file
├── templates                               # Contains challenge related HTML templates
│   ├── challenge_phase_1_description.html  # Challenge Phase 1 description template
│   ├── challenge_phase_2_description.html  # Challenge Phase 2 description template
│   ├── description.html                    # Challenge description template
│   ├── evaluation_details.html             # Describes how submissions will be evaluated for each challenge phase
│   ├── submission_guidelines.html          # Describes how to make submissions to the challenge
│   └── terms_and_conditions.html           # Terms and conditions of the challenge
└── worker                                  # Contains the scripts to test the evaluation script locally
    ├── __init__.py                         # Imports the module that involves loading the evaluation script
    └── run.py                              # Contains the code to run the evaluation locally
```
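For context, `evaluation_script/main.py` must expose an `evaluate()` method that EvalAI calls for every submission. Below is a minimal sketch assuming the signature described in the EvalAI evaluation-script documentation; the `dev` codename, `train_split` key, and metric names are placeholders that must match the phases and dataset splits you define in `challenge_config.yaml`:

```python
import random


def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
    """Called by EvalAI for every submission made to the challenge.

    test_annotation_file: path to the annotation file of the current phase
    user_submission_file: path to the file submitted by the participant
    phase_codename: codename of the phase, as defined in challenge_config.yaml
    """
    output = {}
    if phase_codename == "dev":  # placeholder codename
        # Compare the submission against the annotations and compute real
        # metrics here; the random scores below are placeholders only.
        output["result"] = [
            {"train_split": {"Metric1": random.random(), "Total": random.random()}}
        ]
        # Scores shown to the participant on the My Submissions page
        output["submission_result"] = output["result"][0]["train_split"]
    return output
```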
1. Use this repository as a template.
2. Generate your GitHub personal access token and copy it to your clipboard.
3. Add the GitHub personal access token to the forked repository's secrets with the name `AUTH_TOKEN`.
4. Now, go to EvalAI to fetch the following details:
   1. `evalai_user_auth_token` - Go to your profile page after logging in and click on `Get your Auth Token` to copy your auth token.
   2. `host_team_pk` - Go to the host team page and copy the `ID` of the team you want to use for challenge creation.
   3. `evalai_host_url` - Use `https://eval.ai` for the production server and `https://staging.eval.ai` for the staging server.
5. Create a branch named `challenge` in the forked repository from the `master` branch. Note: Only changes in the `challenge` branch will be synchronized with the challenge on EvalAI.
6. Add `evalai_user_auth_token` and `host_team_pk` in `github/host_config.json` (a sample of this file is shown after this list).
7. Read the EvalAI challenge creation documentation to decide how you want to structure your challenge. Once you are ready, make changes in the YAML file, HTML templates, and evaluation script according to your needs.
8. Commit the changes, push the `challenge` branch to the repository, and wait for the build to complete. View the logs of your build.
9. If the challenge config contains errors, an issue will be opened automatically in the repository listing the errors; otherwise, the challenge will be created on EvalAI.
10. Go to Hosted Challenges to view your challenge. The challenge will be publicly available once an EvalAI admin approves it.
11. To update the challenge on EvalAI, make changes in the repository, push them to the `challenge` branch, and wait for the build to complete.
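For reference, here is a sketch of `github/host_config.json` after step 6. The `token`, `team_pk`, and `evalai_host_url` key names are assumed from the template shipped with this repository, so check the file in your fork for the exact keys:

```json
{
    "token": "<evalai_user_auth_token>",
    "team_pk": "<host_team_pk>",
    "evalai_host_url": "https://eval.ai"
}
```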
To add custom dependency packages to the evaluation script, refer to this guide.
To test the evaluation script locally before uploading it to the EvalAI server, please follow the instructions below:

1. Copy the evaluation script, i.e. `__init__.py`, `main.py`, and any other relevant files, from the `evaluation_script/` directory to the `challenge_data/challenge_1/` directory.
2. Now, edit the `challenge_phase` name, `annotation file` name, and `submission file` name in the `worker/run.py` file to the challenge phase codename you want to test, the annotation file name in the `annotations/` folder (for that specific phase), and the corresponding submission file, respectively (see the sketch after this list).
3. Run the command `python -m worker.run` from the directory where the `annotations/`, `challenge_data/`, and `worker/` directories are present. If the command runs successfully, the evaluation script works locally and will work on the server as well.
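For step 2, the values to edit in `worker/run.py` look roughly like the following. The variable names and file paths here are illustrative, so match them against your copy of the file:

```python
# worker/run.py (excerpt) -- names and paths are illustrative
challenge_phase = "dev"                                          # phase codename from challenge_config.yaml
annotation_file = "annotations/test_annotations_devsplit.json"   # annotation file for that phase
submission_file = "submission.json"                              # submission file to evaluate
```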
Use this when you want to test everything against a local EvalAI server before pushing to the real site.
1. Spin up EvalAI locally:

   ```
   cd <path-to-EvalAI>
   docker-compose up --build   # --build is only needed the first time or after code changes
   ```

   The backend API is served at `http://localhost:8000` and the frontend at `http://localhost:8888`.

2. Register a self-hosted runner:
   - In your repository, go to Settings ▸ Actions ▸ Runners ▸ New self-hosted runner.
   - Select your architecture and paste the commands shown to install the packages and configure the runner on your local machine.
   - If you want to reconfigure a pre-existing runner for a new repository:
     - Go to Runners ▸ open the menu ▸ Remove runner, then paste the command shown in your local terminal to detach it.
     - Then follow the two steps above to configure the runner for the new repository.
3. Point `host_config.json` to localhost (see the sketch after this list). `host.docker.internal` is Docker's built-in hostname that points to the Docker host (your machine), and `8000` is the port where the backend API of the local EvalAI server runs.
4. Create (or switch to) the `challenge` branch locally and commit your config / template / script changes there, just as you would when creating a challenge using GitHub.
5. Verify the result: go to Hosted Challenges on your local server and confirm that your challenge appears and renders correctly.
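Here is a sketch of `host_config.json` pointed at the local server, again assuming the `token` / `team_pk` / `evalai_host_url` key names; use the auth token and host team ID from your local EvalAI instance:

```json
{
    "token": "<evalai_user_auth_token from your local instance>",
    "team_pk": "<host_team_pk from your local instance>",
    "evalai_host_url": "http://host.docker.internal:8000"
}
```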
The `host_config.json` file includes default placeholders like `<evalai_user_auth_token>`, `<host_team_pk>`, and `<evalai_host_url>`. Please replace them with real values before pushing changes to avoid build errors.
Please feel free to open issues on our GitHub repository or contact us at team@cloudcv.org if you run into any problems.