This project analyzes the capability of LLMs to detect hallucinations in a given summary.
- data: Contains `ehrs` and `summaries` subfolders with the EHR notes and summary files used for detection.
- single_prompts: Contains scripts and files for LLM detection using single prompts (a rough sketch of this workflow follows the list).
  - `guidelines.txt`: Text file with guidelines for the detection process.
  - `output_format.json`: JSON file defining the output format for the annotations.
  - `gpt4o_detections.py`: Python script for processing single prompts using GPT-4o.
  - `llama3_detections.py`: Python script for processing single prompts using Llama 3.
- .gitignore: Specifies files to be ignored by git.
- requirements.txt: Lists the Python dependencies for the project.
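For orientation, the single-prompt workflow amounts to sending an EHR note and its summary to a model together with the guidelines and the required output format. The sketch below is only an illustration of that idea, not the actual contents of `gpt4o_detections.py`; the file names, prompt wording, and model call are assumptions.

```python
# Illustrative sketch only -- not the actual gpt4o_detections.py.
# Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment.
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

guidelines = Path("single_prompts/guidelines.txt").read_text()
output_format = Path("single_prompts/output_format.json").read_text()

ehr_note = Path("data/ehrs/example_ehr.txt").read_text()          # hypothetical file name
summary = Path("data/summaries/example_summary.txt").read_text()  # hypothetical file name

# Build a single prompt that combines the guidelines, the source note, the summary,
# and the expected annotation format.
prompt = (
    f"{guidelines}\n\n"
    f"EHR note:\n{ehr_note}\n\n"
    f"Summary:\n{summary}\n\n"
    f"Return the detected hallucinations as JSON in this format:\n{output_format}"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```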
1. Clone the repository:
   ```
   git clone https://github.com/TejasNaik1910/llm_evaluation.git
   cd llm_evaluation
   ```
2. Create and activate a virtual environment:
   ```
   python3 -m venv myenv
   source myenv/bin/activate  # On Windows, use myenv\Scripts\activate
   ```
3. Install the required dependencies:
   ```
   pip install -r requirements.txt
   ```
NOTE: Steps 2 and 3 are only required if the OpenAI module is not already installed on your machine.
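A quick way to check whether the `openai` module is already available (and therefore whether steps 2 and 3 can be skipped) is a one-off import test, for example:

```python
# Prints True if the openai package is importable in the current environment.
import importlib.util

print(importlib.util.find_spec("openai") is not None)
```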
- Navigate to the `single_prompts` folder:
  ```
  cd single_prompts
  ```
- Before executing the scripts, make the necessary code changes in them (see the sketch after this list for the kind of edits usually involved).
- Next, run the `{model_name}_detections.py` script:
  ```
  python3 {model_name}_detections.py
  ```
- This will create the output JSON files within the `single_prompts/annotations/{model_name}` folder.
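The exact edits needed before running depend on the scripts themselves; the snippet below only illustrates the kind of values one would typically point the scripts at (an API key and the input/output locations). All variable names here are hypothetical.

```python
# Hypothetical configuration edits -- adjust to whatever the detection scripts actually expect.
import os

OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")  # assumed: key supplied via the environment
EHR_DIR = "../data/ehrs"                                # relative to single_prompts/
SUMMARY_DIR = "../data/summaries"
OUTPUT_DIR = "annotations/gpt4o"                        # outputs land under single_prompts/annotations/{model_name}
```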
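Once a run finishes, the resulting annotations can be inspected with a few lines of Python. The directory below follows the path pattern above (using gpt4o as an example model name); the file names inside it are whatever the scripts produce.

```python
# Load and preview every annotation file produced by a run.
import json
from pathlib import Path

for path in sorted(Path("single_prompts/annotations/gpt4o").glob("*.json")):
    with path.open() as f:
        annotations = json.load(f)
    print(path.name, "->", json.dumps(annotations, indent=2)[:200])  # preview first 200 chars
```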