Pull the Docker image:

```shell
docker pull dockcross/linux-x64:latest
```

Install the PyPI dependencies:
```shell
pip install -r requirements.txt
```

Generate a completion for each problem with your own model:

```python
from evaluator.data import read_problems, write_samples, generate_one_prompt
from user_impl_script import generate_one_completion

problems = read_problems()
samples = [
    dict(task_id=task_id, completion=generate_one_completion(generate_one_prompt(problem)))
    for task_id, problem in problems.items()
]
write_samples(samples)
```

- Extract function names and addresses from binaries:
```shell
python ext_idb_and_nameaddr.py
```

- Extract multiple kinds of information about each function from binaries:

```shell
python ext_func.py
```

We provide scripts here to run inference with locally deployed LLMs and to call ChatGPT/GPT-4 via API.

```shell
CUDA_VISIBLE_DEVICES=0 python infer_llama.py
```

Evaluate the predictions against the problem file:

```shell
python evaluation.py --prediction_file ./queryllm/Llama-2-7b-chat-hf_prediction.json --problem_file ./problem_data.json
```
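For reference, the sample-generation loop shown earlier can be run end to end with the repo's helpers stubbed out. The real `read_problems`, `generate_one_prompt`, and `write_samples` live in `evaluator.data`, and `generate_one_completion` is user-supplied; the stubs and data shapes below (a dict keyed by `task_id`) are illustrative assumptions, not the repo's actual implementations.

```python
# Self-contained sketch of the sample-generation loop; all helpers are
# stubs standing in for `evaluator.data` and the user's model script.
import json

def read_problems():
    # Stub: the real helper loads the benchmark problems from disk.
    # The dict-keyed-by-task_id shape is an assumption.
    return {
        "task_0": {"function": "func_a"},
        "task_1": {"function": "func_b"},
    }

def generate_one_prompt(problem):
    # Stub: the real helper renders a problem into an LLM prompt.
    return f"Summarize the binary function {problem['function']}."

def generate_one_completion(prompt):
    # Stub: replace with a call to your own model.
    return "echo: " + prompt

def write_samples(samples):
    # Stub: serialize one JSON object per line (the real helper
    # presumably writes these lines to a samples file on disk).
    return [json.dumps(sample) for sample in samples]

problems = read_problems()
samples = [
    dict(task_id=task_id, completion=generate_one_completion(generate_one_prompt(problem)))
    for task_id, problem in problems.items()
]
lines = write_samples(samples)
```

Swapping the stubbed `generate_one_completion` for a real model call is the only change needed to produce genuine predictions.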
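To make the evaluation step concrete, here is a minimal sketch of what a pass over a prediction file and a problem file might look like. The actual schemas and metrics are defined by `evaluation.py` in this repo; the field names (`"summary"`, a flat `task_id`-to-string prediction map) and the token-level F1 metric below are assumptions chosen for illustration, not the repo's real scoring code.

```python
# Hypothetical evaluation pass: load predictions and references from two
# JSON files, score each pair with token-level F1, and average.
import json
from collections import Counter

def token_f1(prediction, reference):
    # Token-level F1 between a predicted and a reference summary.
    pred_tokens = prediction.lower().split()
    ref_tokens = reference.lower().split()
    if not pred_tokens or not ref_tokens:
        return 0.0
    overlap = sum((Counter(pred_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)

def evaluate(prediction_file, problem_file):
    with open(prediction_file) as f:
        predictions = json.load(f)  # assumed shape: {task_id: predicted summary}
    with open(problem_file) as f:
        problems = json.load(f)     # assumed shape: {task_id: {"summary": reference}}
    scores = [
        token_f1(predictions[task_id], problem["summary"])
        for task_id, problem in problems.items()
        if task_id in predictions
    ]
    return sum(scores) / len(scores) if scores else 0.0
```

Problems without a matching prediction are skipped here; a stricter evaluator might instead count them as zero.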