# 🚀 **Welcome to Prompt_Eval_LLM_Judge**

### Repository Name:
Prompt_Eval_LLM_Judge

### Description:
This repository focuses on prompt design and LLM-as-judge evaluation, providing tools and resources for a range of prompting techniques and evaluation methods.

### Topics:
- contrastive-cot-prompting
- cot-prompting
- few-shot-prompting
- llm-judge
- llms
- one-shot-prompting
- prompt-engineering
- role-playing-prompting
- self-consistency-prompting
- trec-rag-2024
- zero-shot-prompting

---

## 📁 Download Release v1.0.0
[Download v1.0.0 (zip)](https://github.com/cli/cli/archive/refs/tags/v1.0.0.zip)

*(Extract the archive and run the file after downloading.)*

---

## 🌟 Features

### 1. Contrastive CoT Prompting
Use contrastive chain-of-thought (CoT) prompts, which pair a correct reasoning exemplar with an incorrect one, to improve a language model's reasoning.
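
A minimal sketch of what a contrastive CoT prompt can look like, written as a plain Python string rather than through this package's API; the questions and exemplar wording are illustrative, not from the repository:

```python
# Contrastive CoT: show one correct and one incorrect reasoning exemplar,
# then ask the model to reason about a new question. All content below is
# illustrative.
contrastive_cot_prompt = """Question: A shop sells pens at 3 for $2. How much do 12 pens cost?

Correct reasoning: 12 pens is 4 groups of 3. Each group costs $2, so 4 * 2 = $8. Answer: $8.

Incorrect reasoning: 12 pens at $2 each is 12 * 2 = $24. Answer: $24.
(Wrong: $2 is the price of 3 pens, not of 1.)

Question: A bakery sells muffins at 4 for $5. How much do 20 muffins cost?
Reasoning:"""
```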

### 2. Role-Playing Prompting
Assign the model a persona or role to steer how it generates and, as an LLM judge, evaluates outputs.
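
For example, a role-playing prompt can cast the model as a strict grader before it judges an answer. This is a plain-string sketch; the persona and rubric are assumptions, not this package's built-in prompts:

```python
# Role-playing prompt: give the model a persona before the task.
# The persona and rubric below are illustrative assumptions.
role_playing_prompt = """You are a meticulous writing instructor with 20 years
of experience. Evaluate the following answer for factual accuracy and clarity,
and give a score from 1 to 10 with a one-sentence justification.

Answer to evaluate:
{answer}"""

prompt = role_playing_prompt.format(answer="The Eiffel Tower is located in Berlin.")
```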

### 3. Self-Consistency Prompting
Sample several reasoning paths for the same prompt and aggregate their final answers to measure and improve the reliability of model responses.
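
Self-consistency samples the model several times at a nonzero temperature and takes a majority vote over the final answers. A minimal sketch, assuming you supply `generate`, a hypothetical stand-in for whatever LLM call you use:

```python
from collections import Counter
from typing import Callable

def self_consistent_answer(prompt: str,
                           generate: Callable[[str], str],
                           n_samples: int = 5) -> str:
    """Majority-vote over several sampled answers to the same prompt.

    `generate` is a hypothetical placeholder: any LLM call that samples at a
    nonzero temperature and returns the model's final answer as a string.
    """
    answers = [generate(prompt) for _ in range(n_samples)]
    # The most frequent final answer across independent samples wins.
    return Counter(answers).most_common(1)[0][0]
```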

### 4. Few-Shot Prompting
Explore few-shot prompting, which places a handful of worked examples in the prompt so the model can generalize from limited supervision.
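
A few-shot prompt simply prepends a few labeled input/output pairs before the new input. The reviews and labels below are illustrative:

```python
# Few-shot prompt: a handful of labeled examples, then the new input.
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: The battery lasts all day and the screen is gorgeous.
Sentiment: positive

Review: It broke after two days and support never replied.
Sentiment: negative

Review: Setup took five minutes and everything just works.
Sentiment:"""
```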

### 5. Zero-Shot Prompting
Craft instruction-only prompts that let a model perform a task with no in-prompt examples and no task-specific training.
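
By contrast with the few-shot sketch above, a zero-shot prompt states the task directly and relies on the instruction alone:

```python
# Zero-shot prompt: the same task, stated as an instruction with no
# labeled examples.
zero_shot_prompt = """Classify the sentiment of the following review as
positive or negative. Reply with a single word.

Review: Setup took five minutes and everything just works.
Sentiment:"""
```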

---

## 🚀 Get Started

### Prerequisites
- Python 3.6+
- PyTorch
- Transformers

### Installation
```bash
pip install prompt-eval-llm-judge
```

### Usage
1. Import the necessary modules.
```python
from prompt_eval_llm_judge import CoTPrompt, RolePlayingPrompt
```
2. Create prompts using different techniques.
```python
# The string arguments are placeholders: a contrastive exemplar pair for the
# CoT prompt, and a persona plus scenario for the role-playing prompt.
cot_prompt = CoTPrompt("positive", "negative")
role_playing_prompt = RolePlayingPrompt("character name", "scenario")
```
3. Evaluate Language Model outputs using the generated prompts, for example with an LLM judge as sketched below.
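
As a sketch of step 3, this is what an LLM-as-judge evaluation can look like. The `llm` callable and the rubric wording are hypothetical placeholders, not part of this package's API:

```python
from typing import Callable

def judge_answer(question: str, answer: str, llm: Callable[[str], str]) -> str:
    """Score a candidate answer with an LLM judge.

    `llm` is a hypothetical placeholder for your model call (prompt -> text);
    the rubric wording is illustrative.
    """
    judge_prompt = (
        "You are an impartial judge. Rate the answer to the question below "
        "on a 1-5 scale for correctness and completeness. "
        "Reply with the score only.\n\n"
        f"Question: {question}\n"
        f"Answer: {answer}"
    )
    return llm(judge_prompt)
```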

---

## 📚 Resources

### Additional Reading
- [Blog: Mastering Prompt Design](https://blog.example.com/mastering-prompt-design)
- [Paper: CoT Prompting Techniques](https://arxiv.org/contrasting-coTp)
- [Tutorial: LLM Judge Implementation](https://example.com/llm-judge-tutorial)

### Community
Join our community on [Discord](https://discord.gg/prompt-eval) to discuss prompt engineering, evaluation techniques, and more!

---

## 🤝 Contribution
1. Fork the repository
2. Create a new branch (`git checkout -b feature`)
3. Make your changes
4. Commit your changes (`git commit -am 'Add new feature'`)
5. Push to the branch (`git push origin feature`)
6. Create a new Pull Request

---

## 📝 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

---

### Thank you for visiting Prompt_Eval_LLM_Judge! 🌟