This repository collects prompts and strategies for using large language models (LLMs) effectively. It is organized into the following directories:
- Prompts and methodologies for evaluating and comparing the performance of different LLMs on various tasks.
- Guidelines and recommendations for writing effective prompts and interacting with LLMs to achieve optimal results.
- Prompts and examples for using LLMs in everyday tasks and workflows.
- A space for experimenting with novel prompt engineering techniques and exploring the capabilities of LLMs in different domains.
- Explanations of prompting methods, such as Chain-of-Thought and ReAct, for enhancing LLM performance (a minimal sketch follows this list).
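
As a quick illustration of the kind of technique documented there, below is a minimal Chain-of-Thought sketch. The `query_llm` helper is a hypothetical placeholder (not part of this repository or any specific client library); swap it for whatever LLM client you actually use.

```python
# Minimal Chain-of-Thought prompting sketch.
# `query_llm` is a placeholder standing in for a real LLM client
# (OpenAI, Anthropic, a local model, etc.).

def query_llm(prompt: str) -> str:
    """Placeholder: replace with a call to a real LLM client."""
    return f"[model reply to a {len(prompt)}-character prompt would appear here]"

def chain_of_thought(question: str) -> str:
    # The core of Chain-of-Thought prompting: ask the model to reason
    # step by step before committing to a final answer.
    prompt = (
        f"Question: {question}\n"
        "Let's think step by step, then state the final answer on its own line."
    )
    return query_llm(prompt)

if __name__ == "__main__":
    print(chain_of_thought(
        "A bat and a ball cost $1.10 in total. "
        "The bat costs $1.00 more than the ball. "
        "How much does the ball cost?"
    ))
```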