This project is for experimenting with prompt engineering techniques for Large Language Models (LLMs) using the Gemini API.
- `src/`: Python source code
  - `main.py`: Main script to run experiments.
  - `prompt_tech/`: Core modules for the project.
    - `api.py`: Handles interaction with the Gemini API.
    - `runner.py`: Manages the execution of experiments and saving results.
- `prompts/`: Prompt templates and examples, stored as text files.
- `data/`: Input data for your experiments (e.g., CSV files).
- `results/`: Output of your experiments (e.g., JSON files with model responses).
- `tests/`: Tests for your experiment code.
- Install dependencies:

  ```sh
  uv pip install -r requirements.txt
  ```
- Set up your environment variables:
  - Create a `.env` file in the root of the project.
  - Add your Gemini API key to the `.env` file:

    ```
    GEMINI_API_KEY="YOUR_API_KEY"
    ```
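The key can then be loaded into the environment at startup. A minimal stdlib-only sketch of reading the `.env` file (the project may instead use a helper library such as python-dotenv; the `load_env` function name is an assumption, not part of this repo):

```python
import os
from pathlib import Path

def load_env(path=".env"):
    """Read simple KEY="VALUE" lines from a .env file into os.environ."""
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        # Skip blank lines, comments, and anything without an "=".
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        # Existing environment variables take precedence over the file.
        os.environ.setdefault(key.strip(), value.strip().strip('"'))

# Usage:
# load_env()
# api_key = os.environ["GEMINI_API_KEY"]
```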
- Add your prompts:
  - You can add new prompts and techniques to the `prompts` dictionary in `src/main.py`.
  - For more complex prompts, you can save them as text files in the `prompts/` directory and read them in your code.
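Reading a file-based template can be done with standard string formatting. A sketch under stated assumptions: the `load_prompt` helper, the `classify.txt` file name, and the `{text}` placeholder are all hypothetical examples, not files that ship with this repo:

```python
from pathlib import Path

def load_prompt(name, prompts_dir="prompts", **values):
    """Read <prompts_dir>/<name>.txt and fill any {placeholders} in it."""
    template = (Path(prompts_dir) / f"{name}.txt").read_text()
    return template.format(**values)

# Usage (assuming prompts/classify.txt contains "Classify the sentiment: {text}"):
# prompt = load_prompt("classify", text="great movie")
```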
- Run the experiments:

  ```sh
  python src/main.py
  ```
- Analyze the results:
  - The results of the experiments will be saved in `results/experiment_results.json`.
  - This file will contain the prompt, the model's response, the technique used, and the latency for each experiment.
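A simple analysis can be done with the stdlib, for example computing mean latency per technique. This sketch assumes the results file is a JSON list of records with `"technique"` and `"latency"` keys, matching the fields listed above; the exact schema and the `average_latency` helper are assumptions:

```python
import json
from collections import defaultdict

def average_latency(path="results/experiment_results.json"):
    """Return mean latency (seconds) per prompting technique.

    Assumes each record has "technique" and "latency" keys; the
    exact schema of the results file is an assumption here.
    """
    with open(path) as f:
        results = json.load(f)
    buckets = defaultdict(list)
    for record in results:
        buckets[record["technique"]].append(record["latency"])
    return {tech: sum(vals) / len(vals) for tech, vals in buckets.items()}
```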