README.md (3 additions, 3 deletions)
@@ -24,11 +24,11 @@
-prompt-ops is a Python package that **automatically optimizes prompts** for Llama models. It transforms prompts that work well with other LLMs into prompts that are optimized for Llama models, improving performance and reliability.
+prompt-ops is a Python package that **automatically optimizes prompts** for Llama models. It transforms prompts that work well with other LLMs into prompts that are optimized for LLMs, improving performance and reliability.

**Key Benefits:**
- **No More Trial and Error**: Stop manually tweaking prompts to get better results
-- **Fast Optimization**: Get Llama-optimized prompts in minutes with template-based optimization
+- **Fast Optimization**: Get model-optimized prompts in minutes with template-based optimization
- **Data-Driven Improvements**: Use your own examples to create prompts that work for your specific use case
- **Measurable Results**: Evaluate prompt performance with customizable metrics

@@ -66,7 +66,7 @@ To get started with prompt-ops, you'll need:
2. [**Prepare your dataset**](#preparing-your-data): Create a JSON file with query-response pairs for evaluation and optimization
3. **Configure optimization**: Set up a simple YAML file with your dataset and preferences (see [example configuration](configs/facility-simple.yaml))
4. [**Run optimization**](#step-4-run-optimization): Execute a single command to transform your prompt
-5. [**Get results**](#prompt-transformation-example): Receive a Llama-optimized prompt with performance metrics
+5. [**Get results**](#prompt-transformation-example): Receive a model-optimized prompt with performance metrics
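
For readers landing on this diff without the rest of the README: steps 2 and 3 above refer to a JSON dataset of query-response pairs and a YAML configuration file. Below is a minimal sketch of what such a configuration might look like; the key names, dataset field names, and model identifier are illustrative assumptions rather than the package's documented schema, so defer to configs/facility-simple.yaml in the repository for the real format.

```yaml
# Illustrative sketch only: key and field names are assumptions, not the
# documented prompt-ops schema. See configs/facility-simple.yaml for a
# real example configuration.
system_prompt: |
  You are a helpful assistant that categorizes facility maintenance requests.

dataset:
  path: data/dataset.json        # JSON file of query-response pairs (step 2)
  input_field: question          # assumed name of the query field
  golden_output_field: answer    # assumed name of the expected-response field

model:
  name: meta-llama/Llama-3.3-70B-Instruct  # example target model id

metric: exact_match              # assumed metric name; metrics are customizable
```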