7 changes: 7 additions & 0 deletions docs/skills/README.md
Original file line number Diff line number Diff line change
@@ -17,6 +17,13 @@ Tools for financial analysis, blockchain interaction, and regulatory compliance.
| **[Wallet Screening](wallet_screening.md)** | `finance/wallet_screening` | Comprehensive risk assessment for Ethereum wallets. Checks sanctions lists (OFAC, FBI) and identifies interactions with malicious contracts (Mixers, Scams). |


## Optimization
Middleware skills that operate on text or state to increase performance, security, or efficiency.

| Skill | ID | Description |
| :--- | :--- | :--- |
| **[Prompt Token Rewriter](prompt_rewriter.md)** | `optimization/prompt_rewriter` | Aggressively compresses massive prompts or context histories while retaining semantic meaning to save tokens. |

---

## 📥 Installing Skills
48 changes: 48 additions & 0 deletions docs/skills/prompt_rewriter.md
@@ -0,0 +1,48 @@
# Prompt Token Rewriter

**Domain:** `optimization`
**Skill ID:** `optimization/prompt_rewriter`

A middleware skill that acts as a deterministic, rule-based compression pass for agents. It ingests a massive, bloated prompt or conversation history and rewrites it to use fewer tokens while preserving the semantic meaning and instructions.

This is critical for complex agents facing strict token constraints or high LLM API costs.

## Manifest Details

**Parameters Schema:**
* `raw_text` (string): The bloated, repetitive prompt or extensive conversation history to compress.
* `compression_aggression` (string): The level of compression: 'low', 'medium', or 'high'.

**Outputs Schema:**
* `compressed_text` (string): The aggressively shortened prompt retaining semantic constraints.
* `original_tokens` (integer): The approximate original token count.
* `new_tokens` (integer): The approximate compressed token count.
* `tokens_saved` (integer): The absolute number of tokens removed.
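The token counts are estimates: the skill's `_estimate_tokens` helper uses a simple character-based heuristic (roughly four characters per token) rather than a real tokenizer. A minimal standalone sketch of that estimate:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token, never less than 1.
    return max(1, len(text) // 4)

print(estimate_tokens("Hello, world!"))  # 13 characters -> 3
```

If you need exact counts for billing, run the compressed text through your provider's own tokenizer instead.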

## Example Usage (Skill Chaining)

The agent invokes this tool automatically when faced with an excessively long context or when instructed to compress a payload. However, you can also use it as a manual middleware step:

```python
from skillware.core.loader import SkillLoader

# 1. Load the middleware
rewriter_bundle = SkillLoader.load_skill("optimization/prompt_rewriter")
rewriter = rewriter_bundle['module'].PromptRewriter()

# 2. Compress a prompt before sending to LLM
result = rewriter.execute({
    "raw_text": "Hello, could you please make sure to read this documentation...",
    "compression_aggression": "high"
})

print(f"Compressed: {result['compressed_text']}")
# Output: "Hello read documentation..."
```

## Maintenance

To run tests specifically for this skill:
```bash
pytest tests/skills/optimization/test_prompt_rewriter.py
```
21 changes: 21 additions & 0 deletions docs/usage/gemini.md
@@ -66,3 +66,24 @@ for part in response.parts:
)
```
*(Note: As of Gemini SDK v0.8+, the exact import for `FunctionResponse` may vary. Using a dictionary structure is often more robust.)*

## 🔗 Skill Chaining (Middleware)

Skillware's modular design lets you treat skills as deterministic, offline logic blocks. For example, you can chain the **Prompt Token Rewriter** to optimize context before calling the LLM:

```python
# Load the middleware skill
from skillware.core.loader import SkillLoader

rewriter = SkillLoader.load_skill("optimization/prompt_rewriter")
sys_prompt = "You are a very helpful assistant serving a bank..."

# Use python logic offline before starting the chat session
optimized_ctx_result = rewriter['module'].PromptRewriter().execute({
    "raw_text": sys_prompt,
    "compression_aggression": "high"
})

model = genai.GenerativeModel(
    'gemini-2.5-flash',
    system_instruction=optimized_ctx_result["compressed_text"]
)
```
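Since `high` aggression can be lossy, it can help to fall back to the raw prompt when the savings are small. A hypothetical guard (the name `pick_context` and the `min_saving` threshold are illustrative, not part of Skillware; `result` follows the skill's outputs schema):

```python
def pick_context(result: dict, raw: str, min_saving: int = 10) -> str:
    # Use the compressed text only if it saved at least `min_saving` tokens.
    if result.get("tokens_saved", 0) >= min_saving:
        return result["compressed_text"]
    return raw

# With a sample result dict:
sample = {"compressed_text": "read docs.", "tokens_saved": 12}
print(pick_context(sample, "Please read the docs."))  # -> "read docs."
```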
30 changes: 30 additions & 0 deletions examples/prompt_compression_demo.py
@@ -0,0 +1,30 @@
from skillware.core.loader import SkillLoader


def run_demo():
    print("Loading Prompt Token Rewriter...")
    # Load the skill via the global loader, just like an LLM agent would
    skill_bundle = SkillLoader.load_skill("optimization/prompt_rewriter")
    skill_instance = skill_bundle['module'].PromptRewriter()

    massive_prompt = (
        "Hello, could you please make sure to read this entirely? "
        "The and a this that is very important. "
        "I want you to kindly ensure that all elements are processed."
    )

    print(f"\n[RAW TEXT]: {massive_prompt}")

    # Execute the offline compression logic
    result = skill_instance.execute({
        "raw_text": massive_prompt,
        "compression_aggression": "high"
    })

    print(f"\n[COMPRESSED TEXT]: {result['compressed_text']}")
    print(f"[REDUCTION]: {result['original_tokens']} tokens -> {result['new_tokens']} tokens")
    print(f"[SAVED]: {result['tokens_saved']} tokens")


if __name__ == "__main__":
    run_demo()
3 changes: 3 additions & 0 deletions skills/optimization/prompt_rewriter/__init__.py
@@ -0,0 +1,3 @@
from .skill import PromptRewriter

__all__ = ["PromptRewriter"]
28 changes: 28 additions & 0 deletions skills/optimization/prompt_rewriter/card.json
@@ -0,0 +1,28 @@
{
  "name": "Prompt Token Rewriter",
  "description": "Aggressively compresses prompts while retaining semantic meaning.",
  "icon": "minimize",
  "color": "purple",
  "ui_schema": {
    "type": "card",
    "fields": [
      {
        "key": "tokens_saved",
        "label": "Tokens Saved"
      },
      {
        "key": "original_tokens",
        "label": "Original Tokens"
      },
      {
        "key": "new_tokens",
        "label": "New Tokens"
      },
      {
        "key": "compressed_text",
        "label": "Compressed Payload",
        "type": "markdown"
      }
    ]
  }
}
15 changes: 15 additions & 0 deletions skills/optimization/prompt_rewriter/instructions.md
@@ -0,0 +1,15 @@
# Cognition Instructions: Prompt Rewriter Middleware

You have access to the `optimization/prompt_rewriter` tool.
This tool is crucial for saving context window budget and drastically lowering operational costs during extensive loops.

## When to use this skill
If the user provides you with an extremely long prompt, a massive document, or asks you to prepare a long instruction payload for *another* agent or system, you MUST use this tool to compress and rewrite the text before proceeding.

## How to use it
1. Place the full, unedited text into the `raw_text` parameter.
2. Select your `compression_aggression`:
- `low`: Only collapses redundant whitespace and line breaks. (Safest option, though it still flattens indentation-sensitive code)
- `medium`: Strips conversational filler and normalizes structure. (Good for instructions)
- `high`: Aggressively drops articles, stop-words, and non-essential punctuation. (Best for machine-to-machine context)
3. Use the `compressed_text` returned by the tool as your new internal representation of the text. Do not output the uncompressed text anymore.
24 changes: 24 additions & 0 deletions skills/optimization/prompt_rewriter/manifest.yaml
@@ -0,0 +1,24 @@
name: "optimization/prompt_rewriter"
version: "0.1.0"
description: "A middleware skill that compresses massive, bloated prompts into fewer tokens while retaining the semantic meaning and instructions."
category: "optimization"
parameters:
  type: object
  properties:
    raw_text:
      type: "string"
      description: "The bloated, repetitive prompt or extensive conversation history to compress."
    compression_aggression:
      type: "string"
      description: "The level of compression: 'low', 'medium', or 'high'."
      enum: ["low", "medium", "high"]
  required:
    - raw_text
requirements: []
constitution: |
  1. SEMANTIC INTEGRITY: Never remove keywords that change the core intent or instructions of the prompt.
  2. PRIVACY: Do not store or transmit the prompt content to external logging services beyond the execution result.
  3. DETERMINISM: Use consistent compression rules to ensure stable behavior across agent loops.
presentation:
  icon: "minimize"
  color: "#9b59b6"
62 changes: 62 additions & 0 deletions skills/optimization/prompt_rewriter/skill.py
@@ -0,0 +1,62 @@
import re
from typing import Any, Dict
from skillware.core.base_skill import BaseSkill


class PromptRewriter(BaseSkill):
    """
    A skill that heuristically compresses a prompt by removing unnecessary
    whitespace, low-value filler phrases, and (optionally) stop words.
    """

    @property
    def manifest(self) -> Dict[str, Any]:
        return {
            "name": "optimization/prompt_rewriter",
            "version": "0.1.0",
        }

    def _estimate_tokens(self, text: str) -> int:
        """Naive estimate (~4 characters per token) to avoid a tokenizer dependency."""
        return max(1, len(text) // 4)

    def execute(self, params: Dict[str, Any]) -> Any:
        raw_text = params.get("raw_text", "")
        aggression = params.get("compression_aggression", "medium").lower()

        if not raw_text:
            return {"error": "raw_text cannot be empty."}

        original_tokens = self._estimate_tokens(raw_text)

        # Level 1: Standardize whitespace (low aggression)
        compressed = re.sub(r'\s+', ' ', raw_text).strip()

        # Level 2: Remove filler phrases (medium aggression)
        if aggression in ["medium", "high"]:
            fillers = [
                "please", "could you", "would you", "kindly", "make sure to",
                "ensure that", "I want you to", "can you"
            ]
            for filler in fillers:
                compressed = re.compile(re.escape(filler), re.IGNORECASE).sub("", compressed)
            compressed = re.sub(r'\s+', ' ', compressed).strip()

        # Level 3: Drop punctuation and common stop words (high aggression)
        if aggression == "high":
            # Remove non-essential punctuation (keep word chars, whitespace, '.', '-')
            compressed = re.sub(r'[^\w\s\.\-]', '', compressed)
            # Naively remove very high-frequency stop words
            stop_words = [" a ", " an ", " the ", " is ", " that ", " this ", " and ", " to "]
            for word in stop_words:
                compressed = re.compile(word, re.IGNORECASE).sub(" ", compressed)
            compressed = re.sub(r'\s+', ' ', compressed).strip()

        new_tokens = self._estimate_tokens(compressed)

        return {
            "compressed_text": compressed,
            "original_tokens": original_tokens,
            "new_tokens": new_tokens,
            "tokens_saved": original_tokens - new_tokens
        }
45 changes: 45 additions & 0 deletions tests/skills/optimization/test_prompt_rewriter.py
@@ -0,0 +1,45 @@
from skillware.core.loader import SkillLoader


def get_skill():
    bundle = SkillLoader.load_skill("optimization/prompt_rewriter")
    return bundle['module'].PromptRewriter()


def test_manifest_schema():
    skill = get_skill()
    manifest = skill.manifest
    assert manifest.get("name") == "optimization/prompt_rewriter"
    assert manifest.get("version") == "0.1.0"


def test_rewriter_execution_low():
    skill = get_skill()
    params = {
        "raw_text": "This is a very\n\n\nspaced out prompt.",
        "compression_aggression": "low"
    }
    result = skill.execute(params)
    assert result["compressed_text"] == "This is a very spaced out prompt."
    assert result["original_tokens"] >= result["new_tokens"]


def test_rewriter_execution_high():
    skill = get_skill()
    params = {
        "raw_text": "Please make sure to read this and analyze the data.",
        "compression_aggression": "high"
    }
    result = skill.execute(params)
    assert "Please" not in result["compressed_text"]
    assert "make sure to" not in result["compressed_text"]
    assert result["tokens_saved"] > 0
    assert "new_tokens" in result
    assert "original_tokens" in result


def test_empty_string():
    skill = get_skill()
    result = skill.execute({"raw_text": ""})
    assert "error" in result
    assert result["error"] == "raw_text cannot be empty."