2 changes: 1 addition & 1 deletion COMPARISON.md
@@ -42,7 +42,7 @@ Frameworks like Google DeepMind's **Antigravity** use `SKILL.md` files to provid
A critical architectural distinction is how Skillware treats logic execution versus "code generation."

* **The Code-Generation Approach**: Many platforms prompt the LLM to write code on the fly to solve a requested problem. This is expensive (you pay for output tokens every time), slow, and risky (the LLM executes unreviewed code).
* **The Skillware Approach**: Skillware relies on **Pre-Compiled Logic**. The LLM decides *which* tool to call (e.g., wallet_screening) and passes arguments. The heavy lifting happens deterministically in the Python `BaseSkill` implementation. This results in **zero-cost logic execution**, instant processing, and static, auditable code boundaries.
* **The Skillware Approach**: Skillware relies on **Pre-Compiled Logic**. The logical system decides *which* tool to call (e.g., wallet_screening) and passes arguments. The heavy lifting happens deterministically in the Python `BaseSkill` implementation. This results in **zero-cost logic execution**, instant processing, and static, auditable code boundaries.
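The split described above can be sketched in a few lines of Python (all names hypothetical, not the actual Skillware API): the model's entire output is a small JSON structure naming a tool, and a fixed, reviewable function does the work.

```python
# Sketch of pre-compiled logic dispatch (hypothetical names, not the
# real Skillware API). No model-generated code is ever executed.

def wallet_screening(address: str) -> dict:
    # Deterministic logic, written and reviewed ahead of time.
    return {"address": address,
            "risk": "low" if address.startswith("0x") else "unknown"}

TOOLS = {"wallet_screening": wallet_screening}

def dispatch(tool_call: dict) -> dict:
    # The model only chose the tool name and the arguments below.
    return TOOLS[tool_call["tool"]](**tool_call["arguments"])

result = dispatch({
    "tool": "wallet_screening",
    "arguments": {"address": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"},
})
```

Because `dispatch` routes to static code, every execution path can be audited before deployment, and no output tokens are spent on logic.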

---

5 changes: 3 additions & 2 deletions README.md
@@ -32,7 +32,7 @@

## Mission

The AI ecosystem is fragmented. Developers often re-invent tool definitions, system prompts, and safety rules for every project. **Skillware** supplies a standard to package capabilities into self-contained units that work across **Gemini**, **Claude**, **GPT**, and **Llama**.
The AI ecosystem is fragmented. Developers often re-invent tool definitions, system prompts, and safety rules for every project. **Skillware** supplies a standard to package capabilities into self-contained units that work across **Gemini**, **Claude**, **Ollama**, **GPT**, and **Llama**.

A **Skill** in this framework provides everything an Agent needs to master a domain:

@@ -127,6 +127,7 @@ print(response.text)
* **[Core Logic & Philosophy](docs/introduction.md)**: Details on how Skillware decouples Logic, Cognition, and Governance.
* **[Usage Guide: Gemini](docs/usage/gemini.md)**: Integration with Google's GenAI SDK.
* **[Usage Guide: Claude](docs/usage/claude.md)**: Integration with Anthropic's SDK.
* **[Usage Guide: Ollama](docs/usage/ollama.md)**: Native integration for local models via Ollama.
* **[Skill Library](docs/skills/README.md)**: Available capabilities.

## Contributing
@@ -143,7 +144,7 @@ We actively encourage both humans and autonomous agents to contribute to this re

Skillware differs from the Model Context Protocol (MCP) or Anthropic's Skills repository in the following ways:

* **Model Agnostic**: Native adapters for Gemini, Claude, and OpenAI.
* **Model Agnostic**: Native adapters for Gemini, Claude, Ollama, and OpenAI.
* **Code-First**: Skills are executable Python packages, not just server specs.
* **Runtime-Focused**: Provides tools for the application, not just recipes for an IDE.

2 changes: 2 additions & 0 deletions docs/introduction.md
@@ -65,6 +65,7 @@ This is Skillware's superpower. Every model (Gemini, Claude, GPT) speaks a diffe
The `SkillLoader` acts as an adapter.
* `SkillLoader.to_gemini_tool(skill)` -> Transmutes the manifest into Gemini's format.
* `SkillLoader.to_claude_tool(skill)` -> Transmutes the manifest into Claude's format.
* `SkillLoader.to_ollama_prompt(skill)` -> Renders the manifest as a prompt-ready text description for Ollama-hosted models.
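A minimal sketch of the adapter idea (not the real `SkillLoader` internals; the Claude shape mirrors the `input_schema` layout used in `loader.py`, and the Ollama shape assumes the common OpenAI-style function wrapper):

```python
# Illustrative adapter sketch: one manifest, several provider shapes.
manifest = {
    "name": "wallet_screening",
    "description": "Screen an Ethereum wallet.",
    "parameters": {"type": "object",
                   "properties": {"address": {"type": "string"}},
                   "required": ["address"]},
}

def to_claude_tool(m: dict) -> dict:
    # Claude expects the JSON schema under "input_schema".
    return {"name": m["name"], "description": m["description"],
            "input_schema": m["parameters"]}

def to_ollama_tool(m: dict) -> dict:
    # Assumed OpenAI-style nested "function" layout for local models.
    return {"type": "function",
            "function": {"name": m["name"], "description": m["description"],
                         "parameters": m["parameters"]}}

claude_tool = to_claude_tool(manifest)
ollama_tool = to_ollama_tool(manifest)
```

The manifest is written once; each adapter is a pure reshaping with no provider SDK involved.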

### Step 3: Injection
When you initialize your agent, you pass the skill's **Instructions** into the System Prompt.
@@ -92,6 +93,7 @@ Skillware is designed to be the "Standard Library" for all agents.
| :--- | :--- |
| **Google Gemini** | Native `google.generativeai` support. Automatic type mapping. |
| **Anthropic Claude** | Native `anthropic` support. XML/JSON handling. |
| **Ollama** | Native `ollama` Python client support. Fully local JSON handling. |
| **OpenAI GPT** | (Planned) JSON Schema adapter. |
| **Local LLaMA** | (Planned) GBNF Grammar generation from manifests. |

97 changes: 97 additions & 0 deletions docs/usage/ollama.md
@@ -0,0 +1,97 @@
# Using Ollama with Skillware

Skillware natively supports [Ollama](https://ollama.com/), enabling you to run open-source models completely locally while still using skills. Because many local models expose no native tool-calling API, Skillware renders each skill manifest into a text description (`SkillLoader.to_ollama_prompt`) that the model follows to emit JSON tool calls.

## Prerequisites

1. **Install Ollama:** Follow the instructions at [ollama.com](https://ollama.com/) to install Ollama on your machine.
2. **Pull a Model:** You need an instruction-tuned model that can reliably follow the JSON tool-call format shown below. We recommend `llama3` or `llama3.1`.
```bash
ollama pull llama3
```
3. **Install Python Client:** Install the official Ollama Python package.
```bash
pip install ollama
```

## Example Usage

Here is a simple example demonstrating how to load a skill and execute it using a local model running via Ollama.

```python
import json
import re
import ollama
from skillware.core.loader import SkillLoader
from skillware.core.env import load_env_file

# Load environment variables for any API keys the skills need
load_env_file()

# 1. Load the Skill dynamically
SKILL_PATH = "finance/wallet_screening"
skill_bundle = SkillLoader.load_skill(SKILL_PATH)
WalletScreeningSkill = getattr(skill_bundle["module"], "WalletScreeningSkill")
wallet_skill = WalletScreeningSkill()

print(f"Loaded Skill: {skill_bundle['manifest']['name']}")

# 2. Build the System Prompt with Tool Rules
tool_description = SkillLoader.to_ollama_prompt(skill_bundle)

# Assemble the nested fence programmatically so this file's own
# ```python block is not closed early by a literal ``` line.
JSON_FENCE = "`" * 3
system_prompt = f"""You are an intelligent agent equipped with specialized capabilities.
To use a skill, you MUST output a JSON code block in the EXACT following format:
{JSON_FENCE}json
{{
"tool": "the_tool_name",
"arguments": {{
"param_name": "value"
}}
}}
{JSON_FENCE}
Wait for the system to return the result of the tool before proceeding.

Available skills:
{tool_description}
Instructions: {skill_bundle.get('instructions', '')}
"""

messages = [
{"role": "system", "content": system_prompt},
{"role": "user", "content": "Please screen this ethereum wallet: 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"}
]

# 3. Call the Ollama Model
model_name = "llama3"
print(f"🤖 Calling Ollama model: {model_name}...")
response = ollama.chat(
model=model_name,
messages=messages
)

message_content = response.get("message", {}).get("content", "")
print(f"\n[Model Output]:\n{message_content}")

# 4. Handle Text-based Tool Calls
tool_match = re.search(r"```json\s*({.*?})\s*```", message_content, re.DOTALL)
if tool_match:
tool_call = json.loads(tool_match.group(1))
fn_name = tool_call.get("tool")
fn_args = tool_call.get("arguments", {})

if fn_name == "finance/wallet_screening":
print(f"⚙️ Executing skill '{fn_name}' locally...")
api_result = wallet_skill.execute(fn_args)

# Give result back to model
messages.append({"role": "assistant", "content": message_content})
messages.append({
"role": "user",
"content": f"SYSTEM RESPONSE (Result from {fn_name}):\n```json\n{json.dumps(api_result)}\n```\nPlease continue."
})

print("\n🤖 Sending tool results back to Agent...")
final_resp = ollama.chat(model=model_name, messages=messages)
print("\n💬 Final Answer:")
print(final_resp.get("message", {}).get("content", ""))
```
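The JSON-block extraction step above can be exercised in isolation. In this standalone sketch, the canned model reply is assembled programmatically so its fence does not collide with this document's own Markdown fences:

```python
import json
import re

# Exercise the JSON-block extraction pattern on a canned reply.
FENCE = "`" * 3
sample_reply = (
    "Sure, screening now.\n"
    f"{FENCE}json\n"
    '{"tool": "finance/wallet_screening", '
    '"arguments": {"address": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"}}\n'
    f"{FENCE}"
)

tool_match = re.search(r"```json\s*({.*?})\s*```", sample_reply, re.DOTALL)
tool_call = json.loads(tool_match.group(1))
print(tool_call["tool"])  # finance/wallet_screening
```

The lazy `{.*?}` group together with the closing-fence anchor recovers the full JSON object even though it contains nested braces.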
128 changes: 128 additions & 0 deletions examples/ollama_skills_test.py
@@ -0,0 +1,128 @@
import os
import json
import re
import ollama
from skillware.core.loader import SkillLoader
from skillware.core.env import load_env_file
from skillware.core.base_skill import BaseSkill

# Load environment variables for any API keys the skills need
load_env_file()

def load_and_initialize_skill(path):
bundle = SkillLoader.load_skill(path)
skill_class = None
for attr_name in dir(bundle["module"]):
attr = getattr(bundle["module"], attr_name)
if isinstance(attr, type) and issubclass(attr, BaseSkill) and attr is not BaseSkill:
skill_class = attr
break
if not skill_class:
raise ValueError(f"Could not find a valid Skill class in {path}")
return bundle, skill_class()

# 1. Load the 3 Skills dynamically
SKILL_PATHS = [
"finance/wallet_screening",
"office/pdf_form_filler",
"optimization/prompt_rewriter"
]

skills_registry = {}
tool_descriptions = []

print("Loading skills...")
for path in SKILL_PATHS:
bundle, skill_instance = load_and_initialize_skill(path)
name = bundle["manifest"]["name"]
skills_registry[name] = skill_instance

# Use the prompt adapter for Ollama
tool_text = SkillLoader.to_ollama_prompt(bundle)
tool_text += f"\n**Cognitive Instructions:**\n{bundle.get('instructions', '')}\n"
tool_descriptions.append(tool_text)

print(f"Loaded Skill: {name}")

# 2. Build the System Prompt tailored for text-based tool calling
combined_system_prompt = """You are an intelligent agent equipped with specialized capabilities (skills).
To use a skill, you MUST output a JSON code block in the EXACT following format and then STOP GENERATING. Do not add conversational text after the JSON block.

```json
{
"tool": "the_tool_name",
"arguments": {
"param_name": "value"
}
}
```

Wait until you receive the SYSTEM RESPONSE containing the tool execution results before proceeding. Once you have the results, provide your final answer to the user.

Here are the available skills and their instructions:
""" + "\n---\n".join(tool_descriptions)

# 3. Setup Ollama Chat
model_name = "llama3"
user_query = "Please screen this ethereum wallet: 0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045. Also, please rewrite this prompt for me: 'make me a cool image of a cat'."

print(f"\nUser: {user_query}")

messages = [
{"role": "system", "content": combined_system_prompt},
{"role": "user", "content": user_query}
]

print(f"\n🤖 Calling Ollama model: {model_name}...")

# 4. Handle Conversation & Tool Parsing Loop
for _ in range(5): # Max steps to prevent infinite loops
response = ollama.chat(
model=model_name,
messages=messages
)

message_content = response.get("message", {}).get("content", "")
print(f"\n[Model Output]:\n{message_content}")
messages.append({"role": "assistant", "content": message_content})

# Try to parse a tool call inside ```json ... ```
tool_match = re.search(r"```json\s*({.*?})\s*```", message_content, re.DOTALL)

if tool_match:
try:
tool_call = json.loads(tool_match.group(1))
fn_name = tool_call.get("tool")
fn_args = tool_call.get("arguments", {})

print(f"\n🤖 Agent invoked tool: {fn_name}")
print(f" Arguments: {fn_args}")

if fn_name in skills_registry:
print(f"⚙️ Executing skill '{fn_name}' locally...")
try:
api_result = skills_registry[fn_name].execute(fn_args)
result_str = json.dumps(api_result)
except Exception as e:
result_str = f"Error executing tool: {e}"

print(f"📤 Result generated ({len(result_str)} bytes)")

# Send the result back to the model masquerading as a system/user update
messages.append({
"role": "user",
"content": f"SYSTEM RESPONSE (Result from {fn_name}):\n```json\n{result_str}\n```\nPlease continue based on this result."
})
else:
print(f"Unknown function requested: {fn_name}")
messages.append({
"role": "user",
"content": f"SYSTEM ERROR: Tool '{fn_name}' not found."
})
except json.JSONDecodeError:
print("Failed to decode JSON from tool call block.")
messages.append({"role": "user", "content": "SYSTEM ERROR: Invalid JSON format. Please output valid JSON."})
else:
# If no tool block was found, assume the agent is done and providing final answer
print("\n💬 Final Answer reached. End of execution.")
break
28 changes: 28 additions & 0 deletions skillware/core/loader.py
@@ -125,3 +125,31 @@ def to_claude_tool(skill_bundle: Dict[str, Any]) -> Dict[str, Any]:
parameters = manifest.get("parameters", {})

return {"name": name, "description": description, "input_schema": parameters}

@staticmethod
def to_ollama_prompt(skill_bundle: Dict[str, Any]) -> str:
"""
Converts a skill manifest to a textual description suitable for a system prompt.
This allows older models (like Llama 3) running via Ollama without native tool-calling
API support to understand and utilize the skill via text generation.
"""
manifest = skill_bundle.get("manifest", {})
name = manifest.get("name", "unknown_tool")
description = manifest.get("description", "").strip()
parameters = manifest.get("parameters", {})

prompt = f"### Tool: `{name}`\n"
prompt += f"**Description:** {description}\n"
prompt += "**Parameters:**\n"

props = parameters.get("properties", {})
required = parameters.get("required", [])

if not props:
prompt += "- None\n"
else:
for k, v in props.items():
req_str = "Required" if k in required else "Optional"
prompt += f"- `{k}` ({v.get('type', 'any')}): {v.get('description', '')} [{req_str}]\n"

return prompt
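For reference, a standalone sketch that replicates the formatting above shows the prompt text produced for a typical manifest (the manifest values here are illustrative):

```python
# Standalone replica of the to_ollama_prompt formatting, so the output
# can be inspected without importing the loader. Manifest values are
# illustrative only.
manifest = {
    "name": "finance/wallet_screening",
    "description": "Screen an Ethereum wallet for risk signals.",
    "parameters": {
        "properties": {"address": {"type": "string",
                                   "description": "EVM address to screen"}},
        "required": ["address"],
    },
}

prompt = f"### Tool: `{manifest['name']}`\n"
prompt += f"**Description:** {manifest['description'].strip()}\n"
prompt += "**Parameters:**\n"
props = manifest["parameters"].get("properties", {})
required = manifest["parameters"].get("required", [])
if not props:
    prompt += "- None\n"
else:
    for k, v in props.items():
        req_str = "Required" if k in required else "Optional"
        prompt += f"- `{k}` ({v.get('type', 'any')}): {v.get('description', '')} [{req_str}]\n"

print(prompt)
```

Each tool thus becomes a short Markdown section the model can read directly from the system prompt.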
59 changes: 59 additions & 0 deletions tests/skills/finance/test_wallet_screening.py
@@ -0,0 +1,59 @@
import pytest
import os
from unittest.mock import patch, MagicMock
from skillware.core.loader import SkillLoader

def get_skill():
bundle = SkillLoader.load_skill("finance/wallet_screening")
# Initialize without needing real API keys
return bundle['module'].WalletScreeningSkill()

@patch("skills.finance.wallet_screening.skill.requests.get")
def test_wallet_screening_success(mock_get):
skill = get_skill()
skill.etherscan_api_key = "dummy_key"

# Mock responses
mock_eth_balance = MagicMock()
mock_eth_balance.json.return_value = {"status": "1", "result": "1000000000000000000"} # 1 ETH

mock_txs = MagicMock()
mock_txs.json.return_value = {"status": "1", "result": [
{"from": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045".lower(), "to": "0x123", "value": "500000000000000000", "isError": "0", "gasUsed": "21000", "gasPrice": "1000000000"}
]}

mock_price = MagicMock()
mock_price.json.return_value = {"ethereum": {"usd": 2000.0, "eur": 1800.0}}

# Configure mock side_effect based on URL/params
def get_side_effect(url, **kwargs):
if "action" in kwargs.get("params", {}):
if kwargs["params"]["action"] == "balance":
return mock_eth_balance
elif kwargs["params"]["action"] == "txlist":
return mock_txs
return mock_price

mock_get.side_effect = get_side_effect

result = skill.execute({"address": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"})

assert "error" not in result
assert "summary" in result
assert result["summary"]["balance_eth"] == 1.0
assert result["summary"]["balance_usd"] == 2000.0
assert "financial_analysis" in result
assert result["financial_analysis"]["value_out_eth"] == 0.5

def test_wallet_screening_invalid_address():
skill = get_skill()
result = skill.execute({"address": "invalid_addr"})
assert "error" in result
assert "Invalid Ethereum address" in result["error"]

def test_wallet_screening_missing_key():
skill = get_skill()
skill.etherscan_api_key = None
result = skill.execute({"address": "0xd8dA6BF26964aF9D7eEd9e03E53415D37aA96045"})
assert "error" in result
assert "Missing ETHERSCAN_API_KEY" in result["error"]