
Commit 7b77f33

Update example and remove unwanted crewai examples
1 parent b360b00 commit 7b77f33

6 files changed

Lines changed: 51 additions & 283 deletions

File tree

examples/crewai_integration.py

Lines changed: 0 additions & 58 deletions
This file was deleted.

examples/crewai_semantic_search.py

Lines changed: 0 additions & 145 deletions
This file was deleted.

examples/semantic_search_example.py

Lines changed: 30 additions & 20 deletions
@@ -165,16 +165,6 @@ def example_search_tools():
         print(f"  {tool.description}")
     print()

-    # Show OpenAI conversion
-    print("Step 2: Converting to OpenAI function-calling format...")
-    openai_tools = tools.to_openai()
-    print(f"Created {len(openai_tools)} OpenAI function definitions:")
-    for fn in openai_tools:
-        func = fn["function"]
-        param_names = list(func["parameters"].get("properties", {}).keys())
-        print(f"  - {func['name']}({', '.join(param_names[:3])}{'...' if len(param_names) > 3 else ''})")
-    print()
-

 def example_search_tools_with_connector():
     """Semantic search filtered by connector.
@@ -252,13 +242,16 @@ def example_utility_tools_semantic():


 def example_openai_agent_loop():
-    """Complete agent loop: semantic search -> OpenAI -> execute.
+    """Complete agent loop: semantic search -> LLM -> execute.

     This demonstrates the full pattern for building an AI agent that
-    discovers tools via semantic search and executes them via OpenAI.
+    discovers tools via semantic search and executes them via an LLM.
+
+    Supports both OpenAI and Google Gemini (via its OpenAI-compatible API).
+    Set OPENAI_API_KEY for OpenAI, or GOOGLE_API_KEY for Gemini.
     """
     print("=" * 60)
-    print("Example 5: OpenAI agent loop with semantic search")
+    print("Example 5: LLM agent loop with semantic search")
     print("=" * 60)
     print()
@@ -269,12 +262,29 @@ def example_openai_agent_loop():
         print()
         return

-    if not os.getenv("OPENAI_API_KEY"):
-        print("Skipped: Set OPENAI_API_KEY to run this example.")
+    # Support both OpenAI and Gemini (via OpenAI-compatible endpoint)
+    openai_key = os.getenv("OPENAI_API_KEY")
+    google_key = os.getenv("GOOGLE_API_KEY")
+
+    if openai_key:
+        client = OpenAI()
+        model = "gpt-4o-mini"
+        provider = "OpenAI"
+    elif google_key:
+        client = OpenAI(
+            api_key=google_key,
+            base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
+        )
+        model = "gemini-2.5-flash"
+        provider = "Gemini"
+    else:
+        print("Skipped: Set OPENAI_API_KEY or GOOGLE_API_KEY to run this example.")
         print()
         return

-    client = OpenAI()
+    print(f"Using {provider} ({model})")
+    print()
+
     toolset = StackOneToolSet()

     query = "list upcoming events"
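The provider-selection branch added in this hunk boils down to: prefer `OPENAI_API_KEY`, else fall back to `GOOGLE_API_KEY` pointed at Gemini's OpenAI-compatible endpoint. A minimal standalone sketch of that logic (the `select_provider` helper and its return shape are illustrative, not part of the SDK):

```python
import os

# Gemini serves the OpenAI wire format at this base URL
GEMINI_OPENAI_BASE_URL = "https://generativelanguage.googleapis.com/v1beta/openai/"

def select_provider(env=None):
    """Prefer OpenAI; fall back to Gemini via its OpenAI-compatible API."""
    env = os.environ if env is None else env
    if env.get("OPENAI_API_KEY"):
        # OpenAI() reads the key from the environment by default
        return {"client_kwargs": {}, "model": "gpt-4o-mini", "provider": "OpenAI"}
    if env.get("GOOGLE_API_KEY"):
        return {
            "client_kwargs": {
                "api_key": env["GOOGLE_API_KEY"],
                "base_url": GEMINI_OPENAI_BASE_URL,
            },
            "model": "gemini-2.5-flash",
            "provider": "Gemini",
        }
    return None  # caller prints the "Skipped" message and returns
```

`OpenAI(**cfg["client_kwargs"])` then yields a client that works against either provider with no other code changes.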
@@ -285,7 +295,7 @@ def example_openai_agent_loop():
         print(f"  - {tool.name}")
     print()

-    print("Step 2: Sending tools to OpenAI as function definitions...")
+    print(f"Step 2: Sending tools to {provider} as function definitions...")
     openai_tools = tools.to_openai()

     messages = [
@@ -294,14 +304,14 @@ def example_openai_agent_loop():
     ]

     response = client.chat.completions.create(
-        model="gpt-4o-mini",
+        model=model,
         messages=messages,
         tools=openai_tools,
         tool_choice="auto",
     )

     if response.choices[0].message.tool_calls:
-        print("Step 3: OpenAI chose to call these tools:")
+        print(f"Step 3: {provider} chose to call these tools:")
         for tool_call in response.choices[0].message.tool_calls:
             print(f"  - {tool_call.function.name}({tool_call.function.arguments})")

@@ -312,7 +322,7 @@ def example_openai_agent_loop():
             f"  Response keys: {list(result.keys()) if isinstance(result, dict) else type(result)}"
         )
     else:
-        print(f"OpenAI responded with text: {response.choices[0].message.content}")
+        print(f"{provider} responded with text: {response.choices[0].message.content}")

     print()

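The hunks above print the model's tool calls and then execute them. A hedged sketch of that dispatch step, using plain callables in place of real StackOne Tools (`run_tool_calls` and the `tools` mapping are illustrative names, not SDK API):

```python
import json

def run_tool_calls(tool_calls, tools):
    """Execute each tool call: name + JSON-encoded arguments -> result."""
    results = []
    for call in tool_calls:
        name = call["function"]["name"]
        # arguments arrive as a JSON string; an empty string means no args
        args = json.loads(call["function"]["arguments"] or "{}")
        results.append({"name": name, "result": tools[name](**args)})
    return results
```

The real loop would append each result back onto `messages` as a `tool` role message before calling the model again.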
stackone_ai/semantic_search.py

Lines changed: 15 additions & 51 deletions
@@ -12,54 +12,18 @@
 This is the primary method used when integrating with OpenAI, LangChain, or CrewAI.
 The internal flow is:

-::
-
-    User query (e.g. "create an employee")
-
-
-    ┌─────────────────────────────────────────────────────┐
-    │ Step 1: Fetch ALL tools from linked accounts via MCP │
-    │ (uses account_ids to scope the request) │
-    └────────────────────────┬────────────────────────────┘
-
-
-    ┌─────────────────────────────────────────────────────┐
-    │ Step 2: Extract available connectors from the │
-    │ fetched tools (e.g. {bamboohr, hibob}) │
-    └────────────────────────┬────────────────────────────┘
-
-
-    ┌─────────────────────────────────────────────────────┐
-    │ Step 3: Query the semantic search API (/actions/ │
-    │ search) with the natural language query │
-    └────────────────────────┬────────────────────────────┘
-
-
-    ┌─────────────────────────────────────────────────────┐
-    │ Step 4: Filter results — keep only connectors the │
-    │ user has access to + apply min_score cutoff │
-    │ │
-    │ If not enough results, make per-connector │
-    │ fallback queries for missing connectors │
-    └────────────────────────┬────────────────────────────┘
-
-
-    ┌─────────────────────────────────────────────────────┐
-    │ Step 5: Deduplicate by normalized action name │
-    │ (strips API version suffixes, keeps highest │
-    │ scoring version of each action) │
-    └────────────────────────┬────────────────────────────┘
-
-
-    ┌─────────────────────────────────────────────────────┐
-    │ Step 6: Match semantic results back to the fetched │
-    │ tool definitions from Step 1 │
-    │ Return Tools sorted by relevance score │
-    └─────────────────────────────────────────────────────┘
-
-Key point: tools are fetched first, semantic search runs second, and only
-tools that exist in the user's linked accounts AND match the semantic query
-are returned. This prevents suggesting tools the user cannot execute.
+1. Fetch ALL tools from linked accounts via MCP (uses account_ids to scope the request)
+2. Extract available connectors from the fetched tools (e.g. {bamboohr, hibob})
+3. Search EACH connector in parallel via the semantic search API (/actions/search)
+4. Collect results, sort by relevance score, apply top_k if specified
+5. Match semantic results back to the fetched tool definitions
+6. Return Tools sorted by relevance score
+
+Key point: only the user's own connectors are searched — no wasted results
+from connectors the user doesn't have. Tools are fetched first, semantic
+search runs second, and only tools that exist in the user's linked
+accounts AND match the semantic query are returned. This prevents
+suggesting tools the user cannot execute.

 If the semantic API is unavailable, the SDK falls back to a local
 BM25 + TF-IDF hybrid search over the fetched tools (unless
@@ -74,9 +38,9 @@
 definitions. This is useful for previewing results before committing
 to a full fetch.

-When ``account_ids`` are provided, tools are fetched only to determine
-available connectors — results are then filtered to those connectors.
-Without ``account_ids``, results come from the full StackOne catalog.
+When ``account_ids`` are provided, each connector is searched in
+parallel (same as ``search_tools``). Without ``account_ids``, results
+come from the full StackOne catalog.


 3. ``utility_tools(semantic_client=...)`` — Agent-loop search + execute
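The per-connector parallel search the new docstring describes (steps 3 and 4: fan out one search per connector, merge, sort by score, apply `top_k`) can be sketched as follows; `search_one` stands in for the real `/actions/search` call and `search_connectors` is an illustrative name, not the SDK's internal API:

```python
from concurrent.futures import ThreadPoolExecutor

def search_connectors(search_one, connectors, query, top_k=None):
    """Fan out one semantic search per connector, merge, rank, truncate."""
    with ThreadPoolExecutor(max_workers=max(len(connectors), 1)) as pool:
        # One search per connector, run in parallel
        batches = list(pool.map(lambda c: search_one(c, query), connectors))
    # Flatten, sort by relevance score (highest first), apply top_k
    results = [hit for batch in batches for hit in batch]
    results.sort(key=lambda hit: hit["score"], reverse=True)
    return results[:top_k] if top_k else results
```

Because only the user's own connectors are fanned out, every result maps back to a tool the user can actually execute.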

0 commit comments
