diff --git a/docs/structsense_getting_started.md b/docs/structsense_getting_started.md
index ee5d173..fcad931 100644
--- a/docs/structsense_getting_started.md
+++ b/docs/structsense_getting_started.md
@@ -47,6 +47,13 @@ export EXTERNAL_PDF_EXTRACTION_SERVICE=True
 ```
 > The external API is assumed public (no auth) for now.
+### LLM
+#### LLM for agents
+We use OpenRouter to give the agents access to proprietary models such as GPT. Ollama can also serve as a substitute for OpenRouter when working with open-source models such as Llama.
+
+#### Embedding configuration
+By default, Ollama handles embedding generation. Other embedding models can also be used via OpenRouter.
+
 ## Running
@@ -82,7 +89,6 @@ Disabled by default. Enable with:
 The `docker/` directory contains **Docker Compose** files for running the following components:
 
 - **Grobid** – for PDF extraction
-- **Ollama** – In our setup, Ollama is used for embedding generation. However, it can also serve as a substitute for OpenRouter when using open-source models such as Llama for agents. OpenRouter, on the other hand, provides access to various proprietary models like GPT.
 - **Weaviate** – In our StructSense architecture, Weaviate acts as the vector database responsible for storing the ontology, effectively serving as the ontology database.
 
 These Compose files allow you to quickly stand up a complete local **StructSense** stack.
diff --git a/docs/structsense_overview.md b/docs/structsense_overview.md
index 809c6ac..29b1ec5 100644
--- a/docs/structsense_overview.md
+++ b/docs/structsense_overview.md
@@ -14,4 +14,9 @@ It orchestrates specialized agents to collaborate, align to schemas/ontologies,
 
 ## Architecture
 
 ![StructSense Architecture](images/structsense_arch.png)
+## Quickstart
+
+```bash
+pip install structsense
+```