Welcome to OllamaThinkTryOuts, a tutorial repository demonstrating the Thinking feature from the Ollama project, powered by the DeepSeek R1 (1.5B) model. This guide provides a professional setup for running Ollama in Google Colab and simulating a "think mode" for step-by-step reasoning via command-line interaction. Perfect for developers, researchers, and AI enthusiasts exploring local LLMs!

## Features
- **Local Model Execution**: Run DeepSeek R1 (1.5B) locally in Colab using Ollama.
- **Think Mode Simulation**: Iterative reasoning with problem breakdown, step-by-step analysis, and a concise final answer.
- **Colab-Optimized**: Lightweight setup tailored for Colab's free tier with GPU/TPU support.
## Prerequisites
- A Google Colab account with GPU/TPU enabled (recommended for performance; a quick check is shown below).
- Internet access for downloading Ollama and the model.
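After switching the runtime type, you can confirm that an accelerator is actually attached before continuing. This is a minimal check, assuming a standard Colab GPU runtime (where PyTorch comes preinstalled):

```python
# Run in a Colab cell: prints driver/GPU details if a GPU is attached.
!nvidia-smi

import torch  # preinstalled on standard Colab runtimes

# True only when a CUDA-capable GPU is visible to the runtime.
print("GPU available:", torch.cuda.is_available())
```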
## Installation
1. **Set up the Colab environment**
   - Create a new Colab notebook.
   - Enable GPU/TPU: navigate to Runtime > Change runtime type, select GPU (e.g., T4) or TPU, and save.
   - Install colab-xterm for terminal access:
     ```python
     !pip install colab-xterm
     %load_ext colabxterm
     ```
   - Open a terminal:
     ```python
     %xterm
     ```
2. **Install Ollama**
   In the colab-xterm terminal, install Ollama:
   ```bash
   curl https://ollama.ai/install.sh | sh
   ```
3. **Start the Ollama server**
   Run the server in the background:
   ```bash
   ollama serve &
   ```
4. **Pull the DeepSeek R1 model**
   Download the DeepSeek R1 model (1.5B for free-tier compatibility):
   ```bash
   ollama pull deepseek-r1:1.5b
   ```
5. **Install Python dependencies**
   Back in a notebook cell, install the required library:
   ```python
   !pip install ollama
   ```
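Before moving on, it can help to verify the whole chain end to end. The cell below is a minimal sanity check, assuming the server is running on its default port and the model pull completed:

```python
import ollama

# Fails fast if `ollama serve` is not running in the background.
print(ollama.list())

# Fails if deepseek-r1:1.5b has not been pulled yet; otherwise
# prints a short reply from the model.
reply = ollama.chat(
    model="deepseek-r1:1.5b",
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(reply["message"]["content"])
```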
## Usage

### Simulating Think Mode

Use the following Python script to simulate Ollama's Thinking feature. It breaks the prompt into logical steps, reasons through them, and returns a final answer:

```python
import ollama

def think_mode(prompt, model="deepseek-r1:1.5b"):
    """Simulates Ollama's Thinking feature with DeepSeek R1."""
    try:
        # Step 1: Problem breakdown
        breakdown = ollama.chat(
            model=model,
            messages=[{"role": "user", "content": f"Break down the problem: {prompt}"}],
        )["message"]["content"]

        # Step 2: Reasoning process
        reasoning = ollama.chat(
            model=model,
            messages=[{"role": "user", "content": f"Provide step-by-step reasoning for: {prompt}"}],
        )["message"]["content"]

        # Step 3: Final answer
        final = ollama.chat(
            model=model,
            messages=[{"role": "user", "content": f"Provide a concise final answer for: {prompt}"}],
        )["message"]["content"]

        return (f"**Step 1: Problem Breakdown**\n{breakdown}\n\n"
                f"**Step 2: Reasoning Process**\n{reasoning}\n\n"
                f"**Step 3: Final Answer**\n{final}")
    except Exception as e:
        return f"Error: {e}. Ensure the Ollama server is running."

prompt = input("Enter your prompt (e.g., Count the 'r's in 'strawberry'): ")
print(think_mode(prompt))
```
Sample output:

```
Enter your prompt (e.g., Count the 'r's in 'strawberry'): Count the 'r's in 'strawberry'

**Step 1: Problem Breakdown**
- Write out the word: s-t-r-a-w-b-e-r-r-y.
- Count each 'r' in the word.
- Sum the total number of 'r's.

**Step 2: Reasoning Process**
- s: No 'r'.
- t: No 'r'.
- r: First 'r'.
- a: No 'r'.
- w: No 'r'.
- b: No 'r'.
- e: No 'r'.
- r: Second 'r'.
- r: Third 'r'.
- y: No 'r'.
Total: 3 'r's.

**Step 3: Final Answer**
The word 'strawberry' contains 3 'r's.
```
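For this particular prompt you can verify the model's arithmetic directly in Python:

```python
# Ground truth for the sample prompt: Python counts 3 'r's.
print("strawberry".count("r"))  # -> 3
```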
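The script above *simulates* thinking with three separate chat calls. Recent Ollama releases also expose the model's reasoning trace natively; the sketch below assumes a server and Python client new enough to accept the `think` flag (older versions will reject the argument), so treat it as an optional variant rather than the repository's method:

```python
import ollama

# Native thinking: the reasoning trace comes back in a separate
# `thinking` field, distinct from the final answer in `content`.
# Requires a recent Ollama server and python client.
response = ollama.chat(
    model="deepseek-r1:1.5b",
    messages=[{"role": "user", "content": "Count the 'r's in 'strawberry'."}],
    think=True,
)
print("Thinking:", response["message"]["thinking"])
print("Answer:", response["message"]["content"])
```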
## Workflow Diagram

The workflow for setting up Ollama with DeepSeek R1 in Colab: set up the Colab environment, install Ollama, start the server, pull the model, then query it from Python.
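In shell terms, the whole pipeline condenses to the commands already covered above:

```bash
curl https://ollama.ai/install.sh | sh   # install Ollama
ollama serve &                           # start the server in the background
ollama pull deepseek-r1:1.5b             # download the model
ollama list                              # confirm the model is available
```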
## Troubleshooting
- **Ollama server issues**: check with `ps aux | grep ollama`; restart with `ollama serve &`.
- **Model not loaded**: verify with `ollama list`; re-pull with `ollama pull deepseek-r1:1.5b`.
- **Resource limits**: the Colab free tier provides ~12 GB of RAM; stick with `deepseek-r1:1.5b` or upgrade to Colab Pro.
- **Connection errors**: ensure Ollama is serving at `http://localhost:11434` (see the probe below); restart the Colab runtime if needed.
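When diagnosing connection errors, a quick programmatic probe (a minimal sketch, assuming the default port) distinguishes "server down" from other failure modes such as a missing model:

```python
import urllib.request
import urllib.error

# The root endpoint of a running Ollama server returns a short
# plain-text status message ("Ollama is running").
try:
    with urllib.request.urlopen("http://localhost:11434", timeout=5) as resp:
        print("Server reachable:", resp.read().decode())
except urllib.error.URLError as exc:
    print("Server not reachable - restart with `ollama serve &`:", exc)
```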
## Contributing

We welcome contributions! Please submit issues or pull requests to https://github.com/Anakintano/OllamaThinkTryOuts, and follow the Ollama contributing guidelines for best practices.

## Acknowledgments
- Ollama for the powerful model-serving framework.
- DeepSeek R1 for efficient LLMs.
## License

This project is licensed under the MIT License. See the LICENSE file for details.

© 2025 Aditya Saxena. All rights reserved.
