
# Ollama Think TryOuts with DeepSeek-R1 🚀

Welcome to **OllamaThinkTryOuts**, a tutorial repository demonstrating the new Thinking feature from the Ollama project, powered by the DeepSeek R1 (1.5B) model 🐋. This guide walks through setting up Ollama in Google Colab and simulating a "think mode" for step-by-step reasoning via command-line interaction. Perfect for developers, researchers, and AI enthusiasts exploring local LLMs! 🌟

## Features ✨

- **Local Model Execution**: Run DeepSeek R1 (1.5B) locally in Colab using Ollama.
- **Think Mode Simulation**: Implements iterative reasoning with problem breakdown, step-by-step analysis, and a concise final answer.
- **Colab-Optimized**: Lightweight setup tailored for Colab's free tier with GPU/TPU support.

## Prerequisites 📋

- Google Colab account with GPU/TPU enabled (recommended for performance).
- Internet access for downloading Ollama and models.

## Installation 🛠️

1. **Set Up Colab Environment**

   - Create a new Colab notebook.
   - Enable GPU/TPU: navigate to **Runtime > Change runtime type**, select GPU (e.g., T4) or TPU, and save.
   - Install colab-xterm for terminal access:

     ```
     !pip install colab-xterm
     %load_ext colabxterm
     ```

   - Open a terminal:

     ```
     %xterm
     ```

2. **Install Ollama**

   In the colab-xterm terminal, install Ollama:

   ```sh
   curl https://ollama.ai/install.sh | sh
   ```

3. **Start Ollama Server**

   Run the server in the background:

   ```sh
   ollama serve &
   ```

4. **Pull DeepSeek R1 Model**

   Download the DeepSeek R1 model (1.5B for free-tier compatibility):

   ```sh
   ollama pull deepseek-r1:1.5b
   ```

5. **Install Python Dependencies**

   Install the required library:

   ```
   !pip install ollama
   ```

## Usage 🚀

### Simulating Think Mode

Use the following Python script to simulate Ollama's Thinking feature, which breaks a prompt down into logical steps, reasons through them, and provides a final answer:

```python
import ollama


def think_mode(prompt, model="deepseek-r1:1.5b"):
    """Simulates Ollama's Thinking feature with DeepSeek R1."""
    try:
        # Step 1: Problem Breakdown
        breakdown = ollama.chat(
            model=model,
            messages=[{"role": "user", "content": f"Break down the problem: {prompt}"}]
        )["message"]["content"]

        # Step 2: Reasoning Process
        reasoning = ollama.chat(
            model=model,
            messages=[{"role": "user", "content": f"Provide step-by-step reasoning for: {prompt}"}]
        )["message"]["content"]

        # Step 3: Final Answer
        final = ollama.chat(
            model=model,
            messages=[{"role": "user", "content": f"Provide a concise final answer for: {prompt}"}]
        )["message"]["content"]

        return (f"**Step 1: Problem Breakdown** 📝\n{breakdown}\n\n"
                f"**Step 2: Reasoning Process** 🧠\n{reasoning}\n\n"
                f"**Step 3: Final Answer** ✅\n{final}")
    except Exception as e:
        return f"Error: {e}. Ensure the Ollama server is running."


# Example usage
prompt = input("Enter your prompt (e.g., Count the 'r's in 'strawberry'): ")
print(think_mode(prompt))
```
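The three-section formatting inside `think_mode` can be factored into a small helper, which also makes that part of the script easy to unit-test without a running Ollama server. This is an illustrative sketch; `format_think_output` is not part of the repository's script:

```python
def format_think_output(breakdown: str, reasoning: str, final: str) -> str:
    """Assemble the three reasoning stages into the report that think_mode returns."""
    sections = [
        ("Step 1: Problem Breakdown 📝", breakdown),
        ("Step 2: Reasoning Process 🧠", reasoning),
        ("Step 3: Final Answer ✅", final),
    ]
    # Bold each stage title and separate stages with a blank line.
    return "\n\n".join(f"**{title}**\n{body}" for title, body in sections)
```

With this helper, the `return` statement in `think_mode` reduces to `return format_think_output(breakdown, reasoning, final)`.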

**Sample Output:**

```
Enter your prompt (e.g., Count the 'r's in 'strawberry'): Count the 'r's in 'strawberry'

**Step 1: Problem Breakdown** 📝
1. Write out the word: s-t-r-a-w-b-e-r-r-y.
2. Count each 'r' in the word.
3. Sum the total number of 'r's.

**Step 2: Reasoning Process** 🧠
- s: No 'r'.
- t: No 'r'.
- r: First 'r'.
- a: No 'r'.
- w: No 'r'.
- b: No 'r'.
- e: No 'r'.
- r: Second 'r'.
- r: Third 'r'.
- y: No 'r'.
Total: 3 'r's.

**Step 3: Final Answer** ✅
The word 'strawberry' contains 3 'r's.
```
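For letter-counting prompts like this one, the model's final answer is easy to sanity-check with plain Python:

```python
word = "strawberry"
# str.count tallies non-overlapping occurrences of the substring.
print(word.count("r"))  # prints 3, matching the model's final answer
```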

## Workflow Diagram 📈

The following diagram outlines the workflow for setting up Ollama with DeepSeek R1 in Colab:

*(workflow diagram image)*

## Troubleshooting 🐞

- **Ollama Server Issues**: Check with `ps aux | grep ollama`; restart with `ollama serve &`.
- **Model Not Loaded**: Verify with `ollama list`; re-pull with `ollama pull deepseek-r1:1.5b`.
- **Resource Limits**: The Colab free tier has ~12 GB RAM. Use `deepseek-r1:1.5b` or upgrade to Colab Pro.
- **Connection Errors**: Ensure Ollama is listening at `http://localhost:11434`. Restart the Colab runtime if needed.
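The connection check can also be scripted before calling the model. The sketch below probes the default Ollama endpoint using only the standard library; the function name and defaults are illustrative, not part of this repository:

```python
import urllib.error
import urllib.request


def ollama_reachable(base_url: str = "http://localhost:11434", timeout: float = 3.0) -> bool:
    """Return True if a server answers at base_url (Ollama's root path returns HTTP 200)."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, or timeout: treat as not reachable.
        return False
```

Calling `ollama_reachable()` before `think_mode()` gives a clearer failure message than a raw exception from `ollama.chat`.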

## Contributing 🤝

We welcome contributions! Please submit issues or pull requests to https://github.com/Anakintano/OllamaThinkTryOuts. Follow the Ollama contributing guidelines for best practices.

## Acknowledgments 🙌

- Ollama for the powerful model-serving framework.
- DeepSeek R1 for efficient LLMs.

## License 📜

This project is licensed under the MIT License. See the LICENSE file for details.

## Copyright ©

© 2025 Aditya Saxena. All rights reserved.
