Intelligent Temperature Data Analysis System Based on Large Language Models
Features • Quick Start • Documentation • Contributing
Edge-LLM is an intelligent temperature data analysis system for edge IoT devices, built on large language models. It integrates local LLMs and the OpenAI API to provide intelligent temperature data analysis and visualization.
- Dual Model Support: local LLMs (llama-cpp-python) and the OpenAI API, with flexible switching
- Multiple Data Sources: JSON files and MySQL database, with automatic reconnection
- Intelligent Analysis: AI-driven temperature data analysis with automatic professional report generation
- Visualization Interface: modern web interface built on Streamlit
- Anomaly Detection: intelligent anomaly detection based on statistical methods
- Trend Analysis: multi-dimensional trend analysis and prediction
- ✅ JSON file data loading
- ✅ MySQL database support (auto-reconnect)
- ✅ Device data query and statistics
- ✅ Time range filtering
- ✅ Data caching mechanism
- ✅ Anomalous temperature detection (Z-score method)
- ✅ Trend analysis (moving average, volatility analysis)
- ✅ Statistical analysis (mean, min, max, range, etc.)
- ✅ Multi-device comprehensive analysis
- ✅ Local LLM integration (llama-cpp-python)
- ✅ OpenAI API support
- ✅ Streaming output support
- ✅ Multiple analysis types (comprehensive, anomaly, trend, recommendation)
- ✅ Intelligent report generation
- ✅ Device overview dashboard
- ✅ Device detail analysis
- ✅ Interactive data visualization (Plotly)
- ✅ Real-time data refresh
- ✅ Responsive design
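The Z-score anomaly detection and moving-average trend analysis listed above are standard statistical techniques; the following is a minimal illustrative sketch of both, not the project's actual `data_processor.py` implementation:

```python
import statistics

def detect_anomalies(readings, threshold=3.0):
    """Return (index, value) pairs whose Z-score exceeds the threshold."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [
        (i, value)
        for i, value in enumerate(readings)
        if stdev > 0 and abs(value - mean) / stdev > threshold
    ]

def moving_average(readings, window_size=5):
    """Smooth readings with a simple moving average."""
    return [
        sum(readings[i:i + window_size]) / window_size
        for i in range(len(readings) - window_size + 1)
    ]

readings = [22.1, 22.3, 22.0, 22.4, 22.2, 35.0, 22.3, 22.1]
print(detect_anomalies(readings, threshold=2.0))  # the 35.0 spike stands out
print(moving_average(readings, window_size=3))
```

A lower threshold flags more readings as anomalous; the project exposes this as the `threshold` parameter of `detect_anomalies` (see the API section below).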
```
edge-llm/
├── config/                 # Configuration files
│   └── config.yaml         # Main configuration file
├── data/                   # Data files
│   └── temperature_data.json
├── docs/                   # Documentation
│   ├── N_CTX_GUIDE.md      # n_ctx configuration guide
│   ├── OPENAI_SETUP.md     # OpenAI configuration guide
│   ├── REALTIME_UPDATE.md  # Real-time update solution
│   └── README_DATABASE.md  # Database integration guide
├── models/                 # Model files
│   └── qwen-0.6b.gguf      # Local LLM (download separately)
├── scripts/                # Scripts
│   ├── init_database.py    # Database initialization
│   ├── data_writer.py      # Data writing service
│   └── start_data_writer.* # Startup scripts
├── src/                    # Source code
│   ├── analyzer.py         # Comprehensive analysis service
│   ├── data_loader.py      # JSON data loader
│   ├── db_connection.py    # Database connection management
│   ├── db_data_loader.py   # Database data loader
│   ├── data_processor.py   # Data processing module
│   └── llm_service.py      # LLM service (local/OpenAI)
├── web/                    # Web application
│   └── app.py              # Streamlit application
├── example_usage.py        # Usage examples
├── run_web.py              # Web startup script
├── requirements.txt        # Python dependencies
└── README.md               # Project documentation
```
- Python 3.8+
- 4GB+ RAM (8GB+ recommended)
- MySQL 5.7+ (optional, for database mode)
```
git clone https://github.com/jonehoo/Edge-LLM.git
cd Edge-LLM
pip install -r requirements.txt
```

If you want to use a local LLM, install llama-cpp-python:

```
# Standard installation
pip install llama-cpp-python

# Windows pre-compiled wheels (recommended)
pip install llama-cpp-python --extra-index-url https://abetlen.github.io/llama-cpp-python/whl/cpu
```

If llama-cpp-python is not installed, the system automatically falls back to mock mode.

If using local models, you need to download a GGUF-format model file:

```
# Place model files in the models/ directory
# Example: models/qwen-0.6b.gguf
```

Note: model files are large and must be downloaded separately. If the model does not exist, the system automatically falls back to mock mode.
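The mock-mode fallback described above is a common pattern; the sketch below shows what such a fallback can look like. The `load_llm` helper and `MockLLM` class here are illustrative, not the project's actual code:

```python
import os

def load_llm(model_path):
    """Try to load a local GGUF model; fall back to a mock when it is unavailable."""
    try:
        from llama_cpp import Llama  # may not be installed
        if not os.path.exists(model_path):
            raise FileNotFoundError(model_path)
        return Llama(model_path=model_path)
    except (ImportError, FileNotFoundError):
        class MockLLM:
            """Returns canned text so the rest of the system keeps working."""
            def __call__(self, prompt, **kwargs):
                return {"choices": [{"text": "[mock] model unavailable"}]}
        return MockLLM()

llm = load_llm("models/qwen-0.6b.gguf")
```

Either failure mode (library missing or model file missing) lands in the same fallback branch, which is why basic functions remain available without any LLM installed.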
If using database mode:

```
# 1. Create the database
mysql -u root -p
CREATE DATABASE `edge-llm` CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;

# 2. Initialize the table structure
python scripts/init_database.py

# 3. Import historical data (optional)
python scripts/data_writer.py --init
```

Basic usage:

```python
from src.analyzer import TemperatureAnalyzer

# Initialize the analyzer
analyzer = TemperatureAnalyzer()

# Get the device list
devices = analyzer.get_device_list()

# Analyze a device
analysis = analyzer.analyze_device("sensor_001")
print(analysis['llm_analysis'])
```

Or run the bundled usage examples:

```
python example_usage.py
```

Start the web interface:

```
# Method 1: use the startup script
python run_web.py

# Method 2: use Streamlit directly
streamlit run web/app.py
```

After starting, visit http://localhost:8501.
Before first use, create the configuration file:

```
# Copy the example configuration
cp config/config.example.yaml config/config.yaml

# Edit config.yaml with your actual settings
# Note: do not commit a config.yaml containing sensitive information to Git
```

Edit `config/config.yaml` to configure the data source:

```yaml
data:
  source: "json"  # or "database"
  file_path: "data/temperature_data.json"

database:
  host: "localhost"
  port: 3306
  user: "edge-llm"
  password: "your_password_here"
  database: "edge-llm"
  charset: "utf8mb4"
  connect_timeout: 10
  read_timeout: 30
  write_timeout: 30
  max_retries: 3
```

Local model configuration:

```yaml
model:
  type: "local"
  path: "models/qwen-0.6b.gguf"
  n_ctx: 4096   # Context window size
  n_threads: 4  # Number of threads
```

OpenAI configuration:

```yaml
model:
  type: "openai"
  openai:
    api_key: "sk-your-api-key-here"
    model: "gpt-3.5-turbo"
    base_url: "https://api.openai.com/v1"  # Optional, for proxies
```

For detailed configuration, please refer to the guides in `docs/`.
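As an illustration of consuming such a configuration, here is a minimal loader sketch. It assumes PyYAML and the key layout shown above; `load_config` is a hypothetical helper, not part of the project:

```python
import os
import yaml  # PyYAML

def load_config(path="config/config.yaml"):
    """Load the YAML config, letting OPENAI_API_KEY override the key in the file."""
    with open(path, encoding="utf-8") as f:
        config = yaml.safe_load(f)
    # Prefer an environment variable over a key stored on disk
    env_key = os.getenv("OPENAI_API_KEY")
    if env_key:
        config.setdefault("model", {}).setdefault("openai", {})["api_key"] = env_key
    return config
```

Preferring the environment variable keeps real API keys out of `config.yaml`, in line with the security notes at the end of this document.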
```python
from src.analyzer import TemperatureAnalyzer

# Initialize (using the database)
analyzer = TemperatureAnalyzer(use_database=True)

# Get the device list
devices = analyzer.get_device_list()
for device in devices:
    print(f"{device['device_name']}: {device['readings_count']} readings")

# Analyze a single device
analysis = analyzer.analyze_device(
    device_id="sensor_001",
    analysis_type="comprehensive"  # comprehensive, anomaly, trend, recommendation
)
print(analysis['llm_analysis'])

# Stream analysis
for chunk in analyzer.analyze_device_stream("sensor_001"):
    print(chunk, end='', flush=True)

# Get chart data
chart_data = analyzer.get_temperature_chart_data("sensor_001")
```

```python
from src.data_loader import TemperatureDataLoader
from src.data_processor import TemperatureDataProcessor

# Initialize
loader = TemperatureDataLoader("data/temperature_data.json")
processor = TemperatureDataProcessor(loader)

# Load data
data = loader.load_data()

# Get statistics
stats = loader.get_statistics("sensor_001")

# Detect anomalies
anomalies = processor.detect_anomalies("sensor_001", threshold=3.0)

# Trend analysis
trend = processor.get_trend_analysis("sensor_001", window_size=5)

# Convert to a DataFrame
df = processor.to_dataframe("sensor_001")
```

Comprehensive analysis service that integrates data loading, processing, and LLM analysis.
```python
analyzer = TemperatureAnalyzer(
    use_database=False,   # Whether to use the database
    model_type="local",   # Model type: "local" or "openai"
    model_path="models/qwen-0.6b.gguf",
    openai_api_key=None,  # OpenAI API key
    openai_model="gpt-3.5-turbo",
    n_ctx=4096,           # Context window size
    n_threads=4           # Number of threads
)
```

Main methods:

- `get_device_list()` - Get the device list
- `analyze_device(device_id, analysis_type)` - Analyze a device
- `analyze_device_stream(device_id, analysis_type)` - Stream analysis
- `get_device_overview(device_id)` - Get a device overview
- `get_temperature_chart_data(device_id)` - Get chart data
- `get_dataframe(device_id)` - Get a DataFrame
LLM service supporting local models and OpenAI.

```python
from src.llm_service import LLMService

llm = LLMService(
    model_type="local",  # or "openai"
    model_path="models/qwen-0.6b.gguf",
    n_ctx=4096,
    openai_api_key="sk-...",
    openai_model="gpt-3.5-turbo"
)

# Generate text
text = llm.generate("Analyze temperature data...")

# Stream generation
for chunk in llm.generate_stream("Analyze temperature data..."):
    print(chunk, end='')

# Analyze temperature data
analysis = llm.analyze_temperature_data(data_summary, "comprehensive")
```

Database connection management with auto-reconnect support.
```python
from src.db_connection import DatabaseConnection

db = DatabaseConnection(
    host="localhost",
    port=3306,
    user="edge-llm",
    password="edge-llm",
    database="edge-llm",
    max_retries=3
)

# Execute a query
results = db.execute_query("SELECT * FROM devices")

# Execute an update
db.execute_update("INSERT INTO readings ...")
```

Database setup:

```
# 1. Create the database
mysql -u root -p -e "CREATE DATABASE \`edge-llm\` CHARACTER SET utf8mb4;"

# 2. Initialize the table structure
python scripts/init_database.py

# 3. Import historical data
python scripts/data_writer.py --init
```

Start the data writing service:

```
# Windows
scripts\start_data_writer.bat

# Linux/Mac
python scripts/data_writer.py --interval 60
```

For detailed instructions, please refer to the Database Integration Guide.
1. Get an API key: visit the OpenAI Platform
2. Edit the configuration file:

```yaml
model:
  type: "openai"
  openai:
    api_key: "sk-your-api-key"
    model: "gpt-3.5-turbo"  # or "gpt-4", "gpt-4-turbo-preview", etc.
```

3. Restart the application

For detailed instructions, please refer to the OpenAI Configuration Guide.
Displays the device list and basic information, so you can quickly review the status and statistics of all devices.
Features:
- Device list and basic information
- Quick statistics
- Device status overview
Detailed device analysis page with statistics, trend analysis, anomaly detection, and AI intelligent analysis reports.
Features:
- Detailed statistics
- Latest readings display
- Trend analysis charts
- Anomaly detection results
- AI Intelligent Analysis Report (with streaming support)
Multiple analysis types for a single device or all devices, with AI-generated professional analysis reports.
Features:
- Comprehensive analysis
- Anomaly analysis
- Trend analysis
- Recommendations
- Support for single device or all devices
Interactive data visualization using Plotly for rich chart displays.
Features:
- Temperature trend charts (Plotly)
- Temperature and humidity dual-axis charts
- Raw data tables
- Interactive charts
Issue: shows "Using mock mode"

Solution:

- Check that the model file exists: `models/qwen-0.6b.gguf`
- Confirm `llama-cpp-python` is installed: `pip install llama-cpp-python`
- Check that the model path in the configuration is correct
- The system automatically falls back to mock mode; basic functions remain available
Issue: `MySQL server has gone away`

Solution:

- Check that the database service is running
- Check that the connection configuration is correct
- The system implements an auto-reconnect mechanism and will retry automatically
- Check the log files for detailed error information
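Such an auto-reconnect mechanism usually amounts to retrying with exponential backoff. The sketch below is a generic illustration of the idea, not the project's actual `db_connection.py`:

```python
import time

def execute_with_retry(operation, max_retries=3, base_delay=0.5):
    """Run a database operation, retrying with exponential backoff on failure."""
    last_error = None
    for attempt in range(max_retries):
        try:
            return operation()
        except ConnectionError as exc:  # e.g. "MySQL server has gone away"
            last_error = exc
            time.sleep(base_delay * (2 ** attempt))  # back off before reconnecting
    raise last_error

# Demo: an operation that fails twice, then succeeds
calls = []
def flaky_query():
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError("MySQL server has gone away")
    return [("sensor_001", 22.5)]

print(execute_with_retry(flaky_query, base_delay=0.01))
```

A real implementation would reconnect before retrying rather than just sleeping, but the retry loop and bounded `max_retries` are the essence of the pattern.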
Issue: OpenAI connection failed

Solution:

- Check that the API key is correct
- Check the network connection
- Check whether the API quota is exhausted
- If using a proxy, check the `base_url` configuration
Issue: port occupied or startup failed

Solution:

```
# Use a different port
streamlit run web/app.py --server.port 8502

# Check port usage
netstat -ano | findstr :8501   # Windows
lsof -i :8501                  # Linux/Mac
```

Contributions are welcome! Please follow these steps:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
- Code style: Follow PEP 8
- Commit messages: Use clear commit messages
- Testing: Ensure new features have corresponding tests
- Documentation: Update relevant documentation
This project is licensed under the MIT License.
- llama-cpp-python - Local LLM support
- Streamlit - Web application framework
- Plotly - Data visualization
- PyMySQL - MySQL database connection
Important: before committing code, please ensure:

- ✅ Do not commit `config/config.yaml` with real API keys
- ✅ Use `config/config.example.yaml` as a template
- ✅ Add sensitive information to `.gitignore`
- ✅ Use environment variables for sensitive configuration (recommended)

```
# Recommended: use environment variables
export OPENAI_API_KEY="your-api-key"

# Or put them in a .env file (already added to .gitignore)
```

- Project URL: GitHub
- Issue Reports: Issues
- Feature Suggestions: Discussions
Scan the QR code to follow our WeChat official account for the latest updates, tutorials, and community discussions:
If this project helps you, please give it a Star β




