infDB is a user-friendly, platform-independent, and open-source data infrastructure that serves as a foundation for energy system analyses. It enables complex evaluations by combining various tools through standardized interfaces, fostering an open and interoperable ecosystem.
infDB (Infrastructure and Energy Database) offers a flexible and easy-to-configure data infrastructure with essential services, minimizing the effort required for data management. By providing standardized interfaces and APIs, infDB streamlines collaboration in energy modeling and analysis, enabling users to focus on insights rather than data handling.
For instance, it can be used for the following applications:
- Energy System Modeling
- Infrastructure Planning
- Scenario Analysis
- Geospatial Analysis
The infDB architecture is composed of three coordinated layers as shown in the figure below:
- PostgreSQL – foundational geospatial and semantic building data (center)
- Services – preconfigured platform services (top)
- Tools – externally connected software and scripts (right)
The PostgreSQL database forms the basis and is extended by services and tools; each layer is described in more detail below.
The PostgreSQL database and all services are Dockerized for modular and flexible deployment.

The foundation is a PostgreSQL database enhanced with TimescaleDB, PostGIS, PGRouting, and the 3D City Database:
- TimescaleDB: Scalable time-series storage (weather, load, generation) with hypertables, compression, and optional continuous aggregates.
- PostGIS: Spatial/geographic objects (buildings, parcels, networks) with geometry queries, projections, and spatial indexing.
- PGRouting: Network routing algorithms (shortest path, reachability) on road and infrastructure graphs for mobility and grid analysis.
- 3D City Database: Virtual 3D city model storage (buildings, terrain, infrastructure) with CityGML support, spatial indexing, and semantic queries for detailed urban analysis.
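As a sketch of how two of these extensions interact, the following SQL creates a TimescaleDB hypertable for weather readings and runs a PostGIS proximity query against it. All table and column names here are illustrative assumptions, not part of the infDB schema:

```sql
-- Illustrative only: table and column names are assumptions, not the infDB schema.
CREATE TABLE weather_obs (
    ts      TIMESTAMPTZ NOT NULL,
    station TEXT        NOT NULL,
    temp_c  DOUBLE PRECISION,
    geom    geometry(Point, 25832)  -- PostGIS geometry in EPSG:25832
);

-- Turn the plain table into a TimescaleDB hypertable partitioned by time.
SELECT create_hypertable('weather_obs', 'ts');

-- Find observations within 1 km of a point, newest first (PostGIS + TimescaleDB).
SELECT ts, station, temp_c
FROM weather_obs
WHERE ST_DWithin(geom, ST_SetSRID(ST_MakePoint(691000, 5335000), 25832), 1000)
ORDER BY ts DESC
LIMIT 10;
```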
Integrated, preconfigured services extending the 3D City Database:
- pgAdmin: Web UI for inspecting schemas, running SQL, managing roles; auto-configured credentials.
- FastAPI: REST endpoints (/city, /weather) with OpenAPI docs and validated access to 3D, geospatial, and time-series data.
- Jupyter: Notebook environment (dependencies and env vars preloaded) for exploratory queries, ETL prototypes, reproducible analysis.
- QWC2: Web mapping client for 2D/3D visualization, layer styling, spatial inspection, quick dataset validation.
- PostgREST: Auto-generated REST API over PostgreSQL schemas (tables, views, RPC) using DB roles for auth; rapid, lightweight data access without extra backend code.
- pygeoapi: OGC API (Features/Coverages/Processes) server exposing PostGIS data via standards-based JSON & HTML endpoints for interoperable geospatial discovery and querying.
These services provide core functionalities and support a seamless path from ingestion to analysis and visualization.
Tools are external software, scripts, or workflows that connect to infDB through its standardized APIs and database schemas, enabling specialized analysis and processing capabilities.
The following tools are currently integrated with infDB:
- infDB-loader: Containerized solution for automated ingestion of public open data for Germany
- infDB-basedata: Containerized pipeline for data transformation, validation, and enrichment
- pylovo: Python tool for generating synthetic low-voltage distribution grids
- EnTiSe: Python tool for energy time series generation and management
Additional community-developed or domain-specific tools can be easily integrated through infDB's standardized APIs and database schemas.
To get started, follow the steps below. For more detailed information, read the documentation at https://infdb.readthedocs.io/.
If you are happy with the preconfiguration and default passwords, then just follow these four steps (see detailed instructions in the corresponding sections below):
Important: We strongly recommend executing all commands on macOS or Linux.
Windows users: Please install Ubuntu as Windows Subsystem for Linux (WSL) from the Microsoft Store. After installation, launch the Linux terminal by searching for "Ubuntu" in your applications.
The infDB provides a modular folder structure that allows managing multiple database instances independently. Each instance represents a separate deployment with its own data, configuration, and services—ideal for handling different regions, projects, or environments.
infdb/
├── data/
├── infdb-demo/
├── sonthofen/
├── ...
└── muenchen/
The recommended structure places all instance data in a shared data/ folder while keeping each instance's configuration and tools in separate directories (e.g., infdb-demo/, sonthofen/, muenchen/). This approach simplifies backups, migrations, and multi-instance management.
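As a sketch, the layout above can be created like this (the instance names are examples, not requirements):

```shell
# Create the main infdb directory with a shared data/ folder
# and one directory per instance (instance names are examples).
mkdir -p infdb/data
mkdir -p infdb/infdb-demo infdb/sonthofen infdb/muenchen
ls infdb
```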
First of all, create the main infdb directory and navigate into it:

```bash
# linux
mkdir infdb
cd infdb
```

Then, you can access the repository either with SSH or HTTPS as you like:
SSH vs HTTPS:

- SSH (Secure Shell): Uses cryptographic key pairs for authentication. Once set up, you won't need to enter credentials for each operation. Recommended for frequent Git operations.

  ```bash
  # Replace "infdb-demo" by the name of your instance
  git clone git@git-ce.rwth-aachen.de:need/NEED-infdb.git infdb-demo
  ```

- HTTPS: Uses username and password (or personal access token) for authentication. Simpler to set up initially but may require credentials for each operation unless you configure credential caching.

  ```bash
  # Replace "infdb-demo" by the name of your instance
  git clone https://git-ce.rwth-aachen.de/need/NEED-infdb.git infdb-demo
  ```
Both methods are secure and work identically for cloning, pushing, and pulling. Your choice depends on your workflow preferences and environment constraints.
Navigate to the instance directory:

```bash
cd infdb-demo
```

If you are happy with the default configurations and passwords and don't want to execute each step separately, the startup script simplifies the startup process:

```bash
bash infdb-startup.sh
```

To start the tools of the Linear Heat Density use case, use the following script:

```bash
bash tools/run_linear-heat-density.sh
```

Before starting infDB, you need to configure it:
- Copy the configuration template:

  ```bash
  cp configs/config-infdb.yml.template configs/config-infdb.yml
  ```

- Edit the configuration file at `configs/config-infdb.yml` to customize your infDB instance settings (database credentials, ports, paths, etc.).

Note: If you're using the default configuration, you can skip editing and proceed directly to generating the configuration files.
```yaml
base:
  name: infdb-demo
  path:
    base: "../data/{base/name}/"
  network_name: "infdb-{base/name}_network"
services:
  postgres:
    status: active
    user: infdb_user
    password: infdb
    db: infdb
    exposed_port: 54328
    epsg: 25832
    path:
      base: "{base/path/base}/postgres/"
    compose_file: "services/postgres/compose.yml"
  ...
```
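The `{...}` placeholders in the template refer to other values in the configuration by their path in the YAML tree. As an illustration of the assumed substitution semantics (inferred from the template above, not from the setup code itself), the placeholders would resolve like this:

```shell
# Assumed placeholder resolution for the template values above:
name="infdb-demo"
base_path="../data/${name}/"          # from "../data/{base/name}/"
network_name="infdb-${name}_network"  # from "infdb-{base/name}_network"
postgres_path="${base_path}postgres/" # from "{base/path/base}/postgres/"
echo "$base_path $network_name $postgres_path"
```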
After completing the configuration, generate the necessary configuration files by running:

```bash
docker compose -f services/infdb-setup/compose.yml up
```

After the configuration files are generated, you can start all infDB services with:

```bash
docker compose -f compose.yml up -d
```

Hint: If compose.yml is not found, you either forgot to run the command above or something went wrong. Please check the logs of the setup service.

Hint: infDB keeps running until you stop it manually as described below, even when the machine is restarted.

To stop all running infDB services and remove them, execute:

```bash
docker compose -f compose.yml down -v
```

For detailed information about each tool, their usage, configuration options, and examples, please refer to the tools/Readme.md file.
```bash
# on linux and macos
PGPASSWORD='citydb_password' psql -h localhost -p 5432 -U citydb_user -d citydb
```

Alternatively, add the following entry to your `.pg_service.conf` so that QGIS can connect to infDB via a service:
```ini
[infdb]
host=localhost
port=5432
dbname=citydb
user=citydb_user
password=citydb_password
sslmode=disable
```
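As a sketch (assuming the service file lives at `~/.pg_service.conf`), any libpq-based client such as psql, QGIS, or psycopg can then pick up the `[infdb]` entry via the standard libpq environment variables:

```shell
# Point libpq at the service file and select the [infdb] service (assumed path).
export PGSERVICEFILE="$HOME/.pg_service.conf"
export PGSERVICE=infdb

# Any libpq client now connects with the stored settings, e.g.:
# psql -c 'SELECT version();'
echo "service=$PGSERVICE file=$PGSERVICEFILE"
```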
Install uv:

```bash
# on linux and macos by installation script
curl -LsSf https://astral.sh/uv/install.sh | sh
# or by pip
pip install uv
```

Synchronize the Python dependencies:

```bash
# linux and macos
uv sync
```

Activate the virtual environment:

```bash
# linux and macos
source .venv/bin/activate
# windows
.venv\Scripts\activate
```

To reset your local working copy to the latest state of the develop branch:

```bash
# linux and macos
git fetch origin
git reset --hard origin/develop
git clean -fdx
```

- src/: Main application package
  - api/: API endpoints (cityRouter.py, weatherRouter.py)
  - core/: Core application code (dbConfig.py, etc.)
  - db/: Database models and repositories
    - models/: SQLModel classes for database entities
    - repositories/: Data access layer for database operations
  - exceptions/: Custom exception classes
  - externals/: External API integrations (e.g., weather API)
  - schemas/: Data schemas and validation
  - services/: Business logic services
  - main.py: Application entry point
- docs/: Documentation
  - architecture/: System architecture documentation
  - contributing/: Contribution guidelines and code of conduct
  - development/: Developer guides and workflows
  - guidelines/: Project guidelines and standards
  - operations/: Operational guides and CI/CD documentation
  - source/: Source files for documentation
    - img/: Images used in documentation
- dockers/: Docker configuration files
- tests/: Test suite
  - unit/: Unit tests for individual components
  - integration/: Tests for component interactions
  - e2e/: End-to-end tests for the application
  - conftest.py: Pytest configuration and fixtures
- Set up the environment following the installation instructions.
- Open an issue to discuss new features, bugs, or changes.
- Create a new branch for each feature or bug fix based on an issue.
- Implement the changes following the coding guidelines.
- Write tests for new functionality or bug fixes.
- Run tests to ensure the code works as expected.
- Create a merge request to integrate your changes.
- Address review comments and update your code as needed.
- Merge the changes after approval.
The CI/CD workflow is set up using GitLab CI/CD. The workflow runs tests, checks code style, and builds the documentation on every push to the repository. You can view workflow results directly in the repository's CI/CD section. For detailed information about the CI/CD workflow, see the CI/CD Guide.
The following resources are available to help developers understand and contribute to the project:
The Coding Guidelines document outlines the coding standards and best practices for the project. Start here when trying to understand the project as a developer.
The Architecture Documentation provides an overview of the system architecture, including the database schema, components, and integration points.
- Development Setup Guide: Comprehensive instructions for setting up a development environment
- Contribution Workflow: Step-by-step process for contributing to the project
- API Development Guide: Information for developers who want to use or extend the API
- Database Schema Documentation: Detailed information about the database schema
- Contributing Guide: Guidelines for contributing to the project
- Code of Conduct: Community standards and expectations
- Release Procedure: Process for creating new releases
- CI/CD Guide: Detailed information about the CI/CD workflow
Everyone is invited to contribute to this repository in good faith. Please follow the workflow described in CONTRIBUTING.md.
This repository follows consistent coding styles. Refer to CONTRIBUTING.md and the Coding Guidelines for detailed standards.
Pre-commit hooks are configured to check code quality before commits, helping enforce standards.
The changelog is maintained in the CHANGELOG.md file. It lists all changes made to the repository. Follow instructions there to document any updates.
The code of this repository is licensed under the MIT License (MIT).
See LICENSE for rights and obligations.
See the Cite this repository function or CITATION.cff for citation of this repository.
Copyright: TU Munich - ENS | MIT
Patrick Buchenberg
Chair of Renewable and Sustainable Energy Systems, Technical University of Munich (TUM). Email: patrick.buchenberg@tum.de https://www.epe.ed.tum.de/ens/staff/ensteam/patrick-buchenberg/
