From b3df754b7c016e88ea92aed4ab0c421d6cd6b473 Mon Sep 17 00:00:00 2001 From: Lasim Date: Sun, 6 Jul 2025 19:57:47 +0200 Subject: [PATCH 1/3] Enhance email logging, add OAuth implementation guide, improve plugin logging, clarify roles and permissions, and provide detailed database setup instructions for self-hosting DeployStack. --- docs/deploystack/auth.mdx | 232 ++++++++ docs/deploystack/development/backend/api.mdx | 121 ++++ .../development/backend/cloud-credentials.mdx | 509 ++++++++++++++++ .../development/backend/database-sqlite.mdx | 448 +++++++++++++++ .../development/backend/database-turso.mdx | 421 ++++++++++++++ .../development/backend/database.mdx | 306 +++++++--- .../backend/environment-variables.mdx | 18 +- .../development/backend/global-settings.mdx | 10 +- .../development/backend/logging.mdx | 543 ++++++++++++++++++ docs/deploystack/development/backend/mail.mdx | 92 ++- .../deploystack/development/backend/oauth.mdx | 536 +++++++++++++++++ .../development/backend/plugins.mdx | 38 +- .../deploystack/development/backend/roles.mdx | 73 ++- docs/deploystack/development/backend/test.mdx | 69 ++- .../development/frontend/index.mdx | 117 +++- docs/deploystack/roles.mdx | 3 + .../self-hosted/database-setup.mdx | 167 ++++++ 17 files changed, 3554 insertions(+), 149 deletions(-) create mode 100644 docs/deploystack/auth.mdx create mode 100644 docs/deploystack/development/backend/cloud-credentials.mdx create mode 100644 docs/deploystack/development/backend/database-sqlite.mdx create mode 100644 docs/deploystack/development/backend/database-turso.mdx create mode 100644 docs/deploystack/development/backend/logging.mdx create mode 100644 docs/deploystack/development/backend/oauth.mdx create mode 100644 docs/deploystack/self-hosted/database-setup.mdx diff --git a/docs/deploystack/auth.mdx b/docs/deploystack/auth.mdx new file mode 100644 index 0000000..c415de0 --- /dev/null +++ b/docs/deploystack/auth.mdx @@ -0,0 +1,232 @@ +--- +title: Authentication Methods 
+description: Available authentication methods in DeployStack, including email registration and GitHub OAuth, with configuration instructions for administrators. +--- + +# Authentication Methods + +DeployStack supports multiple authentication methods to provide flexibility for different user preferences and organizational requirements. This document outlines the available authentication options and how to configure them. + +## Available Authentication Methods + +### Email Registration & Login + +Email-based authentication is the primary authentication method in DeployStack. Users can register with their email address and password, and subsequently log in using these credentials. + +**Features:** +- Secure password hashing using Argon2 +- Email verification (when email sending is enabled) +- Password reset functionality +- Profile management + +**User Experience:** +1. Users register with email, password, and optional personal information +2. Email verification may be required (depending on configuration) +3. Users can log in using email or username +4. Password reset available via email (when email sending is enabled) + +### GitHub OAuth + +GitHub OAuth provides a convenient way for users to authenticate using their existing GitHub accounts. This method is particularly useful for development teams and organizations already using GitHub. + +**Features:** +- Single sign-on with GitHub +- Automatic email verification (GitHub emails are considered verified) +- Profile information imported from GitHub +- Secure OAuth 2.0 flow + +**User Experience:** +1. Users click "Login with GitHub" button +2. Redirected to GitHub for authorization +3. Upon approval, automatically logged into DeployStack +4. 
Profile information (name, email) imported from GitHub + +## Authentication Configuration + +### Global Authentication Settings + +Administrators can control authentication behavior through global settings: + +| Setting | Description | Default | +|---------|-------------|---------| +| **Enable Login** | Master switch for all authentication methods | `true` | +| **Enable Email Registration** | Allow new users to register via email | `true` | +| **GitHub OAuth Enabled** | Enable GitHub OAuth authentication | `false` | + +### Email Authentication Configuration + +Email authentication is always available but requires SMTP configuration for full functionality: + +**Required for Full Functionality:** +- SMTP server configuration (for email verification and password reset) +- Email sending enabled in global settings + +**Configuration Steps:** +1. Navigate to **Global Settings** → **SMTP Mail Settings** +2. Configure SMTP server details: + - Host (e.g., `smtp.gmail.com`) + - Port (e.g., `587`) + - Username and Password + - Security settings +3. Enable email sending in **Global Settings** → **Global Configuration** + +### GitHub OAuth Configuration + +GitHub OAuth requires setup both in GitHub and DeployStack: + +**GitHub Setup:** +1. Go to GitHub → Settings → Developer settings → OAuth Apps +2. Create a new OAuth App with: + - **Application name**: Your DeployStack instance name + - **Homepage URL**: Your DeployStack frontend URL + - **Authorization callback URL**: `https://your-domain.com/api/auth/github/callback` +3. Note the **Client ID** and **Client Secret** + +**DeployStack Configuration:** +1. Navigate to **Global Settings** → **GitHub OAuth Configuration** +2. 
Configure the following settings: + - **Client ID**: From your GitHub OAuth App + - **Client Secret**: From your GitHub OAuth App (encrypted) + - **Enabled**: Set to `true` to activate GitHub OAuth + - **Callback URL**: Must match the URL configured in GitHub + - **Scope**: OAuth permissions (default: `user:email`) + +**Configuration Example:** +``` +Client ID: abc123def456 +Client Secret: [encrypted] +Enabled: true +Callback URL: https://your-deploystack.com/api/auth/github/callback +Scope: user:email +``` + +## User Roles and First User + +### First User (Global Administrator) + +The first user registered in DeployStack automatically becomes the **Global Administrator** with full system access. This ensures there's always at least one administrator who can manage the system. + +**Important Notes:** +- The first user **must** be created via email registration +- GitHub OAuth cannot be used to create the first user +- This prevents accidental creation of admin accounts via OAuth + +### Subsequent Users + +All users registered after the first user receive the **Global User** role by default, regardless of authentication method used. 
+ +**Role Assignment:** +- **Email Registration**: `global_user` role +- **GitHub OAuth**: `global_user` role +- **Role Changes**: Only global administrators can modify user roles + +## Security Considerations + +### Email Authentication Security + +- Passwords are hashed using Argon2 with secure parameters +- Email verification prevents unauthorized account creation +- Password reset tokens are time-limited and single-use +- Session management handled by Lucia v3 + +### GitHub OAuth Security + +- OAuth 2.0 standard with state parameter for CSRF protection +- GitHub emails are considered verified +- Secure token exchange and validation +- No GitHub credentials stored in DeployStack + +### Account Linking + +When a user with an existing email account uses GitHub OAuth with the same email address: +- The GitHub account is automatically linked to the existing account +- User can subsequently use either authentication method +- No duplicate accounts are created + +## Troubleshooting + +### Email Authentication Issues + +**Email verification not working:** +- Check SMTP configuration in Global Settings +- Verify email sending is enabled +- Check server logs for email delivery errors + +**Password reset not working:** +- Ensure SMTP is configured and email sending is enabled +- Verify the reset link hasn't expired (tokens are time-limited) + +### GitHub OAuth Issues + +**"GitHub OAuth is not enabled" error:** +- Check that GitHub OAuth is enabled in Global Settings +- Verify Client ID and Client Secret are configured +- Ensure callback URL matches GitHub OAuth App configuration + +**"GitHub email not available" error:** +- User's GitHub email must be public and verified +- Check GitHub account email settings +- Ensure OAuth scope includes `user:email` + +**First user creation blocked:** +- This is expected behavior - first user must use email registration +- Use email registration to create the initial administrator account + +### General Authentication Issues + +**Login 
disabled:** +- Check that "Enable Login" is set to `true` in Global Settings +- Verify database is properly configured and accessible + +**Registration disabled:** +- Check that "Enable Email Registration" is set to `true` for email signup +- Verify GitHub OAuth is enabled and configured for GitHub login + +## API Endpoints + +For developers and integrations, DeployStack provides REST API endpoints for authentication: + +### Email Authentication +- `POST /api/auth/email/register` - User registration +- `POST /api/auth/email/login` - User login +- `POST /api/auth/email/forgot-password` - Password reset request +- `POST /api/auth/email/reset-password` - Password reset confirmation + +### GitHub OAuth +- `GET /api/auth/github/login` - Initiate GitHub OAuth flow +- `GET /api/auth/github/callback` - OAuth callback handler +- `GET /api/auth/github/status` - Check if GitHub OAuth is enabled + +### General Authentication +- `POST /api/auth/logout` - User logout +- `GET /api/users/me` - Get current user profile +- `PUT /api/auth/profile/update` - Update user profile + +## Best Practices + +### For Administrators + +1. **Always configure the first user via email** to ensure proper admin access +2. **Set up SMTP early** to enable email verification and password reset +3. **Use strong OAuth secrets** and keep them secure +4. **Regularly review user accounts** and roles +5. **Monitor authentication logs** for security issues + +### For Users + +1. **Use strong passwords** for email authentication +2. **Verify your email address** when using email registration +3. **Keep GitHub account secure** when using OAuth +4. **Use the same email address** across authentication methods for account linking + +### For Organizations + +1. **Choose authentication methods** that align with your security policies +2. **Consider GitHub OAuth** for development teams already using GitHub +3. **Implement proper access controls** through user roles +4. 
**Document authentication procedures** for your team +5. **Plan for account recovery** scenarios + +--- + +For technical implementation details, see the [Backend Authentication Documentation](/deploystack/development/backend/api) and [Global Settings Management](/deploystack/global-settings). diff --git a/docs/deploystack/development/backend/api.mdx b/docs/deploystack/development/backend/api.mdx index b26b114..8c18770 100644 --- a/docs/deploystack/development/backend/api.mdx +++ b/docs/deploystack/development/backend/api.mdx @@ -89,6 +89,127 @@ When the server is running (`npm run dev`), you can access: 4. Select the generated `api-spec.json` file 5. All API endpoints will be imported with proper documentation +## Route File Structure Rules + +**IMPORTANT**: Every new API endpoint must be created in a separate file following the established directory structure pattern. Do not add route definitions directly to `src/routes/index.ts`. + +### File Structure Requirements + +1. **Separate Files**: Each route or group of related routes must be in its own file +2. **Directory Organization**: Group related routes in directories (e.g., `/auth/`, `/users/`, `/health/`) +3. **Import Pattern**: Routes are imported and registered in `src/routes/index.ts` +4. 
**Consistent Naming**: Use descriptive names that match the route purpose + +### Correct File Structure + +``` +services/backend/src/routes/ +├── index.ts # Main routes registration (imports only) +├── health/ +│ └── index.ts # Health check endpoints +├── auth/ +│ ├── loginEmail.ts # Email login endpoint +│ ├── registerEmail.ts # Email registration endpoint +│ └── logout.ts # Logout endpoint +├── db/ +│ ├── status.ts # Database status endpoint +│ └── setup.ts # Database setup endpoint +├── users/ +│ └── index.ts # User management endpoints +└── teams/ + └── index.ts # Team management endpoints +``` + +### Route File Template + +Each route file should follow this pattern: + +```typescript +import { type FastifyInstance } from 'fastify' +import { z } from 'zod' +import { zodToJsonSchema } from 'zod-to-json-schema' + +// Define your schemas +const responseSchema = z.object({ + // Your response structure +}); + +export default async function yourRoute(server: FastifyInstance) { + server.get('/your-endpoint', { + schema: { + tags: ['Your Category'], + summary: 'Brief description', + description: 'Detailed description', + response: { + 200: zodToJsonSchema(responseSchema, { + $refStrategy: 'none', + target: 'openApi3' + }) + } + } + }, async () => { + // Your route logic + return { /* your response */ } + }); +} +``` + +### Registration in index.ts + +Import and register your route in `src/routes/index.ts`: + +```typescript +// Import your route +import yourRoute from './your-directory' + +export const registerRoutes = (server: FastifyInstance): void => { + server.register(async (apiInstance) => { + // Register your route + await apiInstance.register(yourRoute); + + // Other route registrations... 
+ }, { prefix: '/api' }); +} +``` + +### ❌ What NOT to Do + +```typescript +// DON'T: Add routes directly to index.ts +export const registerRoutes = (server: FastifyInstance): void => { + server.register(async (apiInstance) => { + // ❌ BAD: Inline route definition + apiInstance.get('/my-endpoint', { + schema: { /* ... */ } + }, async () => { + return { message: 'This should be in a separate file!' } + }); + }, { prefix: '/api' }); +} +``` + +### ✅ What TO Do + +```typescript +// ✅ GOOD: Import and register separate route files +import myEndpointRoute from './my-endpoint' + +export const registerRoutes = (server: FastifyInstance): void => { + server.register(async (apiInstance) => { + // ✅ GOOD: Register imported route + await apiInstance.register(myEndpointRoute); + }, { prefix: '/api' }); +} +``` + +### Benefits of This Structure + +1. **Maintainability**: Each endpoint is self-contained and easy to find +2. **Scalability**: Adding new endpoints doesn't clutter the main routes file +3. **Testing**: Individual route files can be tested in isolation +4. **Code Organization**: Related functionality is grouped together +5. **Team Collaboration**: Multiple developers can work on different routes without conflicts + ## Adding Documentation to Routes To add OpenAPI documentation to your routes, define your request body and response schemas using Zod. Then, use the `zodToJsonSchema` utility to convert these Zod schemas into the JSON Schema format expected by Fastify. diff --git a/docs/deploystack/development/backend/cloud-credentials.mdx b/docs/deploystack/development/backend/cloud-credentials.mdx new file mode 100644 index 0000000..5129f20 --- /dev/null +++ b/docs/deploystack/development/backend/cloud-credentials.mdx @@ -0,0 +1,509 @@ +--- +title: Cloud Credentials Management +description: Comprehensive guide to implementing and managing cloud provider credentials in DeployStack backend with encryption, validation, and role-based access control. 
+--- + +# Cloud Credentials Management + +DeployStack provides a secure cloud credentials management system that allows teams to store and manage cloud provider credentials for deployments. This system features encryption, role-based access control, and provider validation. + +## Architecture Overview + +The cloud credentials system consists of several key components: + +- **Provider Configuration**: Defines supported cloud providers and their required fields +- **Encryption Service**: Handles secure storage of credential values +- **Validation System**: Validates credential data against provider schemas +- **Role-Based Access**: Different response formats based on user permissions +- **API Layer**: RESTful endpoints for credential management + +## Database Schema + +### Team Cloud Credentials Table + +```sql +CREATE TABLE team_cloud_credentials ( + id TEXT PRIMARY KEY, + team_id TEXT NOT NULL REFERENCES teams(id) ON DELETE CASCADE, + provider_id TEXT NOT NULL, + name TEXT NOT NULL, + comment TEXT, + credentials TEXT NOT NULL, -- Encrypted JSON + created_by TEXT NOT NULL REFERENCES authUser(id), + created_at INTEGER NOT NULL, + updated_at INTEGER NOT NULL, + UNIQUE(team_id, provider_id, name) +); +``` + +### Key Features + +- **Team Isolation**: Credentials are scoped to specific teams +- **Provider Support**: Multiple cloud providers per team +- **Encrypted Storage**: All credential values are encrypted +- **Audit Trail**: Tracks creation and modification metadata +- **Unique Constraints**: Prevents duplicate credential names per provider/team + +## Provider Configuration + +Cloud providers are configured in `services/backend/config/cloud-providers.ts`: + +```typescript +export interface CloudProvider { + id: string; + name: string; + description: string; + fields: CloudProviderField[]; + enabled: boolean; +} + +export interface CloudProviderField { + key: string; + label: string; + type: 'text' | 'password' | 'textarea'; + required: boolean; + secret: boolean; + 
placeholder?: string; + description?: string; + validation?: { + pattern?: string; + minLength?: number; + maxLength?: number; + }; +} +``` + +### Example Provider Configuration + +```typescript +{ + id: 'aws', + name: 'Amazon Web Services', + description: 'AWS cloud platform credentials', + fields: [ + { + key: 'access_key_id', + label: 'Access Key ID', + type: 'text', + required: true, + secret: false, + placeholder: 'AKIAIOSFODNN7EXAMPLE' + }, + { + key: 'secret_access_key', + label: 'Secret Access Key', + type: 'password', + required: true, + secret: true, + validation: { + minLength: 40 + } + } + ], + enabled: true +} +``` + +## Encryption System + +### Storage Format + +Credentials are stored as encrypted JSON with metadata: + +```typescript +interface StoredCredentials { + [fieldKey: string]: { + value: string; // Encrypted value + secret: boolean; // Field type from provider config + updatedAt: string; // ISO timestamp + }; +} +``` + +### Encryption Process + +1. **Field Validation**: Validate against provider schema +2. **Individual Encryption**: Each field value encrypted separately +3. **Metadata Storage**: Include field type and timestamp +4. **JSON Serialization**: Store as encrypted JSON string + +### Security Features + +- **AES-256-GCM**: Industry-standard encryption algorithm +- **Separate Keys**: Encryption keys managed separately from data +- **Field-Level**: Each credential field encrypted individually +- **No Plaintext**: Credential values never stored in plaintext + +## Role-Based Access Control + +The cloud credentials system uses **team-contextual permissions** rather than global permissions. For detailed role information and permission matrices, see [Role-Based Access Control](/deploystack/development/backend/roles). 
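The field-level encryption flow described above can be sketched with Node's built-in `crypto` module. This is a minimal illustration only: the helper names and the packed `iv + tag + ciphertext` layout are assumptions, not DeployStack's actual encryption service, and a real deployment would load the key from secure key management rather than generate it in process.

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from 'node:crypto';

// Illustrative only: a real key comes from a key-management system, not randomBytes at startup
const KEY = randomBytes(32);

// Encrypt a single credential field value (each field is encrypted individually)
function encryptField(plaintext: string): string {
  const iv = randomBytes(12); // standard 96-bit GCM nonce
  const cipher = createCipheriv('aes-256-gcm', KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
  // Pack IV + auth tag + ciphertext so each stored value is self-contained
  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64');
}

function decryptField(stored: string): string {
  const buf = Buffer.from(stored, 'base64');
  const decipher = createDecipheriv('aes-256-gcm', KEY, buf.subarray(0, 12));
  decipher.setAuthTag(buf.subarray(12, 28)); // GCM auth tag is 16 bytes
  return Buffer.concat([decipher.update(buf.subarray(28)), decipher.final()]).toString('utf8');
}

// Matches the StoredCredentials shape: one encrypted entry per field, with metadata
const stored = {
  secret_access_key: {
    value: encryptField('wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'),
    secret: true,
    updatedAt: new Date().toISOString(),
  },
};
console.log(decryptField(stored.secret_access_key.value) === 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY'); // true
```

Because the auth tag is stored alongside each ciphertext, any tampering with a stored value causes decryption to fail rather than return corrupted plaintext.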
+
+### Access Levels
+
+| User Type | Access Level | Field Information | Credential Values |
+|-----------|--------------|-------------------|-------------------|
+| **Global Admin** | Metadata only | ✅ Field types & status | ❌ No values shown |
+| **Team Admin** | Full CRUD | ✅ Field types & status | 🔒 Placeholders for non-secret |
+| **Team User** | Read-only basic | ❌ No field details | ❌ No values shown |
+| **Non-member** | No access | ❌ Blocked | ❌ Blocked |
+
+### Key Security Features
+
+- **Team Isolation**: Users can only access credentials from teams they belong to
+- **No Secret Exposure**: Secret values are never returned in API responses
+- **Role-Based Responses**: API responses vary based on user's role within the team
+- **Global Admin Limitations**: Even global admins cannot see credential values
+
+## API Implementation
+
+### Service Layer
+
+The `CloudCredentialsService` provides the core business logic:
+
+```typescript
+export class CloudCredentialsService {
+  // Role-specific methods (each resolves to the role-appropriate credential shape)
+  async getTeamCredentials(teamId: string)
+  async getTeamCredentialsGlobalAdmin(teamId: string)
+  async getTeamCredentialsBasic(teamId: string)
+
+  // CRUD operations
+  async createCredentials(teamId: string, userId: string, input: CreateCloudCredentialRequest)
+  async updateCredentials(credentialId: string, teamId: string, input: UpdateCloudCredentialRequest)
+  async deleteCredentials(credentialId: string, teamId: string)
+
+  // Internal methods
+  async getDecryptedCredentials(credentialId: string, teamId: string): Promise<Record<string, string> | null>
+}
+```
+
+### Route Implementation
+
+Routes automatically detect user role and call appropriate service methods:
+
+```typescript
+// Check user permissions and role
+const roleService = new RoleService();
+const hasAdminPermissions = await roleService.userHasPermission(userId, 'cloud_credentials.edit');
+const userRole = await roleService.getUserRole(userId);
+const
isGlobalAdmin = userRole?.id === 'global_admin'; + +let credentials; +if (hasAdminPermissions && !isGlobalAdmin) { + // Team admin - full details with placeholders + credentials = await cloudCredentialsService.getTeamCredentials(teamId); +} else if (isGlobalAdmin) { + // Global admin - metadata only, no values + credentials = await cloudCredentialsService.getTeamCredentialsGlobalAdmin(teamId); +} else { + // Team member - basic details only + credentials = await cloudCredentialsService.getTeamCredentialsBasic(teamId); +} +``` + +## API Endpoints + +### List Cloud Providers + +```http +GET /api/teams/:teamId/cloud-providers +Authorization: Required (cloud_credentials.view permission) +``` + +Returns available cloud providers with their field schemas. + +### List Team Credentials + +```http +GET /api/teams/:teamId/cloud-credentials +Authorization: Required (cloud_credentials.view permission) +``` + +Returns credentials list with response format based on user role. + +### Create Credentials + +```http +POST /api/teams/:teamId/cloud-credentials +Authorization: Required (cloud_credentials.create permission) +Content-Type: application/json + +{ + "providerId": "aws", + "name": "Production AWS", + "comment": "Production environment credentials", + "credentials": { + "access_key_id": "AKIAIOSFODNN7EXAMPLE", + "secret_access_key": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY" + } +} +``` + +### Update Credentials + +```http +PUT /api/teams/:teamId/cloud-credentials/:credentialId +Authorization: Required (cloud_credentials.edit permission) +``` + +Supports partial updates of name, comment, and credential values. + +### Delete Credentials + +```http +DELETE /api/teams/:teamId/cloud-credentials/:credentialId +Authorization: Required (cloud_credentials.delete permission) +``` + +Permanently removes credentials from the team. 
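A sketch of calling these endpoints from a TypeScript client follows. The paths and required permissions come from the endpoint list above; the base URL, the hypothetical helper names, and cookie-based session handling are assumptions rather than a published DeployStack SDK.

```typescript
// Hypothetical base URL for your DeployStack instance
const BASE = 'https://your-deploystack.com';

// Build the team-scoped credential endpoint paths shown above
function credentialsUrl(teamId: string, credentialId?: string): string {
  const base = `${BASE}/api/teams/${teamId}/cloud-credentials`;
  return credentialId ? `${base}/${credentialId}` : base;
}

interface CreateCredentialsBody {
  providerId: string;
  name: string;
  comment?: string;
  credentials: Record<string, string>;
}

async function createCredentials(teamId: string, body: CreateCredentialsBody) {
  const res = await fetch(credentialsUrl(teamId), {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    credentials: 'include', // session must belong to a user with cloud_credentials.create
    body: JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`Create failed with HTTP ${res.status}`);
  return res.json();
}

async function deleteCredentials(teamId: string, credentialId: string) {
  const res = await fetch(credentialsUrl(teamId, credentialId), {
    method: 'DELETE',
    credentials: 'include', // requires cloud_credentials.delete
  });
  if (!res.ok) throw new Error(`Delete failed with HTTP ${res.status}`);
}
```

Note that the response body for a list or create call varies with the caller's team role, so a client should not assume secret values are ever present in the returned JSON.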
+
+## Validation System
+
+### Create vs Update Validation
+
+The system provides two validation functions to handle different scenarios:
+
+#### Full Validation (Create)
+
+```typescript
+export function validateCredentialData(
+  providerId: string,
+  credentials: Record<string, string>
+): ValidationResult {
+  const provider = getCloudProvider(providerId);
+  if (!provider) {
+    return { valid: false, errors: ['Invalid provider ID'] };
+  }
+
+  const errors: string[] = [];
+
+  // Check ALL required fields
+  for (const field of provider.fields) {
+    if (field.required && !credentials[field.key]) {
+      errors.push(`${field.label} is required`);
+    }
+  }
+
+  // Validate all provided fields
+  for (const [key, value] of Object.entries(credentials)) {
+    const field = provider.fields.find(f => f.key === key);
+    if (field?.validation) {
+      // Apply validation rules
+    }
+  }
+
+  return { valid: errors.length === 0, errors };
+}
+```
+
+#### Partial Validation (Update)
+
+```typescript
+export function validateCredentialDataForUpdate(
+  providerId: string,
+  credentials: Record<string, string | null | undefined>
+): ValidationResult {
+  const provider = getCloudProvider(providerId);
+  if (!provider) {
+    return { valid: false, errors: ['Invalid provider ID'] };
+  }
+
+  const errors: string[] = [];
+
+  // Only validate fields that are actually provided
+  for (const field of provider.fields) {
+    const value = credentials[field.key];
+
+    // Skip validation if field is not provided
+    if (value === null || value === undefined) {
+      continue;
+    }
+
+    // Validate provided fields
+    if (field.required && value.trim() === '') {
+      errors.push(`${field.label} cannot be empty`);
+    }
+
+    // Apply format validation if present
+    if (field.validation) {
+      // Apply validation rules
+    }
+  }
+
+  return { valid: errors.length === 0, errors };
+}
+```
+
+### Update Process
+
+When updating credentials, the system:
+
+1. **Validates only provided fields** using `validateCredentialDataForUpdate`
+2.
**Retrieves existing credentials** from encrypted storage +3. **Merges updates** with existing values +4. **Re-encrypts the complete credential set** + +This allows partial updates without requiring users to re-submit secret values. + +### Validation Rules + +- **Required Fields**: Enforced based on provider configuration +- **Field Types**: Text, password, textarea validation +- **Format Validation**: Pattern matching, length constraints +- **Provider Schema**: Validates against defined field structure +- **Partial Updates**: Only validates fields being updated + +## Error Handling + +### Common Error Scenarios + +```typescript +// Provider not found +throw new Error('Invalid provider ID'); + +// Validation failure +throw new Error(`Validation failed: ${validation.errors.join(', ')}`); + +// Duplicate name +throw new Error('A credential set with this name already exists for this provider'); + +// Not found +return null; // Handled as 404 in routes +``` + +### Error Response Format + +```json +{ + "success": false, + "error": "Validation failed", + "details": ["Access Key ID is required", "Secret Access Key must be at least 40 characters"] +} +``` + +## Security Considerations + +### Data Protection + +- **Encryption at Rest**: All credential values encrypted before storage +- **No Plaintext Logs**: Credential values never logged in plaintext +- **Secure Transmission**: HTTPS required for all API calls +- **Access Control**: Role-based response filtering + +### Best Practices + +- **Principle of Least Privilege**: Users see only necessary information +- **Audit Logging**: Track all credential operations +- **Regular Rotation**: Encourage credential rotation +- **Secure Defaults**: Safe fallbacks for all operations + +## Adding New Providers + +### 1. 
Define Provider Configuration + +Add new provider to `cloud-providers.ts`: + +```typescript +{ + id: 'new-provider', + name: 'New Cloud Provider', + description: 'Description of the provider', + fields: [ + { + key: 'api_key', + label: 'API Key', + type: 'password', + required: true, + secret: true + } + ], + enabled: true +} +``` + +### 2. Update Provider Registry + +Add to the providers array and export: + +```typescript +export const CLOUD_PROVIDERS: CloudProvider[] = [ + AWS_PROVIDER, + RENDER_PROVIDER, + NEW_PROVIDER, // Add here +]; +``` + +### 3. Test Integration + +- Validate field schemas work correctly +- Test encryption/decryption of new field types +- Verify API responses include new provider +- Test credential creation and validation + +## Troubleshooting + +### Common Issues + +#### Encryption Errors +- Verify encryption service is properly configured +- Check that encryption keys are available +- Ensure proper error handling for encryption failures + +#### Validation Failures +- Check provider configuration matches expected format +- Verify required fields are properly marked +- Test validation rules with sample data + +#### Permission Errors +- Confirm user has required permissions +- Check role assignments are correct +- Verify middleware is properly applied to routes + +### Debug Commands + +```typescript +// Test provider configuration +const provider = getCloudProvider('aws'); +console.log('Provider config:', provider); + +// Test validation +const validation = validateCredentialData('aws', testCredentials); +console.log('Validation result:', validation); + +// Check user permissions +const hasPermission = await roleService.userHasPermission(userId, 'cloud_credentials.view'); +console.log('Has permission:', hasPermission); +``` + +## Performance Considerations + +### Optimization Strategies + +- **Lazy Loading**: Load provider configurations on demand +- **Caching**: Cache provider configurations in memory +- **Batch Operations**: Support bulk 
credential operations +- **Pagination**: Implement pagination for large credential lists + +### Monitoring + +- **API Response Times**: Monitor credential API performance +- **Encryption Overhead**: Track encryption/decryption performance +- **Database Queries**: Optimize credential lookup queries +- **Memory Usage**: Monitor provider configuration memory usage + +## Future Enhancements + +### Planned Features + +- **Credential Sharing**: Share credentials between teams +- **Expiration Dates**: Set expiration dates for credentials +- **Usage Tracking**: Track which deployments use which credentials +- **Backup/Restore**: Export/import encrypted credential backups +- **Integration Testing**: Test credentials against actual providers + +### Extension Points + +- **Custom Providers**: Plugin system for custom cloud providers +- **Validation Plugins**: Custom validation rules for specific providers +- **Encryption Backends**: Support for different encryption systems +- **Audit Plugins**: Custom audit logging implementations diff --git a/docs/deploystack/development/backend/database-sqlite.mdx b/docs/deploystack/development/backend/database-sqlite.mdx new file mode 100644 index 0000000..1bc0324 --- /dev/null +++ b/docs/deploystack/development/backend/database-sqlite.mdx @@ -0,0 +1,448 @@ +--- +title: SQLite Database Development Guide +description: Technical implementation details and best practices for SQLite integration in DeployStack Backend development. +--- + +# SQLite Database Development Guide + +## Overview + +SQLite is the default database for DeployStack development and small to medium deployments. It provides excellent performance, zero configuration, and a simple file-based architecture that makes it ideal for development, testing, and single-server deployments. + +> **Setup Instructions**: For initial SQLite configuration, see the [Database Setup Guide](/deploystack/self-hosted/database-setup#sqlite). 
+ +## Technical Architecture + +### File-Based Database + +SQLite stores the entire database in a single file, making it extremely portable and easy to manage: + +- **Database File**: `persistent_data/database/deploystack.db` +- **Zero Configuration**: No server setup or network configuration required +- **ACID Compliance**: Full transaction support with rollback capabilities +- **Cross-Platform**: Works identically across all operating systems + +### Direct Driver Integration + +DeployStack uses the `better-sqlite3` driver for optimal SQLite performance: + +```typescript +import { drizzle } from 'drizzle-orm/better-sqlite3'; +import Database from 'better-sqlite3'; + +// Direct file-based connection +const sqlite = new Database(dbPath); +const db = drizzle(sqlite, { schema }); +``` + +## Performance Characteristics + +### Advantages + +**Fast Local Operations**: +- No network latency for database operations +- Direct file system access for maximum speed +- Excellent read performance for concurrent operations + +**Simple Deployment**: +- Single file contains entire database +- No separate database server required +- Easy backup and restore operations + +**Development Friendly**: +- Instant startup with no configuration +- Easy to reset and recreate for testing +- Perfect for local development workflows + +### Limitations + +**Single Server Only**: +- Cannot be shared across multiple application instances +- No built-in replication or clustering +- Limited to single-server deployments + +**Concurrent Write Limitations**: +- Single writer at a time (multiple readers supported) +- Write operations are serialized +- May become a bottleneck under heavy write loads + +## Development Workflow + +### Local Development Setup + +SQLite is the recommended database for local development: + +```bash +# SQLite requires no additional setup +DB_TYPE=sqlite + +# Optional: Custom database path +SQLITE_DB_PATH=persistent_data/database/my-custom.db +``` + +### Database File Management 
+ +**Default Location**: `services/backend/persistent_data/database/deploystack.db` + +**Directory Structure**: +``` +services/backend/ +├── persistent_data/ +│ ├── database/ +│ │ └── deploystack.db # Main database file +│ └── db.selection.json # Database type selection +``` + +### Testing with SQLite + +SQLite is excellent for testing due to its simplicity: + +```typescript +// Test setup - create temporary database +const testDb = new Database(':memory:'); // In-memory for speed +// or +const testDb = new Database('test.db'); // File-based for persistence + +// Run migrations +await migrate(drizzle(testDb), { migrationsFolder: './migrations' }); + +// Run tests +// ... + +// Cleanup +testDb.close(); +``` + +## Global Settings Integration + +### Batch Operations + +SQLite excels at batch operations and can handle large global settings initialization efficiently: + +- **Large Batches**: Can insert all 17+ global settings in a single transaction +- **No Parameter Limits**: Unlike D1, SQLite has no practical parameter limits +- **Transaction Safety**: All settings created atomically + +### Performance Benefits + +```typescript +// SQLite can handle large batch operations efficiently +await db.transaction(async (tx) => { + // Insert all settings in a single transaction + await tx.insert(globalSettings).values(allSettingsData); + await tx.insert(globalSettingGroups).values(allGroupsData); +}); +``` + +## Database Inspection and Debugging + +### SQLite CLI + +The SQLite command-line interface is the primary tool for database inspection: + +```bash +# Open database +sqlite3 services/backend/persistent_data/database/deploystack.db + +# Common commands +.tables # List all tables +.schema tablename # Show table schema +.headers on # Show column headers +.mode column # Format output in columns + +# Query examples +SELECT * FROM globalSettings LIMIT 10; +SELECT COUNT(*) FROM users; +.quit # Exit +``` + +### GUI Tools + +**DB Browser for SQLite** (Recommended): +- Download: 
[https://sqlitebrowser.org/](https://sqlitebrowser.org/)
- Visual table browsing and editing
- Query execution with syntax highlighting
- Schema visualization

**Other Options**:
- **SQLiteStudio**: Cross-platform SQLite manager
- **DBeaver**: Universal database tool with SQLite support
- **VS Code Extensions**: SQLite Viewer, SQLite3 Editor

### Programmatic Inspection

Use the raw `better-sqlite3` connection (not the Drizzle wrapper) for PRAGMA queries:

```typescript
// Get database info
const info = sqlite.prepare("PRAGMA database_list").all();
const tables = sqlite.prepare("SELECT name FROM sqlite_master WHERE type='table'").all();

// Check table structure
const schema = sqlite.prepare("PRAGMA table_info(globalSettings)").all();

// Inspect compile-time options
const options = sqlite.prepare("PRAGMA compile_options").all();
```

## Backup and Recovery

### File-Based Backup

SQLite's file-based nature makes backup extremely simple:

```bash
# Simple file copy (when database is not in use)
cp persistent_data/database/deploystack.db backup/deploystack-$(date +%Y%m%d).db

# Using SQLite backup command (safe during operation)
sqlite3 persistent_data/database/deploystack.db ".backup backup/deploystack-$(date +%Y%m%d).db"
```

### Automated Backup Script

```bash
#!/bin/bash
# backup-sqlite.sh

DB_PATH="persistent_data/database/deploystack.db"
BACKUP_DIR="backup"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR"

# Create backup
sqlite3 "$DB_PATH" ".backup $BACKUP_DIR/deploystack-$DATE.db"

# Keep only the last 7 days of backups
find "$BACKUP_DIR" -name "deploystack-*.db" -mtime +7 -delete

echo "Backup created: $BACKUP_DIR/deploystack-$DATE.db"
```

### Recovery

```bash
# Restore from backup
cp backup/deploystack-20250103.db persistent_data/database/deploystack.db

# Or using SQLite restore
sqlite3 persistent_data/database/deploystack.db ".restore backup/deploystack-20250103.db"
```

## Performance Optimization

### Indexing Strategy

SQLite benefits greatly from proper indexing:

```sql
-- Example indexes 
for common queries
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_global_settings_key ON globalSettings(key);
CREATE INDEX idx_sessions_user_id ON sessions(user_id);
CREATE INDEX idx_teams_created_at ON teams(created_at);
```

### PRAGMA Settings

Optimize SQLite performance with PRAGMA settings, issued on the raw `better-sqlite3` connection:

```typescript
// Performance optimizations
sqlite.pragma('journal_mode = WAL'); // Write-Ahead Logging
sqlite.pragma('synchronous = NORMAL'); // Balanced safety/performance
sqlite.pragma('cache_size = -1000000'); // ~1GB cache (negative values are in KiB)
sqlite.pragma('temp_store = MEMORY'); // Use memory for temp tables
```

### Connection Pooling

SQLite doesn't need traditional connection pooling; instead, open a single connection and reuse it:

```typescript
// Reuse a single connection
const sqlite = new Database(dbPath, {
  readonly: false,
  fileMustExist: false,
  timeout: 5000,
  verbose: process.env.NODE_ENV === 'development' ? (message) => {
    server.log.debug({ operation: 'sqlite_query' }, message);
  } : undefined
});

// Enable WAL mode for better concurrency
sqlite.pragma('journal_mode = WAL');
```

## Migration Considerations

### SQLite-Specific Features

SQLite has some unique characteristics for migrations:

```sql
-- SQLite doesn't support all ALTER TABLE operations
-- Instead of ALTER COLUMN, you need to recreate the table

-- Example: Adding a column (supported)
ALTER TABLE users ADD COLUMN phone TEXT;

-- Example: Changing column type (not supported directly)
-- Requires table recreation:
CREATE TABLE users_new (
  id TEXT PRIMARY KEY,
  email TEXT NOT NULL,
  name TEXT NOT NULL,
  age INTEGER -- Changed from TEXT to INTEGER
);

INSERT INTO users_new SELECT id, email, name, CAST(age AS INTEGER) FROM users;
DROP TABLE users;
ALTER TABLE users_new RENAME TO users;
```

### Migration Best Practices

1. **Test Migrations**: Always test on a copy of production data
2. 
**Backup Before Migration**: Create a backup before applying migrations
3. **Use Transactions**: Wrap migrations in transactions for rollback capability
4. **Check Constraints**: Verify foreign key constraints after table recreation

## Troubleshooting

### Common Issues

**"Database is locked"**
- **Cause**: Another process has the database open
- **Solution**: Ensure only one application instance accesses the database
- **Prevention**: Use WAL mode for better concurrency

**"No such table" errors**
- **Cause**: Migrations haven't been applied
- **Solution**: Run `npm run db:up` or restart the server
- **Check**: Verify migration files exist in `drizzle/migrations_sqlite/`

**Poor performance**
- **Cause**: Missing indexes or suboptimal queries
- **Solution**: Add appropriate indexes and optimize queries
- **Analysis**: Use `EXPLAIN QUERY PLAN` to analyze query performance

**File corruption**
- **Cause**: Unexpected shutdown or disk issues
- **Solution**: Restore from backup
- **Prevention**: Use WAL mode and regular backups

### Debugging Queries

```typescript
// Enable query logging on the Drizzle wrapper
const db = drizzle(sqlite, {
  schema,
  logger: {
    logQuery: (query, params) => {
      server.log.debug({ operation: 'sqlite_query', query, params }, 'Executing query');
    }
  }
});

// Analyze query performance via the raw better-sqlite3 connection
const explain = sqlite.prepare('EXPLAIN QUERY PLAN SELECT * FROM users WHERE email = ?').all('test@example.com');
server.log.debug({ operation: 'sqlite_explain', explain }, 'Query execution plan');
```

## Production Considerations

### When to Use SQLite in Production

**Good For**:
- Single-server applications
- Read-heavy workloads
- Small to medium datasets (< 1TB)
- Applications with predictable load patterns
- Embedded applications

**Consider Alternatives When**:
- Multiple application servers needed
- High concurrent write requirements
- Need for real-time replication
- Distributed deployment requirements

### 
Production Optimizations

```typescript
// Production SQLite configuration
const sqlite = new Database(dbPath, {
  readonly: false,
  fileMustExist: true,
  timeout: 10000
});

// Production PRAGMA settings
sqlite.pragma('journal_mode = WAL');
sqlite.pragma('synchronous = NORMAL');
sqlite.pragma('cache_size = -2000000'); // ~2GB cache (negative values are in KiB)
sqlite.pragma('mmap_size = 268435456'); // 256MB memory-mapped I/O
sqlite.pragma('optimize'); // Run query planner optimizations
```

### Monitoring

```typescript
import fs from 'node:fs';

// Monitor database size and performance (PRAGMAs via the raw connection)
const stats = {
  fileSize: fs.statSync(dbPath).size,
  pageCount: sqlite.prepare('PRAGMA page_count').get(),
  pageSize: sqlite.prepare('PRAGMA page_size').get(),
  walSize: fs.existsSync(dbPath + '-wal') ? fs.statSync(dbPath + '-wal').size : 0
};

server.log.info({ operation: 'sqlite_monitoring', stats }, 'Database statistics');
```

## Integration with DeployStack Features

### Global Settings

SQLite provides optimal performance for global settings:
- **Fast initialization**: All settings created in a single transaction
- **No batching needed**: No parameter limits to worry about
- **Immediate consistency**: All changes immediately visible

### Plugin System

Plugins work seamlessly with SQLite:
- **Table creation**: Plugin tables created through standard migrations
- **Data operations**: Full SQL feature support
- **Performance**: Excellent performance for plugin data operations

### Migration System

SQLite migration advantages:
- **Fast execution**: Local file operations are very fast
- **Transaction safety**: Full rollback support for failed migrations
- **Simple debugging**: Easy to inspect database state during development

## Future Considerations

### Scaling Beyond SQLite

When you outgrow SQLite, DeployStack makes migration easy:

1. **Export Data**: Use SQLite's `.dump` command
2. **Transform Schema**: Convert to target database format
3. 
**Update Configuration**: Change database type in setup +4. **Import Data**: Load data into new database + +### Hybrid Approaches + +Consider hybrid approaches for scaling: +- **Read Replicas**: Use D1 or Turso for global read access +- **Caching Layer**: Add Redis for frequently accessed data +- **Microservices**: Split into multiple services with separate databases + +--- + +For general database concepts and cross-database functionality, see the [Database Development Guide](/deploystack/development/backend/database). + +For initial setup and configuration, see the [Database Setup Guide](/deploystack/self-hosted/database-setup). + +For comparison with other databases, see the [Cloudflare D1 Development Guide](/deploystack/development/backend/database-d1). diff --git a/docs/deploystack/development/backend/database-turso.mdx b/docs/deploystack/development/backend/database-turso.mdx new file mode 100644 index 0000000..e05e12d --- /dev/null +++ b/docs/deploystack/development/backend/database-turso.mdx @@ -0,0 +1,421 @@ +--- +title: Turso Database Development +description: Complete guide to using Turso distributed SQLite database with DeployStack Backend, including setup, configuration, and best practices. +--- + +# Turso Database Development + +## Overview + +Turso is a distributed SQLite database service that provides global replication and edge performance. It's built on libSQL, an open-source fork of SQLite that adds additional features while maintaining full SQLite compatibility. + +DeployStack integrates with Turso using the official `@libsql/client` driver through Drizzle ORM, providing excellent performance and developer experience. 
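Before handing credentials to the libSQL client, the two environment variables described under Environment Configuration below can be validated up front. A minimal illustrative sketch (`readTursoConfig` is a hypothetical helper, not part of the DeployStack codebase):

```typescript
// Hypothetical validation helper for the Turso environment variables.
interface TursoConfig {
  url: string;
  authToken: string;
}

function readTursoConfig(env: Record<string, string | undefined> = process.env): TursoConfig {
  const url = env.TURSO_DATABASE_URL;
  const authToken = env.TURSO_AUTH_TOKEN;
  // Turso database URLs use the libsql:// scheme.
  if (!url || !url.startsWith('libsql://')) {
    throw new Error('TURSO_DATABASE_URL must be set and start with "libsql://"');
  }
  if (!authToken) {
    throw new Error('TURSO_AUTH_TOKEN must be set');
  }
  return { url, authToken };
}
```

The resulting `url` and `authToken` map directly onto the options the `@libsql/client` driver expects, so misconfiguration fails fast at startup rather than on the first query.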
+ +## Key Features + +- **Global Replication**: Automatic multi-region database replication +- **Edge Performance**: Low-latency access from anywhere in the world +- **SQLite Compatibility**: Full compatibility with SQLite syntax and features +- **Scalability**: Automatic scaling based on usage patterns +- **libSQL Protocol**: Enhanced SQLite with additional networking capabilities + +## Setup and Configuration + +### Prerequisites + +1. **Turso Account**: Sign up at [turso.tech](https://turso.tech) +2. **Turso CLI**: Install the Turso CLI tool +3. **Database Creation**: Create a Turso database instance + +### Installing Turso CLI + +```bash +# macOS (Homebrew) +brew install tursodatabase/tap/turso + +# Linux/macOS (curl) +curl -sSfL https://get.tur.so/install.sh | bash + +# Windows (PowerShell) +powershell -c "irm get.tur.so/install.ps1 | iex" +``` + +### Creating a Turso Database + +```bash +# Login to Turso +turso auth login + +# Create a new database +turso db create deploystack-dev + +# Get the database URL +turso db show deploystack-dev --url + +# Create an authentication token +turso db tokens create deploystack-dev +``` + +### Environment Configuration + +Add the following environment variables to your `.env` file: + +```bash +# Turso Configuration +TURSO_DATABASE_URL=libsql://your-database-name-your-org.turso.io +TURSO_AUTH_TOKEN=your_auth_token_here +``` + +**Important Notes:** +- The database URL should start with `libsql://` +- Keep your auth token secure and never commit it to version control +- Use different databases for different environments (dev/staging/prod) + +## Database Setup in DeployStack + +### Using the Setup API + +Once your environment variables are configured, use the DeployStack setup API: + +```bash +# Setup Turso database +curl -X POST http://localhost:3000/api/db/setup \ + -H "Content-Type: application/json" \ + -d '{"type": "turso"}' +``` + +### Verification + +Check that the database is properly configured: + +```bash +# Check 
database status +curl http://localhost:3000/api/db/status +``` + +Expected response: +```json +{ + "configured": true, + "initialized": true, + "dialect": "turso" +} +``` + +## Development Workflow + +### Schema Development + +Turso uses the same SQLite schema as other database types. All schema changes are made in `src/db/schema.sqlite.ts`: + +```typescript +// Example: Adding a new table +export const projects = sqliteTable('projects', { + id: text('id').primaryKey(), + name: text('name').notNull(), + description: text('description'), + userId: text('user_id').references(() => authUser.id), + createdAt: integer('created_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()), + updatedAt: integer('updated_at', { mode: 'timestamp' }).notNull().$defaultFn(() => new Date()), +}); +``` + +### Migration Generation + +Generate migrations using the standard Drizzle commands: + +```bash +# Generate migration files +npm run db:generate + +# Apply migrations (automatic on server start) +npm run db:up +``` + +### Database Operations + +All standard Drizzle operations work with Turso: + +```typescript +// Example: Querying data +const users = await db.select().from(schema.authUser).all(); + +// Example: Inserting data +await db.insert(schema.authUser).values({ + id: 'user_123', + username: 'john_doe', + email: 'john@example.com', + // ... 
other fields +}); + +// Example: Complex queries with joins +const usersWithTeams = await db + .select() + .from(schema.authUser) + .leftJoin(schema.teamMembers, eq(schema.authUser.id, schema.teamMembers.userId)) + .where(eq(schema.authUser.active, true)); +``` + +## Performance Considerations + +### Connection Management + +Turso connections are managed automatically by the libSQL client: + +- **Connection Pooling**: Automatic connection pooling for optimal performance +- **Keep-Alive**: Connections are kept alive to reduce latency +- **Automatic Reconnection**: Handles network interruptions gracefully + +### Query Optimization + +- **Prepared Statements**: Use prepared statements for repeated queries +- **Batch Operations**: Group multiple operations when possible +- **Indexing**: Add appropriate indexes for frequently queried columns + +```typescript +// Example: Batch operations +await db.batch([ + db.insert(schema.authUser).values(user1), + db.insert(schema.authUser).values(user2), + db.insert(schema.authUser).values(user3), +]); +``` + +### Regional Performance + +- **Edge Locations**: Turso automatically routes queries to the nearest edge location +- **Read Replicas**: Read operations are served from local replicas +- **Write Consistency**: Writes are replicated globally with eventual consistency + +## Best Practices + +### Environment Management + +```bash +# Development +TURSO_DATABASE_URL=libsql://deploystack-dev-your-org.turso.io +TURSO_AUTH_TOKEN=dev_token_here + +# Staging +TURSO_DATABASE_URL=libsql://deploystack-staging-your-org.turso.io +TURSO_AUTH_TOKEN=staging_token_here + +# Production +TURSO_DATABASE_URL=libsql://deploystack-prod-your-org.turso.io +TURSO_AUTH_TOKEN=prod_token_here +``` + +### Security + +- **Token Rotation**: Regularly rotate authentication tokens +- **Environment Isolation**: Use separate databases for each environment +- **Access Control**: Use Turso's built-in access control features +- **Encryption**: Data is encrypted in 
transit and at rest + +### Monitoring + +```bash +# Monitor database usage +turso db show deploystack-prod + +# View recent activity +turso db shell deploystack-prod --command ".stats" + +# Check replication status +turso db locations deploystack-prod +``` + +## Debugging and Troubleshooting + +### Common Issues + +**Connection Errors** +``` +Error: Failed to connect to Turso database +``` +- Verify `TURSO_DATABASE_URL` is correct and starts with `libsql://` +- Check that `TURSO_AUTH_TOKEN` is valid and not expired +- Ensure network connectivity to Turso servers + +**Authentication Errors** +``` +Error: Authentication failed +``` +- Regenerate the auth token: `turso db tokens create your-database` +- Verify the token has proper permissions +- Check that the token matches the database + +**Migration Errors** +``` +Error: Migration failed to apply +``` +- Check migration SQL syntax is valid SQLite +- Verify no conflicting schema changes +- Review migration order and dependencies + +### Debug Logging + +Enable detailed logging to troubleshoot issues: + +```bash +# Enable debug logging +LOG_LEVEL=debug npm run dev +``` + +Look for Turso-specific log entries: +``` +[INFO] Creating Turso connection +[INFO] LibSQL client created +[INFO] Turso database instance created successfully +``` + +### Database Inspection + +```bash +# Connect to database shell +turso db shell your-database + +# Run SQL queries +turso db shell your-database --command "SELECT * FROM authUser LIMIT 5" + +# Export database +turso db dump your-database --output backup.sql +``` + +### Performance Monitoring + +```bash +# Check database statistics +turso db show your-database + +# Monitor query performance +turso db shell your-database --command "EXPLAIN QUERY PLAN SELECT * FROM authUser" +``` + +## Advanced Features + +### Multi-Region Setup + +```bash +# Create database with specific regions +turso db create deploystack-global --location lax,fra,nrt + +# Check current locations +turso db locations 
deploystack-global + +# Add more locations +turso db locations add deploystack-global syd +``` + +### Database Branching + +```bash +# Create a branch for development +turso db create deploystack-feature --from-db deploystack-main + +# Switch between branches +turso db shell deploystack-feature +``` + +### Backup and Restore + +```bash +# Create backup +turso db dump deploystack-prod --output backup-$(date +%Y%m%d).sql + +# Restore from backup +turso db shell deploystack-dev < backup-20250104.sql +``` + +## Integration with DeployStack Features + +### Global Settings + +Turso works seamlessly with DeployStack's global settings system: + +- **Batch Operations**: Efficient batch creation of settings +- **Encryption**: Settings are encrypted before storage +- **Performance**: Optimized for Turso's distributed architecture + +### Plugin System + +Plugins can extend the database schema with Turso: + +```typescript +// Example plugin with Turso-optimized tables +class MyPlugin implements Plugin { + databaseExtension: DatabaseExtension = { + tableDefinitions: { + 'my_table': { + id: (builder) => builder('id').primaryKey(), + data: (builder) => builder('data').notNull(), + // Optimized for Turso's replication + region: (builder) => builder('region'), + created_at: (builder) => builder('created_at') + } + } + }; +} +``` + +### Authentication + +Lucia authentication works perfectly with Turso: + +- **Session Management**: Distributed session storage +- **User Data**: Global user data replication +- **Performance**: Fast authentication checks worldwide + +## Migration from Other Databases + +### From SQLite + +Since Turso is SQLite-compatible, migration is straightforward: + +1. **Export SQLite data**: `sqlite3 database.db .dump > export.sql` +2. **Import to Turso**: `turso db shell your-database < export.sql` +3. **Update environment variables**: Switch to Turso configuration +4. **Test application**: Verify all functionality works + +### From D1 (if previously used) + +1. 
**Export D1 data**: Use Wrangler to export data +2. **Convert to SQLite format**: Ensure compatibility +3. **Import to Turso**: Load data into Turso database +4. **Update configuration**: Switch database type to Turso + +## Cost Optimization + +### Usage Monitoring + +```bash +# Check current usage +turso db show your-database + +# Monitor over time +turso org show +``` + +### Optimization Strategies + +- **Query Efficiency**: Optimize queries to reduce database load +- **Connection Reuse**: Leverage connection pooling +- **Regional Placement**: Choose regions close to your users +- **Data Archiving**: Archive old data to reduce storage costs + +## Support and Resources + +- **Turso Documentation**: [docs.turso.tech](https://docs.turso.tech) +- **libSQL Documentation**: [github.com/libsql/libsql](https://github.com/libsql/libsql) +- **Community Discord**: [discord.gg/turso](https://discord.gg/turso) +- **GitHub Issues**: [github.com/tursodatabase/turso-cli](https://github.com/tursodatabase/turso-cli) + +## Next Steps + +1. **Set up your Turso database** following the configuration steps above +2. **Configure environment variables** in your `.env` file +3. **Run the database setup** using the DeployStack API +4. **Start developing** with global SQLite performance +5. **Monitor and optimize** your database usage + +For more information about database management in DeployStack, see the [Database Management Guide](/deploystack/development/backend/database). diff --git a/docs/deploystack/development/backend/database.mdx b/docs/deploystack/development/backend/database.mdx index b2cb130..07f9109 100644 --- a/docs/deploystack/development/backend/database.mdx +++ b/docs/deploystack/development/backend/database.mdx @@ -1,108 +1,156 @@ --- title: Database Management -description: SQLite and PostgreSQL database setup, schema management, migrations, and plugin database extensions for DeployStack Backend development. 
+description: Multi-database support with SQLite and Turso using environment-based configuration and Drizzle ORM for DeployStack Backend development. --- # Database Management ## Overview -DeployStack uses SQLite with Drizzle ORM for database operations. This combination provides excellent performance, type safety, and a modern, developer-friendly experience without the need for external database dependencies. +DeployStack supports multiple database types through an environment-based configuration system using Drizzle ORM. The system provides excellent performance, type safety, and a modern, developer-friendly experience with support for: + +- **SQLite** - Local file-based database (default for development) +- **Turso** - Distributed SQLite database with global replication + +All databases use the same SQLite syntax and schema, ensuring consistency across different deployment environments. ## Database Setup and Configuration -The backend server provides API endpoints for managing the initial database setup and checking its status. +The backend uses an environment-based configuration system where database credentials are provided via environment variables, and the database type is selected through the setup API. + +> **Setup Instructions**: For step-by-step setup instructions, see the [Database Setup Guide](/deploystack/self-hosted/database-setup). 
+ +> **Database-Specific Guides**: For detailed technical information about specific databases, see: +> - [SQLite Development Guide](/deploystack/development/backend/database-sqlite) +> - [Turso Development Guide](/deploystack/development/backend/database-turso) + +### Environment Variables + +Configure your chosen database type by setting the appropriate environment variables: + +#### SQLite Configuration +```bash +# Optional - defaults to persistent_data/database/deploystack.db +SQLITE_DB_PATH=persistent_data/database/deploystack.db +``` + +#### Turso Configuration +```bash +TURSO_DATABASE_URL=libsql://your-database-url +TURSO_AUTH_TOKEN=your_auth_token +``` ### Database Status -You can check the current status of the database (whether it's configured and initialized) using the following endpoint: +Check the current status of the database configuration and initialization: - **Endpoint:** `GET /api/db/status` - **Method:** `GET` -- **Response:** A JSON object indicating the database `configured` status (boolean), `initialized` status (boolean), and current `dialect` (e.g., "sqlite" or "postgres", or null if not configured). +- **Response:** JSON object with database status information + +```json +{ + "configured": true, + "initialized": true, + "dialect": "sqlite" +} +``` ### Initial Database Setup -To perform the initial setup of the database, use the following endpoint: +Perform the initial database setup by selecting your database type: - **Endpoint:** `POST /api/db/setup` - **Method:** `POST` -- **Request Body:** A JSON object specifying the database type and configuration. +- **Request Body:** JSON object specifying the database type -**For SQLite:** -The server will automatically manage the database file location. The request body should be: +#### Setup Examples +**SQLite Setup:** ```json { "type": "sqlite" } ``` -The SQLite database file will be created and stored at: `services/backend/persistent_data/database/deploystack.db`. 
- -**Important:** All database files must be stored within the `persistent_data` directory to ensure proper data persistence and backup capabilities. - -**For PostgreSQL:** -The request body should be: - +**Turso Setup:** ```json { - "type": "postgres", - "connectionString": "postgresql://username:password@host:port/mydatabase" + "type": "turso" } ``` -Replace the `connectionString` with your actual PostgreSQL connection URI. - -**Note:** The database setup is now complete in a single API call. After successful setup, all database-dependent services (global settings, plugins, etc.) are automatically initialized and ready to use immediately. No server restart is required. - #### API Response -The setup endpoint returns a JSON response indicating the success status and whether a restart is required: - -**Successful Setup (No Restart Required):** +The setup endpoint returns a JSON response indicating success and restart requirements: +**Successful Setup:** ```json { "message": "Database setup successful. All services have been initialized and are ready to use.", - "restart_required": false + "restart_required": false, + "database_type": "sqlite" } ``` -**Successful Setup (Restart Required - Fallback):** - +**Setup with Restart Required (Fallback):** ```json { "message": "Database setup successful, but some services may require a server restart to function properly.", - "restart_required": true + "restart_required": true, + "database_type": "sqlite" } ``` -In most cases, the setup will complete successfully without requiring a restart. The `restart_required: true` response is a fallback for edge cases where the automatic re-initialization fails. 
+### Database Selection File -### Database Configuration File +The chosen database type is stored in: +- `services/backend/persistent_data/db.selection.json` -The choice of database (SQLite or PostgreSQL) and its specific configuration (like the connection string for PostgreSQL) is stored in a JSON file located at: +This file is automatically created and managed by the setup API. Manual editing is not recommended. -- `services/backend/persistent_data/db.selection.json` +Example content: +```json +{ + "type": "sqlite", + "selectedAt": "2025-01-02T18:22:15.000Z", + "version": "1.0" +} +``` + +## Architecture + +### Key Components + +- **Drizzle ORM**: Type-safe ORM with native driver support +- **Native Drivers**: + - `better-sqlite3` for SQLite + - `@libsql/client` for Turso +- **Unified Schema**: Single schema definition works across all database types +- **Environment Configuration**: Database credentials via environment variables -This file is automatically managed by the setup API. You typically do not need to edit it manually. +### Database Drivers -## Key Components +The system uses native Drizzle drivers for optimal performance: -- **SQLite**: Embedded SQL database engine -- **Drizzle ORM**: Type-safe ORM for TypeScript -- **Drizzle Kit**: Schema migration tool for Drizzle ORM +```typescript +// SQLite +import { drizzle } from 'drizzle-orm/better-sqlite3'; + +// Turso +import { drizzle } from 'drizzle-orm/libsql'; +``` ## Database Structure -The database schema is defined in `src/db/schema.sqlite.ts`. This is the **single source of truth** for all database schema definitions. It contains: +The database schema is defined in `src/db/schema.sqlite.ts`. This is the **single source of truth** for all database schema definitions and works across all supported database types. -1. Base schema tables (core application) +The schema contains: +1. Core application tables 2. Plugin table definitions (populated dynamically) 3. 
Proper foreign key relationships and constraints -**Important**: Only `schema.sqlite.ts` should be edited for schema changes. The previous `schema.ts` file has been removed to eliminate confusion. +**Important**: Only `schema.sqlite.ts` should be edited for schema changes. All databases use SQLite syntax. ## Making Schema Changes @@ -124,8 +172,6 @@ Follow these steps to add or modify database tables: }); ``` - **Note**: Tables are automatically exported and available - no need to manually add them to a base schema object. - 2. **Generate Migration** Run the migration generation command: @@ -134,21 +180,19 @@ Follow these steps to add or modify database tables: npm run db:generate ``` - This will create SQL migration files in `drizzle/migrations/` based on your schema changes. + This creates SQL migration files in `drizzle/migrations_sqlite/` that work across all database types. 3. **Review Migrations** - Examine the generated SQL files in `drizzle/migrations/` to ensure they match your intended changes. + Examine the generated SQL files in `drizzle/migrations_sqlite/` to ensure they match your intended changes. 4. **Apply Migrations** - Either: - - Restart the application (migrations are applied on startup) - - Run migrations directly: + Migrations are automatically applied on server startup. You can also run them manually: - ```bash - npm run db:up - ``` + ```bash + npm run db:up + ``` 5. 
**Use the New Schema** @@ -162,68 +206,158 @@ Follow these steps to add or modify database tables: }); ``` +## Migration Management + +- **Unified Migrations**: Single `migrations_sqlite` folder works for all database types +- **Automatic Tracking**: Migrations tracked in `__drizzle_migrations` table +- **Incremental Application**: Only new migrations are applied +- **Transaction Safety**: Migrations applied in transactions for consistency + +### Migration Compatibility + +All databases use SQLite syntax, ensuring migration compatibility: +- **SQLite**: Direct execution +- **Turso**: libSQL protocol with SQLite syntax + +## Global Settings Integration + +During database setup, DeployStack automatically initializes global settings that configure the application. This process is database-aware and handles database-specific limitations: + +### Automatic Initialization + +The global settings system: +- **Loads setting definitions** from all modules in `src/global-settings/` +- **Creates setting groups** for organizing configuration options +- **Initializes default values** for all settings with proper encryption +- **Handles database limitations** through automatic batching + +### Database-Specific Handling + +**SQLite**: Settings are created in large batches for optimal performance + +**Turso**: Uses efficient batch operations with libSQL protocol + +> **Global Settings Documentation**: For detailed information about global settings, see the [Global Settings Guide](/deploystack/development/backend/global-settings). + ## Plugin Database Extensions Plugins can add their own tables through the `databaseExtension` property: -1. Define tables in the plugin's `schema.ts` file -2. Include tables in the plugin's `databaseExtension.tables` array +1. Define tables in the plugin's schema file +2. Include tables in the plugin's `databaseExtension.tableDefinitions` 3. 
Implement `onDatabaseInit()` for seeding or initialization -Tables defined by plugins are automatically created when the plugin is loaded and initialized. +Plugin tables are automatically created and work across all database types. + +### Plugin Global Settings + +Plugins can also contribute global settings that are automatically integrated during database setup: + +```typescript +// Example plugin with global settings +class MyPlugin implements Plugin { + globalSettingsExtension: GlobalSettingsExtension = { + groups: [{ id: 'my_plugin', name: 'My Plugin Settings' }], + settings: [ + { + key: 'myPlugin.feature.enabled', + defaultValue: true, + type: 'boolean', + groupId: 'my_plugin' + } + ] + }; +} +``` + +## Development Workflow -## Migration Management +1. **Environment Setup**: Configure environment variables for your chosen database +2. **Database Selection**: Use `/api/db/setup` to select and initialize database +3. **Schema Changes**: Modify `src/db/schema.sqlite.ts` +4. **Generate Migrations**: Run `npm run db:generate` +5. **Apply Changes**: Restart server or run `npm run db:up` +6. **Update Code**: Use the modified schema in your application -- Migrations are tracked in a `__drizzle_migrations` table -- Only new migrations are applied when the server starts -- Migrations are applied in a transaction to ensure consistency +## Database-Specific Considerations -## Development Workflow +### SQLite +- **File Location**: `persistent_data/database/deploystack.db` +- **Performance**: Excellent for development and small to medium deployments +- **Backup**: Simple file-based backup -1. Make schema changes in `src/db/schema.sqlite.ts` -2. Generate migrations with `npm run db:generate` -3. Restart the server to apply migrations -4. 
Update application code to use the modified schema +### Turso +- **Global Replication**: Multi-region database replication +- **Edge Performance**: Low-latency access worldwide +- **libSQL Protocol**: Enhanced SQLite with additional features +- **Scaling**: Automatic scaling based on usage ## Best Practices +### Schema Design - Use meaningful column names and consistent naming conventions -- Add appropriate indexes for columns that will be frequently queried +- Add appropriate indexes for frequently queried columns - Include proper foreign key constraints for relational data -- Add explicit types for all columns -- Always use migrations for schema changes in development and production -- **Important**: All schema changes should be made in `src/db/schema.sqlite.ts` as it is the single source of truth for Drizzle Kit migration generation -- Never manually create migration files - always use `npm run db:generate` to ensure proper migration structure +- Always use migrations for schema changes -## Inspecting the Database +### Environment Management +- Keep database credentials in environment variables +- Use different databases for different environments (dev/staging/prod) +- Never commit database credentials to version control -You can inspect the SQLite database directly using various tools: +### Migration Safety +- Always review generated migrations before applying +- Test migrations in development before production +- Keep migrations small and focused +- Never manually edit migration files -- **SQLite CLI**: +## Inspecting Databases - ```bash - sqlite3 services/backend/persistent_data/database/deploystack.db - ``` +### SQLite +```bash +# Using SQLite CLI +sqlite3 services/backend/persistent_data/database/deploystack.db - (Assuming the command is run from the project root directory) +# Using DB Browser for SQLite (GUI) +# Download from: https://sqlitebrowser.org/ +``` -- **Visual Tools**: [DB Browser for SQLite](https://sqlitebrowser.org/) or VSCode extensions like 
SQLite Viewer +### Turso +```bash +# Using Turso CLI +turso db shell your-database -## Troubleshooting +# Using libSQL shell +# Available at: https://github.com/libsql/libsql +``` -### Database Setup Issues +## Troubleshooting -- **Setup fails with re-initialization error**: If the setup endpoint returns `restart_required: true`, you can manually restart the server to complete the setup process -- **Database already configured**: If you get a 409 error, the database has already been set up. Use the status endpoint to check the current configuration -- **Services not working after setup**: Check the server logs for any initialization errors. In rare cases, a manual restart may be needed +### Setup Issues +- **Configuration Error**: Verify environment variables are set correctly +- **Network Issues**: Check connectivity for Turso +- **Permissions**: Ensure API tokens have proper permissions ### Migration Issues +- **Migration Conflicts**: Check for duplicate or conflicting migrations +- **Schema Drift**: Ensure all environments use the same migrations +- **Rollback**: Manually revert problematic migrations if needed -- If you get a "table already exists" error, check if you've already applied the migration -- For complex schema changes, you may need to create multiple migrations -- To reset the database, delete the `services/backend/persistent_data/database/deploystack.db` file and restart the server +### Performance Issues +- **SQLite**: Check file system performance and disk space +- **Turso**: Monitor regional performance and connection latency ### Plugin Issues +- **Missing Tables**: Ensure plugins are loaded before database initialization +- **Schema Conflicts**: Check for table name conflicts between plugins +- **Initialization Errors**: Review plugin database extension implementations + +## Future Database Support + +The environment-based architecture makes it easy to add support for additional databases: + +- **PostgreSQL**: Planned for future release +- 
**MySQL**: Possible future addition +- **Other SQLite-compatible databases**: Can be added with minimal changes -- **Plugins not working after setup**: Plugins with database extensions should automatically receive database access after setup. Check server logs for plugin re-initialization messages -- **Plugin database tables missing**: Ensure plugins are properly loaded before database setup, or restart the server if tables are missing +The unified schema approach ensures that adding new database types requires minimal changes to existing application code. diff --git a/docs/deploystack/development/backend/environment-variables.mdx b/docs/deploystack/development/backend/environment-variables.mdx index 080fef6..a98facc 100644 --- a/docs/deploystack/development/backend/environment-variables.mdx +++ b/docs/deploystack/development/backend/environment-variables.mdx @@ -76,8 +76,6 @@ GITHUB_CLIENT_ID=your-github-client-id GITHUB_CLIENT_SECRET=your-github-client-secret GITHUB_REDIRECT_URI=http://localhost:3000/api/auth/github/callback -# Application -DEPLOYSTACK_BACKEND_VERSION=0.20.5 ``` #### `.env.local` (Local Overrides) @@ -188,8 +186,7 @@ The production Dockerfile creates a default `.env` file with basic settings: ```dockerfile # Create a default .env file RUN echo "NODE_ENV=production" > .env && \ - echo "PORT=3000" >> .env && \ - echo "DEPLOYSTACK_BACKEND_VERSION=${DEPLOYSTACK_BACKEND_VERSION:-$(node -e "console.log(require('./package.json').version)")}" >> .env + echo "PORT=3000" >> .env # Start with env file CMD ["node", "--env-file=.env", "dist/index.js"] @@ -397,7 +394,6 @@ NODE_ENV=development # Application Specific (use DEPLOYSTACK_ prefix) DEPLOYSTACK_FRONTEND_URL=http://localhost:5173 -DEPLOYSTACK_BACKEND_VERSION=0.20.5 DEPLOYSTACK_ENCRYPTION_SECRET=secret # Third-party Services (use service name prefix) @@ -417,10 +413,10 @@ ENABLE_DEBUG_MODE=false ```typescript // Add to your server startup -console.log('Environment Variables:') -console.log('PORT:', 
process.env.PORT) -console.log('NODE_ENV:', process.env.NODE_ENV) -console.log('FRONTEND_URL:', process.env.DEPLOYSTACK_FRONTEND_URL) +logger.info('Environment Variables:') +logger.info('PORT:', process.env.PORT) +logger.info('NODE_ENV:', process.env.NODE_ENV) +logger.info('FRONTEND_URL:', process.env.DEPLOYSTACK_FRONTEND_URL) // Or create a debug endpoint (development only) if (process.env.NODE_ENV === 'development') { @@ -455,16 +451,14 @@ The backend displays a startup banner with key environment information: ```typescript // src/utils/banner.ts export const displayStartupBanner = (port: number): void => { - const version = process.env.DEPLOYSTACK_BACKEND_VERSION || '0.1.0' const environment = process.env.NODE_ENV || 'development' - console.log(` + logger.info(` ╔══════════════════════════════════════════════════════════════════════════════╗ ║ 🚀 DeployStack Backend ║ ║ ║ ║ Running on port ${port} ║ ║ Environment: ${environment} ║ - ║ Version: ${version} ║ ╚══════════════════════════════════════════════════════════════════════════════╝ `) } diff --git a/docs/deploystack/development/backend/global-settings.mdx b/docs/deploystack/development/backend/global-settings.mdx index 7524e75..7486582 100644 --- a/docs/deploystack/development/backend/global-settings.mdx +++ b/docs/deploystack/development/backend/global-settings.mdx @@ -401,17 +401,17 @@ const smtpSettings = await GlobalSettings.getGroupValuesWithFullKeys('smtp'); ```typescript // Check if setting exists and has a value if (await GlobalSettings.isSet('smtp.host')) { - console.log('SMTP host is configured'); + logger.info('SMTP host is configured'); } // Check if setting is empty if (await GlobalSettings.isEmpty('api.key')) { - console.log('API key needs to be configured'); + logger.warn('API key needs to be configured'); } // Check if setting exists in database (regardless of value) if (await GlobalSettings.exists('feature.new_ui')) { - console.log('New UI feature flag exists'); + logger.info('New UI feature 
flag exists'); } ``` @@ -427,7 +427,7 @@ try { headers: { 'Authorization': `Bearer ${requiredApiKey}` } }); } catch (error) { - console.error('Required setting missing:', error.message); + logger.error('Required setting missing:', error.message); // Handle missing configuration } ``` @@ -900,7 +900,7 @@ try { } // Use the setting } catch (error) { - console.error('Failed to retrieve setting:', error); + logger.error('Failed to retrieve setting:', error); // Handle the error appropriately } ``` diff --git a/docs/deploystack/development/backend/logging.mdx b/docs/deploystack/development/backend/logging.mdx new file mode 100644 index 0000000..95e78d1 --- /dev/null +++ b/docs/deploystack/development/backend/logging.mdx @@ -0,0 +1,543 @@ +--- +title: Backend Logging & Log Level Configuration +description: Complete guide to configuring and using log levels in the DeployStack backend for development and production environments. +sidebar: Backend Development +--- + +import { Callout } from 'fumadocs-ui/components/callout'; +import { CodeBlock } from 'fumadocs-ui/components/codeblock'; + +# Backend Log Level Configuration + +The DeployStack backend uses **Pino** logger with **Fastify** for high-performance, structured logging. This guide covers everything you need to know about configuring and using log levels effectively. 
+ +## Overview + +DeployStack's logging system is built on industry best practices: + +- **Pino Logger**: Ultra-fast JSON logger for Node.js +- **Fastify Integration**: Native logging support with request correlation +- **Environment-based Configuration**: Automatic log level adjustment based on NODE_ENV +- **Structured Logging**: JSON output for production, pretty-printed for development + +## Available Log Levels + +Log levels are ordered by severity (lowest to highest): + +| Level | Numeric Value | Description | When to Use | +|-------|---------------|-------------|-------------| +| `trace` | 10 | Very detailed debugging | Tracing function calls, variable states | +| `debug` | 20 | Debugging information | Development debugging, detailed flow | +| `info` | 30 | General information | Important events, startup messages | +| `warn` | 40 | Warning messages | Recoverable errors, deprecation notices | +| `error` | 50 | Error conditions | Caught exceptions, failed operations | +| `fatal` | 60 | Fatal errors | Unrecoverable errors, application crashes | + +## Configuration + +### Environment Variables + +Set the log level using the `LOG_LEVEL` environment variable: + +```bash +# Development - show debug information +LOG_LEVEL=debug npm run dev + +# Production - show info and above +LOG_LEVEL=info npm run start + +# Troubleshooting - show everything +LOG_LEVEL=trace npm run dev + +# Quiet mode - only errors and fatal +LOG_LEVEL=error npm run start +``` + +### Default Behavior + +The logger automatically adjusts based on your environment: + +```typescript +// From src/fastify/config/logger.ts +export const loggerConfig: FastifyServerOptions['logger'] = { + level: process.env.LOG_LEVEL || (process.env.NODE_ENV === 'production' ? 'info' : 'debug'), + transport: process.env.NODE_ENV !== 'production' + ? 
{
+        target: 'pino-pretty',
+        options: {
+          colorize: true,
+          translateTime: 'SYS:standard',
+          ignore: 'pid,hostname'
+        }
+      }
+    : undefined
+}
+```
+
+**Default Levels:**
+- **Development**: `debug` (shows debug, info, warn, error, fatal)
+- **Production**: `info` (shows info, warn, error, fatal)
+
+## Log Output Formats
+
+### Development Format (Pretty-printed)
+
+```
+[2025-07-03 10:48:06.636 +0200] INFO: ✅ Database initialization completed
+[2025-07-03 10:48:06.640 +0200] DEBUG: 🔄 Starting global settings initialization...
+[2025-07-03 10:48:06.645 +0200] ERROR: ❌ Failed to connect to external service
+```
+
+### Production Format (JSON)
+
+```json
+{"level":30,"time":"2025-07-03T08:48:06.636Z","pid":1234,"hostname":"server","msg":"Database initialization completed"}
+{"level":20,"time":"2025-07-03T08:48:06.640Z","pid":1234,"hostname":"server","msg":"Starting global settings initialization..."}
+{"level":50,"time":"2025-07-03T08:48:06.645Z","pid":1234,"hostname":"server","msg":"Failed to connect to external service"}
+```
+
+## Logger Parameter Injection Pattern
+
+DeployStack follows a consistent pattern for passing logger instances to services and utilities. This ensures proper structured logging throughout the application while maintaining the Fastify logger chain for request correlation.
+
+### ✅ DO: Pass Logger as Parameter to Services
+
+```typescript
+// ✅ Good - Services accept logger as parameter
+class EmailService {
+  static async sendEmail(options: SendEmailOptions, logger: FastifyBaseLogger) {
+    logger.debug({
+      operation: 'send_email',
+      recipient: options.to,
+      template: options.template
+    }, 'Sending email');
+
+    try {
+      // ... 
email sending logic
+      logger.info({
+        operation: 'send_email',
+        messageId: result.messageId,
+        recipients: result.recipients
+      }, 'Email sent successfully');
+
+      return result;
+    } catch (error) {
+      logger.error({
+        operation: 'send_email',
+        error,
+        recipient: options.to,
+        template: options.template
+      }, 'Failed to send email');
+      throw error;
+    }
+  }
+}
+
+// ✅ Good - Database functions accept logger parameter
+export async function initializeDatabase(logger: FastifyBaseLogger): Promise<boolean> {
+  logger.info({
+    operation: 'initialize_database'
+  }, 'Database initialization started');
+
+  try {
+    // ... database initialization logic
+    logger.info({
+      operation: 'initialize_database'
+    }, 'Database initialized successfully');
+    return true;
+  } catch (error) {
+    logger.error({
+      operation: 'initialize_database',
+      error
+    }, 'Failed to initialize database');
+    return false;
+  }
+}
+```
+
+### ✅ DO: Pass Logger from Calling Context
+
+```typescript
+// ✅ Good - Pass logger from server context
+await initializeDatabase(server.log);
+
+// ✅ Good - Pass logger from request context in routes
+server.post('/api/send-email', async (request, reply) => {
+  const result = await EmailService.sendEmail(emailOptions, request.log);
+  return result;
+});
+
+// ✅ Good - Pass logger in plugin initialization
+await pluginManager.initializePlugins(server.log);
+```
+
+### ✅ DO: Use Child Loggers for Persistent Context
+
+```typescript
+// ✅ Good - Create child logger with persistent context
+class UserService {
+  static async processUser(userId: string, logger: FastifyBaseLogger) {
+    const childLogger = logger.child({ userId, service: 'UserService' });
+
+    childLogger.debug('Starting user processing');
+    childLogger.info('User data retrieved');
+    childLogger.debug('Processing completed');
+  }
+}
+```
+
+### ❌ DON'T: Create Separate Logger Utilities
+
+```typescript
+// ❌ Bad - Don't create separate logger utility files
+// utils/logger.ts
+export const logger = createLogger();
+
+// ❌ 
Bad - Don't import logger utilities in services +import { logger } from '../utils/logger'; + +class SomeService { + static async doSomething() { + logger.info('This bypasses the Fastify logger chain'); + } +} +``` + +### ❌ DON'T: Use console.* in Services + +```typescript +// ❌ Bad - console.* bypasses the logging system +class DatabaseService { + static async connect() { + console.log('Connecting to database...'); // No structured logging + console.error('Connection failed:', error); // No context objects + } +} + +// ✅ Good - Use passed logger parameter +class DatabaseService { + static async connect(logger: FastifyBaseLogger) { + logger.info({ + operation: 'database_connect' + }, 'Connecting to database...'); + + logger.error({ + operation: 'database_connect', + error, + connectionString: config.url + }, 'Connection failed'); + } +} +``` + +## Developer Best Practices + +### ✅ DO: Use Proper Log Levels + +```typescript +// ✅ Good - Use appropriate log levels +server.log.debug('🔄 Starting database initialization...'); +server.log.info('✅ Database connection established'); +server.log.warn('⚠️ Using fallback configuration'); +server.log.error('❌ Failed to process request:', error); +server.log.fatal('💀 Critical system failure:', error); +``` + +### ❌ DON'T: Use Manual Prefixes + +```typescript +// ❌ Bad - Manual prefixes defeat the purpose +server.log.info('🔄 [DEBUG] Starting database initialization...'); +server.log.info('✅ [INFO] Database connection established'); + +// ✅ Good - Use proper log levels instead +server.log.debug('🔄 Starting database initialization...'); +server.log.info('✅ Database connection established'); +``` + +### ✅ DO: Use the Fastify Logger + +```typescript +// ✅ Good - Use server.log for consistent formatting +server.log.info('User authenticated successfully'); + +// ❌ Bad - console.log bypasses the logging system +console.log('User authenticated successfully'); +``` + +### ✅ DO: Add Context Objects + +Context objects make your logs 
searchable, filterable, and much more useful for debugging. Always include relevant context that helps identify what happened, where, and to whom. + +```typescript +// ✅ Good - Structured logging with context +server.log.info({ + userId: 'user123', + action: 'login', + ipAddress: '192.168.1.1', + userAgent: 'Mozilla/5.0...', + operation: 'user_authentication' +}, 'User login successful'); + +// ✅ Good - Error logging with full context +server.log.error({ + error: err, + userId: 'user123', + operation: 'database_query', + table: 'users', + queryType: 'SELECT' +}, 'Database operation failed'); + +// ✅ Good - Service operations with context +server.log.debug({ + recipient: 'user@example.com', + template: 'welcome', + messageId: 'abc123', + operation: 'send_email' +}, 'Email sent successfully'); + +// ✅ Good - API operations with context +server.log.warn({ + endpoint: '/api/users', + method: 'POST', + statusCode: 429, + clientIp: '192.168.1.1', + operation: 'rate_limit_exceeded' +}, 'Rate limit exceeded for client'); +``` + +**Best Practices for Context Objects:** + +- **Always include `operation`**: A consistent field that describes what operation was being performed +- **Add identifiers**: Include relevant IDs (userId, orderId, sessionId, etc.) 
for easy filtering +- **Include request context**: IP addresses, user agents, request IDs for web requests +- **Add timing information**: Duration, timestamps, or performance metrics when relevant +- **Use consistent naming**: Stick to camelCase and consistent field names across your application + +**Examples of Good Context Properties:** +- `operation`: What was happening (e.g., 'send_email', 'user_login', 'database_query') +- `userId`, `sessionId`, `requestId`: Identifiers for tracking +- `duration`, `responseTime`: Performance metrics +- `statusCode`, `method`, `endpoint`: HTTP-related context +- `table`, `queryType`: Database-related context +- `recipient`, `template`, `messageId`: Email-related context + +### ✅ DO: Use Child Loggers for Context + +```typescript +// ✅ Good - Child logger with persistent context +function processUser(userId: string) { + const childLogger = server.log.child({ userId }); + + childLogger.debug('Starting user processing'); + childLogger.info('User data retrieved'); + childLogger.debug('Processing completed'); +} +``` + +## Common Logging Patterns + +### Database Operations + +```typescript +// Database initialization +server.log.debug('🔄 Initializing database connection...'); +server.log.info('✅ Database connected successfully'); + +// Query operations +server.log.debug('Executing user query', { userId, query: 'SELECT * FROM users' }); +server.log.warn('Slow query detected', { duration: '2.5s', query: 'complex_query' }); +``` + +### Authentication & Security + +```typescript +// Authentication events +server.log.info('User login attempt', { email: user.email, ipAddress }); +server.log.warn('Failed login attempt', { email, ipAddress, reason: 'invalid_password' }); +server.log.error('Security violation detected', { ipAddress, action: 'brute_force' }); +``` + +### API Requests + +```typescript +// Request processing (handled automatically by Fastify) +// Custom business logic logging +server.log.debug('Processing payment request', { 
amount, currency, userId }); +server.log.info('Payment processed successfully', { transactionId, amount }); +server.log.error('Payment failed', { error, userId, amount }); +``` + +### Plugin System + +```typescript +// Plugin lifecycle +server.log.debug('Loading plugin', { pluginId, version }); +server.log.info('Plugin loaded successfully', { pluginId }); +server.log.warn('Plugin deprecated', { pluginId, deprecationDate }); +server.log.error('Plugin failed to load', { pluginId, error }); +``` + +## Fixing Console.log Issues + + +**Important**: Replace all `console.log` statements with proper Pino logger calls to ensure consistent formatting and log level filtering. + + +### Problem: Inconsistent Log Output + +```typescript +// ❌ Problem - Mixed logging approaches +console.log('✅ [GlobalSettingsInitService] Operation completed'); // No timestamp +server.log.info('✅ Database initialization completed'); // With timestamp +``` + +### Solution: Use Proper Logger + +```typescript +// ✅ Solution - Consistent logging +class GlobalSettingsInitService { + private static logger = server.log.child({ service: 'GlobalSettingsInitService' }); + + static async loadSettings() { + this.logger.debug('Loading settings definitions...'); + this.logger.info('Settings loaded successfully'); + } +} +``` + +### Passing Logger to Classes + +```typescript +// ✅ Good - Pass logger instance to classes +class PluginManager { + constructor(private logger: FastifyBaseLogger) {} + + async loadPlugin(pluginId: string) { + this.logger.debug('Loading plugin', { pluginId }); + this.logger.info('Plugin loaded successfully', { pluginId }); + } +} + +// Usage +const pluginManager = new PluginManager(server.log.child({ component: 'PluginManager' })); +``` + +## Environment-Specific Configuration + +### Development Environment + +```bash +# .env file for development +NODE_ENV=development +LOG_LEVEL=debug +``` + +**Features:** +- Pretty-printed, colorized output +- Shows debug and trace information +- 
Includes timestamps and emojis +- Easier to read during development + +### Production Environment + +```bash +# Production environment variables +NODE_ENV=production +LOG_LEVEL=info +``` + +**Features:** +- Structured JSON output +- Optimized for log aggregation +- Excludes debug information +- Better performance + +### Testing Environment + +```bash +# Testing environment +NODE_ENV=test +LOG_LEVEL=error +``` + +**Features:** +- Minimal log output during tests +- Only shows errors and fatal messages +- Reduces test noise + +## Troubleshooting + +### Debug Mode Not Working + +If debug logs aren't showing: + +1. **Check LOG_LEVEL**: Ensure it's set to `debug` or `trace` +2. **Check NODE_ENV**: Development mode enables debug by default +3. **Restart Server**: Environment changes require restart + +```bash +# Force debug mode +LOG_LEVEL=debug npm run dev +``` + +### Performance Issues + +If logging is impacting performance: + +1. **Increase Log Level**: Use `info` or `warn` in production +2. **Remove Debug Logs**: Clean up excessive debug statements +3. 
**Use Async Logging**: Pino handles this automatically

### Log Aggregation

For production log management:

```typescript
// Add correlation IDs for request tracking
server.addHook('onRequest', async (request) => {
  request.log = request.log.child({
    requestId: request.id,
    userAgent: request.headers['user-agent']
  });
});
```

## Migration Guide

### From Manual Prefixes

```typescript
// Before
server.log.info('🔄 [DEBUG] Starting operation...');
server.log.info('✅ [INFO] Operation completed');
server.log.info('❌ [ERROR] Operation failed');

// After
server.log.debug('🔄 Starting operation...');
server.log.info('✅ Operation completed');
server.log.error('❌ Operation failed');
```

### From Console.log

```typescript
// Before
console.log('User logged in:', userId);
console.error('Database error:', error);

// After - context object first, then the message (Pino's signature)
server.log.info({ userId }, 'User logged in');
server.log.error({ error, userId }, 'Database error');
```

## Summary

- **Use proper log levels** instead of manual prefixes
- **Replace console.log** with server.log for consistency
- **Add structured context** to make logs searchable
- **Configure LOG_LEVEL** via environment variables
- **Use child loggers** for persistent context
- **Follow the patterns** shown in this guide

With proper log level configuration, you'll have a production-ready logging system that scales from development to enterprise deployments. 
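The threshold rule behind `LOG_LEVEL` can be sketched in a few lines. This is illustrative only — it mimics Pino's numeric level filtering from the table above with a stand-in logger; `makeLogger` and its shape are hypothetical, not DeployStack or Pino APIs:

```typescript
// Severity values mirror the log-level table above.
const LEVELS = { trace: 10, debug: 20, info: 30, warn: 40, error: 50, fatal: 60 } as const;
type Level = keyof typeof LEVELS;

function makeLogger(minLevel: Level) {
  const emitted: string[] = [];
  return {
    emitted,
    log(level: Level, context: Record<string, unknown>, msg: string) {
      // A call is emitted only when its severity meets the configured
      // threshold - the same rule LOG_LEVEL applies in Pino.
      if (LEVELS[level] >= LEVELS[minLevel]) {
        emitted.push(JSON.stringify({ level: LEVELS[level], ...context, msg }));
      }
    },
  };
}

// Production default is 'info': debug calls are dropped, info and above pass.
const logger = makeLogger('info');
logger.log('debug', { operation: 'initialize_database' }, 'Starting...');
logger.log('info', { operation: 'initialize_database' }, 'Database initialized');
console.log(logger.emitted.join('\n'));
```

Running this prints only the single `"level":30` record, which is exactly why debug statements are free in production: they are filtered before any formatting work happens.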
diff --git a/docs/deploystack/development/backend/mail.mdx b/docs/deploystack/development/backend/mail.mdx index 037c7fd..0900891 100644 --- a/docs/deploystack/development/backend/mail.mdx +++ b/docs/deploystack/development/backend/mail.mdx @@ -75,9 +75,17 @@ const result = await EmailService.sendEmail({ }); if (result.success) { - console.log('Email sent successfully:', result.messageId); + request.log.info({ + messageId: result.messageId, + recipients: result.recipients, + operation: 'send_email' + }, 'Email sent successfully'); } else { - console.error('Failed to send email:', result.error); + request.log.error({ + error: result.error, + recipients: result.recipients, + operation: 'send_email' + }, 'Failed to send email'); } ``` @@ -310,22 +318,32 @@ To customize the layout, modify the files in `src/email/templates/layouts/`: ### Test SMTP Connection ```typescript -const status = await EmailService.testConnection(); +const status = await EmailService.testConnection(request.log); if (status.success) { - console.log('SMTP connection successful'); + request.log.info({ + operation: 'test_smtp_connection' + }, 'SMTP connection successful'); } else { - console.error('SMTP connection failed:', status.error); + request.log.error({ + error: status.error, + operation: 'test_smtp_connection' + }, 'SMTP connection failed'); } ``` ### Check SMTP Configuration ```typescript -const status = await EmailService.getSmtpStatus(); +const status = await EmailService.getSmtpStatus(request.log); if (status.configured) { - console.log('SMTP is configured'); + request.log.info({ + operation: 'check_smtp_status' + }, 'SMTP is configured'); } else { - console.error('SMTP not configured:', status.error); + request.log.error({ + error: status.error, + operation: 'check_smtp_status' + }, 'SMTP not configured'); } ``` @@ -333,14 +351,18 @@ if (status.configured) { ```typescript // Call this after updating SMTP settings -await EmailService.refreshConfiguration(); +await 
EmailService.refreshConfiguration(request.log); ``` ### Get Available Templates ```typescript -const templates = EmailService.getAvailableTemplates(); -console.log('Available templates:', templates); +const templates = EmailService.getAvailableTemplates(request.log); +request.log.info({ + templates, + templateCount: templates.length, + operation: 'get_available_templates' +}, 'Available templates retrieved'); // Output: ['welcome', 'password-reset', 'notification'] ``` @@ -354,9 +376,17 @@ const validation = await EmailService.validateTemplate('welcome', { }); if (validation.valid) { - console.log('Template is valid'); + request.log.info({ + template: 'welcome', + operation: 'validate_template' + }, 'Template is valid'); } else { - console.error('Template validation failed:', validation.errors); + request.log.error({ + template: 'welcome', + errors: validation.errors, + missingVariables: validation.missingVariables, + operation: 'validate_template' + }, 'Template validation failed'); } ``` @@ -382,7 +412,10 @@ if (!result.success) { break; default: // Handle other errors - console.error('Email failed:', result.error); + request.log.error({ + error: result.error, + operation: 'send_email' + }, 'Email failed'); } } ``` @@ -469,7 +502,11 @@ export class UserService { }); if (!emailResult.success) { - console.error('Failed to send welcome email:', emailResult.error); + request.log.error({ + error: emailResult.error, + userId: user.id, + operation: 'send_welcome_email' + }, 'Failed to send welcome email'); // Don't fail registration if email fails } @@ -534,7 +571,12 @@ export class DeploymentService { }); if (!emailResult.success) { - console.error('Failed to send deployment notification:', emailResult.error); + request.log.error({ + error: emailResult.error, + deploymentId, + userId: user.id, + operation: 'send_deployment_notification' + }, 'Failed to send deployment notification'); } } } @@ -585,22 +627,30 @@ Enable debug logging for email operations: 
process.env.DEBUG_EMAIL = 'true'; // Or log email results -const result = await EmailService.sendEmail({...}); -console.log('Email result:', result); +const result = await EmailService.sendEmail({...}, request.log); +request.log.debug({ + success: result.success, + messageId: result.messageId, + recipients: result.recipients, + operation: 'send_email' +}, 'Email result'); ``` ### Testing SMTP Configuration ```typescript // Test SMTP connection before sending emails -const connectionTest = await EmailService.testConnection(); +const connectionTest = await EmailService.testConnection(request.log); if (!connectionTest.success) { - console.error('SMTP test failed:', connectionTest.error); + request.log.error({ + error: connectionTest.error, + operation: 'test_smtp_connection' + }, 'SMTP test failed'); return; } // Proceed with sending emails -const emailResult = await EmailService.sendEmail({...}); +const emailResult = await EmailService.sendEmail({...}, request.log); ``` ## Best Practices diff --git a/docs/deploystack/development/backend/oauth.mdx b/docs/deploystack/development/backend/oauth.mdx new file mode 100644 index 0000000..d328ac2 --- /dev/null +++ b/docs/deploystack/development/backend/oauth.mdx @@ -0,0 +1,536 @@ +--- +title: OAuth Implementation Guide +description: Developer guide for implementing OAuth providers in DeployStack +--- + +# OAuth Implementation Guide + +This guide explains how to implement OAuth providers in DeployStack's backend. The system is designed to support multiple OAuth providers with a consistent pattern. 
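The first step of every provider flow described below — redirecting the user to the provider with a CSRF `state` token — can be sketched generically. This is a hedged illustration of plain OAuth 2.0, not DeployStack's actual route code: Arctic performs this step internally, and `buildAuthorizationUrl`, the client ID, and the endpoint values are hypothetical examples:

```typescript
import { randomBytes } from 'node:crypto';

function buildAuthorizationUrl(opts: {
  authorizeEndpoint: string;
  clientId: string;
  redirectUri: string;
  scope: string;
}): { url: string; state: string } {
  // The random state is stored in a cookie and verified in the callback
  // handler to prevent CSRF (Arctic's generateState() plays this role).
  const state = randomBytes(16).toString('hex');
  const params = new URLSearchParams({
    response_type: 'code',
    client_id: opts.clientId,
    redirect_uri: opts.redirectUri,
    scope: opts.scope,
    state,
  });
  return { url: `${opts.authorizeEndpoint}?${params.toString()}`, state };
}

const { url, state } = buildAuthorizationUrl({
  authorizeEndpoint: 'https://github.com/login/oauth/authorize', // provider-specific
  clientId: 'example-client-id',                                 // hypothetical value
  redirectUri: 'http://localhost:3000/api/auth/github/callback',
  scope: 'user:email',
});
console.log(url);
```

The callback route then compares the `state` query parameter against the stored value before exchanging the authorization `code` for tokens — the pattern the provider-specific sections below implement with Arctic.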
+ +## Architecture Overview + +DeployStack uses the following libraries for OAuth implementation: + +- **[Arctic](https://arctic.js.org/)** - OAuth 2.0 client library for various providers +- **[Lucia](https://lucia-auth.com/)** - Authentication library for session management +- **Global Settings** - Database-driven configuration for OAuth providers + +## Current Implementation: GitHub OAuth + +The GitHub OAuth implementation serves as a reference for adding other providers. + +### File Structure + +``` +services/backend/src/ +├── routes/auth/ +│ ├── github.ts # GitHub OAuth routes +│ ├── githubStatus.ts # GitHub OAuth status endpoint +│ └── schemas.ts # OAuth validation schemas +├── global-settings/ +│ └── github-oauth.ts # GitHub OAuth global settings +└── lib/ + └── lucia.ts # Lucia authentication setup +``` + +## Adding a New OAuth Provider + +Follow these steps to add a new OAuth provider (e.g., Google): + +### 1. Install Provider Support + +First, ensure Arctic supports your provider: + +```bash +# Arctic supports many providers out of the box +# Check: https://arctic.js.org/providers +``` + +### 2. 
Create Global Settings
+
+Create a new global settings file for your provider:
+
+```typescript
+// services/backend/src/global-settings/google-oauth.ts
+import { z } from 'zod';
+import type { GlobalSettingDefinition } from './types';
+
+export const GoogleOAuthSettingsSchema = z.object({
+  enabled: z.boolean().default(false),
+  clientId: z.string().min(1, 'Client ID is required'),
+  clientSecret: z.string().min(1, 'Client Secret is required'),
+  callbackUrl: z.string().url('Must be a valid URL'),
+  scope: z.string().default('openid email profile'),
+});
+
+export type GoogleOAuthSettings = z.infer<typeof GoogleOAuthSettingsSchema>;
+
+export const googleOAuthSettings: GlobalSettingDefinition[] = [
+  {
+    key: 'google_oauth_enabled',
+    type: 'boolean',
+    defaultValue: 'false',
+    description: 'Enable Google OAuth authentication',
+    group_id: 'auth',
+  },
+  {
+    key: 'google_oauth_client_id',
+    type: 'string',
+    defaultValue: '',
+    description: 'Google OAuth Client ID',
+    group_id: 'auth',
+  },
+  {
+    key: 'google_oauth_client_secret',
+    type: 'string',
+    defaultValue: '',
+    description: 'Google OAuth Client Secret',
+    group_id: 'auth',
+    is_encrypted: true,
+  },
+  {
+    key: 'google_oauth_callback_url',
+    type: 'string',
+    defaultValue: 'http://localhost:3000/api/auth/google/callback',
+    description: 'Google OAuth callback URL',
+    group_id: 'auth',
+  },
+  {
+    key: 'google_oauth_scope',
+    type: 'string',
+    defaultValue: 'openid email profile',
+    description: 'Google OAuth scopes (space-separated)',
+    group_id: 'auth',
+  },
+];
+```
+
+### 3.
Add Provider to Global Settings Index
+
+Update the global settings index:
+
+```typescript
+// services/backend/src/global-settings/index.ts
+import { googleOAuthSettings, GoogleOAuthSettingsSchema, type GoogleOAuthSettings } from './google-oauth';
+
+// Add to the settings array
+export const allGlobalSettings = [
+  ...existingSettings,
+  ...googleOAuthSettings,
+];
+
+// Add helper function
+export async function getGoogleOAuthConfiguration(): Promise<GoogleOAuthSettings | null> {
+  const enabled = await getSetting('google_oauth_enabled');
+  if (enabled !== 'true') return null;
+
+  const clientId = await getSetting('google_oauth_client_id');
+  const clientSecret = await getSetting('google_oauth_client_secret');
+  const callbackUrl = await getSetting('google_oauth_callback_url');
+  const scope = await getSetting('google_oauth_scope');
+
+  if (!clientId || !clientSecret) return null;
+
+  return GoogleOAuthSettingsSchema.parse({
+    enabled: true,
+    clientId,
+    clientSecret,
+    callbackUrl,
+    scope,
+  });
+}
+```
+
+### 4. Create OAuth Routes
+
+Create the OAuth routes file:
+
+```typescript
+// services/backend/src/routes/auth/google.ts
+import type { FastifyInstance, FastifyReply } from 'fastify';
+import { z } from 'zod';
+import { zodToJsonSchema } from 'zod-to-json-schema';
+import { getLucia } from '../../lib/lucia';
+import { getDb, getSchema } from '../../db';
+import { eq } from 'drizzle-orm';
+import { generateId } from 'lucia';
+import { generateState } from 'arctic';
+import { GlobalSettingsInitService } from '../../global-settings';
+
+// Define callback schema
+const GoogleCallbackSchema = z.object({
+  code: z.string(),
+  state: z.string(),
+});
+
+type GoogleCallbackInput = z.infer<typeof GoogleCallbackSchema>;
+
+export default async function googleAuthRoutes(fastify: FastifyInstance) {
+  // Route to initiate Google login
+  fastify.get('/login', async (_request, reply: FastifyReply) => {
+    // Check if login is enabled
+    const isLoginEnabled = await GlobalSettingsInitService.isLoginEnabled();
+    if (!isLoginEnabled) {
+      return reply.status(403).send({
+        error:
'Login is currently disabled by administrator.' + }); + } + + // Check if Google OAuth is enabled and configured + const googleConfig = await GlobalSettingsInitService.getGoogleOAuthConfiguration(); + if (!googleConfig) { + return reply.status(403).send({ + error: 'Google OAuth is not enabled or not properly configured.' + }); + } + + const state = generateState(); + + // Create Google OAuth instance + const { Google } = await import('arctic'); + const googleAuth = new Google( + googleConfig.clientId, + googleConfig.clientSecret, + googleConfig.callbackUrl + ); + + const scopes = googleConfig.scope.split(',').map(s => s.trim()); + const url = await googleAuth.createAuthorizationURL(state, scopes); + + // Store state in cookie + reply.setCookie('oauth_state', state, { + path: '/', + httpOnly: true, + secure: process.env.NODE_ENV === 'production', + maxAge: 60 * 10, // 10 minutes + sameSite: 'lax', + }); + + return reply.redirect(url.toString()); + }); + + // Route to handle Google callback + fastify.get<{ Querystring: GoogleCallbackInput }>('/callback', async (request, reply: FastifyReply) => { + // Validate state parameter + const storedState = request.cookies?.oauth_state; + const { code, state } = request.query; + + if (!storedState || !state || storedState !== state) { + return reply.status(400).send({ error: 'Invalid OAuth state.' }); + } + + // Clear state cookie + reply.setCookie('oauth_state', '', { maxAge: -1, path: '/' }); + + try { + const googleConfig = await GlobalSettingsInitService.getGoogleOAuthConfiguration(); + if (!googleConfig) { + return reply.status(403).send({ error: 'Google OAuth not configured.' 
}); + } + + // Create Google OAuth instance + const { Google } = await import('arctic'); + const googleAuth = new Google( + googleConfig.clientId, + googleConfig.clientSecret, + googleConfig.callbackUrl + ); + + // Exchange code for tokens + const tokens = await googleAuth.validateAuthorizationCode(code); + + // Fetch user information + const googleUserResponse = await fetch('https://www.googleapis.com/oauth2/v2/userinfo', { + headers: { + Authorization: `Bearer ${tokens.accessToken()}` + } + }); + + if (!googleUserResponse.ok) { + return reply.status(400).send({ error: 'Failed to fetch Google user information.' }); + } + + const googleUser = await googleUserResponse.json(); + + // Extract user email + const userEmail = googleUser.email; + if (!userEmail) { + return reply.status(400).send({ error: 'Google email not available.' }); + } + + // Get database and schema + const db = getDb(); + const schema = getSchema(); + const authUserTable = schema.authUser; + + // Check if user already exists with this Google ID + const existingUser = await (db as any) + .select() + .from(authUserTable) + .where(eq(authUserTable.google_id, googleUser.id.toString())) + .limit(1); + + if (existingUser.length > 0) { + // Existing user - create session + const userId = existingUser[0].id; + const sessionId = generateId(40); + const expiresAt = new Date(Date.now() + 1000 * 60 * 60 * 24 * 30); + + const authSessionTable = schema.authSession; + await (db as any).insert(authSessionTable).values({ + id: sessionId, + user_id: userId, + expires_at: expiresAt.getTime() + }); + + const sessionCookie = getLucia().createSessionCookie(sessionId); + reply.setCookie(sessionCookie.name, sessionCookie.value, sessionCookie.attributes); + + const frontendUrl = await GlobalSettingsInitService.getPageUrl(); + return reply.redirect(frontendUrl); + } + + // Check for existing user by email + const userWithSameEmail = await (db as any) + .select() + .from(authUserTable) + .where(eq(authUserTable.email, 
userEmail.toLowerCase())) + .limit(1); + + if (userWithSameEmail.length > 0) { + // Link Google account to existing user + const existingUserId = userWithSameEmail[0].id; + await (db as any) + .update(authUserTable) + .set({ google_id: googleUser.id.toString() }) + .where(eq(authUserTable.id, existingUserId)); + + // Create session + const session = await getLucia().createSession(existingUserId, {}); + const sessionCookie = getLucia().createSessionCookie(session.id); + reply.setCookie(sessionCookie.name, sessionCookie.value, sessionCookie.attributes); + + const frontendUrl = await GlobalSettingsInitService.getPageUrl(); + return reply.redirect(frontendUrl); + } + + // Prevent first user creation via OAuth + const allUsers = await (db as any).select().from(authUserTable).limit(1); + if (allUsers.length === 0) { + return reply.status(403).send({ + error: 'The first user must be created via email registration.' + }); + } + + // Create new user + const newUserId = generateId(15); + const newUserData = { + id: newUserId, + username: googleUser.email.split('@')[0] || `google_user_${newUserId}`, + email: userEmail.toLowerCase(), + auth_type: 'google', + first_name: googleUser.given_name || null, + last_name: googleUser.family_name || null, + google_id: googleUser.id.toString(), + role_id: 'global_user', + email_verified: true, + }; + + await (db as any).insert(authUserTable).values(newUserData); + + // Create default team + try { + const { TeamService } = await import('../../services/teamService'); + await TeamService.createDefaultTeamForUser(newUserId, newUserData.username); + } catch (teamError) { + // Don't fail login if team creation fails + } + + // Create session + const sessionId = generateId(40); + const expiresAt = new Date(Date.now() + 1000 * 60 * 60 * 24 * 30); + + const authSessionTable = schema.authSession; + await (db as any).insert(authSessionTable).values({ + id: sessionId, + user_id: newUserId, + expires_at: expiresAt.getTime() + }); + + const 
sessionCookie = getLucia().createSessionCookie(sessionId); + reply.setCookie(sessionCookie.name, sessionCookie.value, sessionCookie.attributes); + + const frontendUrl = await GlobalSettingsInitService.getPageUrl(); + return reply.redirect(frontendUrl); + + } catch (error) { + fastify.log.error(error, 'Error during Google OAuth callback:'); + return reply.status(500).send({ error: 'An unexpected error occurred during Google login.' }); + } + }); +} +``` + +### 5. Create Status Endpoint + +Create a status endpoint for the provider: + +```typescript +// services/backend/src/routes/auth/googleStatus.ts +import type { FastifyInstance } from 'fastify'; +import { z } from 'zod'; +import { zodToJsonSchema } from 'zod-to-json-schema'; +import { GlobalSettingsInitService } from '../../global-settings'; + +const GoogleStatusResponseSchema = z.object({ + enabled: z.boolean(), + configured: z.boolean(), + callbackUrl: z.string().optional(), +}); + +export default async function googleStatusRoutes(fastify: FastifyInstance) { + fastify.get('/status', { + schema: { + tags: ['Authentication'], + summary: 'Get Google OAuth status', + description: 'Returns the current status and configuration of Google OAuth', + response: { + 200: zodToJsonSchema(GoogleStatusResponseSchema, { + $refStrategy: 'none', + target: 'openApi3' + }) + } + } + }, async (_request, reply) => { + const googleConfig = await GlobalSettingsInitService.getGoogleOAuthConfiguration(); + + return reply.send({ + enabled: googleConfig !== null, + configured: googleConfig !== null && !!googleConfig.clientId && !!googleConfig.clientSecret, + callbackUrl: googleConfig?.callbackUrl, + }); + }); +} +``` + +### 6. 
Register Routes + +Add the new routes to your route registration: + +```typescript +// services/backend/src/routes/auth/index.ts +import googleAuthRoutes from './google'; +import googleStatusRoutes from './googleStatus'; + +export default async function authRoutes(fastify: FastifyInstance) { + // Register Google OAuth routes + await fastify.register(googleAuthRoutes, { prefix: '/google' }); + await fastify.register(googleStatusRoutes, { prefix: '/google' }); +} +``` + +### 7. Update Database Schema + +Add the provider-specific field to your user schema: + +```typescript +// services/backend/src/db/schema.sqlite.ts +export const authUser = sqliteTable('authUser', { + // ... existing fields + google_id: text('google_id').unique(), + // ... other fields +}); +``` + +### 8. Generate Database Migration + +Run the migration generation command: + +```bash +cd services/backend +npm run db:generate +``` + +## Provider-Specific Considerations + +### Google OAuth + +- **Scopes**: Use `openid email profile` for basic user information +- **User Info Endpoint**: `https://www.googleapis.com/oauth2/v2/userinfo` +- **Email**: Always available in the user info response + +### Microsoft OAuth + +- **Scopes**: Use `openid email profile` or `User.Read` +- **User Info Endpoint**: `https://graph.microsoft.com/v1.0/me` +- **Email**: Available as `mail` or `userPrincipalName` + +### Facebook OAuth + +- **Scopes**: Use `email public_profile` +- **User Info Endpoint**: `https://graph.facebook.com/me?fields=id,name,email` +- **Email**: Requires explicit permission and may not always be available + +## Best Practices + +### Security + +1. **State Parameter**: Always validate the state parameter to prevent CSRF attacks +2. **Secure Cookies**: Use secure, httpOnly cookies for state storage +3. **HTTPS**: Always use HTTPS in production +4. **Scope Minimization**: Request only the scopes you actually need + +### Error Handling + +1. 
**Graceful Degradation**: Handle cases where email is not available +2. **User Feedback**: Provide clear error messages for common issues +3. **Logging**: Log errors for debugging but don't expose sensitive information + +### Database Design + +1. **Provider IDs**: Store provider-specific user IDs for account linking +2. **Email Verification**: Mark OAuth emails as verified by default +3. **Account Linking**: Allow users to link multiple OAuth providers + +### Testing + +1. **Mock Providers**: Use mock OAuth providers for testing +2. **State Validation**: Test state parameter validation +3. **Error Scenarios**: Test various error conditions + +## Common Issues + +### Email Not Available + +Some providers may not provide email addresses. Handle this gracefully: + +```typescript +if (!userEmail) { + return reply.status(400).send({ + error: 'Email address is required but not provided by the OAuth provider.' + }); +} +``` + +### Account Conflicts + +Handle cases where a user tries to link an OAuth account that's already linked: + +```typescript +if (existingUser.length > 0 && existingUser[0].id !== currentUserId) { + return reply.status(409).send({ + error: 'This OAuth account is already linked to another user.' + }); +} +``` + +### Session Creation Issues + +If you encounter session creation issues, use the manual session creation approach as shown in the GitHub implementation. 
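The manual session creation approach can be reduced to a small sketch. The helper names below are illustrative; the 40-character session ID and the 30-day lifetime mirror the values used in the route examples earlier in this guide:

```typescript
import { randomBytes } from 'node:crypto';

// 30-day session lifetime, matching the examples above.
const SESSION_LIFETIME_MS = 1000 * 60 * 60 * 24 * 30;

// Simplified stand-in for an ID generator: lowercase alphanumeric, fixed length.
function generateId(length: number): string {
  const alphabet = 'abcdefghijklmnopqrstuvwxyz0123456789';
  const bytes = randomBytes(length);
  let id = '';
  for (let i = 0; i < length; i++) {
    id += alphabet[bytes[i] % alphabet.length];
  }
  return id;
}

// Builds the row inserted into the session table before setting the cookie.
function buildSessionRecord(userId: string, now: number = Date.now()) {
  return {
    id: generateId(40),
    user_id: userId,
    expires_at: now + SESSION_LIFETIME_MS, // stored as a millisecond timestamp
  };
}

const record = buildSessionRecord('user123');
console.log(record.id.length); // 40
```

Storing `expires_at` as a millisecond timestamp (via `getTime()`) keeps the value portable across SQLite and other dialects.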
+ +## Resources + +- [Arctic Documentation](https://arctic.js.org/) +- [Lucia Documentation](https://lucia-auth.com/) +- [OAuth 2.0 RFC](https://tools.ietf.org/html/rfc6749) +- [OpenID Connect](https://openid.net/connect/) diff --git a/docs/deploystack/development/backend/plugins.mdx b/docs/deploystack/development/backend/plugins.mdx index a934faf..47dee33 100644 --- a/docs/deploystack/development/backend/plugins.mdx +++ b/docs/deploystack/development/backend/plugins.mdx @@ -152,8 +152,12 @@ function isSQLiteDB(db: AnyDatabase): db is BetterSQLite3Database { * /api/plugin/my-custom-plugin/ */ export async function registerRoutes(routeManager: PluginRouteManager, db: AnyDatabase | null): Promise { + // Note: In actual plugin development, you should receive a logger instance + // For this example, we'll show the pattern you should follow + const logger = routeManager.getLogger(); // Assuming this method exists + if (!db) { - console.warn(`[${routeManager.getPluginId()}] Database not available, skipping routes.`); + logger?.warn(`Database not available, skipping routes.`); return; } @@ -162,7 +166,7 @@ export async function registerRoutes(routeManager: PluginRouteManager, db: AnyDa const table = currentSchema[tableNameInSchema]; if (!table) { - console.error(`[${routeManager.getPluginId()}] Table ${tableNameInSchema} not found in schema!`); + logger?.error(`Table ${tableNameInSchema} not found in schema!`); return; } @@ -231,7 +235,7 @@ export async function registerRoutes(routeManager: PluginRouteManager, db: AnyDa return { id, ...body }; }); - console.log(`[${routeManager.getPluginId()}] Routes registered successfully under ${routeManager.getNamespace()}`); + logger?.info(`Routes registered successfully under ${routeManager.getNamespace()}`); } ``` @@ -284,15 +288,16 @@ class MyCustomPlugin implements Plugin { tableDefinitions: myCustomPluginTableDefinitions, // Optional initialization function for seeding data - onDatabaseInit: async (db: AnyDatabase) => { - 
console.log(`[${this.meta.id}] Initializing database...`); + onDatabaseInit: async (db: AnyDatabase, logger?: FastifyBaseLogger) => { + // Note: In actual implementation, logger should be passed from PluginManager + logger?.info(`Initializing database...`); const currentSchema = getSchema(); const tableNameInSchema = `${this.meta.id}_my_custom_entities`; const table = currentSchema[tableNameInSchema]; if (!table) { - console.error(`[${this.meta.id}] Table ${tableNameInSchema} not found in schema!`); + logger?.error(`Table ${tableNameInSchema} not found in schema!`); return; } @@ -311,7 +316,7 @@ class MyCustomPlugin implements Plugin { } if (currentCount === 0) { - console.log(`[${this.meta.id}] Seeding initial data...`); + logger?.info(`Seeding initial data...`); const dataToSeed = { id: 'initial-entity', name: 'Initial Entity', @@ -323,16 +328,17 @@ class MyCustomPlugin implements Plugin { } else { await (db as NodePgDatabase).insert(table as PgTable).values(dataToSeed); } - console.log(`[${this.meta.id}] Seeded initial data`); + logger?.info(`Seeded initial data`); } }, }; // Plugin initialization (non-route initialization only) - async initialize(db: AnyDatabase | null) { - console.log(`[${this.meta.id}] Initializing...`); + async initialize(db: AnyDatabase | null, logger?: FastifyBaseLogger) { + // Note: In actual implementation, logger should be passed from PluginManager + logger?.info(`Initializing...`); // Non-route initialization only - routes are registered via registerRoutes method - console.log(`[${this.meta.id}] Initialized successfully`); + logger?.info(`Initialized successfully`); } // Register plugin routes using the isolated route manager @@ -342,8 +348,9 @@ class MyCustomPlugin implements Plugin { } // Optional shutdown method for cleanup - async shutdown() { - console.log(`[${this.meta.id}] Shutting down...`); + async shutdown(logger?: FastifyBaseLogger) { + // Note: In actual implementation, logger should be passed from PluginManager + 
logger?.info(`Shutting down...`); // Perform any cleanup needed } } @@ -577,8 +584,9 @@ class MyAwesomePlugin implements Plugin { }; // ... rest of your plugin implementation (databaseExtension, initialize, etc.) - async initialize(app: FastifyInstance, db: AnyDatabase | null) { - console.log(`[${this.meta.id}] Initializing...`); + async initialize(app: FastifyInstance, db: AnyDatabase | null, logger?: FastifyBaseLogger) { + // Note: In actual implementation, logger should be passed from PluginManager + logger?.info(`Initializing...`); // You can try to access your plugin's settings here if needed during init, // using GlobalSettingsService.get('myAwesomePlugin.features.enableSuperFeature') diff --git a/docs/deploystack/development/backend/roles.mdx b/docs/deploystack/development/backend/roles.mdx index f414733..d7b1a91 100644 --- a/docs/deploystack/development/backend/roles.mdx +++ b/docs/deploystack/development/backend/roles.mdx @@ -40,6 +40,8 @@ The RBAC system provides fine-grained access control through roles and permissio - `team.members.view` - View team members - `team.members.manage` - Manage team member roles +**Note**: Global administrators have special access to view cloud credentials metadata across all teams, but cannot perform CRUD operations or view credential values. Cloud credentials management is team-contextual. + ### Global User (`global_user`) - **Description**: Standard user with basic profile access @@ -154,6 +156,71 @@ When a user registers: | `team.members.view` | View team members | | `team.members.manage` | Manage team member roles | +#### Cloud Credentials Permissions (Team-Contextual) + +Cloud credentials are team-scoped resources with role-based access control. Unlike other permissions, cloud credentials access is determined by team membership and role, not global permissions. 
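The access matrix below can be expressed as a pure decision helper. This is a hedged sketch with illustrative names, not the actual DeployStack implementation; it simply encodes the role rules in one place:

```typescript
type Role = 'global_admin' | 'team_admin' | 'team_user' | 'global_user';

interface CredentialAccess {
  canList: boolean;            // may list/view credential metadata
  canManage: boolean;          // full CRUD on credentials
  seesNonSecretValues: boolean;
  seesSecretValues: boolean;   // always false: no role may read secrets via the API
}

function credentialAccess(role: Role, isTeamMember: boolean): CredentialAccess {
  // Global admins may view metadata for any team, without membership.
  if (role === 'global_admin') {
    return { canList: true, canManage: false, seesNonSecretValues: false, seesSecretValues: false };
  }
  // Everyone else must belong to the team, and global users get nothing.
  if (!isTeamMember || role === 'global_user') {
    return { canList: false, canManage: false, seesNonSecretValues: false, seesSecretValues: false };
  }
  if (role === 'team_admin') {
    return { canList: true, canManage: true, seesNonSecretValues: true, seesSecretValues: false };
  }
  // team_user: read-only metadata within their own teams.
  return { canList: true, canManage: false, seesNonSecretValues: false, seesSecretValues: false };
}
```

Note that `seesSecretValues` is hard-coded to `false` on every path, mirroring the invariant that secret values are never returned by the API.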
+ +**Access Control Matrix:** + +| Role | Team Access | Can See | Can Do | Secret Values | +|------|-------------|---------|---------|---------------| +| `global_admin` | Any team | Metadata only (name, provider, dates) | List/View only | ❌ Never | +| `team_admin` | Own teams only | Metadata + non-secret field values | Full CRUD | ❌ Never | +| `team_user` | Own teams only | Metadata only (name, provider, dates) | Read only | ❌ Never | +| `global_user` | No access | Nothing | Nothing | ❌ Never | + +**Security Rules:** + +- **Team Membership Required**: Only team members can access team's cloud credentials (except global admins) +- **Secret Values Protected**: No role can view secret credential values via API +- **Team Isolation**: Users can only access credentials from teams they belong to +- **Role-Based Responses**: API responses vary based on user's role within the team + +**Response Examples:** + +```typescript +// Team Admin Response (can see non-secret values) +{ + "fields": { + "access_key_id": { + "hasValue": true, + "secret": false, + "value": "AKIATEST123456789" // ✅ Non-secret shown + }, + "secret_access_key": { + "hasValue": true, + "secret": true + // ❌ No "value" field - secret never shown + } + } +} + +// Team User Response (metadata only) +{ + "id": "cred123", + "name": "Production AWS", + "provider": { "name": "Amazon Web Services" }, + "createdAt": "2025-01-01T00:00:00Z" + // ❌ No "fields" object - no values shown +} + +// Global Admin Response (metadata only, any team) +{ + "fields": { + "access_key_id": { + "hasValue": true, + "secret": false + // ❌ No "value" field - admin sees no values + }, + "secret_access_key": { + "hasValue": true, + "secret": true + // ❌ No "value" field - admin sees no values + } + } +} +``` + ### Team API Endpoints #### Get User's Teams @@ -733,15 +800,15 @@ UPDATE authUser SET role_id = 'global_admin' WHERE id = (SELECT id FROM authUser ```typescript // Check user's current role and permissions const userRole = await 
roleService.getUserRole(userId);
-console.log('User role:', userRole);
+logger.info({ userRole }, 'User role');

// Check specific permission
const hasPermission = await roleService.userHasPermission(userId, 'users.edit');
-console.log('Has permission:', hasPermission);
+logger.info({ hasPermission }, 'Has permission');

// List all roles
const allRoles = await roleService.getAllRoles();
-console.log('All roles:', allRoles);
+logger.info({ allRoles }, 'All roles');
```

## Future Enhancements
diff --git a/docs/deploystack/development/backend/test.mdx b/docs/deploystack/development/backend/test.mdx
index 4f35e3a..014bd80 100644
--- a/docs/deploystack/development/backend/test.mdx
+++ b/docs/deploystack/development/backend/test.mdx
@@ -74,6 +74,67 @@ The `globalSetup.ts` script automatically configures the following environment v
- `PORT`: set to a dedicated test port (e.g., 3002)
- `DEPLOYSTACK_ENCRYPTION_SECRET`: set to a dummy secret (`test-super-secret-key-for-jest`)
+## Database Isolation Strategy
+
+The test suite uses a sophisticated database isolation strategy to ensure complete test independence:
+
+### Test Database Location
+
+- **Normal usage**: `persistent_data/database/deploystack.db`
+- **Test usage**: `persistent_data/database-test/deploystack-{timestamp}.db`
+
+### Timestamp-Based Isolation
+
+Each test run creates a unique SQLite database file with a millisecond timestamp:
+- Example: `deploystack-1704369600000.db`
+- This ensures complete isolation between parallel test runs
+- No conflicts when multiple developers run tests simultaneously
+- Automatic cleanup through directory removal
+
+### Benefits
+
+- **Complete isolation**: Each test run gets a fresh database
+- **Parallel test safety**: Multiple test runs won't interfere with each other
+- **Easy cleanup**: The entire `database-test` directory can be safely removed
+- **No manual intervention**: Tests are self-contained and don't require manual cleanup
+- **Clear separation**: Test and production
databases are completely separate
+
+### Implementation Details
+
+The database path selection is handled automatically in `src/db/config.ts`:
+- Detects test environment via `NODE_ENV === 'test'`
+- Generates unique timestamp-based filename for test databases
+- Falls back to standard path for normal usage
+
+## Console Logging in Tests
+
+Unlike the main source code, **console.log statements are allowed in test files** for debugging and test output purposes. However, they are strictly prohibited in source code (`src/` directory).
+
+### CI/CD Enforcement
+
+The project includes automated checks that:
+- ✅ **Allow** `console.log`, `console.error`, etc. in test files (`tests/` directory)
+- ❌ **Block** any console statements in source code (`src/` directory)
+- 🚫 **Prevent PRs from merging** if console statements are found in source code
+
+This check runs automatically on:
+- All pull requests to the main branch
+- Backend release workflows
+
+### For Source Code Logging
+
+When writing source code (not tests), always use the structured Fastify logger instead:
+
+```typescript
+// ❌ Don't use in source code
+console.log('User created:', user);
+
+// ✅ Use in source code
+server.log.info({ userId: user.id }, 'User created');
+```
+
+See the [Backend Logging Guide](./logging.mdx) for complete logging best practices.
+
## Writing New Tests

When adding new E2E tests:
@@ -107,14 +168,14 @@ When adding new E2E tests:
## Current Test Suites

-### 1. `setup.e2e.test.ts`
+### 1. `1-setup.e2e.test.ts`
- **Purpose**: Verifies the initial database setup functionality.
- **Key Checks**:
-  - Ensures the test database file does not exist before setup.
+  - Ensures the test database directory does not exist before setup.
  - Calls `POST /api/db/setup` with `{"type": "sqlite"}`.
-  - Verifies the API response indicates successful setup initiation.
-  - Checks that the SQLite database file is created in the test data directory (`tests/e2e/test-data/deploystack.test.db`).
+  - Verifies the API response indicates successful setup initiation and includes `database_type: "sqlite"`.
+  - Checks that the SQLite database file is created in the test database directory (`persistent_data/database-test/deploystack-{timestamp}.db`).
  - Calls `GET /api/db/status` and verifies the response shows `configured: true`, `initialized: true`, and `dialect: "sqlite"`.
  - Validates global settings initialization without errors.
  - Confirms all migrations are applied successfully.
diff --git a/docs/deploystack/development/frontend/index.mdx b/docs/deploystack/development/frontend/index.mdx
index 8a26c12..4af4296 100644
--- a/docs/deploystack/development/frontend/index.mdx
+++ b/docs/deploystack/development/frontend/index.mdx
@@ -82,15 +82,24 @@ services/frontend/
### Component Development

+#### Vue Component Structure
+
+**Always prefer Vue Single File Components (SFC) with `<script setup>` and `<template>` syntax over TypeScript files with render functions.**
+
+✅ **Good - Vue SFC approach:**
+
+```vue
+<script setup lang="ts">
+import { Button } from '@/components/ui/button'
+
+const props = defineProps<{ id: string }>()
+
+function handleAction(id: string) {
+  // perform the action
+}
+</script>
+
+<template>
+  <div class="flex justify-end">
+    <Button @click="handleAction(props.id)">Action</Button>
+  </div>
+</template>
+```
+
+❌ **Avoid - TypeScript files with render functions:**
+
+```typescript
+// Don't create files like this for UI components
+import { h } from 'vue'
+import type { ColumnDef } from '@tanstack/vue-table'
+
+export function createColumns(): ColumnDef[] {
+  return [
+    {
+      id: 'actions',
+      cell: ({ row }) => {
+        return h('div', { class: 'flex justify-end' }, [
+          h(Button, {
+            onClick: () => handleAction(row.original.id)
+          }, () => 'Action')
+        ])
+      }
+    }
+  ]
+}
+```
+
+#### Why Vue SFC is Preferred
+
+1. **Better Developer Experience**: Clear separation of logic, template, and styles
+2. **Improved Readability**: Template syntax is more intuitive than render functions
+3. **Better Tooling Support**: Vue DevTools, syntax highlighting, and IntelliSense work better
+4. **Easier Maintenance**: Future developers can understand and modify components more easily
+5.
**Vue 3 Best Practices**: Aligns with official Vue 3 recommendations
+
+#### Table Components Example
+
+When creating table components, prefer this structure:
+
+```vue
+<script setup lang="ts">
+import { Button } from '@/components/ui/button'
+
+const props = defineProps<{ id: string }>()
+
+const emit = defineEmits<{
+  (e: 'action', id: string): void
+}>()
+</script>
+
+<template>
+  <div class="flex justify-end">
+    <Button @click="emit('action', props.id)">Action</Button>
+  </div>
+</template>
+```
diff --git a/docs/deploystack/roles.mdx b/docs/deploystack/roles.mdx
index 5ea934e..0c7ec38 100644
--- a/docs/deploystack/roles.mdx
+++ b/docs/deploystack/roles.mdx
@@ -22,9 +22,12 @@ User roles determine what actions a person can perform in DeployStack. Think of
- Manage roles and permissions
- Access all system features
- Manage all teams
+- View cloud credentials metadata across all teams (no credential values shown)

**Important**: The first person to register automatically becomes a Global Administrator.

+**Note**: Global Administrators can see that teams have cloud credentials but cannot view the actual credential values for security reasons.
+
### Global User

**Who needs this**: Regular users who want to deploy applications.
diff --git a/docs/deploystack/self-hosted/database-setup.mdx b/docs/deploystack/self-hosted/database-setup.mdx
new file mode 100644
index 0000000..b79122c
--- /dev/null
+++ b/docs/deploystack/self-hosted/database-setup.mdx
@@ -0,0 +1,167 @@
+---
+title: Database Setup for Self-Hosting
+description: Step-by-step guide to configure your database when self-hosting DeployStack - designed for non-technical users.
+---
+
+# Database Setup for Self-Hosting
+
+## Overview
+
+When you first start your self-hosted DeployStack instance, you'll need to choose and configure a database. This guide will walk you through the process step-by-step.
+
+**Important**: This setup only needs to be done once when you first install DeployStack.
+
+## What You'll Need
+
+- Your DeployStack instance running (backend and frontend)
+- Access to your server's environment variables (if choosing cloud databases)
+- About 5-10 minutes to complete the setup
+
+## Step 1: Access the Setup Page
+
+1. **Start your DeployStack instance** following your installation guide
+2.
**Open your web browser** and navigate to your DeployStack URL
+3. **You'll be automatically redirected** to the setup page at `/setup`
+
+If you see a message like "Database setup required" or are redirected to a setup page, you're in the right place!
+
+## Step 2: Choose Your Database
+
+You'll see two database options. Here's what each one means:
+
+### Option 1: SQLite (Recommended for Most Users)
+- **Best for**: Small to medium teams, development, testing
+- **Pros**:
+  - No additional setup required
+  - Works immediately
+  - No external dependencies
+  - Perfect for getting started
+- **Cons**:
+  - Single server only (no clustering)
+  - Limited to one database file
+
+**Choose this if**: You're just getting started, have a small team, or want the simplest setup.
+
+### Option 2: Turso (For Advanced Users)
+- **Best for**: Advanced users needing distributed databases
+- **Pros**:
+  - Multi-region replication
+  - Advanced SQLite features
+  - Good performance
+- **Cons**:
+  - Requires Turso account
+  - More complex setup
+
+**Choose this if**: You need advanced database features or multi-region deployment.
+
+## Step 3: Configure Your Chosen Database
+
+### If You Chose SQLite (Easiest)
+
+1. **Select "SQLite"** from the options
+2. **Click "Setup Database"**
+3. **Wait for confirmation** (usually takes 10-30 seconds)
+4. **Done!** You'll be redirected to the main application
+
+No additional configuration needed - SQLite works out of the box!
+
+### If You Chose Turso
+
+Before you can use Turso, you need to set up environment variables:
+
+#### Prerequisites
+1. **Create a Turso account** at [turso.tech](https://turso.tech)
+2. **Install Turso CLI** and create a database:
+   ```bash
+   turso db create deploystack-db
+   ```
+3. **Get your database URL and auth token**:
+   ```bash
+   turso db show deploystack-db
+   turso db tokens create deploystack-db
+   ```
+
+#### Server Configuration
+Add these environment variables to your server:
+
+```bash
+TURSO_DATABASE_URL=libsql://your-database-url
+TURSO_AUTH_TOKEN=your_auth_token_here
+```
+
+#### Complete Setup
+1. **Restart your DeployStack instance** after setting the environment variables
+2. **Go back to the setup page** (`/setup`)
+3. **Select "Turso"**
+4. **Click "Setup Database"**
+5. **Wait for confirmation**
+
+## Step 4: Verify Setup
+
+After successful setup, you should:
+
+1. **See a success message** confirming database initialization
+2. **Be redirected to the main application**
+3. **Be able to create your first user account**
+
+If you see any errors, check the troubleshooting section below.
+
+## Troubleshooting
+
+### "Database setup has already been performed"
+- This means your database is already configured
+- You can proceed to use the application normally
+- If you need to change databases, contact your system administrator
+
+### "Configuration incomplete" or "Missing environment variables"
+- **For Turso**: Check that both Turso environment variables are set correctly
+- **Restart your server** after setting environment variables
+
+### "Failed to connect" or "Network error"
+- **Check your internet connection**
+- **For Turso**: Verify your database URL and auth token are correct
+- **Check server logs** for more detailed error messages
+
+### Setup page keeps loading
+- **Check that your backend server is running**
+- **Verify the backend is accessible** from your browser
+- **Check browser console** for any JavaScript errors
+
+## Changing Databases Later
+
+**Important**: Once you've set up a database, changing to a different type requires:
+
+1. **Backing up your data** (if you have important information)
+2. **Stopping your DeployStack instance**
+3. **Removing the database selection file** (`persistent_data/db.selection.json`)
+4. **Updating environment variables** for the new database type
+5. **Restarting and going through setup again**
+
+**Note**: This will reset your application data, so make sure to backup anything important first.
+
+## Getting Help
+
+If you're having trouble with database setup:
+
+1. **Check the server logs** for detailed error messages
+2. **Verify environment variables** are set correctly
+3. **Ensure your server has internet access** (for cloud databases)
+4. **Contact support** with your error messages and setup details
+
+## Security Notes
+
+- **Keep your API tokens secure** - never share them publicly
+- **Use environment variables** - don't put credentials directly in code
+- **Regularly rotate API tokens** for cloud databases
+- **Backup your SQLite database file** if using SQLite
+
+## Next Steps
+
+After successful database setup:
+
+1. **Create your administrator account**
+2. **Configure your application settings**
+3. **Set up user authentication** (email, GitHub, etc.)
+4. **Invite your team members**
+
+Your DeployStack instance is now ready to use!

From ee6fce85ff4c1a94c856e7ffcf788d97433cb102 Mon Sep 17 00:00:00 2001
From: Lasim
Date: Sun, 6 Jul 2025 21:05:22 +0200
Subject: Enhance team management documentation: clarify multi-user support,
 member management permissions, and default team restrictions.
---
 .../deploystack/development/backend/roles.mdx | 126 +++++++++++++++++-
 docs/deploystack/roles.mdx                    |  33 ++++-
 docs/deploystack/teams.mdx                    |  95 +++++++++++--
 3 files changed, 241 insertions(+), 13 deletions(-)

diff --git a/docs/deploystack/development/backend/roles.mdx b/docs/deploystack/development/backend/roles.mdx
index d7b1a91..753e86d 100644
--- a/docs/deploystack/development/backend/roles.mdx
+++ b/docs/deploystack/development/backend/roles.mdx
@@ -74,15 +74,16 @@ The RBAC system provides fine-grained access control through roles and permissio
 
 ## Team System
 
-DeployStack includes a comprehensive team management system that allows users to organize their work into teams. Each user automatically gets their own team upon registration and can create up to 3 teams total.
+DeployStack includes a comprehensive team management system that allows users to organize their work into teams and collaborate with other users. Each user automatically gets their own team upon registration and can create up to 3 teams total.
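The registration-time team creation described above can be sketched as a small pure function. This is a hypothetical illustration (the helper names `slugify`, `uniqueSlug`, and `createDefaultTeam` are not the actual DeployStack `TeamService` code), showing how a username can become a default team with a URL-friendly slug and numeric-suffix conflict resolution:

```typescript
// Hypothetical sketch - not the actual DeployStack implementation.

function slugify(name: string): string {
  return name
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, '-') // collapse non-alphanumeric runs to dashes
    .replace(/^-+|-+$/g, '');    // trim leading/trailing dashes
}

// Appends a numeric suffix until the slug is unused: my-team, my-team-2, ...
function uniqueSlug(name: string, taken: Set<string>): string {
  const base = slugify(name);
  if (!taken.has(base)) return base;
  let n = 2;
  while (taken.has(`${base}-${n}`)) n++;
  return `${base}-${n}`;
}

interface Team {
  name: string;
  slug: string;
  isDefault: boolean;
}

// Every new user gets a default team named after their username.
function createDefaultTeam(username: string, existingSlugs: Set<string>): Team {
  return {
    name: username,
    slug: uniqueSlug(username, existingSlugs),
    isDefault: true,
  };
}
```

The suffix loop mirrors the "automatic conflict resolution" behavior: the first registered `johndoe` gets `john-doe`, the next collision gets `john-doe-2`, and so on.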
 ### Team Features
 
 - **Automatic Team Creation**: Every new user gets a default team created with their username
 - **Team Ownership**: Each team has an owner who has full administrative control
-- **Single User Teams**: Currently, teams support only one user per team
+- **Multi-User Teams**: Teams support up to 3 members with role-based access control
 - **Team Limits**: Users can create up to 3 teams maximum
 - **Unique Slugs**: Teams have URL-friendly slugs with automatic conflict resolution
+- **Default Team Protection**: Default teams cannot have additional members added (personal workspace)
 
 ### Team Database Schema
@@ -301,6 +302,95 @@ GET /api/teams/:id/members
 Authorization: Required (team.members.view permission)
 ```
 
+**Response:**
+
+```json
+{
+  "success": true,
+  "data": [
+    {
+      "id": "membership123",
+      "user_id": "user123",
+      "username": "johndoe",
+      "email": "john@example.com",
+      "first_name": "John",
+      "last_name": "Doe",
+      "role": "team_admin",
+      "is_admin": true,
+      "is_owner": true,
+      "joined_at": "2025-01-30T15:00:00.000Z"
+    }
+  ]
+}
+```
+
+#### Add Team Member
+
+```http
+POST /api/teams/:id/members
+Authorization: Required (team.members.manage permission or global admin)
+Content-Type: application/json
+
+{
+  "userId": "user456",
+  "role": "team_user"
+}
+```
+
+**Restrictions:**
+- Maximum 3 members per team
+- Cannot add members to default teams (protected)
+- User must exist in the system
+- Team admin or global admin required
+
+#### Update Team Member Role
+
+```http
+PUT /api/teams/:id/members/:userId/role
+Authorization: Required (team.members.manage permission or global admin)
+Content-Type: application/json
+
+{
+  "role": "team_admin"
+}
+```
+
+**Restrictions:**
+- Cannot change roles in default teams
+- Must maintain at least one team admin
+- Team admin or global admin required
+
+#### Remove Team Member
+
+```http
+DELETE /api/teams/:id/members/:userId
+Authorization: Required (team.members.manage permission or global admin)
+```
+
+**Restrictions:**
+- Cannot remove from default teams
+- Cannot remove team owner (must transfer ownership first)
+- Cannot remove last member from team
+- Team admin or global admin required
+
+#### Transfer Team Ownership
+
+```http
+PUT /api/teams/:id/ownership
+Authorization: Required (team owner or global admin)
+Content-Type: application/json
+
+{
+  "newOwnerId": "user456"
+}
+```
+
+**Restrictions:**
+- Cannot transfer ownership of default teams
+- New owner must be a team member
+- New owner automatically becomes team_admin
+- Only current owner or global admin can transfer
+
 ### Team Service Methods
 
 The `TeamService` class provides comprehensive team management:
@@ -341,6 +431,38 @@ const isDefault = await TeamService.isDefaultTeam(teamId, userId);
 
 // Get team membership details
 const membership = await TeamService.getTeamMembership(teamId, userId);
+
+// ===== TEAM MEMBER MANAGEMENT METHODS =====
+
+// Add team member
+const membership = await TeamService.addTeamMember(teamId, userId, 'team_user');
+
+// Remove team member
+const removed = await TeamService.removeTeamMember(teamId, userId);
+
+// Update member role
+const updatedMembership = await TeamService.updateMemberRole(teamId, userId, 'team_admin');
+
+// Transfer team ownership
+const transferred = await TeamService.transferOwnership(teamId, newOwnerId);
+
+// Get team members with user info
+const membersWithInfo = await TeamService.getTeamMembersWithUserInfo(teamId);
+
+// Get user teams with role info
+const teamsWithRoles = await TeamService.getUserTeamsWithRoles(userId);
+
+// Team capacity and permission checks
+const canAddMember = await TeamService.canAddMemberToTeam(teamId);
+const canRemoveMember = await TeamService.canRemoveMemberFromTeam(teamId, userId);
+const canManageMember = await TeamService.canUserManageTeamMember(teamId, managerId, targetUserId, 'add');
+
+// Team member counts
+const memberCount = await TeamService.getTeamMemberCount(teamId);
+const adminCount = await TeamService.getTeamAdminCount(teamId);
+
+// Default team protection checks
+const isTeamDefault = await TeamService.isTeamDefault(teamId);
 ```
 
 ### Frontend Team Management

diff --git a/docs/deploystack/roles.mdx b/docs/deploystack/roles.mdx
index 0c7ec38..efd063b 100644
--- a/docs/deploystack/roles.mdx
+++ b/docs/deploystack/roles.mdx
@@ -45,8 +45,37 @@ User roles determine what actions a person can perform in DeployStack. Think of
 **What they can do**:
 - Manage their team's settings
 - View team members
+- **Add new members to their teams** (up to 3 members total)
+- **Change member roles** (promote team_user to team_admin, or demote)
+- **Remove team members** (except team owners)
+- **Transfer team ownership** to another team member
 - Manage team deployments
-- Delete teams they own
+- Delete teams they own (except default teams)
+
+**Important**: Team admins have full control over team membership and can manage all team members except the team owner.
+
+## Team Member Management Permissions
+
+The following table shows exactly what each role can do with team member management:
+
+| Action | team_user | team_admin | team_admin + owner | global_admin |
+|--------|-----------|------------|-------------------|--------------|
+| List team members | ✅ (own teams) | ✅ (own teams) | ✅ (own teams) | ✅ (any team) |
+| Add team member | ❌ | ✅ (non-default) | ✅ (non-default) | ✅ (any team) |
+| Remove team_user | ❌ | ✅ (non-default) | ✅ (non-default) | ✅ (any team) |
+| Remove team_admin | ❌ | ❌ | ✅ (non-default) | ✅ (any team) |
+| Remove team owner | ❌ | ❌ | ❌ | ✅ (any team) |
+| Promote to team_admin | ❌ | ✅ (non-default) | ✅ (non-default) | ✅ (any team) |
+| Demote team_admin | ❌ | ❌ | ✅ (non-default) | ✅ (any team) |
+| Transfer ownership | ❌ | ❌ | ✅ (non-default) | ✅ (any team) |
+| Delete team | ❌ | ❌ | ✅ (non-default) | ✅ (non-default) |
+
+**Key Notes:**
+- **Default teams** are completely protected - no member management operations allowed
+- **Team admins** can only manage team_users, not other team_admins or owners
+- **Team owners** have full control over their teams (except default teams)
+- **Global admins** can override most restrictions but still cannot modify default teams
+- **3-member limit** applies to all teams (owner + 2 additional members maximum)
 
 ### Team User
 
 **Who needs this**: Basic team members who participate in deployments.
@@ -56,6 +85,8 @@ User roles determine what actions a person can perform in DeployStack. Think of
 - See team members
 - Participate in team activities
 
+**Limitations**: Team users cannot add members, change roles, or manage other team members.
+
 ## Understanding Teams
 
 Teams are groups where users organize their deployment projects. Here's how teams work:

diff --git a/docs/deploystack/teams.mdx b/docs/deploystack/teams.mdx
index 58ae02b..1e6b56c 100644
--- a/docs/deploystack/teams.mdx
+++ b/docs/deploystack/teams.mdx
@@ -16,6 +16,8 @@ In DeployStack, teams provide:
 - **Resource Organization**: All your MCP servers, credentials, and settings are organized within teams
 - **Access Control**: Team-based permissions ensure secure access to your deployment resources
 - **Multi-Project Support**: Create up to 3 teams to organize different projects or environments
+- **Team Collaboration**: Teams support multiple members with role-based access control
+- **Default Team Protection**: Your personal default team cannot have additional members added
 
 Every team acts as a complete deployment environment, containing everything needed to deploy and manage MCP servers across various cloud providers.
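The member-removal rules described above (default-team protection, owner removal requiring an ownership transfer, and admins only managing team_users) can be sketched as a pure function. This is a hypothetical illustration of the documented rules, not the actual DeployStack permission-check code:

```typescript
// Hypothetical sketch of the documented removal rules - not DeployStack source.

type TeamRole = 'team_user' | 'team_admin';

interface Member {
  role: TeamRole;
  isOwner: boolean;
}

function canRemoveMember(
  actor: Member,
  target: Member,
  opts: { isDefaultTeam: boolean; isGlobalAdmin?: boolean }
): boolean {
  if (opts.isDefaultTeam) return false;          // default teams: no member management
  if (opts.isGlobalAdmin) return true;           // global admins can override team-level rules
  if (actor.role !== 'team_admin') return false; // team_users cannot manage members
  if (target.isOwner) return false;              // owner removal requires ownership transfer
  if (target.role === 'team_admin' && !actor.isOwner) {
    return false;                                // admins cannot remove fellow admins
  }
  return true;
}
```

Keeping checks like this in one pure function makes each row of the permission table directly testable.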
@@ -135,22 +137,95 @@ The interface provides clear visual feedback:
 
 ### Current Structure
 
-DeployStack teams currently operate with a **single-user model**:
+DeployStack teams support **multi-user collaboration** with role-based access control:
 
-- Each team belongs to one user
-- You have full control over your teams
-- No team sharing or collaboration features (planned for future releases)
+- Teams can have up to **3 members maximum**
+- Each team has one **owner** who created the team
+- Team members can have different roles with specific permissions
+- **Default teams are personal** - no additional members can be added to your default team
 
 ### Team Roles
 
-Within your teams, you automatically have the **Team Administrator** role, which provides:
+Teams support two distinct roles with different capabilities:
 
-- Full access to all team resources
-- Ability to deploy and manage MCP servers
-- Permission to modify team settings
-- Authority to delete the team
+#### Team Administrator
+- **Full team management**: Can add/remove members and change roles (ownership transfers are reserved for the team owner)
+- **Resource access**: Full access to all team resources and deployments
+- **Team settings**: Can modify team name, description, and all configurations
+- **Member management**: Can promote team users to admins; demoting admins is reserved for the team owner
 
-*Note: Team User roles exist in the system for future multi-user team functionality.*
+#### Team User
+- **Basic access**: Can view team information and see team members
+- **Limited permissions**: Cannot add members, change roles, or modify team settings
+- **Resource viewing**: Can see team resources but with restricted management capabilities
+
+**Important**: Your **default team** (created automatically with your username) is protected - you cannot add other members to it. This keeps your personal workspace private.
+
+## Team Member Management
+
+### Adding Team Members
+
+Team administrators can add new members to their teams (except default teams):
+
+1. **Navigate to Team Management**: Go to your team's management page
+2. **Find Members Section**: Look for the team members management area
+3. **Add Member**: Click "Add Member" and enter the user's email or username
+4. **Assign Role**: Choose either "Team Administrator" or "Team User"
+5. **Send Invitation**: The user will be notified and added to the team
+
+**Limitations**:
+- **Maximum 3 members** per team (including the owner)
+- **Default teams**: Cannot add members to your personal default team
+- **Existing users only**: Can only add users who already have DeployStack accounts
+
+### Managing Member Roles
+
+Team administrators and owners can change member roles:
+
+#### Promoting Team Users to Administrators
+- **Who can do this**: Team administrators and team owners
+- **Process**: Select the member and change their role to "Team Administrator"
+- **Result**: User gains full team management capabilities
+
+#### Demoting Team Administrators to Users
+- **Who can do this**: Team owners (and global administrators) - team admins cannot demote other admins
+- **Restriction**: Must maintain at least one team administrator
+- **Process**: Change the administrator's role to "Team User"
+
+### Removing Team Members
+
+Team administrators can remove members from teams:
+
+- **Who can remove**: Team administrators and owners
+- **Cannot remove**: Team owners (must transfer ownership first)
+- **Default teams**: No members to remove (default teams are single-user)
+- **Process**: Select member and click "Remove from Team"
+
+### Transferring Team Ownership
+
+Team owners can transfer ownership to another team member:
+
+1. **Requirement**: Target user must already be a team member
+2. **Process**: Go to team settings and select "Transfer Ownership"
+3. **Choose New Owner**: Select from existing team members
+4. **Confirm Transfer**: Confirm the ownership change
+5. **Result**: New owner gains full control, previous owner becomes team administrator
+
+**Important**:
+- **Cannot transfer default team ownership** - default teams always belong to the original user
+- **Irreversible action** - ownership transfers cannot be undone
+- **New owner requirements** - Target user must already be a team member; they are promoted to team administrator automatically
+
+### Default Team Restrictions
+
+Your automatically created default team has special protections:
+
+- **No Additional Members**: Cannot add other users to your default team
+- **Cannot Transfer Ownership**: Default team ownership cannot be changed
+- **Cannot Leave**: You cannot leave your own default team
+- **Personal Workspace**: Designed to remain your private workspace
+
+These restrictions ensure that every user always has a personal, private team for their individual work.
 
 ### Resource Isolation

From 712b8a663f703adf563a37cd7b497a99d9b07295 Mon Sep 17 00:00:00 2001
From: Lasim
Date: Tue, 15 Jul 2025 18:07:30 +0200
Subject: [PATCH 3/3] feat: Add comprehensive documentation for UI Design
 System, GitHub Integration, MCP Catalog, and user roles

- Introduced a new UI Design System guide detailing design principles, component patterns, and accessibility guidelines.
- Created a GitHub Application Integration document explaining the process of integrating GitHub with DeployStack for MCP server creation.
- Added a GitHub Integration guide outlining synchronization, OAuth configuration, and repository management features.
- Developed an MCP Server Catalog documentation to facilitate server discovery, management, and deployment processes.
- Updated roles documentation to include specific permissions related to the MCP Catalog and server management capabilities.
- Enhanced team documentation to clarify MCP server settings and GitHub integration for team-specific deployments.
---
 .../development/backend/api-pagination.mdx    | 667 ++++++++++++
 .../development/backend/api-security.mdx      | 473 +++++++++
 docs/deploystack/development/backend/api.mdx  | 131 ++-
 .../deploystack/development/backend/roles.mdx | 992 ++----------------
 .../development/frontend/event-bus.mdx        |  26 +
 .../development/frontend/global-settings.mdx  | 620 +++++++++++
 .../development/frontend/index.mdx            | 113 +-
 .../development/frontend/storage.mdx          | 468 +++++++++
 .../frontend/ui-design-system-pagination.mdx  | 216 ++++
 .../frontend/ui-design-system-table.mdx       | 379 +++++++
 .../development/frontend/ui-design-system.mdx | 312 ++++++
 docs/deploystack/github-application.mdx       | 160 +++
 docs/deploystack/github-integration.mdx       | 344 ++++++
 docs/deploystack/mcp-catalog.mdx              | 371 +++++++
 docs/deploystack/roles.mdx                    |  25 +-
 docs/deploystack/teams.mdx                    |  11 +-
 16 files changed, 4304 insertions(+), 1004 deletions(-)
 create mode 100644 docs/deploystack/development/backend/api-pagination.mdx
 create mode 100644 docs/deploystack/development/backend/api-security.mdx
 create mode 100644 docs/deploystack/development/frontend/global-settings.mdx
 create mode 100644 docs/deploystack/development/frontend/storage.mdx
 create mode 100644 docs/deploystack/development/frontend/ui-design-system-pagination.mdx
 create mode 100644 docs/deploystack/development/frontend/ui-design-system-table.mdx
 create mode 100644 docs/deploystack/development/frontend/ui-design-system.mdx
 create mode 100644 docs/deploystack/github-application.mdx
 create mode 100644 docs/deploystack/github-integration.mdx
 create mode 100644 docs/deploystack/mcp-catalog.mdx

diff --git a/docs/deploystack/development/backend/api-pagination.mdx b/docs/deploystack/development/backend/api-pagination.mdx
new file mode 100644
index 0000000..8352d32
--- /dev/null
+++ b/docs/deploystack/development/backend/api-pagination.mdx
@@ -0,0 +1,667 @@
+---
+title: API Pagination Guide
+description: Complete guide to implementing pagination in DeployStack Backend APIs, including best practices, patterns, and examples.
+---
+
+# API Pagination Guide
+
+This document provides comprehensive guidance on implementing pagination in DeployStack Backend APIs. Pagination is essential for handling large datasets efficiently and providing a good user experience.
+
+## Overview
+
+DeployStack uses **offset-based pagination** with standardized query parameters and response formats. This approach provides:
+
+- **Consistent API Interface**: All paginated endpoints use the same parameter names and response structure
+- **Performance**: Reduces memory usage and response times for large datasets
+- **User Experience**: Enables smooth navigation through large result sets
+- **Scalability**: Handles growing datasets without performance degradation
+
+## Standard Pagination Parameters
+
+### Query Parameters
+
+All paginated endpoints should accept these standardized query parameters:
+
+```typescript
+const paginationQuerySchema = z.object({
+  limit: z.string()
+    .regex(/^\d+$/, 'Limit must be a number')
+    .transform(Number)
+    .refine(n => n > 0 && n <= 100, 'Limit must be between 1 and 100')
+    .optional()
+    .default('20'),
+  offset: z.string()
+    .regex(/^\d+$/, 'Offset must be a number')
+    .transform(Number)
+    .refine(n => n >= 0, 'Offset must be non-negative')
+    .optional()
+    .default('0')
+});
+```
+
+#### Parameter Details
+
+- **`limit`** (optional, default: 20)
+  - Type: String (converted to Number)
+  - Range: 1-100
+  - Description: Maximum number of items to return
+  - Validation: Must be a positive integer between 1 and 100
+
+- **`offset`** (optional, default: 0)
+  - Type: String (converted to Number)
+  - Range: ≥ 0
+  - Description: Number of items to skip from the beginning
+  - Validation: Must be a non-negative integer
+
+### Why String Parameters?
+
+Query parameters are always strings in HTTP. We use Zod's `.transform(Number)` to:
+1. **Validate Format**: Ensure the string contains only digits
+2. **Type Safety**: Convert to number for internal use
+3. **Error Handling**: Provide clear validation messages
+
+## Standard Response Format
+
+### Response Schema
+
+All paginated endpoints should return responses in this format:
+
+```typescript
+const paginatedResponseSchema = z.object({
+  success: z.boolean(),
+  data: z.object({
+    // Your actual data array
+    [dataArrayName]: z.array(yourItemSchema),
+
+    // Pagination metadata
+    pagination: z.object({
+      total: z.number(),     // Total number of items available
+      limit: z.number(),     // Items per page (as requested)
+      offset: z.number(),    // Current offset (as requested)
+      has_more: z.boolean()  // Whether more items are available
+    })
+  })
+});
+```
+
+### Response Example
+
+```json
+{
+  "success": true,
+  "data": {
+    "servers": [
+      {
+        "id": "server-1",
+        "name": "Example Server",
+        // ... other server fields
+      }
+      // ... more servers
+    ],
+    "pagination": {
+      "total": 150,
+      "limit": 20,
+      "offset": 40,
+      "has_more": true
+    }
+  }
+}
+```
+
+### Pagination Metadata Fields
+
+- **`total`**: Total number of items available (across all pages)
+- **`limit`**: Number of items per page (echoes the request parameter)
+- **`offset`**: Current starting position (echoes the request parameter)
+- **`has_more`**: Boolean indicating if more items are available after this page
+
+## Implementation Pattern
+
+### 1. Route Schema Definition
+
+```typescript
+import { z } from 'zod';
+import { zodToJsonSchema } from 'zod-to-json-schema';
+
+// Query parameters (including pagination)
+const querySchema = z.object({
+  // Your filtering parameters
+  category: z.string().optional(),
+  status: z.enum(['active', 'inactive']).optional(),
+
+  // Standard pagination parameters
+  limit: z.string()
+    .regex(/^\d+$/, 'Limit must be a number')
+    .transform(Number)
+    .refine(n => n > 0 && n <= 100, 'Limit must be between 1 and 100')
+    .optional()
+    .default('20'),
+  offset: z.string()
+    .regex(/^\d+$/, 'Offset must be a number')
+    .transform(Number)
+    .refine(n => n >= 0, 'Offset must be non-negative')
+    .optional()
+    .default('0')
+});
+
+// Response schema
+const responseSchema = z.object({
+  success: z.boolean(),
+  data: z.object({
+    items: z.array(yourItemSchema),
+    pagination: z.object({
+      total: z.number(),
+      limit: z.number(),
+      offset: z.number(),
+      has_more: z.boolean()
+    })
+  })
+});
+```
+
+### 2. Route Handler Implementation
+
+```typescript
+export default async function listItems(server: FastifyInstance) {
+  server.get('/api/items', {
+    schema: {
+      tags: ['Items'],
+      summary: 'List items with pagination',
+      description: 'Retrieve items with pagination support. Supports filtering and sorting.',
+      querystring: zodToJsonSchema(querySchema, {
+        $refStrategy: 'none',
+        target: 'openApi3'
+      }),
+      response: {
+        200: zodToJsonSchema(responseSchema, {
+          $refStrategy: 'none',
+          target: 'openApi3'
+        })
+      }
+    }
+  }, async (request, reply) => {
+    try {
+      // Parse and validate query parameters
+      const params = querySchema.parse(request.query);
+
+      // Extract pagination parameters
+      const { limit, offset, ...filters } = params;
+
+      // Get all items (with filtering applied)
+      const allItems = await yourService.getItems(filters);
+
+      // Apply pagination
+      const total = allItems.length;
+      const paginatedItems = allItems.slice(offset, offset + limit);
+
+      // Log pagination info
+      server.log.info({
+        operation: 'list_items',
+        totalResults: total,
+        returnedResults: paginatedItems.length,
+        pagination: { limit, offset }
+      }, 'Items list completed');
+
+      // Return paginated response
+      return reply.send({
+        success: true,
+        data: {
+          items: paginatedItems,
+          pagination: {
+            total,
+            limit,
+            offset,
+            has_more: offset + limit < total
+          }
+        }
+      });
+    } catch (error) {
+      server.log.error({ error }, 'Failed to list items');
+      return reply.status(500).send({
+        success: false,
+        error: 'Failed to retrieve items'
+      });
+    }
+  });
+}
+```
+
+## Database-Level Pagination (Advanced)
+
+For better performance with large datasets, implement pagination at the database level:
+
+### Using Drizzle ORM
+
+```typescript
+import { desc, asc, eq, sql } from 'drizzle-orm';
+
+async getItemsPaginated(
+  filters: ItemFilters,
+  limit: number,
+  offset: number
+): Promise<{ items: Item[], total: number }> {
+  // Build base query with filters
+  let query = this.db.select().from(items);
+
+  // Apply filters
+  if (filters.category) {
+    query = query.where(eq(items.category, filters.category));
+  }
+
+  // Get total count (before pagination); `let` so the filter can be chained on
+  let countQuery = this.db.select({ count: sql`count(*)` }).from(items);
+  // Apply same filters to count query
+  if (filters.category) {
+    countQuery = countQuery.where(eq(items.category, filters.category));
+  }
+  const [{ count: total }] = await countQuery;
+
+  // Apply pagination and ordering
+  const paginatedItems = await query
+    .orderBy(desc(items.created_at))
+    .limit(limit)
+    .offset(offset);
+
+  return {
+    items: paginatedItems,
+    total
+  };
+}
+```
+
+### Updated Route Handler
+
+```typescript
+// In your route handler
+const { items, total } = await yourService.getItemsPaginated(filters, limit, offset);
+
+return reply.send({
+  success: true,
+  data: {
+    items,
+    pagination: {
+      total,
+      limit,
+      offset,
+      has_more: offset + limit < total
+    }
+  }
+});
+```
+
+## Client-Side Usage Examples
+
+### JavaScript/TypeScript
+
+```typescript
+interface PaginationParams {
+  limit?: number;
+  offset?: number;
+}
+
+interface PaginatedResponse<T> {
+  success: boolean;
+  data: {
+    items: T[];
+    pagination: {
+      total: number;
+      limit: number;
+      offset: number;
+      has_more: boolean;
+    };
+  };
+}
+
+async function fetchItems<T>(params: PaginationParams = {}): Promise<PaginatedResponse<T>> {
+  const url = new URL('/api/items', baseUrl);
+
+  if (params.limit) url.searchParams.set('limit', params.limit.toString());
+  if (params.offset) url.searchParams.set('offset', params.offset.toString());
+
+  const response = await fetch(url.toString(), {
+    credentials: 'include',
+    headers: { 'Accept': 'application/json' }
+  });
+
+  return await response.json();
+}
+
+// Usage examples
+const firstPage = await fetchItems({ limit: 20, offset: 0 });
+const secondPage = await fetchItems({ limit: 20, offset: 20 });
+const customPage = await fetchItems({ limit: 50, offset: 100 });
+```
+
+### Vue.js Composable
+
+```typescript
+import { ref, computed } from 'vue';
+
+export function usePagination<T>(
+  fetchFunction: (limit: number, offset: number) => Promise<PaginatedResponse<T>>,
+  initialLimit = 20
+) {
+  const items = ref<T[]>([]);
+  const currentPage = ref(1);
+  const limit = ref(initialLimit);
+  const total = ref(0);
+  const loading = ref(false);
+
+  const totalPages = computed(() => Math.ceil(total.value / limit.value));
+  const hasNextPage = computed(() => currentPage.value < totalPages.value);
+  const hasPrevPage = computed(() => currentPage.value > 1);
+
+  const offset = computed(() => (currentPage.value - 1) * limit.value);
+
+  async function loadPage(page: number) {
+    // Allow the first load, when total (and therefore totalPages) is still 0
+    if (page < 1 || (totalPages.value > 0 && page > totalPages.value)) return;
+
+    loading.value = true;
+    try {
+      const response = await fetchFunction(limit.value, (page - 1) * limit.value);
+      items.value = response.data.items;
+      total.value = response.data.pagination.total;
+      currentPage.value = page;
+    } finally {
+      loading.value = false;
+    }
+  }
+
+  async function nextPage() {
+    if (hasNextPage.value) {
+      await loadPage(currentPage.value + 1);
+    }
+  }
+
+  async function prevPage() {
+    if (hasPrevPage.value) {
+      await loadPage(currentPage.value - 1);
+    }
+  }
+
+  return {
+    items,
+    currentPage,
+    limit,
+    total,
+    totalPages,
+    loading,
+    hasNextPage,
+    hasPrevPage,
+    loadPage,
+    nextPage,
+    prevPage
+  };
+}
+```
+
+## Best Practices
+
+### 1. Consistent Parameter Validation
+
+Always use the same validation rules across all endpoints:
+
+```typescript
+// Create a reusable schema
+export const paginationSchema = z.object({
+  limit: z.string()
+    .regex(/^\d+$/, 'Limit must be a number')
+    .transform(Number)
+    .refine(n => n > 0 && n <= 100, 'Limit must be between 1 and 100')
+    .optional()
+    .default('20'),
+  offset: z.string()
+    .regex(/^\d+$/, 'Offset must be a number')
+    .transform(Number)
+    .refine(n => n >= 0, 'Offset must be non-negative')
+    .optional()
+    .default('0')
+});
+
+// Use in your endpoint schemas
+const querySchema = z.object({
+  // Your specific filters
+  category: z.string().optional(),
+  status: z.enum(['active', 'inactive']).optional(),
+
+  // Include pagination
+  ...paginationSchema.shape
+});
+```
+
+### 2. Proper Error Handling
+
+```typescript
+try {
+  const params = querySchema.parse(request.query);
+} catch (error) {
+  if (error instanceof z.ZodError) {
+    return reply.status(400).send({
+      success: false,
+      error: 'Invalid query parameters',
+      details: error.errors
+    });
+  }
+  throw error;
+}
+```
+
+### 3. Performance Considerations
+
+- **Database Pagination**: Use `LIMIT` and `OFFSET` at the database level for large datasets
+- **Indexing**: Ensure proper database indexes on columns used for sorting
+- **Caching**: Consider caching total counts for frequently accessed endpoints
+- **Reasonable Limits**: Enforce maximum page sizes (e.g., 100 items)
+
+### 4. OpenAPI Documentation
+
+Include clear pagination documentation in your API specs:
+
+```typescript
+schema: {
+  tags: ['Items'],
+  summary: 'List items with pagination',
+  description: `
+    Retrieve items with pagination support.
+
+    **Pagination Parameters:**
+    - \`limit\`: Items per page (1-100, default: 20)
+    - \`offset\`: Items to skip (≥0, default: 0)
+
+    **Response includes:**
+    - \`data.items\`: Array of items for current page
+    - \`data.pagination.total\`: Total items available
+    - \`data.pagination.has_more\`: Whether more pages exist
+  `,
+  // ... rest of schema
+}
+```
+
+## Common Pitfalls and Solutions
+
+### 1. Inconsistent Response Formats
+
+❌ **Wrong**: Different endpoints use different response structures
+```typescript
+// Endpoint A
+{ data: items, total: 100, page: 1 }
+
+// Endpoint B
+{ results: items, count: 100, offset: 20 }
+```
+
+✅ **Correct**: Use standardized response format
+```typescript
+// All endpoints
+{
+  success: true,
+  data: {
+    items: [...],
+    pagination: { total, limit, offset, has_more }
+  }
+}
+```
+
+### 2. Missing Validation
+
+❌ **Wrong**: No parameter validation
+```typescript
+const limit = parseInt(request.query.limit) || 20;
+const offset = parseInt(request.query.offset) || 0;
+```
+
+✅ **Correct**: Proper Zod validation
+```typescript
+const params = paginationSchema.parse(request.query);
+const { limit, offset } = params;
+```
+
+### 3. Performance Issues
+
+❌ **Wrong**: Loading all data then slicing
+```typescript
+const allItems = await db.select().from(items); // Loads everything!
+const paginated = allItems.slice(offset, offset + limit);
+```
+
+✅ **Correct**: Database-level pagination
+```typescript
+const pageItems = await db.select().from(items)
+  .limit(limit)
+  .offset(offset);
+```
+
+### 4. Incorrect Total Count
+
+❌ **Wrong**: Using paginated results length
+```typescript
+const items = await getItemsPaginated(limit, offset);
+const total = items.length; // Wrong! This is just current page
+```
+
+✅ **Correct**: Separate count query
+```typescript
+const [items, total] = await Promise.all([
+  getItemsPaginated(limit, offset),
+  getItemsCount(filters)
+]);
+```
+
+## Real-World Examples
+
+### Example 1: MCP Servers List (Current Implementation)
+
+```typescript
+// File: services/backend/src/routes/mcp/servers/list.ts
+export default async function listServers(server: FastifyInstance) {
+  server.get('/mcp/servers', {
+    schema: {
+      tags: ['MCP Servers'],
+      summary: 'List MCP servers',
+      description: 'Retrieve MCP servers with pagination support...',
+      querystring: zodToJsonSchema(querySchema, {
+        $refStrategy: 'none',
+        target: 'openApi3'
+      }),
+      response: {
+        200: zodToJsonSchema(listServersResponseSchema, {
+          $refStrategy: 'none',
+          target: 'openApi3'
+        })
+      }
+    }
+  }, async (request, reply) => {
+    const { limit, offset, ...filters } = querySchema.parse(request.query);
+
+    const allServers = await catalogService.getServersForUser(
+      userId, userRole, teamIds, filters
+    );
+
+    const total = allServers.length;
+    const paginatedServers = allServers.slice(offset, offset + limit);
+
+    return reply.send({
+      success: true,
+      data: {
+        servers: paginatedServers,
+        pagination: {
+          total,
+          limit,
+          offset,
+          has_more: offset + limit < total
+        }
+      }
+    });
+  });
+}
+```
+
+Note that this endpoint paginates in memory after permission filtering; for large catalogs, prefer the database-level pagination pattern described earlier in this guide.
+
+### Example 2: Search Endpoint (Reference Implementation)
+
+The search endpoint (`/mcp/servers/search`) demonstrates the complete pagination pattern and can serve as a reference for implementing pagination in other endpoints.
+
+## Testing Pagination
+
+### Unit Tests
+
+```typescript
+describe('Pagination', () => {
+  test('should return first page with default limit', async () => {
+    const response = await request(app)
+      .get('/api/items')
+      .expect(200);
+
+    expect(response.body.data.pagination).toEqual({
+      total: expect.any(Number),
+      limit: 20,
+      offset: 0,
+      has_more: expect.any(Boolean)
+    });
+  });
+
+  test('should handle custom pagination parameters', async () => {
+    const response = await request(app)
+      .get('/api/items?limit=10&offset=20')
+      .expect(200);
+
+    expect(response.body.data.pagination.limit).toBe(10);
+    expect(response.body.data.pagination.offset).toBe(20);
+  });
+
+  test('should validate pagination parameters', async () => {
+    await request(app)
+      .get('/api/items?limit=invalid')
+      .expect(400);
+
+    await request(app)
+      .get('/api/items?limit=101') // Over maximum
+      .expect(400);
+  });
+});
+```
+
+### Integration Tests
+
+```typescript
+test('should paginate through all results', async () => {
+  const limit = 5;
+  let offset = 0;
+  let allItems = [];
+  let hasMore = true;
+
+  while (hasMore) {
+    const response = await request(app)
+      .get(`/api/items?limit=${limit}&offset=${offset}`)
+      .expect(200);
+
+    const { items, pagination } = response.body.data;
+    allItems.push(...items);
+
+    hasMore = pagination.has_more;
+    offset += limit;
+  }
+
+  // Verify we got all items
+  expect(allItems.length).toBe(totalExpectedItems);
+});
+```

diff --git a/docs/deploystack/development/backend/api-security.mdx b/docs/deploystack/development/backend/api-security.mdx
new file mode 100644
index 0000000..a9312ac
--- /dev/null
+++ b/docs/deploystack/development/backend/api-security.mdx
@@ -0,0 +1,473 @@
+---
+title: API Security Best Practices
+description: Essential security patterns for DeployStack Backend API development, including proper authorization hook usage and security-first development principles.
+---
+
+# API Security Best Practices
+
+This document outlines critical security patterns and best practices for developing secure APIs in the DeployStack Backend. Following these guidelines ensures consistent security behavior and prevents common vulnerabilities.
+
+## Overview
+
+Security in API development requires careful consideration of the order in which validation and authorization occur. The DeployStack Backend uses Fastify's hook system to implement security controls, and understanding the proper hook usage is crucial for maintaining security.
+
+## The Critical Security Pattern: Authorization Before Validation
+
+### The Problem
+
+A common security anti-pattern occurs when authorization checks happen **after** input validation. This can lead to:
+
+- **Information Disclosure**: Unauthorized users receive validation errors instead of proper 403 Forbidden responses
+- **Inconsistent Error Responses**: Some endpoints return 400 (validation errors) while others return 403 (authorization errors)
+- **Security Through Obscurity Violation**: API structure and validation rules are leaked to unauthorized users
+
+### Real-World Example
+
+Consider this test failure that led to the discovery of this pattern:
+
+```javascript
+// Test expectation
+expect(response.status).toBe(403); // Expected: Forbidden
+
+// Actual result
+expect(response.status).toBe(400); // Received: Bad Request (validation error)
+```
+
+The test was sending invalid data to a protected endpoint, expecting a 403 Forbidden response. Instead, it received a 400 Bad Request because validation ran before authorization.
+
+## Fastify Hook Execution Order
+
+Understanding Fastify's hook execution order is essential for proper security implementation:
+
+```
+1. onRequest      ← Use for early authentication setup
+2. preParsing     ← Use for request preprocessing
+3. preValidation  ← ✅ USE FOR AUTHORIZATION
+4. preHandler     ← Use for post-validation processing
+5. Route Handler  ← Your business logic
+```
+
+### Key Security Principle
+
+**Authorization must happen in `preValidation` to ensure it runs before schema validation.**
+
+## Correct Implementation Patterns
+
+### ✅ Secure Pattern: preValidation for Authorization
+
+```typescript
+import { requireGlobalAdmin } from '../../../middleware/roleMiddleware';
+
+export default async function secureRoute(fastify: FastifyInstance) {
+  fastify.post<{ Body: RequestInput }>('/protected-endpoint', {
+    schema: {
+      tags: ['Protected'],
+      summary: 'Protected endpoint',
+      description: 'Requires admin permissions',
+      security: [{ cookieAuth: [] }],
+      body: zodToJsonSchema(RequestSchema, {
+        $refStrategy: 'none',
+        target: 'openApi3'
+      }),
+      response: {
+        200: zodToJsonSchema(SuccessResponseSchema, {
+          $refStrategy: 'none',
+          target: 'openApi3'
+        }),
+        401: zodToJsonSchema(ErrorResponseSchema.describe('Unauthorized'), {
+          $refStrategy: 'none',
+          target: 'openApi3'
+        }),
+        403: zodToJsonSchema(ErrorResponseSchema.describe('Forbidden'), {
+          $refStrategy: 'none',
+          target: 'openApi3'
+        }),
+        400: zodToJsonSchema(ErrorResponseSchema.describe('Bad Request'), {
+          $refStrategy: 'none',
+          target: 'openApi3'
+        })
+      }
+    },
+    preValidation: requireGlobalAdmin(), // ✅ CORRECT: Runs before validation
+  }, async (request, reply) => {
+    // If we reach here, user is authorized AND input is validated
+    const validatedData = request.body;
+    // Your business logic here
+  });
+}
+```
+
+### ❌ Insecure Pattern: preHandler for Authorization
+
+```typescript
+export default async function
insecureRoute(fastify: FastifyInstance) { + fastify.post<{ Body: RequestInput }>('/protected-endpoint', { + schema: { + // Schema definition... + body: zodToJsonSchema(RequestSchema, { + $refStrategy: 'none', + target: 'openApi3' + }) + }, + preHandler: requireGlobalAdmin(), // ❌ WRONG: Runs after validation + }, async (request, reply) => { + // This handler may never be reached if validation fails first + }); +} +``` + +## Security Implications + +### With Incorrect Pattern (preHandler) + +``` +Request Flow: +1. Request received +2. Schema validation runs → Returns 400 if invalid +3. Authorization check (never reached if validation fails) +4. Handler execution +``` + +**Result**: Unauthorized users get validation errors, leaking API structure. + +### With Correct Pattern (preValidation) + +``` +Request Flow: +1. Request received +2. Authorization check → Returns 401/403 if unauthorized +3. Schema validation (only for authorized users) +4. Handler execution +``` + +**Result**: Unauthorized users always get proper 401/403 responses. 
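The contrast between the two flows can be reduced to a dependency-free sketch. The names below (`run`, `authorize`, `validateSchema`) are hypothetical stand-ins, not the real Fastify or DeployStack APIs; the point is only that the first check in the chain decides which error an unauthorized caller observes.

```typescript
// Standalone simulation of the two hook orders above (illustrative only).
type Request = { authorized: boolean; bodyValid: boolean };
type Check = (req: Request) => number | null; // error status, or null to continue

const authorize: Check = (req) => (req.authorized ? null : 403);
const validateSchema: Check = (req) => (req.bodyValid ? null : 400);

// The first failing check in the chain determines the response status.
function run(checks: Check[], req: Request): number {
  for (const check of checks) {
    const status = check(req);
    if (status !== null) return status;
  }
  return 200; // all checks passed; the route handler would run here
}

const unauthorizedInvalidBody: Request = { authorized: false, bodyValid: false };

// preValidation pattern: authorization runs first, so the stranger sees 403.
console.log(run([authorize, validateSchema], unauthorizedInvalidBody)); // 403

// preHandler pattern: validation runs first, so the stranger sees 400 and
// learns about the schema, which is exactly the information leak described above.
console.log(run([validateSchema, authorize], unauthorizedInvalidBody)); // 400
```

Reordering the two checks is the entire difference between leaking validation errors and returning a proper 403.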
+ +## Authorization Middleware Usage + +### Available Middleware Functions + +The DeployStack Backend provides several authorization middleware functions: + +```typescript +// Role-based authorization +requireGlobalAdmin() // Requires 'global_admin' role +requireRole('role_id') // Requires specific role + +// Permission-based authorization +requirePermission('permission.name') // Requires specific permission +requireAnyPermission(['perm1', 'perm2']) // Requires any of the permissions + +// Team-aware permission authorization +requireTeamPermission('permission.name') // Requires permission within team context +requireTeamPermission('permission.name', getTeamIdFn) // Custom team ID extraction + +// Ownership-based authorization +requireOwnershipOrAdmin(getUserIdFromRequest) // User owns resource OR is admin +``` + +### Team-Aware Permission System + +For endpoints that operate within team contexts (e.g., `/teams/:teamId/resource`), use the team-aware permission middleware: + +```typescript +import { requireTeamPermission } from '../../../middleware/roleMiddleware'; + +export default async function teamResourceRoute(fastify: FastifyInstance) { + fastify.post<{ + Params: { teamId: string }; + Body: CreateResourceRequest; + }>('/teams/:teamId/resources', { + schema: { + tags: ['Team Resources'], + summary: 'Create team resource', + description: 'Creates a resource within the specified team context', + security: [{ cookieAuth: [] }], + params: zodToJsonSchema(z.object({ + teamId: z.string().min(1, 'Team ID is required') + })), + body: zodToJsonSchema(CreateResourceSchema), + response: { + 201: zodToJsonSchema(SuccessResponseSchema), + 401: zodToJsonSchema(ErrorResponseSchema.describe('Unauthorized')), + 403: zodToJsonSchema(ErrorResponseSchema.describe('Forbidden - Not team member or insufficient permissions')), + 400: zodToJsonSchema(ErrorResponseSchema.describe('Bad Request')) + } + }, + preValidation: requireTeamPermission('resources.create'), // ✅ Team-aware 
authorization + }, async (request, reply) => { + const { teamId } = request.params; + const resourceData = request.body; + + // User is guaranteed to be: + // 1. Authenticated + // 2. A member of the specified team + // 3. Have the 'resources.create' permission within that team + // 4. Input is validated + + // Your business logic here + }); +} +``` + +#### How Team-Aware Permissions Work + +The `requireTeamPermission()` middleware performs these security checks in order: + +1. **Authentication Check**: Verifies user is logged in +2. **Team ID Extraction**: Gets team ID from URL params (`:teamId`) or custom function +3. **Global Admin Bypass**: Global admins can access any team's resources +4. **Team Membership**: Verifies user belongs to the specified team +5. **Team Role Lookup**: Gets user's role within that team (`team_admin` or `team_user`) +6. **Permission Check**: Verifies the team role has the required permission + +#### Team Permission Security Model + +```typescript +// Global Admin - Can access any team's resources +if (userRole?.id === 'global_admin') { + // Check if global admin role has the permission + return globalPermissions.includes(permission); +} + +// Team Member - Must be member with appropriate role +const teamMembership = await TeamService.getTeamMembership(teamId, userId); +const teamRole = teamMembership.role; // 'team_admin' or 'team_user' + +// Check if team role has required permission +const rolePermissions = ROLE_DEFINITIONS[teamRole]; +return rolePermissions.includes(permission); +``` + +#### Error Responses for Team Permissions + +Team-aware endpoints return specific error messages: + +```typescript +// 401 - Not authenticated +{ + "success": false, + "error": "Authentication required" +} + +// 403 - Not a team member +{ + "success": false, + "error": "You are not a member of this team" +} + +// 403 - Team member but insufficient permissions +{ + "success": false, + "error": "Insufficient permissions for this team operation", + 
"required_permission": "resources.create", + "user_team_role": "team_user" +} + +// 400 - Invalid team ID +{ + "success": false, + "error": "Team ID is required" +} +``` + +### Permission-Based Authorization (Recommended) + +**Permission-based authorization is the preferred approach** for most endpoints as it provides: + +- **Granular Control**: Fine-grained access control per feature +- **Scalability**: Easy to add new permissions without role changes +- **Flexibility**: Users can have different permission combinations +- **Maintainability**: Clear separation between authentication and authorization + +#### Current Permission Structure + +The system includes these MCP-related permissions: + +```typescript +// MCP Categories (Admin-only operations) +'mcp.categories.view' // View category listings +'mcp.categories.create' // Create new categories +'mcp.categories.edit' // Modify existing categories +'mcp.categories.delete' // Remove categories + +// MCP Servers (User-accessible operations) +'mcp.servers.read' // List and search servers (all authenticated users) +'mcp.servers.global.view' // View global server details (admin-only) +'mcp.servers.global.create' // Create global servers (admin-only) +'mcp.servers.global.edit' // Modify global servers (admin-only) +'mcp.servers.global.delete' // Delete global servers (admin-only) + +// MCP Team Servers +'mcp.servers.team.view_all' // View all team servers (admin-only) + +// MCP Versions +'mcp.versions.manage' // Manage server versions (admin-only) +``` + +#### Permission Assignment by Role + +```typescript +// Global Admin - Full access to all MCP features +global_admin: [ + 'mcp.servers.read', // Basic server access + 'mcp.servers.global.view', // Global server management + 'mcp.servers.global.create', + 'mcp.servers.global.edit', + 'mcp.servers.global.delete', + 'mcp.servers.team.view_all', // Cross-team visibility + 'mcp.categories.view', // Category management + 'mcp.categories.create', + 'mcp.categories.edit', + 
'mcp.categories.delete', + 'mcp.versions.manage' // Version management +] + +// Global User - Basic server access only +global_user: [ + 'mcp.servers.read' // Can list and search servers +] + +// Team Admin - Basic server access (team servers managed separately) +team_admin: [ + 'mcp.servers.read' // Can list and search servers +] + +// Team User - Basic server access +team_user: [ + 'mcp.servers.read' // Can list and search servers +] +``` + +### Correct Usage Examples + +```typescript +// Global admin only +fastify.delete('/admin/users/:id', { + schema: { /* ... */ }, + preValidation: requireGlobalAdmin(), +}, handler); + +// Specific permission required +fastify.post('/settings/bulk', { + schema: { /* ... */ }, + preValidation: requirePermission('settings.edit'), +}, handler); + +// User can access own data OR admin can access any +fastify.get('/users/:id/profile', { + schema: { /* ... */ }, + preValidation: requireOwnershipOrAdmin(getUserIdFromParams), +}, handler); +``` + +## Error Response Consistency + +### Proper Error Response Structure + +All authorization errors should follow this structure: + +```typescript +// 401 Unauthorized (not authenticated) +{ + success: false, + error: "Authentication required" +} + +// 403 Forbidden (authenticated but insufficient permissions) +{ + success: false, + error: "Insufficient permissions", + required_permission: "settings.edit" // Optional: what was required +} +``` + +### Response Status Code Guidelines + +- **401 Unauthorized**: User is not authenticated (no valid session) +- **403 Forbidden**: User is authenticated but lacks required permissions +- **400 Bad Request**: Input validation failed (only for authorized users) + +## Testing Security Properly + +### Test Authorization Before Validation + +```typescript +describe('Security Tests', () => { + it('should return 403 for unauthorized users regardless of input validity', async () => { + // Test with invalid data - should still get 403, not 400 + const response = 
await request(server) + .post('/protected-endpoint') + .set('Cookie', unauthorizedUserCookie) + .send({ invalid: 'data' }); // Intentionally invalid + + expect(response.status).toBe(403); // Should be 403, not 400 + expect(response.body.error).toContain('permission'); + }); + + it('should return 400 for authorized users with invalid input', async () => { + // Test with invalid data - authorized user should get validation error + const response = await request(server) + .post('/protected-endpoint') + .set('Cookie', authorizedUserCookie) + .send({ invalid: 'data' }); // Intentionally invalid + + expect(response.status).toBe(400); // Now validation error is appropriate + expect(response.body.error).toContain('validation'); + }); +}); +``` + +## Advanced Security Patterns + +### Multiple Authorization Checks + +For complex authorization requirements: + +```typescript +// Multiple checks in sequence +fastify.post('/complex-endpoint', { + schema: { /* ... */ }, + preValidation: [ + requireAuthentication(), // Must be logged in + requireRole('team_member'), // Must have team role + requirePermission('data.write') // Must have write permission + ], +}, handler); +``` + +### Conditional Authorization + +```typescript +// Different auth requirements based on request +async function conditionalAuth(request: FastifyRequest, reply: FastifyReply) { + const { action } = request.body as { action: string }; + + if (action === 'delete') { + return requireGlobalAdmin()(request, reply); + } else { + return requirePermission('data.edit')(request, reply); + } +} + +fastify.post('/conditional-endpoint', { + schema: { /* ... 
*/ },
  preValidation: conditionalAuth,
}, handler);
```

## Security Checklist

Before deploying any protected endpoint, verify:

- [ ] Authorization uses `preValidation`, not `preHandler`
- [ ] Unauthorized users get 401/403, never validation errors
- [ ] Tests verify proper status codes for unauthorized access
- [ ] Error responses don't leak sensitive information
- [ ] Schema validation only runs for authorized users
- [ ] Documentation reflects security requirements

## Related Documentation

- [API Documentation Generation](/deploystack/development/backend/api) - General API development patterns
- [Authentication System](/deploystack/auth) - User authentication implementation
- [Role-Based Access Control](/deploystack/development/backend/roles) - Permission system details
diff --git a/docs/deploystack/development/backend/api.mdx b/docs/deploystack/development/backend/api.mdx
index 8c18770..38efc0f 100644
--- a/docs/deploystack/development/backend/api.mdx
+++ b/docs/deploystack/development/backend/api.mdx
@@ -15,6 +15,17 @@ The DeployStack Backend uses Fastify with Swagger plugins to automatically gener
- **Postman Integration**: JSON/YAML specs that can be imported into Postman
- **Automated Generation**: Specifications are generated from actual route code
+## 🔒 Security First
+
+**IMPORTANT**: Before developing any protected API endpoints, read the [API Security Best Practices](./api-security.mdx) documentation. It covers critical security patterns including:
+
+- **Authorization Before Validation**: Why `preValidation` must be used instead of `preHandler` for authorization
+- **Proper Error Responses**: Ensuring unauthorized users get 403 Forbidden, not validation errors
+- **Security Testing**: How to test authorization properly
+- **Common Pitfalls**: Security anti-patterns to avoid
+
+**Key Rule**: Always use `preValidation` for authorization checks to prevent information disclosure to unauthorized users.
+
## Available Commands

### 1. 
Generate Complete API Specification @@ -99,6 +110,8 @@ When the server is running (`npm run dev`), you can access: 2. **Directory Organization**: Group related routes in directories (e.g., `/auth/`, `/users/`, `/health/`) 3. **Import Pattern**: Routes are imported and registered in `src/routes/index.ts` 4. **Consistent Naming**: Use descriptive names that match the route purpose +5. **Modular Approach**: **Keep route files small and focused** - aim for 1-3 related methods per file maximum +6. **Maintainability**: Avoid large monolithic route files that become difficult to maintain ### Correct File Structure @@ -120,6 +133,37 @@ services/backend/src/routes/ └── index.ts # Team management endpoints ``` +### Modular Route Organization (Recommended) + +For complex feature areas, break down routes into smaller, focused files: + +``` +services/backend/src/routes/mcp/ +├── index.ts # Route registration only +├── categories/ +│ ├── create.ts # POST /api/mcp/categories (1 method) +│ ├── update.ts # PUT /api/mcp/categories/{id} (1 method) +│ └── delete.ts # DELETE /api/mcp/categories/{id} (1 method) +├── servers/ +│ ├── list.ts # GET /api/mcp/servers (1 method) +│ ├── get.ts # GET /api/mcp/servers/{id} (1 method) +│ ├── search.ts # GET /api/mcp/servers/search (1 method) +│ ├── create-global.ts # POST /api/mcp/servers/global (1 method) +│ ├── update-global.ts # PUT /api/mcp/servers/global/{id} (1 method) +│ └── delete-global.ts # DELETE /api/mcp/servers/global/{id} (1 method) +└── versions/ + ├── list.ts # GET /api/mcp/servers/{id}/versions (1 method) + ├── create.ts # POST /api/mcp/servers/{id}/versions (1 method) + └── update.ts # PUT /api/mcp/servers/{id}/versions/{versionId} (1 method) +``` + +**Benefits of Modular Approach:** +- **Easier Maintenance**: Small files are easier to understand and modify +- **Better Testing**: Individual route files can be tested in isolation +- **Team Collaboration**: Multiple developers can work on different routes without conflicts +- 
**Clear Responsibility**: Each file has a single, clear purpose +- **Reduced Complexity**: Avoid hundreds of lines in single files + ### Route File Template Each route file should follow this pattern: @@ -210,6 +254,71 @@ export const registerRoutes = (server: FastifyInstance): void => { 4. **Code Organization**: Related functionality is grouped together 5. **Team Collaboration**: Multiple developers can work on different routes without conflicts +## Content-Type Header Requirements + +### When to Include Content-Type Headers + +**IMPORTANT**: The `Content-Type: application/json` header is required for specific HTTP methods when sending request body data. + +#### ✅ ALWAYS Include Content-Type for: +- **POST** requests with request body data +- **PUT** requests with request body data +- **PATCH** requests with request body data + +#### ❌ NEVER Include Content-Type for: +- **GET** requests (no request body) +- **DELETE** requests (typically no request body) +- **HEAD** requests (no request body) + +#### Correct Client Implementation Pattern + +```javascript +function makeRequest(method, path, data = null, cookies = null) { + const options = { + method, + headers: { 'Accept': 'application/json' } + }; + + // Set Content-Type for methods that send request body data + if (['POST', 'PUT', 'PATCH'].includes(method.toUpperCase()) && data !== null) { + options.headers['Content-Type'] = 'application/json'; + } + + // Rest of implementation... 
+} +``` + +#### ❌ Problematic Pattern (Avoid This) + +```javascript +// UNCLEAR: This doesn't indicate WHICH methods need Content-Type +if (data) { + options.headers['Content-Type'] = 'application/json'; +} +``` + +### API Specification Content-Type Documentation + +When defining route schemas, explicitly document Content-Type requirements for POST/PUT/PATCH endpoints: + +```typescript +// For endpoints that require Content-Type +const routeSchema = { + tags: ['Category'], + summary: 'Create new item', + description: 'Creates a new item. Requires Content-Type: application/json header when sending request body.', + requestBody: { + required: true, + content: { + 'application/json': { + schema: zodToJsonSchema(requestSchema, { $refStrategy: 'none', target: 'openApi3' }) + } + } + }, + // ... rest of schema +}; +``` + ## Adding Documentation to Routes To add OpenAPI documentation to your routes, define your request body and response schemas using Zod. Then, use the `zodToJsonSchema` utility to convert these Zod schemas into the JSON Schema format expected by Fastify. @@ -246,12 +355,23 @@ const myErrorResponseSchema = z.object({ const routeSchema = { tags: ['Category'], // Your API category summary: 'Brief description of your endpoint', - description: 'Detailed description of what this endpoint does, its parameters, and expected outcomes.', + description: 'Detailed description of what this endpoint does, its parameters, and expected outcomes. 
Requires Content-Type: application/json header when sending request body.', security: [{ cookieAuth: [] }], // Include if authentication is required body: zodToJsonSchema(myRequestBodySchema, { $refStrategy: 'none', // Keeps definitions inline, often simpler for Fastify target: 'openApi3' // Ensures compatibility with OpenAPI 3.0 }), + requestBody: { + required: true, + content: { + 'application/json': { + schema: zodToJsonSchema(myRequestBodySchema, { + $refStrategy: 'none', + target: 'openApi3' + }) + } + } + }, response: { 200: zodToJsonSchema(mySuccessResponseSchema.describe("Successful operation"), { $refStrategy: 'none', @@ -301,6 +421,15 @@ fastify.post<{ Body: RequestBody }>( 5. **Type Safety**: Handlers receive properly typed, validated data 6. **Cleaner Code**: No redundant validation logic in handlers +### Why Both `body` and `requestBody` Properties? + +**Important**: You need BOTH properties for complete functionality: + +- **`body`**: Enables Fastify's automatic request validation using the Zod schema +- **`requestBody`**: Ensures proper OpenAPI specification generation with Content-Type documentation + +Without `body`, validation won't work. Without `requestBody`, your API specification won't properly document the `application/json` Content-Type requirement. + ### What NOT to Do (Anti-patterns) ❌ **Don't do manual validation in handlers:** diff --git a/docs/deploystack/development/backend/roles.mdx b/docs/deploystack/development/backend/roles.mdx index 753e86d..ebdedfe 100644 --- a/docs/deploystack/development/backend/roles.mdx +++ b/docs/deploystack/development/backend/roles.mdx @@ -1,982 +1,144 @@ --- -title: Role-Based Access Control System -description: Complete RBAC implementation with roles, permissions, team management, and security features for DeployStack Backend development. +title: Role Management System +description: Developer guide for managing roles and permissions in DeployStack Backend. 
--- -# Role-Based Access Control System +# Role Management System -This document describes the role-based access control (RBAC) system implemented in the DeployStack backend. +This guide explains how to manage roles and permissions in the DeployStack backend for developers. ## Overview -The RBAC system provides fine-grained access control through roles and permissions. It supports: - -- **Global Roles**: System-wide roles that control access to administrative functions -- **Permission-Based Access**: Granular permissions for specific actions -- **Extensible Design**: Easy to add new roles and permissions -- **Secure Defaults**: Safe fallbacks and protection against privilege escalation - -## Default Roles - -### Global Administrator (`global_admin`) - -- **Description**: Full system access with user management capabilities -- **Permissions**: - - `users.list` - List all users - - `users.view` - View user details - - `users.edit` - Edit user information - - `users.delete` - Delete users - - `users.create` - Create new users - - `roles.manage` - Manage roles and permissions - - `system.admin` - Administrative system access - - `settings.view` - View global application settings - - `settings.edit` - Create and update global application settings - - `settings.delete` - Delete global application settings - - `teams.create` - Create new teams - - `teams.view` - View team details - - `teams.edit` - Edit team settings - - `teams.delete` - Delete teams - - `teams.manage` - Full team management - - `team.members.view` - View team members - - `team.members.manage` - Manage team member roles - -**Note**: Global administrators have special access to view cloud credentials metadata across all teams, but cannot perform CRUD operations or view credential values. Cloud credentials management is team-contextual. 
- -### Global User (`global_user`) - -- **Description**: Standard user with basic profile access -- **Permissions**: - - `profile.view` - View own profile - - `profile.edit` - Edit own profile - - `teams.create` - Create new teams (up to 3) - - `teams.view` - View team details - - `teams.edit` - Edit own team settings - - `teams.delete` - Delete own teams - - `team.members.view` - View team members - -### Team Administrator (`team_admin`) - -- **Description**: Full management access within a specific team -- **Permissions**: - - `teams.view` - View team details - - `teams.edit` - Edit team settings - - `teams.delete` - Delete team (if owner) - - `teams.manage` - Full team management - - `team.members.view` - View team members - - `team.members.manage` - Manage team member roles - -### Team User (`team_user`) - -- **Description**: Basic team member with limited access -- **Permissions**: - - `teams.view` - View team details - - `team.members.view` - View team members - -## Team System - -DeployStack includes a comprehensive team management system that allows users to organize their work into teams and collaborate with other users. Each user automatically gets their own team upon registration and can create up to 3 teams total. - -### Team Features - -- **Automatic Team Creation**: Every new user gets a default team created with their username -- **Team Ownership**: Each team has an owner who has full administrative control -- **Multi-User Teams**: Teams support up to 3 members with role-based access control -- **Team Limits**: Users can create up to 3 teams maximum -- **Unique Slugs**: Teams have URL-friendly slugs with automatic conflict resolution -- **Default Team Protection**: Default teams cannot have additional members added (personal workspace) - -### Team Database Schema - -#### Teams Table +DeployStack uses a centralized role-based access control (RBAC) system with automatic synchronization between code definitions and database storage. 
-```sql -CREATE TABLE teams ( - id TEXT PRIMARY KEY, - name TEXT NOT NULL, - slug TEXT NOT NULL UNIQUE, - description TEXT, - owner_id TEXT NOT NULL REFERENCES authUser(id) ON DELETE CASCADE, - created_at INTEGER NOT NULL, - updated_at INTEGER NOT NULL -); -``` - -#### Team Memberships Table - -```sql -CREATE TABLE teamMemberships ( - id TEXT PRIMARY KEY, - team_id TEXT NOT NULL REFERENCES teams(id) ON DELETE CASCADE, - user_id TEXT NOT NULL REFERENCES authUser(id) ON DELETE CASCADE, - role TEXT NOT NULL, -- 'team_admin' or 'team_user' - joined_at INTEGER NOT NULL, - UNIQUE(team_id, user_id) -); -``` - -### Team Registration Flow - -When a user registers: - -1. User account is created with appropriate global role -2. A default team is automatically created using the user's username -3. The user is added as `team_admin` of their new team -4. If username conflicts exist, slug gets incremented (e.g., `john-doe-2`) - -### Team Management - -#### Team Creation - -- Users can create up to 3 teams -- Team names are converted to URL-friendly slugs -- Automatic conflict resolution for duplicate slugs -- Team owner becomes `team_admin` automatically - -#### Default Team Protection - -- **Default Team Identification**: Teams created during user registration (name matches username) -- **Name Protection**: Default team names cannot be changed via API or UI -- **Deletion Protection**: Default teams cannot be deleted -- **Description Editing**: Default team descriptions can still be modified -- **UI Indicators**: Frontend shows lock icons and explanatory text for protected fields - -#### Team Roles - -- **Team Admin**: Full control over team settings and management -- **Team User**: Basic team member (for future expansion) - -#### Team Permissions - -| Permission | Description | -|------------|-------------| -| `teams.create` | Create new teams (up to limit) | -| `teams.view` | View team details | -| `teams.edit` | Edit team settings | -| `teams.delete` | Delete team | -| 
`teams.manage` | Full team management | -| `team.members.view` | View team members | -| `team.members.manage` | Manage team member roles | - -#### Cloud Credentials Permissions (Team-Contextual) - -Cloud credentials are team-scoped resources with role-based access control. Unlike other permissions, cloud credentials access is determined by team membership and role, not global permissions. - -**Access Control Matrix:** - -| Role | Team Access | Can See | Can Do | Secret Values | -|------|-------------|---------|---------|---------------| -| `global_admin` | Any team | Metadata only (name, provider, dates) | List/View only | ❌ Never | -| `team_admin` | Own teams only | Metadata + non-secret field values | Full CRUD | ❌ Never | -| `team_user` | Own teams only | Metadata only (name, provider, dates) | Read only | ❌ Never | -| `global_user` | No access | Nothing | Nothing | ❌ Never | - -**Security Rules:** - -- **Team Membership Required**: Only team members can access team's cloud credentials (except global admins) -- **Secret Values Protected**: No role can view secret credential values via API -- **Team Isolation**: Users can only access credentials from teams they belong to -- **Role-Based Responses**: API responses vary based on user's role within the team - -**Response Examples:** - -```typescript -// Team Admin Response (can see non-secret values) -{ - "fields": { - "access_key_id": { - "hasValue": true, - "secret": false, - "value": "AKIATEST123456789" // ✅ Non-secret shown - }, - "secret_access_key": { - "hasValue": true, - "secret": true - // ❌ No "value" field - secret never shown - } - } -} - -// Team User Response (metadata only) -{ - "id": "cred123", - "name": "Production AWS", - "provider": { "name": "Amazon Web Services" }, - "createdAt": "2025-01-01T00:00:00Z" - // ❌ No "fields" object - no values shown -} - -// Global Admin Response (metadata only, any team) -{ - "fields": { - "access_key_id": { - "hasValue": true, - "secret": false - // ❌ No "value" 
field - admin sees no values - }, - "secret_access_key": { - "hasValue": true, - "secret": true - // ❌ No "value" field - admin sees no values - } - } -} -``` - -### Team API Endpoints +## Role Definitions -#### Get User's Teams +All roles and permissions are defined in a single file: -```http -GET /api/users/me/teams -Authorization: Required (authenticated user) -``` - -#### Create Team - -```http -POST /api/teams -Authorization: Required (teams.create permission) -Content-Type: application/json - -{ - "name": "My New Team", - "description": "Team description" -} -``` +**`services/backend/src/permissions/index.ts`** -#### Get Team by ID +This is the **single source of truth** for all role definitions. To see current roles and their permissions, check this file. -```http -GET /api/teams/:id -Authorization: Required (teams.view permission) -``` +### Role Types -#### Update Team +The system supports two types of roles: -```http -PUT /api/teams/:id -Authorization: Required (teams.edit permission) -Content-Type: application/json +- **Global Roles**: System-wide roles (`global_admin`, `global_user`) +- **Team Roles**: Team-specific roles (`team_admin`, `team_user`) -{ - "name": "Updated Team Name", - "description": "Updated description" -} -``` +Team roles are assigned within team contexts and work alongside global roles to provide fine-grained access control for team-based resources. 
-#### Delete Team +## Database Storage -```http -DELETE /api/teams/:id -Authorization: Required (teams.delete permission) -``` +Roles are stored in the `roles` table with the following structure: -#### Get Team by ID - -```http -GET /api/teams/:id -Authorization: Required (teams.view permission) -``` - -**Response:** - -```json -{ - "success": true, - "data": { - "id": "team123", - "name": "My Team", - "slug": "my-team", - "description": "Team description", - "owner_id": "user123", - "created_at": "2025-01-30T15:00:00.000Z", - "updated_at": "2025-01-30T15:00:00.000Z" - } -} -``` - -#### Get Team Members - -```http -GET /api/teams/:id/members -Authorization: Required (team.members.view permission) -``` +- **`id`** - Role identifier (e.g., 'global_admin') +- **`name`** - Display name (e.g., 'Global Administrator') +- **`description`** - Role description +- **`permissions`** - JSON array of permission strings +- **`is_system_role`** - Boolean flag for core system roles +- **`created_at`** / **`updated_at`** - Timestamps -**Response:** - -```json -{ - "success": true, - "data": [ - { - "id": "membership123", - "user_id": "user123", - "username": "johndoe", - "email": "john@example.com", - "first_name": "John", - "last_name": "Doe", - "role": "team_admin", - "is_admin": true, - "is_owner": true, - "joined_at": "2025-01-30T15:00:00.000Z" - } - ] -} -``` - -#### Add Team Member - -```http -POST /api/teams/:id/members -Authorization: Required (team.members.manage permission or global admin) -Content-Type: application/json - -{ - "userId": "user456", - "role": "team_user" -} -``` - -**Restrictions:** -- Maximum 3 members per team -- Cannot add members to default teams (protected) -- User must exist in the system -- Team admin or global admin required - -#### Update Team Member Role - -```http -PUT /api/teams/:id/members/:userId/role -Authorization: Required (team.members.manage permission or global admin) -Content-Type: application/json - -{ - "role": "team_admin" -} -``` - 
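+For reference, the columns listed above correspond to a table shaped roughly like the following (a SQLite-dialect sketch; the authoritative schema is produced by the generated migrations):
+
+```sql
+CREATE TABLE roles (
+  id TEXT PRIMARY KEY,                  -- e.g. 'global_admin'
+  name TEXT NOT NULL UNIQUE,            -- e.g. 'Global Administrator'
+  description TEXT,                     -- role description
+  permissions TEXT NOT NULL,            -- JSON array of permission strings
+  is_system_role BOOLEAN DEFAULT FALSE, -- protects core roles from deletion
+  created_at INTEGER NOT NULL,          -- millisecond timestamp
+  updated_at INTEGER NOT NULL           -- millisecond timestamp
+);
+```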
-**Restrictions:** -- Cannot change roles in default teams -- Must maintain at least one team admin -- Team admin or global admin required - -#### Remove Team Member - -```http -DELETE /api/teams/:id/members/:userId -Authorization: Required (team.members.manage permission or global admin) -``` +User role assignments are stored in the `authUser.role_id` column. -**Restrictions:** -- Cannot remove from default teams -- Cannot remove team owner (must transfer ownership first) -- Cannot remove last member from team -- Team admin or global admin required +## Automatic Synchronization -#### Transfer Team Ownership +The system automatically syncs role permissions from code to database on server startup. The RoleSyncService compares the definitions in `permissions/index.ts` with the database and updates any differences. This ensures the database always matches your code definitions without manual intervention. -```http -PUT /api/teams/:id/ownership -Authorization: Required (team owner or global admin) -Content-Type: application/json - -{ - "newOwnerId": "user456" -} -``` - -**Restrictions:** -- Cannot transfer ownership of default teams -- New owner must be a team member -- New owner automatically becomes team_admin -- Only current owner or global admin can transfer - -### Team Service Methods - -The `TeamService` class provides comprehensive team management: - -```typescript -// Create team -const team = await TeamService.createTeam({ - name: 'My Team', - owner_id: userId, - description: 'Team description' -}); - -// Get user's teams -const teams = await TeamService.getUserTeams(userId); - -// Get team by ID -const team = await TeamService.getTeamById(teamId); - -// Update team -const updatedTeam = await TeamService.updateTeam(teamId, { - name: 'New Name', - description: 'New description' -}); - -// Delete team -const deleted = await TeamService.deleteTeam(teamId); - -// Check team limits -const canCreate = await TeamService.canUserCreateTeam(userId); - -// Team 
membership checks -const isAdmin = await TeamService.isTeamAdmin(teamId, userId); -const isOwner = await TeamService.isTeamOwner(teamId, userId); -const isMember = await TeamService.isTeamMember(teamId, userId); - -// Default team checks -const isDefault = await TeamService.isDefaultTeam(teamId, userId); - -// Get team membership details -const membership = await TeamService.getTeamMembership(teamId, userId); - -// ===== TEAM MEMBER MANAGEMENT METHODS ===== - -// Add team member -const membership = await TeamService.addTeamMember(teamId, userId, 'team_user'); - -// Remove team member -const removed = await TeamService.removeTeamMember(teamId, userId); - -// Update member role -const updatedMembership = await TeamService.updateMemberRole(teamId, userId, 'team_admin'); - -// Transfer team ownership -const transferred = await TeamService.transferOwnership(teamId, newOwnerId); - -// Get team members with user info -const membersWithInfo = await TeamService.getTeamMembersWithUserInfo(teamId); - -// Get user teams with role info -const teamsWithRoles = await TeamService.getUserTeamsWithRoles(userId); - -// Team capacity and permission checks -const canAddMember = await TeamService.canAddMemberToTeam(teamId); -const canRemoveMember = await TeamService.canRemoveMemberFromTeam(teamId, userId); -const canManageMember = await TeamService.canUserManageTeamMember(teamId, managerId, targetUserId, 'add'); - -// Team member counts -const memberCount = await TeamService.getTeamMemberCount(teamId); -const adminCount = await TeamService.getTeamAdminCount(teamId); - -// Default team protection checks -const isTeamDefault = await TeamService.isTeamDefault(teamId); -``` - -### Frontend Team Management - -The system includes a comprehensive team management interface: - -#### Teams List Page (`/teams`) - -- Displays all user's teams in a data table -- Shows team name, description, creation date, and member count -- Includes "Manage" button for team administrators -- Uses shadcn/ui 
components for consistent styling - -#### Team Management Page (`/teams/manage/:id`) +## Adding New Roles -- **URL Pattern**: `/teams/manage/{teamId}` -- **Access Control**: Only team administrators can access -- **Design**: Matches admin interface styling (`/admin/users/:id`) -- **Features**: - - Team information display (ID, creation date, update date) - - Editable team name (disabled for default teams with lock icon) - - Editable team description (always available) - - Default team badge and explanations - - Danger zone with team deletion (protected for default teams) - - Confirmation modal for team deletion using shadcn dialog +### 1. Define the Role -#### UI Components +Add your new role to `services/backend/src/permissions/index.ts`: ```typescript -// Team management form validation -const teamSchema = z.object({ - name: z.string().min(1, 'Team name is required'), - description: z.string().optional() -}); - -// Default team protection in UI -const isDefaultTeam = computed(() => { - return team.value?.name === user.value?.username; -}); -``` - -#### Internationalization - -Complete i18n support with translation keys: - -- `teams.manage.title` - Page title -- `teams.manage.defaultTeam.badge` - Default team indicator -- `teams.manage.form.name.disabled` - Lock explanation -- `teams.manage.dangerZone.title` - Deletion section -- `teams.manage.delete.confirmation` - Confirmation dialog - -## Database Schema - -### Roles Table - -```sql -CREATE TABLE roles ( - id TEXT PRIMARY KEY, -- Role identifier (e.g., 'global_admin') - name TEXT NOT NULL UNIQUE, -- Display name (e.g., 'Global Administrator') - description TEXT, -- Role description - permissions TEXT NOT NULL, -- JSON array of permissions - is_system_role BOOLEAN DEFAULT FALSE, -- Prevents deletion of core roles - created_at INTEGER NOT NULL, -- Creation timestamp - updated_at INTEGER NOT NULL -- Last update timestamp -); +export const ROLE_DEFINITIONS = { + // ... 
existing roles + content_moderator: [ + 'users.view', + 'content.moderate', + 'reports.view', + ], +} as const; ``` -### User Role Assignment +### 2. Create Database Migration (Optional) -The `authUser` table includes a `role_id` column that references the `roles` table: +If you want the role available immediately in existing databases, create a migration: ```sql -ALTER TABLE authUser ADD COLUMN role_id TEXT DEFAULT 'global_user' REFERENCES roles(id); -``` - -## API Endpoints - -### Role Management - -#### List Roles - -```http -GET /api/roles -Authorization: Required (roles.manage permission) -``` - -**Response:** - -```json -{ - "success": true, - "data": [ - { - "id": "global_admin", - "name": "Global Administrator", - "description": "Full system access with user management capabilities", - "permissions": ["users.list", "users.view", "users.edit", "users.delete", "users.create", "roles.manage", "system.admin"], - "is_system_role": true, - "created_at": "2025-01-30T15:00:00.000Z", - "updated_at": "2025-01-30T15:00:00.000Z" - } - ] -} -``` - -#### Get Role by ID - -```http -GET /api/roles/:id -Authorization: Required (roles.manage permission) -``` - -#### Create Role - -```http -POST /api/roles -Authorization: Required (roles.manage permission) -Content-Type: application/json - -{ - "id": "moderator", - "name": "Moderator", - "description": "Content moderation capabilities", - "permissions": ["users.view", "content.moderate"] -} +-- Add new role to existing databases +INSERT OR IGNORE INTO `roles` (`id`, `name`, `description`, `permissions`, `is_system_role`, `created_at`, `updated_at`) VALUES +('content_moderator', 'Content Moderator', 'Manages user content', '[]', 1, strftime('%s', 'now') * 1000, strftime('%s', 'now') * 1000); ``` -#### Update Role +### 3. 
Restart Server -```http -PUT /api/roles/:id -Authorization: Required (roles.manage permission) -Content-Type: application/json - -{ - "name": "Updated Role Name", - "description": "Updated description", - "permissions": ["updated.permission"] -} -``` - -**Note:** System roles (`is_system_role: true`) cannot be updated or deleted. - -#### Delete Role - -```http -DELETE /api/roles/:id -Authorization: Required (roles.manage permission) -``` - -**Restrictions:** - -- Cannot delete system roles -- Cannot delete roles that are assigned to users - -#### Get Available Permissions - -```http -GET /api/roles/permissions -Authorization: Required (roles.manage permission) -``` - -### User Management - -#### List Users - -```http -GET /api/users -Authorization: Required (users.list permission) -``` - -#### Get User by ID - -```http -GET /api/users/:id -Authorization: Required (own profile or system.admin permission) -``` - -#### Update User - -```http -PUT /api/users/:id -Authorization: Required (own profile or system.admin permission) -Content-Type: application/json - -{ - "username": "newusername", - "email": "newemail@example.com", - "first_name": "John", - "last_name": "Doe", - "role_id": "global_user" -} -``` - -**Restrictions:** - -- Users cannot change their own role (only admins can) -- Email and username must be unique - -#### Delete User - -```http -DELETE /api/users/:id -Authorization: Required (users.delete permission) -``` - -**Restrictions:** - -- Cannot delete your own account -- Cannot delete the last global administrator - -#### Assign Role to User - -```http -PUT /api/users/:id/role -Authorization: Required (users.edit permission) -Content-Type: application/json - -{ - "role_id": "global_admin" -} -``` - -**Restrictions:** - -- Cannot change your own role - -#### Get Current User Profile - -```http -GET /api/users/me -Authorization: Required (authenticated user) -``` - -#### Get User Statistics - -```http -GET /api/users/stats -Authorization: Required 
(users.list permission) -``` - -#### Get Users by Role - -```http -GET /api/users/role/:roleId -Authorization: Required (users.list permission) -``` +The RoleSyncService will automatically populate the role's permissions from your code definition on the next server startup. ## Permission System ### Available Permissions -| Permission | Description | -|------------|-------------| -| `users.list` | List all users in the system | -| `users.view` | View detailed user information | -| `users.edit` | Edit user information and assign roles | -| `users.delete` | Delete user accounts | -| `users.create` | Create new user accounts | -| `roles.manage` | Create, update, and delete roles | -| `system.admin` | Administrative system access | -| `settings.view` | View global application settings | -| `settings.edit` | Create and update global application settings | -| `settings.delete` | Delete global application settings | -| `profile.view` | View own profile information | -| `profile.edit` | Edit own profile information | -| `teams.create` | Create new teams (up to limit) | -| `teams.view` | View team details | -| `teams.edit` | Edit team settings | -| `teams.delete` | Delete team | -| `teams.manage` | Full team management | -| `team.members.view` | View team members | -| `team.members.manage` | Manage team member roles | +All permissions are auto-generated from role definitions. The `AVAILABLE_PERMISSIONS` array in `permissions/index.ts` contains all unique permissions across all roles, sorted alphabetically. 
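+The derivation is straightforward: collect every permission from every role, deduplicate, and sort. A minimal sketch (the role lists here are illustrative, not the full DeployStack definitions):
+
+```typescript
+const ROLE_DEFINITIONS: Record<string, string[]> = {
+  global_admin: ['users.list', 'users.edit', 'roles.manage', 'profile.view'],
+  global_user: ['profile.view', 'profile.edit'],
+};
+
+// Deduplicate across all roles, then sort alphabetically.
+const AVAILABLE_PERMISSIONS = Array.from(
+  new Set(Object.values(ROLE_DEFINITIONS).flat()),
+).sort();
+
+console.log(AVAILABLE_PERMISSIONS);
+// → ['profile.edit', 'profile.view', 'roles.manage', 'users.edit', 'users.list']
+```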
### Permission Checking -The system provides several ways to check permissions: - -#### Middleware Functions +Use the middleware functions for route protection: ```typescript -import { requirePermission, requireRole, requireGlobalAdmin } from '../middleware/roleMiddleware'; +import { requirePermission } from '../middleware/roleMiddleware'; // Require specific permission -fastify.get('/admin-only', { - preHandler: requirePermission('system.admin') -}, handler); - -// Require specific role -fastify.get('/admin-role', { - preHandler: requireRole('global_admin') +fastify.get('/admin-endpoint', { + preValidation: requirePermission('system.admin') }, handler); - -// Require global admin (shorthand) -fastify.get('/global-admin', { - preHandler: requireGlobalAdmin() -}, handler); -``` - -#### Utility Functions - -```typescript -import { checkUserPermission, getUserRole } from '../middleware/roleMiddleware'; - -// Check permission programmatically -const hasPermission = await checkUserPermission(userId, 'users.edit'); - -// Get user's role information -const userRole = await getUserRole(userId); -``` - -## User Registration Flow - -### First User - -When the first user registers in the system: - -1. They are automatically assigned the `global_admin` role -2. This ensures there's always at least one administrator - -### Subsequent Users - -All subsequent users are assigned the `global_user` role by default. - -### Registration Code Example - -```typescript -// Check if this is the first user -const allUsers = await db.select().from(authUserTable).limit(1); -const isFirstUser = allUsers.length === 0; -const defaultRole = isFirstUser ? 'global_admin' : 'global_user'; - -// Create user with appropriate role -await db.insert(authUserTable).values({ - // ... 
other user data - role_id: defaultRole -}); ``` -## Security Considerations - -### Role Protection - -- **System Roles**: Cannot be modified or deleted -- **Last Admin Protection**: Cannot delete the last global administrator -- **Self-Role Protection**: Users cannot change their own roles -- **Self-Delete Protection**: Users cannot delete their own accounts - -### Permission Validation - -- All permissions are validated against a whitelist -- Invalid permissions are rejected during role creation/update -- Database constraints ensure referential integrity - -### Session Security - -- Role information is fetched fresh for each permission check -- No role caching to prevent stale permission issues -- Lucia v3 handles secure session management - -## Adding New Roles - -### 1. Define Permissions - -First, add any new permissions to the available permissions list: - -```typescript -// In services/backend/src/routes/roles/schemas.ts -export const AVAILABLE_PERMISSIONS = [ - // ... existing permissions - 'content.moderate', - 'reports.view', - 'analytics.access', -] as const; -``` - -### 2. Create Role via API - -Use the role creation API to add new roles: - -```http -POST /api/roles -{ - "id": "content_moderator", - "name": "Content Moderator", - "description": "Manages user-generated content", - "permissions": ["users.view", "content.moderate", "reports.view"] -} -``` - -### 3. Update Default Permissions (Optional) - -If you want to include the role in default setups: +For team-based resources, use team-aware permissions: ```typescript -// In services/backend/src/services/roleService.ts -static getDefaultPermissions() { - return { - global_admin: [/* ... */], - global_user: [/* ... 
*/], - content_moderator: ['users.view', 'content.moderate', 'reports.view'], - }; -} -``` +import { requireTeamPermission } from '../middleware/roleMiddleware'; -## Migration and Setup - -### Database Migration - -The role system is set up through migration `0003_huge_prism.sql` (generated using `npm run db:generate`): - -1. Creates the `roles` table -2. Adds `role_id` column to `authUser` table -3. Seeds default roles (`global_admin`, `global_user`) -4. Assigns existing users to `global_user` -5. Promotes the first user to `global_admin` - -### Manual Setup - -If you need to manually set up roles: - -```sql --- Insert default roles -INSERT INTO roles (id, name, description, permissions, is_system_role, created_at, updated_at) VALUES -('global_admin', 'Global Administrator', 'Full system access', '["users.list","users.view","users.edit","users.delete","users.create","roles.manage","system.admin"]', 1, strftime('%s', 'now') * 1000, strftime('%s', 'now') * 1000), -('global_user', 'Global User', 'Standard user access', '["profile.view","profile.edit"]', 1, strftime('%s', 'now') * 1000, strftime('%s', 'now') * 1000); - --- Assign roles to users -UPDATE authUser SET role_id = 'global_user' WHERE role_id IS NULL; -UPDATE authUser SET role_id = 'global_admin' WHERE id = (SELECT id FROM authUser ORDER BY id ASC LIMIT 1); +// Team-specific permission checking +fastify.post('/teams/:teamId/resources', { + preValidation: requireTeamPermission('resources.create') +}, handler); ``` -## Troubleshooting - -### Common Issues - -#### Permission Denied Errors - -- Verify the user has the required permission -- Check if the user's role includes the necessary permission -- Ensure the role exists and is properly assigned - -#### Role Assignment Failures - -- Verify the target role exists -- Check if you're trying to assign a role to yourself (not allowed) -- Ensure you have `users.edit` permission - -#### Migration Issues +See [API Security Best Practices](./api-security.mdx) for 
detailed information about team-aware permissions and security patterns. -- Ensure the database is properly initialized -- Check that previous migrations have been applied -- Verify foreign key constraints are working - -### Debug Commands +### Programmatic Checks ```typescript -// Check user's current role and permissions -const userRole = await roleService.getUserRole(userId); -logger.info('User role:', userRole); - -// Check specific permission -const hasPermission = await roleService.userHasPermission(userId, 'users.edit'); -logger.info('Has permission:', hasPermission); +import { checkUserPermission } from '../middleware/roleMiddleware'; -// List all roles -const allRoles = await roleService.getAllRoles(); -logger.info('All roles:', allRoles); +const hasPermission = await checkUserPermission(userId, 'users.edit'); ``` -## Future Enhancements - -### Planned Features - -- **Hierarchical Roles**: Parent-child role relationships -- **Temporary Permissions**: Time-limited access grants -- **Permission Groups**: Logical grouping of related permissions -- **Audit Logging**: Track role and permission changes -- **Role Templates**: Predefined role configurations - -### Extension Points +## API Endpoints -The system is designed to be extensible: +### Role Management +- `GET /api/roles` - List all roles +- `POST /api/roles` - Create new role +- `PUT /api/roles/:id` - Update role +- `DELETE /api/roles/:id` - Delete role -- Add new permissions by updating the `AVAILABLE_PERMISSIONS` array -- Create custom middleware for complex permission logic -- Implement role-based UI components in the frontend -- Add role-specific business logic in services +### User Role Assignment +- `PUT /api/users/:id/role` - Assign role to user +- `GET /api/users/me` - Get current user with role info ## Best Practices -### Role Design - -- Keep roles focused and specific -- Use descriptive names and descriptions -- Group related permissions logically -- Avoid overly broad permissions - -### 
Permission Naming - -- Use dot notation for hierarchy (`users.edit`, `content.moderate`) -- Be specific about the action (`view`, `edit`, `delete`, `create`) -- Use consistent naming patterns - -### Security - -- Always check permissions at the API level -- Don't rely solely on frontend permission checks -- Regularly audit role assignments -- Monitor for privilege escalation attempts +- **Single Source**: Always define roles in `permissions/index.ts` +- **Descriptive Names**: Use clear permission names with dot notation (`users.edit`, `content.moderate`) +- **Minimal Permissions**: Give roles only the permissions they need +- **System Roles**: Mark core roles as `is_system_role: true` to prevent deletion +- **Testing**: Test permission changes in development before deploying -### Performance +## Security Notes -- Permission checks are lightweight but avoid excessive calls -- Consider caching user roles for high-frequency operations -- Use middleware for route-level protection -- Batch permission checks when possible +- System roles cannot be modified or deleted via API +- Users cannot change their own roles +- The last global administrator cannot be deleted +- All permission checks happen server-side diff --git a/docs/deploystack/development/frontend/event-bus.mdx b/docs/deploystack/development/frontend/event-bus.mdx index c82dea3..5126e5d 100644 --- a/docs/deploystack/development/frontend/event-bus.mdx +++ b/docs/deploystack/development/frontend/event-bus.mdx @@ -47,9 +47,31 @@ export type EventBusEvents = { 'user-profile-updated': void 'mcp-server-deployed': { serverId: string; status: string } 'notification-show': { message: string; type: 'success' | 'error' | 'warning' } + 'storage-changed': { key: string; oldValue: any; newValue: any } } ``` +## Storage Integration + +The event bus includes built-in storage capabilities for persistent state management. When you use storage methods, they automatically emit events for reactive updates. 
+ +### Storage Events + +The storage system emits `storage-changed` events whenever data is modified: + +```typescript +// Automatically emitted when using storage methods +eventBus.setState('selected_team_id', 'team-123') +// Emits: { key: 'selected_team_id', oldValue: null, newValue: 'team-123' } + +// Listen for storage changes +eventBus.on('storage-changed', (data) => { + console.log(`Storage key "${data.key}" changed from ${data.oldValue} to ${data.newValue}`) +}) +``` + +> **📖 For detailed storage usage, see [Frontend Storage System](./storage)** + ## Usage ### Basic Implementation @@ -593,4 +615,8 @@ if (import.meta.env.DEV) { } ``` +## Related Documentation + +- **[Frontend Storage System](/deploystack/development/frontend/storage)** - Persistent state management with automatic event emission + The global event bus system provides a powerful and type-safe way to handle cross-component communication in the DeployStack frontend, enabling immediate updates and better user experience. diff --git a/docs/deploystack/development/frontend/global-settings.mdx b/docs/deploystack/development/frontend/global-settings.mdx new file mode 100644 index 0000000..ec39059 --- /dev/null +++ b/docs/deploystack/development/frontend/global-settings.mdx @@ -0,0 +1,620 @@ +--- +title: Global Settings Frontend Integration +description: Complete guide to the flexible global settings component system for creating custom setting interfaces with connection testing and validation. +sidebar: Global Settings +--- + +# Global Settings Frontend Integration + +The DeployStack frontend provides a flexible component system for global settings that allows developers to create custom interfaces for specific setting groups. This system enables rich functionality like connection testing, custom validation, and specialized UI components while maintaining consistency with the overall design system. 
+ +## Architecture Overview + +The global settings system uses a **component registry pattern** that allows custom Vue components to be registered for specific setting groups. When a user navigates to a settings group, the system checks if a custom component is registered and uses it instead of the default form renderer. + +``` +Global Settings Flow +├── User navigates to /admin/settings/{groupId} +├── System checks component registry +├── Custom component found? +│ ├── Yes → Render custom component +│ └── No → Render standard form +└── Component handles form state, validation, and API calls +``` + +## Key Components + +### 1. Component Registry (`useSettingsComponentRegistry`) + +The registry manages the mapping between setting group IDs and their custom components. + +```typescript +// Register a component for a specific group +registerSettingsComponent('github-app', { + component: GitHubAppSettings, + description: 'GitHub App configuration with connection testing', + author: 'DeployStack Team', + version: '1.0.0' +}) + +// Check if a component is registered +const hasCustom = hasCustomComponent('github-app') + +// Get the registered component +const componentDef = getSettingsComponent('github-app') +``` + +### 2. Settings Form Composable (`useSettingsForm`) + +Provides common form functionality for settings components. + +```typescript +const { + formValues, // Reactive form values + isSaving, // Save state + hasChanges, // Dirty state tracking + saveForm, // Save function + updateField, // Update individual fields + getFieldError // Get validation errors +} = useSettingsForm(settings) +``` + +### 3. Connection Test Composable (`useConnectionTest`) + +Handles connection testing functionality for external services. 
+ +```typescript +const { + isTestingConnection, // Test state + lastTestResult, // Last test result + testConnection, // Generic test function + testGitHubAppConnection, // Specific test functions + getStatusMessage // Helper for UI +} = useConnectionTest() +``` + +## Creating Custom Setting Components + +### Step 1: Create the Component + +Create a new Vue component in `src/components/settings/`: + +```vue + + + + +``` + +### Step 2: Register the Component + +Add your component to the registration file: + +```typescript +// src/components/settings/index.ts +import { registerSettingsComponent } from '@/composables/useSettingsComponentRegistry' +import GitHubAppSettings from './GitHubAppSettings.vue' +import MyServiceSettings from './MyServiceSettings.vue' // Add your component + +export function registerSettingsComponents() { + // Existing registrations + registerSettingsComponent('github-app', { + component: GitHubAppSettings, + description: 'GitHub App configuration with connection testing', + author: 'DeployStack Team', + version: '1.0.0' + }) + + // Register your new component + registerSettingsComponent('myservice', { + component: MyServiceSettings, + description: 'My Service configuration with connection testing', + author: 'Your Name', + version: '1.0.0' + }) +} +``` + +### Step 3: Component Props and Events + +Your component must implement the required props and events: + +```typescript +// Required Props +interface SettingsComponentProps { + group: GlobalSettingGroup // The settings group metadata + settings: Setting[] // Array of settings for this group +} + +// Required Events +interface SettingsComponentEvents { + 'settings-updated': [settings: Setting[]] // Emitted when settings are saved + 'validation-error': [errors: Record] // Emitted on validation errors + 'connection-tested': [result: { success: boolean; message: string }] // Emitted after connection tests +} +``` + +## Advanced Patterns + +### Custom Validation + +Add custom validation logic to 
your components: + +```typescript +// In your component +const { + formValues, + saveForm, + // ... other form methods +} = useSettingsForm(props.settings, { + onValidate: (values) => { + const errors: ValidationError[] = [] + + // Custom validation logic + if (!values['myservice.api_key']) { + errors.push({ + field: 'myservice.api_key', + message: 'API key is required' + }) + } + + if (values['myservice.endpoint'] && !isValidUrl(values['myservice.endpoint'])) { + errors.push({ + field: 'myservice.endpoint', + message: 'Please enter a valid URL' + }) + } + + return errors + } +}) + +function isValidUrl(url: string): boolean { + try { + new URL(url) + return true + } catch { + return false + } +} +``` + +### Custom Connection Testing + +Implement service-specific connection testing: + +```typescript +// In your component +async function testMyServiceConnection() { + const credentials = { + api_key: String(formValues.value['myservice.api_key']), + endpoint: String(formValues.value['myservice.endpoint']), + timeout: 10000 + } + + try { + const result = await testConnection('myservice', credentials, { + endpoint: '/api/settings/test-connection/myservice', + timeout: 15000, + retries: 2 + }) + + // Handle successful test + if (result.success) { + // Show additional success information + console.log('Service details:', result.details) + } + + return result + } catch (error) { + console.error('Connection test failed:', error) + throw error + } +} +``` + +### Multi-Step Configuration + +Create complex multi-step configuration flows: + +```vue + + + +``` + +## Integration with Existing Systems + +### Design System +Follow the established [UI Design System](/deploystack/development/frontend/ui-design-system) patterns. Use shadcn-vue components and maintain consistency with the overall design. + +### Internationalization +Add i18n support following the [Internationalization Guide](/deploystack/development/frontend/internationalization). 
Create dedicated translation files for your settings components. + +### Event Bus +Use the [Global Event Bus](/deploystack/development/frontend/event-bus) for cross-component communication when settings are updated. + +## Testing Custom Components + +### Unit Testing + +```typescript +// tests/components/settings/MyServiceSettings.test.ts +import { describe, it, expect, vi } from 'vitest' +import { mount } from '@vue/test-utils' +import MyServiceSettings from '@/components/settings/MyServiceSettings.vue' + +describe('MyServiceSettings', () => { + const mockSettings = [ + { + key: 'myservice.api_key', + value: '', + type: 'string', + description: 'API Key', + is_encrypted: true + } + ] + + const mockGroup = { + id: 'myservice', + name: 'My Service', + description: 'Service configuration' + } + + it('renders form fields correctly', () => { + const wrapper = mount(MyServiceSettings, { + props: { + group: mockGroup, + settings: mockSettings + } + }) + + expect(wrapper.find('input[type="password"]').exists()).toBe(true) + expect(wrapper.text()).toContain('API Key') + }) + + it('emits settings-updated on save', async () => { + const wrapper = mount(MyServiceSettings, { + props: { + group: mockGroup, + settings: mockSettings + } + }) + + // Simulate form submission + await wrapper.find('form').trigger('submit') + + expect(wrapper.emitted('settings-updated')).toBeTruthy() + }) + + it('disables test button when fields are empty', () => { + const wrapper = mount(MyServiceSettings, { + props: { + group: mockGroup, + settings: mockSettings + } + }) + + const testButton = wrapper.find('[data-testid="test-connection"]') + expect(testButton.attributes('disabled')).toBeDefined() + }) +}) +``` + +### Integration Testing + +```typescript +// tests/integration/settings.test.ts +import { describe, it, expect } from 'vitest' +import { mount } from '@vue/test-utils' +import { createRouter, createWebHistory } from 'vue-router' +import GlobalSettings from 
'@/views/admin/GlobalSettings.vue' + +describe('Global Settings Integration', () => { + it('loads custom component for registered group', async () => { + // Register test component + registerSettingsComponent('test-service', { + component: TestServiceSettings + }) + + const router = createRouter({ + history: createWebHistory(), + routes: [ + { path: '/admin/settings/:groupId', component: GlobalSettings } + ] + }) + + await router.push('/admin/settings/test-service') + + const wrapper = mount(GlobalSettings, { + global: { + plugins: [router] + } + }) + + // Should render custom component + expect(wrapper.findComponent(TestServiceSettings).exists()).toBe(true) + }) +}) +``` diff --git a/docs/deploystack/development/frontend/index.mdx b/docs/deploystack/development/frontend/index.mdx index 4af4296..78406da 100644 --- a/docs/deploystack/development/frontend/index.mdx +++ b/docs/deploystack/development/frontend/index.mdx @@ -240,71 +240,7 @@ const props = defineProps() ## UI Components and Styling -### TailwindCSS Integration - -The frontend uses TailwindCSS for styling with the shadcn-vue component library for consistent UI elements. - -#### Installing New shadcn-vue Components - -```bash -npx shadcn-vue@latest add button -npx shadcn-vue@latest add input -npx shadcn-vue@latest add dialog -``` - -#### Custom Component Example - -```vue - -``` - -### Icons - -The project uses Lucide Icons through the `lucide-vue-next` package. - -#### Using Icons - -```vue - - - -``` +The frontend uses **TailwindCSS** for styling with **shadcn-vue** component library for consistent UI elements. For comprehensive styling guidelines, component patterns, and design standards, see the [UI Design System](/deploystack/development/frontend/ui-design-system) documentation. 
## Environment Configuration @@ -341,7 +277,11 @@ const allEnvVars = getAllEnv() ### Service Layer Pattern -The frontend uses a service layer pattern for API communication: +**IMPORTANT**: The frontend uses a service layer pattern with direct `fetch()` calls for API communication. This is the established pattern and must be followed for consistency. + +#### ✅ Required Pattern - Direct Fetch Calls + +All API services must use direct `fetch()` calls instead of API client libraries. This ensures consistency across the codebase and simplifies maintenance. ```typescript // services/mcpServerService.ts @@ -374,6 +314,29 @@ export class McpServerService { } ``` +#### ❌ Avoid - API Client Libraries + +Do not use API client libraries like Axios, or custom API client wrappers: + +```typescript +// DON'T DO THIS +import axios from 'axios' +import { apiClient } from '@/utils/apiClient' + +// Avoid these patterns +const response = await axios.get('/api/servers') +const data = await apiClient.get('/api/servers') +``` + +#### Service Layer Guidelines + +1. **Use Static Classes**: All service methods should be static +2. **Direct Fetch**: Always use native `fetch()` API +3. **Error Handling**: Throw meaningful errors for failed requests +4. **Type Safety**: Define proper TypeScript interfaces for requests/responses +5. **Consistent Naming**: Use descriptive method names (e.g., `getAllServers`, `createCategory`) +6. 
**Base URL**: Always use environment variables for API endpoints + ### Using Services in Components ```vue @@ -410,11 +373,11 @@ onMounted(() => { Continue reading the detailed guides: +- [UI Design System](/deploystack/development/frontend/ui-design-system) - Component patterns, styling guidelines, and design standards - [Environment Variables](/deploystack/development/frontend/environment-variables) - Complete environment configuration guide - [Global Event Bus](/deploystack/development/frontend/event-bus) - Cross-component communication system - [Internationalization (i18n)](/deploystack/development/frontend/internationalization) - Multi-language support - [Plugin System](/deploystack/development/frontend/plugins) - Extending functionality -- [Router Optimization](/deploystack/development/frontend/router-optimization) - Performance improvements ## Docker Development @@ -442,19 +405,3 @@ docker run -d -p 8080:80 \ -e VITE_APP_TITLE="DeployStack" \ deploystack/frontend:latest ``` - -## Troubleshooting - -### Common Issues - -1. **Build failures**: Check Node.js and npm versions -2. **API connection issues**: Verify `VITE_API_URL` environment variable -3. **Styling issues**: Ensure TailwindCSS is properly configured -4. **TypeScript errors**: Run `npm run lint` to check for issues - -### Development Tips - -- Use Vue DevTools browser extension for debugging -- Enable TypeScript strict mode in `tsconfig.json` -- Use ESLint and Prettier for code consistency -- Test components in isolation when possible diff --git a/docs/deploystack/development/frontend/storage.mdx b/docs/deploystack/development/frontend/storage.mdx new file mode 100644 index 0000000..7aaa775 --- /dev/null +++ b/docs/deploystack/development/frontend/storage.mdx @@ -0,0 +1,468 @@ +--- +title: Frontend Storage System +description: Complete guide to using the enhanced event bus storage system for persistent data management in the DeployStack frontend. 
+sidebar: Storage
+---
+
+# Frontend Storage System
+
+The storage system is built into the [global event bus](/deploystack/development/frontend/event-bus) and provides persistent data management across route changes and browser sessions. This system uses localStorage with a type-safe API and automatically emits events when data changes.
+
+> **📖 For event bus fundamentals, see [Global Event Bus](/deploystack/development/frontend/event-bus)**
+
+## Overview
+
+The storage system solves common frontend challenges such as:
+- **Persistent State**: Maintain application state across route changes and page refreshes
+- **Type Safety**: Full TypeScript support with generic methods
+- **Easy Integration**: Simple API that works with the existing event bus
+- **Automatic Cleanup**: Consistent storage key management with prefixing
+- **Event Integration**: Storage changes emit events for reactive updates
+
+## Architecture
+
+### Storage Configuration
+
+The storage system is built into the event bus and uses a centralized configuration:
+
+```typescript
+// Storage configuration in useEventBus.ts
+const STORAGE_CONFIG = {
+  prefix: 'deploystack_',
+  keys: {
+    SELECTED_TEAM_ID: 'selected_team_id',
+    // Add new keys here as needed
+  }
+}
+```
+
+### Type Safety
+
+All storage operations are type-safe using TypeScript generics:
+
+```typescript
+// Generic storage methods
+setState<T>(key: string, value: T): void
+getState<T>(key: string, defaultValue?: T): T | null
+clearState(key: string): void
+hasState(key: string): boolean
+```
+
+## Usage
+
+### Basic Storage Operations
+
+#### Storing Data
+
+```typescript
+import { useEventBus } from '@/composables/useEventBus'
+
+const eventBus = useEventBus()
+
+// Store a string
+eventBus.setState('selected_team_id', 'team-123')
+
+// Store an object
+eventBus.setState('user_preferences', {
+  theme: 'dark',
+  language: 'en',
+  notifications: true
+})
+
+// Store an array
+eventBus.setState('recent_searches', ['query1', 'query2',
'query3']) + +// Store a boolean +eventBus.setState('sidebar_collapsed', true) +``` + +#### Retrieving Data + +```typescript +// Get data with type safety +const teamId = eventBus.getState('selected_team_id') + +// Get data with default value +const theme = eventBus.getState('selected_theme', 'light') + +// Get complex objects +interface UserPreferences { + theme: string + language: string + notifications: boolean +} + +const preferences = eventBus.getState('user_preferences') +``` + +#### Checking and Clearing Data + +```typescript +// Check if data exists +if (eventBus.hasState('selected_team_id')) { + console.log('Team selection exists') +} + +// Clear specific data +eventBus.clearState('selected_team_id') + +// Get all stored data +const allData = eventBus.getAllState() +console.log('All stored data:', allData) + +// Clear all stored data +eventBus.clearAllState() +``` + +## Adding New Storage Values + +### Step 1: Add to Configuration (Optional) + +For better organization, add your new storage key to the configuration: + +```typescript +// In /composables/useEventBus.ts +const STORAGE_CONFIG = { + prefix: 'deploystack_', + keys: { + SELECTED_TEAM_ID: 'selected_team_id', + SELECTED_THEME: 'selected_theme', // NEW + USER_DASHBOARD_LAYOUT: 'dashboard_layout', // NEW + RECENT_SEARCHES: 'recent_searches', // NEW + } +} +``` + +### Step 2: Use in Components + +```typescript +// In any Vue component + +``` + +### Step 3: Listen for Storage Changes (Optional) + +```typescript +// Listen for storage change events +eventBus.on('storage-changed', (data) => { + console.log(`Storage changed: ${data.key}`, { + oldValue: data.oldValue, + newValue: data.newValue + }) +}) +``` + +## Real-World Examples + +### Example 1: Theme Persistence + +```typescript +// ThemeManager.vue + +``` + +### Example 2: Dashboard Layout Persistence + +```typescript +// DashboardLayout.vue + +``` + +### Example 3: Search History + +```typescript +// SearchComponent.vue + +``` + +## Best Practices + 
+### 1. Use Descriptive Keys
+
+```typescript
+// Good
+eventBus.setState('selected_team_id', teamId)
+eventBus.setState('user_dashboard_layout', layout)
+eventBus.setState('notification_preferences', prefs)
+
+// Avoid
+eventBus.setState('data', someData)
+eventBus.setState('temp', tempValue)
+eventBus.setState('x', value)
+```
+
+### 2. Provide Default Values
+
+```typescript
+// Good - provides fallback
+const theme = eventBus.getState('selected_theme', 'light')
+const layout = eventBus.getState('dashboard_layout', defaultLayout)
+
+// Less robust - might return null
+const theme = eventBus.getState('selected_theme')
+```
+
+### 3. Use Type Safety
+
+```typescript
+// Good - type-safe
+interface UserPreferences {
+  theme: 'light' | 'dark'
+  language: string
+  notifications: boolean
+}
+
+const prefs = eventBus.getState<UserPreferences>('user_preferences')
+
+// Less safe - no type checking
+const prefs = eventBus.getState('user_preferences')
+```
+
+### 4. Handle Storage Errors Gracefully
+
+```typescript
+// The storage system handles errors internally, but you can add extra validation
+const getStoredTeamId = (): string | null => {
+  try {
+    const teamId = eventBus.getState<string>('selected_team_id')
+
+    // Additional validation
+    if (teamId && teamId.length > 0) {
+      return teamId
+    }
+
+    return null
+  } catch (error) {
+    console.warn('Failed to get stored team ID:', error)
+    return null
+  }
+}
+```
+
+### 5. Clean Up When Appropriate
+
+```typescript
+// Clear storage when user logs out
+const logout = () => {
+  // Clear user-specific data
+  eventBus.clearState('selected_team_id')
+  eventBus.clearState('user_preferences')
+  eventBus.clearState('dashboard_layout')
+
+  // Or clear everything
+  eventBus.clearAllState()
+
+  // Proceed with logout...
+} +``` + +## Storage Events + +The storage system emits events when data changes, allowing for reactive updates: + +```typescript +// Listen for any storage changes +eventBus.on('storage-changed', (data) => { + console.log(`Storage key "${data.key}" changed:`, { + from: data.oldValue, + to: data.newValue + }) +}) + +// React to specific storage changes +eventBus.on('storage-changed', (data) => { + if (data.key === 'selected_theme') { + applyTheme(data.newValue) + } +}) +``` + +## Technical Details + +### Storage Implementation + +- **Prefix**: All keys are prefixed with `deploystack_` to avoid conflicts +- **Serialization**: Data is stored as JSON strings using safe parsing +- **Error Handling**: All storage operations include try-catch blocks +- **Type Safety**: Generic methods provide compile-time type checking +- **Event Integration**: Storage changes emit `storage-changed` events + +### Browser Compatibility + +The storage system uses `localStorage`, which is supported in all modern browsers. The system gracefully handles storage errors (e.g., when localStorage is disabled or full). 
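The prefixing, JSON serialization, and safe-parsing behavior described above can be sketched as follows. This is an illustrative reimplementation, not the actual `useEventBus.ts` code, and a `Map` stands in for `window.localStorage` so the sketch runs outside the browser:

```typescript
// Minimal sketch of the storage logic: prefixed keys, JSON values,
// and try/catch around every operation. Names mirror the documented
// API but the backing store here is an in-memory Map.
const PREFIX = 'deploystack_'
const backend = new Map<string, string>()

function setState<T>(key: string, value: T): void {
  try {
    backend.set(PREFIX + key, JSON.stringify(value))
  } catch (err) {
    console.warn(`Failed to persist "${key}":`, err)
  }
}

function getState<T>(key: string, defaultValue?: T): T | null {
  const raw = backend.get(PREFIX + key)
  if (raw === undefined) return defaultValue ?? null
  try {
    return JSON.parse(raw) as T
  } catch {
    return defaultValue ?? null
  }
}

function clearState(key: string): void {
  backend.delete(PREFIX + key)
}

setState('selected_theme', 'dark')
console.log(getState<string>('selected_theme'))        // 'dark'
console.log(getState<string>('missing_key', 'light'))  // 'light'
```

Swapping `backend` for `window.localStorage` (plus the `storage-changed` event emission) yields the behavior the real composable provides.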
+
+### Performance Considerations
+
+- **Synchronous Operations**: localStorage operations are synchronous but fast
+- **JSON Serialization**: Large objects may impact performance during serialization
+- **Storage Limits**: localStorage typically has a 5-10MB limit per domain
+- **Event Frequency**: Storage change events are emitted for every setState/clearState call
+
+## Migration Guide
+
+### From Component State to Storage
+
+**Before:**
+```typescript
+// Component-level state
+const selectedTeam = ref<Team | null>(null)
+
+onMounted(async () => {
+  // Initialize from API or default
+  selectedTeam.value = await getDefaultTeam()
+})
+```
+
+**After:**
+```typescript
+// Storage-backed state
+const selectedTeam = ref<Team | null>(null)
+
+onMounted(async () => {
+  // Initialize from storage with fallback
+  const storedTeamId = eventBus.getState<string>('selected_team_id')
+  if (storedTeamId) {
+    selectedTeam.value = await getTeamById(storedTeamId)
+  } else {
+    const defaultTeam = await getDefaultTeam()
+    selectedTeam.value = defaultTeam
+    eventBus.setState('selected_team_id', defaultTeam.id)
+  }
+})
+
+// Update storage when state changes
+const selectTeam = (team: Team) => {
+  selectedTeam.value = team
+  eventBus.setState('selected_team_id', team.id)
+}
+```
+
+## Related Documentation
+
+- **[Global Event Bus](/deploystack/development/frontend/event-bus)** - Core event system that powers storage
+
+The enhanced event bus storage system provides a powerful, type-safe way to manage persistent state in the DeployStack frontend, making it easy to maintain user preferences and application state across sessions.
diff --git a/docs/deploystack/development/frontend/ui-design-system-pagination.mdx b/docs/deploystack/development/frontend/ui-design-system-pagination.mdx
new file mode 100644
index 0000000..6ae9ad4
--- /dev/null
+++ b/docs/deploystack/development/frontend/ui-design-system-pagination.mdx
@@ -0,0 +1,216 @@
+---
+title: Frontend Pagination Implementation Guide
+description: Developer guide for implementing pagination in DeployStack frontend using the PaginationControls component.
+---
+
+# Frontend Pagination Implementation Guide
+
+This guide shows developers how to add pagination to any data table in the DeployStack frontend.
+
+## Quick Implementation
+
+### 1. Service Layer
+
+Add pagination support to your service:
+
+```typescript
+// services/yourService.ts
+export interface PaginationParams {
+  limit?: number
+  offset?: number
+}
+
+export interface PaginationMeta {
+  total: number
+  limit: number
+  offset: number
+  has_more: boolean
+}
+
+export interface PaginatedResponse<T> {
+  items: T[]
+  pagination: PaginationMeta
+}
+
+static async getItemsPaginated(
+  filters?: ItemFilters,
+  pagination?: PaginationParams
+): Promise<PaginatedResponse<Item>> {
+  const url = new URL(`${this.baseUrl}/api/items`)
+
+  // Add filters and pagination params
+  if (filters) {
+    Object.entries(filters).forEach(([key, value]) => {
+      if (value !== undefined) url.searchParams.append(key, String(value))
+    })
+  }
+
+  if (pagination) {
+    if (pagination.limit) url.searchParams.append('limit', String(pagination.limit))
+    if (pagination.offset) url.searchParams.append('offset', String(pagination.offset))
+  }
+
+  const response = await fetch(url.toString(), {
+    method: 'GET',
+    credentials: 'include',
+    headers: { 'Content-Type': 'application/json' }
+  })
+
+  const data = await response.json()
+
+  return {
+    items: data.data.items,
+    pagination: data.data.pagination
+  }
+}
+```
+
+### 2. Component Implementation
+
+```vue
+
+
+
+```
+
+### 3.
Add Translations + +Add to your i18n file (e.g., `i18n/locales/en/yourFeature.ts`): + +```typescript +pagination: { + showing: 'Showing {start} to {end} of {total} items', + noItems: 'No items to display', + itemsPerPage: 'Items per page:', + pageInfo: 'Page {current} of {total}', + previous: 'Previous', + next: 'Next' +} +``` + +## PaginationControls Component + +### Props +- `currentPage: number` - Current page number (1-based) +- `pageSize: number` - Items per page +- `totalItems: number` - Total number of items +- `isLoading?: boolean` - Loading state +- `pageSizeOptions?: number[]` - Available page sizes (default: [10, 20, 50, 100]) + +### Events +- `@page-change(page: number)` - Emitted when page changes +- `@page-size-change(pageSize: number)` - Emitted when page size changes + +## shadcn-vue Components Used + +The `PaginationControls` component uses these shadcn-vue components: +- `Button` - For Previous/Next navigation +- `Select`, `SelectContent`, `SelectItem`, `SelectTrigger`, `SelectValue` - For page size selector +- Lucide icons: `ChevronLeft`, `ChevronRight` + +## Search Integration + +For search functionality, conditionally show pagination: + +```vue + +``` + +## Backend Requirements + +Your backend API must support these query parameters: +- `limit` - Number of items per page (1-100) +- `offset` - Number of items to skip + +And return this response format: +```json +{ + "success": true, + "data": { + "items": [...], + "pagination": { + "total": 150, + "limit": 20, + "offset": 40, + "has_more": true + } + } +} +``` diff --git a/docs/deploystack/development/frontend/ui-design-system-table.mdx b/docs/deploystack/development/frontend/ui-design-system-table.mdx new file mode 100644 index 0000000..afbc8b9 --- /dev/null +++ b/docs/deploystack/development/frontend/ui-design-system-table.mdx @@ -0,0 +1,379 @@ +--- +title: Table Design System +description: Developer guide for implementing data tables in DeployStack frontend using shadcn-vue Table components. 
+--- + +# Table Design System + +This guide shows developers how to implement consistent, accessible data tables in the DeployStack frontend. + +## Quick Implementation + +### Basic Table Structure + +```vue + + + +``` + +## shadcn-vue Table Components + +### Required Components +```vue +import { + Table, + TableBody, + TableCell, + TableHead, + TableHeader, + TableRow, +} from '@/components/ui/table' +``` + +### Component Structure +- `Table` - Main table wrapper +- `TableHeader` - Table header section +- `TableBody` - Table body section +- `TableRow` - Table row (for both header and body) +- `TableHead` - Header cell +- `TableCell` - Data cell + +## Design Patterns + +### 1. Container Structure +```vue +
+<div class="rounded-md border">
+  <Table>
+    <!-- Table header and body -->
+  </Table>
+</div>
+```
+
+### 2. Header Pattern
+```vue
+<TableHeader>
+  <TableRow>
+    <TableHead>Column Name</TableHead>
+    <TableHead>Another Column</TableHead>
+    <TableHead class="text-right">Actions</TableHead>
+  </TableRow>
+</TableHeader>
+```
+
+### 3. Empty State Handling
+```vue
+<TableBody>
+  <TableRow v-if="items.length === 0">
+    <TableCell :colspan="4" class="h-24 text-center text-muted-foreground">
+      {{ t('table.noData') }}
+    </TableCell>
+  </TableRow>
+</TableBody>
+```
+
+### 4. Data Cell Patterns
+
+**Primary Content (Names, Titles):**
+```vue
+<TableCell class="font-medium">
+  {{ item.name }}
+</TableCell>
+```
+
+**Secondary Content (Descriptions, Metadata):**
+```vue
+<TableCell class="text-muted-foreground">
+  <span v-if="item.description">
+    {{ item.description }}
+  </span>
+  <span v-else class="italic">
+    {{ t('table.noDescription') }}
+  </span>
+</TableCell>
+```
+
+**Status Indicators:**
+```vue
+<Badge variant="default">
+  {{ item.status }}
+</Badge>
+```
+
+**Dates and Timestamps:**
+```vue
+<TableCell class="text-muted-foreground">
+  {{ formatDate(item.created_at) }}
+</TableCell>
+```
+
+## Action Menu Pattern
+
+For table actions, use DropdownMenu with AlertDialog for destructive actions:
+
+```vue
+<DropdownMenu>
+  <DropdownMenuTrigger as-child>
+    <Button variant="ghost" size="icon">
+      <MoreHorizontal class="h-4 w-4" />
+    </Button>
+  </DropdownMenuTrigger>
+  <DropdownMenuContent align="end">
+    <!-- Edit item, plus an AlertDialog-wrapped delete action -->
+  </DropdownMenuContent>
+</DropdownMenu>
+```
+
+## Badge Patterns for Tables
+
+### Status Badges
+```vue
+<Badge variant="default">Active</Badge>
+<Badge variant="secondary">Inactive</Badge>
+<Badge variant="destructive">Error</Badge>
+<Badge variant="outline">Pending</Badge>
+```
+
+### Category/Tag Badges
+```vue
+<Badge variant="outline">
+  {{ category.icon }}
+</Badge>
+```
+
+### Numeric Badges
+```vue
+<Badge variant="secondary">
+  {{ item.sort_order }}
+</Badge>
+```
+
+## Migration from Raw HTML
+
+### ❌ Deprecated Pattern - Raw HTML Tables
+```vue
+<table class="min-w-full">
+  <thead>
+    <tr>
+      <th>Name</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>{{ item.name }}</td>
+    </tr>
+  </tbody>
+</table>
+```
+
+### ✅ Preferred Pattern - shadcn-vue Components
+```vue
+<Table>
+  <TableHeader>
+    <TableRow>
+      <TableHead>Name</TableHead>
+    </TableRow>
+  </TableHeader>
+  <TableBody>
+    <TableRow>
+      <TableCell>{{ item.name }}</TableCell>
+    </TableRow>
+  </TableBody>
+</Table>
+```
+
+## Migration Steps
+
+1. **Replace HTML elements** with shadcn-vue components:
+   - `<table>` → `<Table>`
+   - `<thead>` → `<TableHeader>`
+   - `<tbody>` → `<TableBody>`
+   - `<tr>` → `<TableRow>`
+   - `<th>` → `<TableHead>`
+   - `<td>` → `<TableCell>`
+
+2. **Update imports**:
+   ```vue
+   import {
+     Table,
+     TableBody,
+     TableCell,
+     TableHead,
+     TableHeader,
+     TableRow,
+   } from '@/components/ui/table'
+   ```
+
+3. **Add proper empty state handling**
+4. **Update action menus** to use AlertDialog for destructive actions
+5. **Ensure proper badge usage** for status indicators
+
+## Required Translations
+
+Add to your i18n file:
+
+```typescript
+table: {
+  noData: 'No data available',
+  noDescription: 'No description provided',
+  openMenu: 'Open menu',
+  columns: {
+    name: 'Name',
+    description: 'Description',
+    status: 'Status',
+    created: 'Created',
+    actions: 'Actions'
+  },
+  actions: {
+    edit: 'Edit',
+    delete: 'Delete',
+    view: 'View Details'
+  }
+},
+deleteDialog: {
+  title: 'Delete Item',
+  description: 'Are you sure you want to delete "{itemName}"? This action cannot be undone.',
+  cancel: 'Cancel',
+  confirm: 'Delete'
+}
+```
diff --git a/docs/deploystack/development/frontend/ui-design-system.mdx b/docs/deploystack/development/frontend/ui-design-system.mdx
new file mode 100644
index 0000000..4c9c705
--- /dev/null
+++ b/docs/deploystack/development/frontend/ui-design-system.mdx
@@ -0,0 +1,312 @@
+---
+title: UI Design System
+description: Comprehensive guide to UI components, styling patterns, and design standards for the DeployStack frontend.
+sidebar: UI & Design
+---
+
+# UI Design System
+
+This document establishes the official UI design patterns and component standards for the DeployStack frontend. All new components and pages must follow these guidelines to ensure consistency and maintainability.
+
+## Design Principles
+
+- **Consistency**: Use established patterns and components
+- **Accessibility**: Follow WCAG guidelines and semantic HTML
+- **Responsiveness**: Design for all screen sizes
+- **Performance**: Optimize for fast loading and smooth interactions
+- **Maintainability**: Write clean, reusable component code
+
+## Data Tables
+
+For data table implementation, see the dedicated [Table Design System](/deploystack/development/frontend/ui-design-system-table) guide.
+
+For pagination implementation, see the [Pagination Implementation Guide](/deploystack/development/frontend/ui-design-system-pagination).
+
+## Badge Design Patterns
+
+Badges are used for status indicators, categories, and metadata.
+
+### Status Badges
+```html
+<Badge variant="default">Active</Badge>
+<Badge variant="secondary">Inactive</Badge>
+<Badge variant="destructive">Error</Badge>
+<Badge variant="outline">Pending</Badge>
+```
+
+### Category/Tag Badges
+```html
+<Badge variant="outline">
+  {{ category.icon }}
+</Badge>
+```
+
+### Numeric Badges
+```html
+<Badge variant="secondary">
+  {{ item.sort_order }}
+</Badge>
+```
+
+## Form Design Patterns
+
+### Modal Forms
+Use `AlertDialog` for forms in modals:
+
+```html
+<AlertDialog v-model:open="isModalOpen">
+  <AlertDialogContent>
+    <AlertDialogHeader>
+      <AlertDialogTitle>
+        {{ modalTitle }}
+      </AlertDialogTitle>
+      <AlertDialogDescription>
+        {{ modalDescription }}
+      </AlertDialogDescription>
+    </AlertDialogHeader>
+
+    <form @submit.prevent="handleSubmit" class="space-y-4">
+      <div class="space-y-2">
+        <Label for="name">{{ t('form.name') }}</Label>
+        <Input id="name" v-model="form.name" />
+        <div v-if="errors.name" class="text-sm text-destructive">
+          {{ errors.name }}
+        </div>
+      </div>
+
+      <AlertDialogFooter>
+        <AlertDialogCancel type="button">
+          {{ t('actions.cancel') }}
+        </AlertDialogCancel>
+        <Button type="submit">
+          {{ t('actions.save') }}
+        </Button>
+      </AlertDialogFooter>
+    </form>
+  </AlertDialogContent>
+</AlertDialog>
+```
+
+### Form Field Pattern
+```html
+<div class="space-y-2">
+  <Label for="field">{{ fieldLabel }}</Label>
+  <Input id="field" v-model="form.field" />
+  <div v-if="errors.field" class="text-sm text-destructive">
+    {{ errors.field }}
+  </div>
+</div>
+```
+
+## Button Patterns
+
+### Primary Actions
+```html
+<Button @click="handleSave">{{ t('actions.save') }}</Button>
+```
+
+### Secondary Actions
+```html
+<Button variant="outline" @click="handleCancel">{{ t('actions.cancel') }}</Button>
+```
+
+### Destructive Actions
+```html
+<Button variant="destructive" @click="handleDelete">{{ t('actions.delete') }}</Button>
+```
+
+### Icon-Only Buttons
+```html
+<Button variant="ghost" size="icon">
+  <MoreHorizontal class="h-4 w-4" />
+  <span class="sr-only">{{ t('actions.openMenu') }}</span>
+</Button>
+```
+
+## Layout Patterns
+
+### Page Header
+```html
+<div class="flex items-center justify-between">
+  <div>
+    <h1 class="text-2xl font-bold tracking-tight">
+      {{ pageTitle }}
+    </h1>
+    <p class="text-muted-foreground">
+      {{ pageDescription }}
+    </p>
+  </div>
+  <Button @click="handleCreate">{{ t('actions.create') }}</Button>
+</div>
+```
+
+### Content Sections
+```html
+<div class="space-y-6">
+  <Alert v-if="successMessage">
+    <CheckCircle class="h-4 w-4" />
+    <AlertDescription>
+      {{ successMessage }}
+    </AlertDescription>
+  </Alert>
+
+  <div>
+    <!-- Main content -->
+  </div>
+</div>
+``` + +## Icon Usage + +### Standard Icon Sizes +- **Small icons**: `h-4 w-4` (16px) - for buttons, table actions +- **Medium icons**: `h-5 w-5` (20px) - for form fields, navigation +- **Large icons**: `h-6 w-6` (24px) - for page headers, prominent actions + +### Icon with Text +```html + +``` + +### Status Icons +```html + + + +``` + +## Responsive Design + +### Mobile-First Approach +```html +
+ +
+``` + +### Hide/Show on Different Screens +```html + +
Mobile only
+```
+
+## Accessibility Guidelines
+
+### Screen Reader Support
+```html
+<span class="sr-only">{{ t('actions.openMenu') }}</span>
+```
+
+### Proper Labels
+```html
+<Label for="email">{{ t('form.email') }}</Label>
+<Input id="email" type="email" v-model="form.email" />
+```
+
+### Focus Management
+```html
+<Button class="focus-visible:ring-2 focus-visible:ring-ring">{{ t('actions.save') }}</Button>
+```
+
+## Migration Guide
+
+### Updating Existing Tables
+
+If you have an existing table using raw HTML elements, follow these steps:
+
+1. **Replace HTML elements** with shadcn-vue components:
+   - `<table>` → `<Table>`
+   - `<thead>` → `<TableHeader>`
+   - `<tbody>` → `<TableBody>`
+   - `<tr>` → `<TableRow>`
+   - `<th>` → `<TableHead>`
+   - `<td>` → `<TableCell>`
+
+2. **Update imports**:
+   ```html
+   import {
+     Table,
+     TableBody,
+     TableCell,
+     TableHead,
+     TableHeader,
+     TableRow,
+   } from '@/components/ui/table'
+   ```
+
+3. **Add proper empty state handling**
+4. **Update action menus** to use AlertDialog for destructive actions
+5. **Ensure proper badge usage** for status indicators
+
+### Example Migration
+
+**Before (deprecated):**
+```html
+<table class="min-w-full">
+  <thead>
+    <tr>
+      <th>Name</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>{{ item.name }}</td>
+    </tr>
+  </tbody>
+</table>
+```
+
+**After (preferred):**
+```html
+<Table>
+  <TableHeader>
+    <TableRow>
+      <TableHead>Name</TableHead>
+    </TableRow>
+  </TableHeader>
+  <TableBody>
+    <TableRow>
+      <TableCell>{{ item.name }}</TableCell>
+    </TableRow>
+  </TableBody>
+</Table>
+``` diff --git a/docs/deploystack/github-application.mdx b/docs/deploystack/github-application.mdx new file mode 100644 index 0000000..f044773 --- /dev/null +++ b/docs/deploystack/github-application.mdx @@ -0,0 +1,160 @@ +--- +title: GitHub Application Integration +description: Understanding how DeployStack's GitHub integration works for MCP server creation and repository information extraction. +sidebar: GitHub Application +--- + +# GitHub Application Integration + +DeployStack provides seamless GitHub integration that automatically extracts repository information when creating MCP servers. This integration works in two modes to accommodate both local development and production environments. + +## How GitHub Integration Works + +### MCP Server Creation Process + +When you create an MCP server in DeployStack and provide a GitHub repository URL, the system automatically: + +1. **Fetches Repository Metadata**: Extracts the repository name, description, primary programming language, license information, and topics +2. **Analyzes Technical Details**: Determines the appropriate runtime environment (Node.js for TypeScript/JavaScript, Python for Python projects, etc.) +3. **Reads Package Information**: Scans package.json, pyproject.toml, or other package files to understand dependencies and installation requirements +4. **Generates Installation Methods**: Creates appropriate installation commands based on the detected package manager and project type +5. **Auto-populates Form Fields**: Pre-fills the MCP server creation form with all discovered information + +This automation saves significant time and ensures consistency when adding MCP servers from GitHub repositories. 
+ +### Smart Runtime Detection + +DeployStack intelligently maps programming languages to their corresponding runtime environments: + +- **TypeScript/JavaScript** → Node.js runtime +- **Python** → Python runtime +- **Go** → Go runtime +- **Rust** → Rust runtime +- **Java** → Java runtime +- **C#** → .NET runtime + +The system also detects minimum runtime versions from package files when available. + +## Two Authentication Modes + +DeployStack supports two GitHub integration modes to accommodate different use cases: + +### Local/Development Mode (No GitHub App Required) + +**When to use**: Perfect for local development, personal projects, and working with public repositories. + +**How it works**: +- Uses GitHub's public API without authentication +- Works immediately without any setup +- Accesses public repositories only +- Subject to GitHub's unauthenticated rate limits (60 requests per hour per IP address) + +**Benefits**: +- Zero configuration required +- Works out of the box +- Ideal for open-source MCP servers +- Perfect for local development and testing + +**Limitations**: +- Public repositories only +- Lower rate limits +- Cannot access private repositories +- May hit rate limits with heavy usage + +### Production/Enterprise Mode (GitHub App Required) + +**When to use**: Production environments, private repositories, high-volume usage, or enterprise deployments. 
+ +**How it works**: +- Uses GitHub App authentication with higher privileges +- Requires GitHub App creation and configuration +- Can access both public and private repositories +- Higher rate limits (5,000 requests per hour) + +**Benefits**: +- Access to private repositories +- Higher rate limits for production use +- Better for team environments +- More reliable for continuous integration + +**Requirements**: +- GitHub App must be created +- App credentials must be configured in DeployStack +- App must be installed on target repositories + +## Automatic Mode Selection + +DeployStack automatically chooses the appropriate authentication mode based on your configuration: + +- **GitHub App Disabled** (default): Uses public API mode for immediate functionality +- **GitHub App Enabled**: Uses authenticated mode for enhanced capabilities + +You can switch between modes at any time through the Global Settings without affecting existing MCP servers. + +## Setting Up GitHub App (Optional) + +If you need access to private repositories or higher rate limits, you can optionally configure a GitHub App: + +### Step 1: Create GitHub App + +1. **Navigate to GitHub**: Go to GitHub.com → Settings → Developer settings → GitHub Apps +2. **Create New App**: Click "New GitHub App" +3. **Configure Basic Information**: + - **App Name**: `DeployStack - [Your Organization]` + - **Homepage URL**: Your DeployStack instance URL + - **Webhook URL**: Not required for this integration + - **Repository permissions**: + - Contents: Read (to access repository files) + - Metadata: Read (to access repository information) + - Pull requests: Read (optional, for future features) + +### Step 2: Generate Credentials + +1. **Note the App ID**: Copy the numeric App ID from the app settings +2. **Generate Private Key**: Click "Generate a private key" and download the .pem file +3. **Install the App**: Install the app on your organization or specific repositories + +### Step 3: Configure DeployStack + +1. 
**Access Global Settings**: Navigate to Admin → Global Settings → GitHub App Configuration +2. **Enter Credentials**: + - **App ID**: The numeric ID from step 2 + - **Private Key**: Upload or paste the contents of the .pem file (will be automatically base64 encoded) + - **Installation ID**: Found in the app installation URL +3. **Enable Integration**: Toggle "Enable GitHub App integration" +4. **Test Connection**: Use the built-in connection test to verify setup + +### Step 4: Verify Setup + +1. **Test with Private Repository**: Try creating an MCP server from a private repository +2. **Check Rate Limits**: Monitor usage in the GitHub App settings +3. **Verify Permissions**: Ensure the app has access to required repositories + +## Repository Information Extracted + +When processing a GitHub repository, DeployStack extracts: + +### Basic Information +- Repository name and description +- Primary programming language +- License type (MIT, Apache, GPL, etc.) +- Homepage URL +- Repository topics/tags + +### Technical Details +- Runtime environment and version requirements +- Package dependencies and their versions +- Installation and build scripts +- Project structure and entry points + +### Metadata +- Repository statistics (stars, forks) +- Latest release information +- Default branch name +- Repository owner and organization + +### Installation Methods +- Package manager commands (npm, pip, cargo, etc.) +- Git clone instructions +- Build and setup procedures +- Environment variable requirements diff --git a/docs/deploystack/github-integration.mdx b/docs/deploystack/github-integration.mdx new file mode 100644 index 0000000..e41f252 --- /dev/null +++ b/docs/deploystack/github-integration.mdx @@ -0,0 +1,344 @@ +--- +title: GitHub Integration +description: Seamless GitHub integration for MCP servers, global settings, and automated synchronization in DeployStack. 
+sidebar: GitHub Integration +--- + +# GitHub Integration + +DeployStack provides comprehensive GitHub integration that enables seamless synchronization of MCP servers, automated repository scanning, and streamlined deployment workflows. This integration connects your GitHub repositories directly with your DeployStack installation. + +## Overview + +The GitHub integration system offers: + +- **MCP Server Synchronization**: Automatic detection and sync of MCP server configurations +- **Repository Metadata Extraction**: Pull descriptions, languages, licenses, and topics +- **Version Management**: Automatic version detection from repository tags and releases +- **Global Settings Integration**: Configure GitHub OAuth and API access +- **Team-Based Access**: Respect team boundaries and permissions +- **Real-time Updates**: Monitor repository changes and trigger updates + +## GitHub OAuth Configuration + +### Setting Up GitHub OAuth + +To enable GitHub integration, you need to configure GitHub OAuth in your global settings: + +#### 1. Create GitHub OAuth App + +1. **Go to GitHub**: Navigate to GitHub.com → Settings → Developer settings → OAuth Apps +2. **Create New App**: Click "New OAuth App" +3. **Configure Application**: + - **Application Name**: `DeployStack - [Your Instance]` + - **Homepage URL**: `https://your-deploystack-domain.com` + - **Authorization Callback URL**: `https://your-deploystack-domain.com/auth/github/callback` + - **Application Description**: Optional description of your DeployStack instance + +#### 2. Configure in DeployStack + +1. **Access Global Settings**: Go to Admin → Global Settings → GitHub OAuth +2. **Enter Credentials**: + - **Client ID**: From your GitHub OAuth app + - **Client Secret**: From your GitHub OAuth app + - **Enable GitHub Integration**: Toggle to activate +3. **Save Configuration**: Apply the settings + +#### 3. Test Integration + +1. **Verify Connection**: Use the "Test Connection" button in settings +2. 
**Check Permissions**: Ensure the app has necessary repository access +3. **Validate Callback**: Test the OAuth flow with a user account + +### GitHub App vs OAuth App + +DeployStack supports both GitHub OAuth Apps and GitHub Apps: + +#### GitHub OAuth App (Recommended for most users) +- **Simpler Setup**: Easier to configure and manage +- **User-Based Access**: Uses individual user permissions +- **Public Repositories**: Works well with public repositories +- **Rate Limits**: Subject to user-based rate limits + +#### GitHub App (Enterprise/High-Volume) +- **Enhanced Security**: App-level permissions and authentication +- **Higher Rate Limits**: Better rate limiting for high-volume usage +- **Fine-Grained Permissions**: More granular access control +- **Installation-Based**: Installed per organization/repository + +## MCP Server GitHub Integration + +### Automatic Repository Scanning + +When you provide a GitHub URL for an MCP server, DeployStack automatically: + +#### Repository Information Extraction +- **Description**: Uses repository description as server description +- **Language**: Detects primary programming language +- **License**: Extracts license information +- **Topics**: Imports repository topics as server tags +- **Homepage**: Uses repository homepage URL +- **README**: Processes README for additional metadata + +#### MCP Configuration Detection +- **Package Files**: Scans `package.json`, `pyproject.toml`, `Cargo.toml` +- **MCP Config**: Looks for MCP-specific configuration files +- **Dependencies**: Extracts runtime dependencies +- **Scripts**: Identifies installation and run scripts + +### Repository Synchronization + +#### Manual Synchronization + +1. **Server Management**: Go to your MCP server details +2. **Sync Repository**: Click "Sync from GitHub" button +3. **Review Changes**: Preview what will be updated +4. 
**Apply Updates**: Confirm synchronization + +#### Automatic Synchronization (Future Feature) + +- **Webhook Integration**: Automatic updates on repository changes +- **Scheduled Sync**: Regular synchronization intervals +- **Conflict Resolution**: Handle conflicts between local and remote changes + +### Version Management + +#### Automatic Version Detection + +DeployStack automatically detects versions from: + +- **Git Tags**: Semantic version tags (e.g., `v1.2.3`, `1.2.3`) +- **GitHub Releases**: Published releases with version numbers +- **Package Files**: Version information in `package.json`, etc. +- **Commit History**: Latest commits for development versions + +#### Version Synchronization + +1. **Scan Repository**: Check for new tags and releases +2. **Create Versions**: Automatically create version entries +3. **Update Metadata**: Sync changelog and release notes +4. **Mark Latest**: Identify the latest stable version + +### Supported Repository Structures + +#### Node.js MCP Servers +``` +repository/ +├── package.json # Required: MCP server configuration +├── README.md # Recommended: Documentation +├── src/ # Source code +│ ├── index.ts # Main server file +│ └── tools/ # MCP tools +├── dist/ # Compiled output (optional) +└── .github/ # GitHub workflows (optional) +``` + +#### Python MCP Servers +``` +repository/ +├── pyproject.toml # Required: Python project configuration +├── README.md # Recommended: Documentation +├── src/ # Source code +│ └── mcp_server/ # MCP server package +├── requirements.txt # Dependencies (optional) +└── .github/ # GitHub workflows (optional) +``` + +#### Configuration Requirements + +For optimal integration, repositories should include: + +- **Clear Description**: Repository description explaining the MCP server's purpose +- **Proper Licensing**: Valid open-source license +- **Semantic Versioning**: Use semantic version tags +- **Documentation**: Comprehensive README with usage instructions +- **Topics/Tags**: Relevant GitHub 
topics for categorization + +## GitHub API Integration + +### API Endpoints + +#### Repository Information +```http +GET /api/mcp/github/repo-info +Query Parameters: + - url: GitHub repository URL + - branch: Target branch (default: main) + +Response: +{ + "success": true, + "data": { + "name": "example-mcp-server", + "description": "An example MCP server", + "language": "TypeScript", + "license": "MIT", + "topics": ["mcp", "ai", "tools"], + "homepage": "https://example.com", + "default_branch": "main", + "latest_commit": { + "sha": "abc123", + "message": "Update server configuration", + "date": "2025-01-07T15:30:00Z" + } + } +} +``` + +#### Repository Synchronization +```http +POST /api/mcp/github/sync/{serverId} +Authorization: Required (server management permissions) + +Response: +{ + "success": true, + "data": { + "server_id": "server123", + "sync_status": "completed", + "changes": { + "description": "Updated from repository", + "version": "1.2.3", + "tags": ["mcp", "ai", "updated"] + }, + "last_sync_at": "2025-01-07T15:30:00Z" + } +} +``` + +### Rate Limiting and Best Practices + +#### GitHub API Rate Limits +- **Authenticated Requests**: 5,000 requests per hour +- **Unauthenticated Requests**: 60 requests per hour +- **Search API**: 30 requests per minute +- **GraphQL API**: 5,000 points per hour + +#### Best Practices +- **Cache Repository Data**: Minimize API calls by caching metadata +- **Batch Operations**: Group multiple repository operations +- **Error Handling**: Graceful handling of rate limit errors +- **Retry Logic**: Implement exponential backoff for failed requests + +## Security Considerations + +### Access Control + +#### Repository Access +- **Public Repositories**: No special permissions required +- **Private Repositories**: Requires appropriate GitHub permissions +- **Organization Repositories**: Respects organization access controls +- **Team Boundaries**: DeployStack team permissions still apply + +#### Token Security +- **Secure Storage**: 
GitHub tokens encrypted in database +- **Scope Limitation**: Minimal required scopes for OAuth apps +- **Token Rotation**: Regular token refresh and rotation +- **Audit Logging**: Track all GitHub API operations + +### Privacy and Data Handling + +#### Data Collection +- **Repository Metadata**: Only public metadata is collected +- **No Source Code**: Source code is never stored in DeployStack +- **Minimal Permissions**: Request only necessary GitHub permissions +- **User Consent**: Clear disclosure of GitHub integration features + +#### Data Retention +- **Metadata Caching**: Repository metadata cached for performance +- **Sync History**: Synchronization logs for troubleshooting +- **User Control**: Users can disable GitHub integration anytime + +## Troubleshooting + +### Common Issues + +#### OAuth Configuration Problems + +**Problem**: "GitHub OAuth not configured" error +**Solution**: +1. Verify Client ID and Client Secret in global settings +2. Check callback URL matches GitHub OAuth app configuration +3. Ensure GitHub OAuth app is not suspended +4. Test connection using the settings panel + +**Problem**: "Access denied" during OAuth flow +**Solution**: +1. Check user has access to the repository +2. Verify OAuth app permissions +3. Ensure user has granted necessary scopes +4. Check for organization restrictions + +#### Repository Synchronization Issues + +**Problem**: "Repository not found" error +**Solution**: +1. Verify repository URL is correct and accessible +2. Check repository is public or user has access +3. Ensure repository exists and is not archived +4. Verify GitHub token has repository access + +**Problem**: "Sync failed" with rate limit error +**Solution**: +1. Wait for rate limit reset (shown in error message) +2. Reduce frequency of synchronization operations +3. Consider upgrading to GitHub App for higher limits +4. 
Implement retry logic with exponential backoff + +#### Version Detection Problems + +**Problem**: Versions not detected from repository +**Solution**: +1. Ensure repository uses semantic version tags +2. Check tags follow format: `v1.2.3` or `1.2.3` +3. Verify releases are published (not just tags) +4. Check package.json or pyproject.toml version field + +### Debug Information + +#### Checking GitHub Integration Status + +1. **Global Settings**: Admin → Global Settings → GitHub OAuth +2. **Test Connection**: Use built-in connection test +3. **API Logs**: Check server logs for GitHub API calls +4. **Rate Limit Status**: Monitor current rate limit usage + +#### Repository Analysis + +1. **Repository Info API**: Test `/api/mcp/github/repo-info` endpoint +2. **Manual Sync**: Try manual synchronization from server details +3. **Error Logs**: Check synchronization error messages +4. **GitHub API**: Test direct GitHub API access + +## Future Enhancements + +### Planned Features + +#### Advanced Integration +- **Webhook Support**: Real-time repository change notifications +- **GitHub Actions Integration**: Trigger deployments from CI/CD +- **Pull Request Integration**: Preview changes before merging +- **Issue Tracking**: Link MCP server issues to GitHub issues + +#### Enhanced Automation +- **Automated Testing**: Run MCP server tests on synchronization +- **Dependency Scanning**: Security vulnerability detection +- **License Compliance**: Automated license compatibility checking +- **Quality Metrics**: Code quality and documentation scoring + +#### Enterprise Features +- **GitHub Enterprise Support**: On-premises GitHub integration +- **SAML/SSO Integration**: Enterprise authentication flows +- **Audit Logging**: Comprehensive audit trails +- **Compliance Reporting**: Generate compliance reports + +### Community Contributions + +#### Contributing to GitHub Integration + +- **Feature Requests**: Submit enhancement requests +- **Bug Reports**: Report integration issues +- 
**Documentation**: Improve integration documentation +- **Testing**: Help test new GitHub features + +The GitHub integration system provides a powerful foundation for connecting your repositories with DeployStack's MCP catalog, enabling streamlined workflows and automated synchronization while maintaining security and team boundaries. diff --git a/docs/deploystack/mcp-catalog.mdx b/docs/deploystack/mcp-catalog.mdx new file mode 100644 index 0000000..cb330c9 --- /dev/null +++ b/docs/deploystack/mcp-catalog.mdx @@ -0,0 +1,371 @@ +--- +title: MCP Server Catalog +description: Discover, manage, and deploy Model Context Protocol (MCP) servers through DeployStack's comprehensive catalog system. +sidebar: MCP Catalog +--- + +# MCP Server Catalog + +The MCP (Model Context Protocol) Server Catalog is DeployStack's comprehensive system for discovering, managing, and deploying MCP servers. It provides a centralized repository where you can find pre-configured MCP servers, contribute your own, and manage deployments across your teams. + +## What is the MCP Catalog? 
+ +The MCP Catalog serves as a marketplace and management system for MCP servers, offering: + +- **Server Discovery**: Browse available MCP servers by category, language, and functionality +- **Team-Based Management**: Organize servers within your teams with proper access control +- **Version Management**: Track different versions of MCP servers with changelog support +- **GitHub Integration**: Automatic synchronization with GitHub repositories +- **Global and Team Servers**: Support for both community-wide and team-specific servers + +## Catalog Structure + +### Server Visibility Levels + +The catalog supports two types of servers: + +#### Global Servers +- **Visibility**: Available to all users across the platform +- **Management**: Only Global Administrators can create, edit, and delete +- **Purpose**: Community-contributed servers, official integrations, popular tools +- **Examples**: Official OpenAI MCP server, popular GitHub integrations, common utilities + +#### Team Servers +- **Visibility**: Only visible to team members +- **Management**: Team administrators can create, edit, and delete within their teams +- **Purpose**: Custom integrations, private tools, team-specific configurations +- **Examples**: Internal API integrations, custom business logic, proprietary tools + +### Categories + +Servers are organized into categories for easy discovery: + +- **Development Tools**: Code analysis, Git integration, CI/CD tools +- **Data Sources**: Database connectors, API integrations, file systems +- **AI & ML**: Machine learning models, AI services, data processing +- **Communication**: Chat platforms, email services, notification systems +- **Productivity**: Task management, calendars, document processing +- **Custom**: User-defined categories for specialized use cases + +**Note**: Only Global Administrators can create and manage categories. 
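The two visibility levels above reduce to a simple rule: a server is visible when it is global, or when the viewer belongs to the team that owns it. The following sketch illustrates that access model only — the type and field names (`visibility`, `team_id`, `team_ids`) are illustrative assumptions, not DeployStack's actual schema:

```typescript
// Illustrative sketch of the catalog visibility model.
// Field names (visibility, team_id) are assumptions, not the real schema.
type Visibility = "global" | "team";

interface CatalogServer {
  name: string;
  visibility: Visibility;
  team_id?: string; // set only for team-scoped servers
}

interface Viewer {
  team_ids: string[]; // teams the authenticated user belongs to
}

// A server is visible if it is global, or owned by one of the viewer's teams.
function visibleServers(servers: CatalogServer[], viewer: Viewer): CatalogServer[] {
  return servers.filter(
    (s) =>
      s.visibility === "global" ||
      (s.team_id !== undefined && viewer.team_ids.includes(s.team_id))
  );
}
```

Note that unauthenticated requests never reach this filter: as described below, all MCP catalog endpoints require authentication first.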
+ +## User Permissions + +Access to the MCP catalog is controlled by your role and team membership: + +### Permission Matrix + +| Role | Global Servers | Team Servers | Can Create | Can Edit | Can Delete | Categories | +|------|----------------|--------------|------------|----------|------------|------------| +| global_admin | ✅ View/Manage All | ✅ View All Teams | ✅ Global only | ✅ Global only | ✅ Global only | ✅ Full CRUD | +| team_admin | ✅ View only | ✅ View/Manage own team | ✅ Team only | ✅ Team only | ✅ Team only | ❌ View only | +| team_user | ✅ View only | ✅ View team servers | ❌ No | ❌ No | ❌ No | ❌ View only | +| global_user | ✅ View only | ❌ No access | ❌ No | ❌ No | ❌ No | ❌ View only | +| unauthenticated | ❌ No access | ❌ No access | ❌ No | ❌ No | ❌ No | ❌ No access | + +**Note**: Authentication is required for all MCP catalog access. Unauthenticated users cannot access any servers or catalog features. + +### Detailed Permissions + +#### Global Administrator +- **Global Servers**: Full management capabilities - create, edit, delete, and feature servers +- **Team Servers**: Read-only access to all team servers across the platform for administrative oversight +- **Categories**: Complete category management - create, edit, delete, and organize categories +- **Special Privileges**: Can mark servers as "featured" and manage server visibility + +#### Team Administrator +- **Global Servers**: Can browse and view all global servers but cannot modify them +- **Team Servers**: Full management within their own teams - create, edit, delete team-specific servers +- **Server Creation**: Can create new servers that are automatically scoped to their team +- **Team Scope**: All created servers are marked with team visibility and ownership + +#### Team User +- **Global Servers**: Can browse and view all global servers +- **Team Servers**: Can view servers within their teams but cannot modify them +- **Read-Only Access**: Cannot create, edit, or delete any servers +- 
**Discovery**: Can search and filter servers for deployment purposes + +#### Global User +- **Global Servers**: Can browse and view all global servers +- **Team Servers**: No access to any team-specific servers +- **Limited Scope**: Most restricted authenticated access level for catalog browsing + +#### Unauthenticated Users +- **No Access**: Cannot access any MCP catalog features +- **Authentication Required**: Must log in to view any servers +- **Security**: All MCP endpoints require valid authentication + +## Server Management + +### Creating Servers + +#### Global Servers (Global Admin Only) +1. **Navigate to Catalog**: Access the MCP catalog from your admin dashboard +2. **Create Global Server**: Click "Create Global Server" button +3. **Server Details**: Fill in comprehensive server information: + - **Basic Info**: Name, description, category + - **Technical Details**: Language, runtime, minimum version requirements + - **Installation**: Supported installation methods (npm, pip, docker, etc.) + - **Capabilities**: Tools, resources, and prompts provided + - **Repository**: GitHub URL for automatic synchronization + - **Metadata**: Author information, license, organization +4. **Visibility Settings**: Configure as global server +5. **Featured Status**: Optionally mark as featured for prominence + +#### Team Servers (Team Admin) +1. **Team Context**: Navigate to your team's MCP catalog section +2. **Create Team Server**: Click "Create Server" within your team +3. **Server Configuration**: Same detailed form as global servers +4. **Team Scope**: Server is automatically assigned to your team +5. 
**Team Visibility**: Only your team members can see and use this server + +### Server Information + +Each server in the catalog includes comprehensive metadata: + +#### Basic Information +- **Name & Description**: Clear identification and purpose +- **Category**: Organizational classification +- **Tags**: Searchable keywords and labels +- **Status**: Active, deprecated, or maintenance mode + +#### Technical Specifications +- **Language**: Programming language (Node.js, Python, etc.) +- **Runtime**: Specific runtime requirements +- **Minimum Version**: Required runtime version +- **Dependencies**: External dependencies and requirements + +#### Capabilities +- **Tools**: Available MCP tools and their functions +- **Resources**: Data sources and resource types +- **Prompts**: Pre-configured prompts and templates +- **Configuration**: Default settings and environment variables + +#### Repository Integration +- **GitHub URL**: Source code repository +- **Branch**: Target branch for synchronization +- **Last Sync**: When repository was last synchronized +- **Version Tracking**: Automatic version detection from repository + +### Version Management + +The catalog supports comprehensive version tracking: + +#### Version Information +- **Version Numbers**: Semantic versioning (e.g., 1.2.3) +- **Git Commits**: Linked to specific repository commits +- **Changelog**: Detailed change descriptions +- **Stability**: Stable vs. beta/alpha versions +- **Latest Flag**: Automatic latest version detection + +#### Version Operations +- **Create Version**: Add new versions manually or via GitHub sync +- **Update Version**: Modify version information and changelog +- **Version History**: Complete timeline of all versions +- **Rollback Support**: Deploy specific versions as needed + +### GitHub Integration + +Seamless integration with GitHub repositories for automatic synchronization and metadata extraction. 
For complete details on setting up and using GitHub integration, see the [GitHub Integration Guide](./github-integration.mdx). + +**Key Features:** +- **Automatic Repository Sync**: Pull server metadata from GitHub repositories +- **Version Detection**: Automatic version tracking from repository tags +- **Metadata Extraction**: Import descriptions, licenses, and topics +- **Manual and Scheduled Sync**: Flexible synchronization options + +## Browsing and Discovery + +### Catalog Interface + +The catalog provides multiple ways to discover servers: + +#### Browse by Category +- **Category Navigation**: Organized browsing by functional categories +- **Category Descriptions**: Clear explanations of each category's purpose +- **Server Counts**: Number of servers in each category + +#### Search and Filtering +- **Text Search**: Search by name, description, and tags +- **Language Filter**: Filter by programming language +- **Runtime Filter**: Filter by runtime environment +- **Status Filter**: Show only active, deprecated, or maintenance servers +- **Featured Filter**: Highlight featured and recommended servers + +#### Server Listings +- **Grid View**: Visual cards showing server information +- **List View**: Detailed table with comprehensive information +- **Sorting Options**: Sort by name, popularity, recent updates, or featured status +- **Pagination**: Efficient browsing of large server collections + +### Server Details + +Detailed server pages provide comprehensive information: + +#### Overview Section +- **Server Description**: Detailed explanation of functionality +- **Quick Stats**: Language, runtime, last update, version count +- **Installation Preview**: Quick installation commands +- **Author Information**: Creator and maintainer details + +#### Technical Details +- **Capabilities Breakdown**: Detailed tool, resource, and prompt listings +- **Configuration Options**: Available settings and customizations +- **Environment Variables**: Required and optional 
environment settings +- **Dependencies**: External requirements and compatibility + +#### Version History +- **Version Timeline**: Chronological list of all versions +- **Changelog Details**: Comprehensive change descriptions +- **Download/Deploy Options**: Direct deployment links +- **Stability Indicators**: Version stability and recommendation status + +## Team Integration + +### Team-Scoped Servers + +Team servers provide private server management: + +#### Team Server Benefits +- **Privacy**: Servers visible only to team members +- **Customization**: Team-specific configurations and settings +- **Control**: Full management by team administrators +- **Integration**: Seamless integration with team deployments + +#### Team Server Management +- **Team Dashboard**: Dedicated section for team's MCP servers +- **Member Access**: All team members can view team servers +- **Admin Control**: Team administrators manage server lifecycle +- **Deployment Integration**: Direct deployment to team environments + +### Cross-Team Visibility + +#### Global Admin Oversight +- **Administrative View**: Global admins can see all team servers +- **Read-Only Access**: Cannot modify team servers, only view for support +- **System Monitoring**: Track server usage and adoption across teams +- **Support Capabilities**: Assist teams with server-related issues + +#### Team Isolation +- **Secure Boundaries**: Teams cannot see other teams' servers +- **Data Protection**: Team server configurations remain private +- **Access Control**: Strict enforcement of team-based permissions + +## Server Deployment + +### From Catalog to Deployment + +The catalog integrates seamlessly with DeployStack's deployment system: + +#### Deployment Process +1. **Server Selection**: Choose server from catalog +2. **Version Selection**: Pick specific version to deploy +3. **Configuration**: Customize settings and environment variables +4. **Team Context**: Deploy within appropriate team context +5. 
**Cloud Provider**: Select target deployment platform +6. **Launch**: Deploy server to chosen environment + +#### Deployment Options +- **Quick Deploy**: One-click deployment with default settings +- **Custom Deploy**: Full configuration customization +- **Template Deploy**: Use pre-configured deployment templates +- **Batch Deploy**: Deploy multiple servers simultaneously + +### Configuration Management + +#### Default Configurations +- **Server Defaults**: Pre-configured settings from catalog +- **Team Overrides**: Team-specific configuration templates +- **Environment Variables**: Secure handling of sensitive configuration +- **Validation**: Configuration validation before deployment + +#### Custom Configurations +- **Parameter Customization**: Modify server parameters +- **Environment Setup**: Configure runtime environment +- **Resource Allocation**: Set memory, CPU, and storage requirements +- **Network Configuration**: Configure ports, domains, and routing + +## Best Practices + +### For Server Contributors + +#### Creating Quality Servers +- **Clear Documentation**: Comprehensive README and documentation +- **Semantic Versioning**: Follow proper version numbering +- **Changelog Maintenance**: Keep detailed change logs +- **Testing**: Ensure servers work across different environments +- **Security**: Follow security best practices for MCP servers + +#### Repository Management +- **Clean Structure**: Organize repository with clear structure +- **Configuration Files**: Include proper MCP configuration +- **Examples**: Provide usage examples and tutorials +- **License**: Include appropriate open-source license +- **Maintenance**: Regular updates and issue resolution + +### For Server Users + +#### Server Selection +- **Requirements Analysis**: Understand your specific needs +- **Version Consideration**: Choose stable versions for production +- **Documentation Review**: Read server documentation thoroughly +- **Testing**: Test servers in development before 
production deployment +- **Updates**: Keep servers updated to latest stable versions + +#### Team Management +- **Server Organization**: Organize team servers logically +- **Access Control**: Manage team member access appropriately +- **Documentation**: Document team-specific server configurations +- **Monitoring**: Monitor server performance and usage + +### For Administrators + +#### Catalog Management +- **Category Organization**: Maintain clear category structure +- **Quality Control**: Review and curate global servers +- **Featured Servers**: Highlight high-quality, popular servers +- **Community Engagement**: Encourage community contributions +- **Performance Monitoring**: Monitor catalog performance and usage + +#### User Support +- **Documentation**: Maintain comprehensive user documentation +- **Training**: Provide training on catalog usage +- **Support Channels**: Establish clear support processes +- **Feedback Collection**: Gather user feedback for improvements + +## Security Considerations + +### Access Control +- **Role-Based Permissions**: Strict enforcement of role-based access +- **Team Isolation**: Secure boundaries between teams +- **Admin Oversight**: Appropriate administrative visibility +- **Audit Logging**: Track all catalog operations (future feature) + +### Server Security +- **Source Verification**: Verify server sources and authenticity +- **Security Scanning**: Scan servers for security vulnerabilities (future feature) +- **Safe Defaults**: Secure default configurations +- **Update Notifications**: Alert users to security updates + +### Data Protection +- **Configuration Security**: Secure handling of server configurations +- **Environment Variables**: Encrypted storage of sensitive settings +- **Repository Access**: Secure GitHub integration +- **Privacy Controls**: Respect team privacy and data boundaries + +## Future Enhancements + +### Planned Features +- **Server Ratings**: Community rating and review system +- **Usage Analytics**: 
Server usage statistics and trends +- **Automated Testing**: Continuous integration for server validation +- **Marketplace**: Enhanced discovery and recommendation engine +- **API Integration**: Programmatic catalog access + +### Community Features +- **Contribution Guidelines**: Streamlined server contribution process +- **Community Voting**: Community-driven server curation +- **Discussion Forums**: Server-specific discussion and support +- **Contributor Recognition**: Acknowledge active contributors + +The MCP Server Catalog transforms how you discover, manage, and deploy MCP servers, providing a comprehensive platform for both individual users and teams to leverage the power of the Model Context Protocol ecosystem. diff --git a/docs/deploystack/roles.mdx b/docs/deploystack/roles.mdx index efd063b..2c96f77 100644 --- a/docs/deploystack/roles.mdx +++ b/docs/deploystack/roles.mdx @@ -23,6 +23,8 @@ User roles determine what actions a person can perform in DeployStack. Think of - Access all system features - Manage all teams - View cloud credentials metadata across all teams (no credential values shown) +- **MCP Catalog**: Full management of global MCP servers and categories +- **MCP Oversight**: View all team MCP servers across the platform (read-only) **Important**: The first person to register automatically becomes a Global Administrator. @@ -36,6 +38,7 @@ User roles determine what actions a person can perform in DeployStack. Think of - Create up to 3 teams - Manage their own teams - Deploy applications through their teams +- **MCP Catalog**: Browse and view global MCP servers only **Note**: This is the default role for new users. @@ -51,6 +54,7 @@ User roles determine what actions a person can perform in DeployStack. 
Think of - **Transfer team ownership** to another team member - Manage team deployments - Delete teams they own (except default teams) +- **MCP Catalog**: View global servers + full management of team MCP servers **Important**: Team admins have full control over team membership and can manage all team members except the team owner. @@ -77,6 +81,24 @@ The following table shows exactly what each role can do with team member managem - **Global admins** can override most restrictions but still cannot modify default teams - **3-member limit** applies to all teams (owner + 2 additional members maximum) +## MCP Catalog Permissions + +The MCP (Model Context Protocol) Catalog has specific permissions based on your role: + +| Role | Global Servers | Team Servers | Can Create | Can Edit | Can Delete | Categories | +|------|----------------|--------------|------------|----------|------------|------------| +| global_admin | ✅ View/Manage All | ✅ View All Teams | ✅ Global only | ✅ Global only | ✅ Global only | ✅ Full CRUD | +| team_admin | ✅ View only | ✅ View/Manage own team | ✅ Team only | ✅ Team only | ✅ Team only | ❌ View only | +| team_user | ✅ View only | ✅ View team servers | ❌ No | ❌ No | ❌ No | ❌ View only | +| global_user | ✅ View only | ❌ No access | ❌ No | ❌ No | ❌ No | ❌ View only | + +**MCP Catalog Notes:** +- **Global Servers**: Community-wide MCP servers available to all users +- **Team Servers**: Private MCP servers visible only to team members +- **Categories**: Organizational categories for MCP servers (admin-managed) +- **Global Admins**: Can see all team servers for administrative oversight but cannot modify them +- **Team Isolation**: Teams can only manage their own servers, not other teams' servers + ### Team User **Who needs this**: Basic team members who participate in deployments. 
@@ -84,8 +106,9 @@ The following table shows exactly what each role can do with team member managem - View team information - See team members - Participate in team activities +- **MCP Catalog**: View global servers + view team MCP servers (read-only) -**Limitations**: Team users cannot add members, change roles, or manage other team members. +**Limitations**: Team users cannot add members, change roles, manage other team members, or create/edit MCP servers. ## Understanding Teams diff --git a/docs/deploystack/teams.mdx b/docs/deploystack/teams.mdx index 1e6b56c..411fd43 100644 --- a/docs/deploystack/teams.mdx +++ b/docs/deploystack/teams.mdx @@ -43,10 +43,13 @@ Each user can create and manage up to **3 teams total**, including your default Teams serve as comprehensive containers for all your deployment resources: ### MCP Server Settings -- All deployed MCP server configurations -- Server deployment history and status -- Custom server settings and parameters -- Deployment logs and monitoring data +- **Team MCP Servers**: Private MCP servers visible only to your team members +- **Global MCP Server Access**: Browse and deploy community-wide MCP servers +- **Server Management**: Team administrators can create, edit, and delete team servers +- **Version Control**: Track different versions of your team's MCP servers +- **GitHub Integration**: Automatic synchronization with your team's repositories (see [GitHub Integration Guide](./github-integration.mdx)) +- **Custom Configurations**: Team-specific server settings and parameters +- **Deployment History**: Complete logs and monitoring data for team deployments ### Cloud Provider Credentials - **Render.com**: API tokens and service configurations