This guide provides comprehensive testing procedures for The Inspector's background functions architecture. The architecture handles long-running AI analysis requests using Netlify Background Functions, which support execution times up to 15 minutes.
For architectural details, see BACKGROUND_FUNCTIONS.md.
Before testing, ensure you have:
- Netlify account with deployed site
- API keys configured:
  - OpenRouter API key (for multiple AI models), OR
  - OpenAI API key (for GPT models only)
- Browser DevTools access (Chrome, Firefox, Safari, or Edge)
- Network access to test endpoints
- Basic understanding of HTTP requests and browser console
Test 1: Fast Model (Claude 3.5 Sonnet)

Purpose: Verify that fast models complete quickly and polling returns results efficiently.
Model to use: anthropic/claude-3.5-sonnet (Claude 3.5 Sonnet)
Expected behavior: Analysis completes in 3-8 seconds, polling returns result within 1-2 poll attempts (3-6 seconds total).
Steps:
- Open The Inspector application in your browser
- Enter a small package name (e.g., `lodash`)
- Select "Claude 3.5 Sonnet" from the model dropdown
- Click the "Inspect" button
- Observe the loading message: Should display "Analyzing package... This may take up to 2 minutes for complex models."
- Open browser DevTools (F12) and navigate to the Network tab
- Verify the POST request to `/.netlify/functions/analyze-start` returns a 200 status with a `jobId` in the response
- Verify GET requests to `/.netlify/functions/analyze-status?jobId=<uuid>` start polling every 3 seconds
- Verify the status returns `"pending"` initially, then `"completed"` within 3-8 seconds
- Verify analysis results display correctly with:
  - Risk level (Low, Medium, High, Critical)
  - Security concerns
  - Recommendations
Success criteria:
- ✅ Analysis completes in <10 seconds
- ✅ No timeout errors occur
- ✅ Results display correctly with all sections populated
Troubleshooting:
- If analysis takes longer than expected, check AI provider status at OpenRouter Status
- Verify API key is valid in Settings
- Check Netlify function logs for errors
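The start-then-poll flow this test exercises can be sketched as a small client. This is illustrative, not the app's actual `src/api/ai.js` code: `fetchImpl` is injected so the logic can be exercised without a network, and the endpoint paths come from this guide.

```javascript
// Illustrative start-then-poll client for the flow tested above.
// fetchImpl is injected (e.g. the browser's fetch, or a stub in tests).
async function runAnalysis(fetchImpl, siteUrl, packageName, model, pollMs = 3000) {
  // 1. Kick off the background job; the response carries a jobId.
  const startRes = await fetchImpl(`${siteUrl}/.netlify/functions/analyze-start`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ packageName, model }),
  });
  const { jobId } = await startRes.json();

  // 2. Poll the status endpoint until the job leaves "pending".
  for (;;) {
    const statusRes = await fetchImpl(
      `${siteUrl}/.netlify/functions/analyze-status?jobId=${jobId}`
    );
    const body = await statusRes.json();
    if (body.status !== "pending") return body; // "completed" or "failed"
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}
```

For a fast model, you would expect this loop to exit within 1-3 iterations.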
Test 2: Slow Model (Moonshot Kimi K2 Thinking)

Purpose: Verify that slow models complete successfully without timeout errors.
Model to use: moonshotai/kimi-k2-thinking (Moonshot Kimi K2 Thinking - Recommended)
Expected behavior: Analysis completes in 50-60+ seconds, polling continues until completion, loading message updates with elapsed time.
Steps:
- Open The Inspector application in your browser
- Enter a medium-sized package name (e.g., `react`)
- Select "Moonshot Kimi K2 Thinking (Recommended)" from the model dropdown
- Click the "Inspect" button
- Observe the loading message progression:
  - 0-30s: "Analyzing package... This may take up to 2 minutes for complex models."
  - 30-60s: "Analyzing package... Still processing (45s elapsed)..."
  - 60s+: "Analyzing package... Still processing (1m 15s elapsed)..."
- Open browser DevTools (F12) and navigate to the Network tab
- Verify the POST request to `/.netlify/functions/analyze-start` returns a 200 status with a `jobId`
- Verify GET requests to `/.netlify/functions/analyze-status` poll every 3 seconds
- Count polling attempts: you should see 15-20 status checks before completion
- Verify the status eventually returns `"completed"` with full analysis results
- Verify no 504 Gateway Timeout errors occur
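The loading-message progression above can be modeled as a small pure helper. This is a sketch of the behavior this test verifies, not the app's actual implementation:

```javascript
// Sketch of the loading-message progression verified above (illustrative,
// not the app's actual code). Before 30s a generic message is shown; after
// that, the elapsed time is included in the message.
function loadingMessage(elapsedSeconds) {
  if (elapsedSeconds < 30) {
    return "Analyzing package... This may take up to 2 minutes for complex models.";
  }
  const minutes = Math.floor(elapsedSeconds / 60);
  const seconds = elapsedSeconds % 60;
  const label = minutes > 0 ? `${minutes}m ${seconds}s` : `${seconds}s`;
  return `Analyzing package... Still processing (${label} elapsed)...`;
}
```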
Success criteria:
- ✅ Analysis completes in 50-70 seconds
- ✅ Polling continues throughout entire duration
- ✅ Results display correctly
- ✅ No timeout errors (504, 408, etc.)
Troubleshooting:
- If 504 errors occur, verify `analyze-background.js` is properly configured as a background function
- Check Netlify deployment logs to confirm the background function was detected
- Verify the function export signature: `export default async (req, context)` and `export const config`
Test 3: Invalid API Key

Purpose: Verify error handling for authentication failures.
Steps:
- Open The Inspector application
- Click the Settings icon (gear icon in top-right corner)
- Enter an invalid API key (e.g., `invalid-key-12345`)
- Save settings
- Enter a package name (e.g., `express`)
- Click the "Inspect" button
- Observe error handling:
  - Frontend starts polling normally
  - Background function fails with an authentication error
  - Error is stored in Netlify Blobs
  - Status endpoint returns `{"status": "failed", "error": "...API key..."}`
  - Frontend displays the error message: "Invalid API Key. Please check your key in the settings."
- Open browser DevTools Console (F12 → Console tab)
- Verify the error is logged with type `INVALID_API_KEY`
Expected behavior:
- Error is caught gracefully
- Error is stored in Netlify Blobs
- Error is returned to frontend via status endpoint
- User-friendly error message is displayed
- Job does not remain in "pending" state forever
Success criteria:
- ✅ No unhandled errors or crashes
- ✅ Clear, actionable error message displayed
- ✅ Job transitions from "pending" to "failed" state
Verification:
- Check Netlify function logs for the `[analyze-background] Error processing job` message
- Verify the error includes authentication-related keywords
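A sketch of how a frontend might map the stored error string to the `INVALID_API_KEY` type checked above. The matching rules here are illustrative assumptions; the actual logic in `src/api/ai.js` may differ:

```javascript
// Illustrative classifier: maps an error message returned by the status
// endpoint to an error type. The real matching rules may differ.
function classifyError(message) {
  if (/api key|unauthorized|401/i.test(message)) return "INVALID_API_KEY";
  if (/timed out|timeout/i.test(message)) return "TIMEOUT_ERROR";
  return "UNKNOWN_ERROR";
}
```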
Test 4: Network Error

Purpose: Verify fallback endpoint logic and network error recovery.
Steps:
- Open browser DevTools (F12) and navigate to the Network tab
- Enable network throttling:
  - Chrome: Network tab → Throttling dropdown → "Slow 3G"
  - Firefox: Network tab → Throttling icon → "GPRS"
- Enter a package name and click "Inspect"
- Observe fallback behavior:
  - Primary endpoint fails with a network error
  - Frontend retries with the fallback endpoint
  - If both endpoints fail, an error is displayed
- Disable network throttling (set to "No throttling" or "Online")
- Retry the analysis
- Verify analysis completes successfully
Expected behavior:
- Frontend attempts fallback endpoint on network errors
- Clear error message displayed if both endpoints fail
- Analysis succeeds once network is restored
Success criteria:
- ✅ Fallback logic works correctly
- ✅ Error messages are user-friendly
- ✅ Recovery works after network restoration
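The fallback behavior exercised here can be sketched as a small wrapper. This is illustrative: the function name and error message are placeholders, and the app's actual retry logic may be structured differently:

```javascript
// Illustrative fallback wrapper: try the primary endpoint, and on a network
// error retry once against the fallback. If both fail, surface a clear error.
async function fetchWithFallback(fetchImpl, primaryUrl, fallbackUrl, options) {
  try {
    return await fetchImpl(primaryUrl, options);
  } catch (primaryErr) {
    try {
      return await fetchImpl(fallbackUrl, options);
    } catch (fallbackErr) {
      throw new Error(
        "Both primary and fallback endpoints failed. Please check your connection and try again."
      );
    }
  }
}
```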
Test 5: Polling Timeout

Purpose: Verify timeout handling when analysis exceeds the maximum polling duration.
Note: This scenario is difficult to test naturally (requires AI model that takes >5 minutes). Use simulated test approach.
Simulated test approach:
- Temporarily modify the polling timeout (for testing only):
  - Open `src/api/ai.js`
  - Find the `MAX_POLL_ATTEMPTS` constant (line 54)
  - Change the value from `100` to `5` (15 seconds total: 5 attempts × 3 seconds)
  - Save the file
- Rebuild and deploy the application
- Test with a slow model (Moonshot Kimi K2 Thinking)
- Verify the timeout error appears after 15 seconds
- Verify the error message: "Analysis timed out after 5 minutes. This can happen with very slow AI models. Please try again."
- Restore the original value:
  - Change `MAX_POLL_ATTEMPTS` back to `100`
  - Rebuild and redeploy
Expected behavior:
- Polling stops after max attempts
- Timeout error is thrown with type `TIMEOUT_ERROR`
- User-friendly message suggests a retry
Success criteria:
- ✅ Timeout is enforced correctly
- ✅ Error message is clear and actionable
- ✅ No infinite polling loops
Important: Remember to restore `MAX_POLL_ATTEMPTS` to `100` after testing!
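The attempt cap being exercised can be sketched like this. It is illustrative: the real loop lives in `src/api/ai.js` and may be structured differently, but the guide's numbers (100 attempts × 3 seconds = 5 minutes) are what it enforces:

```javascript
// Illustrative polling loop with a hard attempt cap, matching the behavior
// tested above: after maxAttempts polls a TIMEOUT_ERROR is thrown, so no
// infinite polling loop is possible.
async function pollUntilDone(checkStatus, maxAttempts = 100, intervalMs = 3000) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await checkStatus();
    if (status.status !== "pending") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  const err = new Error(
    "Analysis timed out after 5 minutes. This can happen with very slow AI models. Please try again."
  );
  err.type = "TIMEOUT_ERROR";
  throw err;
}
```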
Test 6: Blob TTL

Purpose: Verify that job results expire after 1 hour and are cleaned up.
Steps:
- Complete a successful analysis (any model, any package)
- Note the `jobId` from the browser DevTools Network tab:
  - Find the POST request to `analyze-start`
  - Copy the `jobId` from the response body
- Manually call the status endpoint:
  `curl "https://your-site.netlify.app/.netlify/functions/analyze-status?jobId=<uuid>"`
- Verify the response returns `"completed"` status with full results
- Wait 1 hour (or temporarily modify the TTL in code for faster testing)
- Call the status endpoint again with the same `jobId`
- Verify the response returns `"failed"` status with the error: "Job expired (results are only available for 1 hour)"
- Verify the blob is deleted (check the Netlify Blobs dashboard or function logs)
Expected behavior:
- Results are available for 1 hour after completion
- After 1 hour, results expire and are deleted
- Clear error message is returned for expired jobs
Success criteria:
- ✅ TTL is enforced correctly (1 hour)
- ✅ Expired blobs are deleted
- ✅ Clear error message for expired jobs
Implementation details:
- TTL is enforced application-side by checking the `expiresAt` metadata
- Netlify Blobs does not automatically delete expired blobs
- The status endpoint deletes expired blobs when they are accessed
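The application-side expiry check described above can be sketched as follows. The field name `expiresAt` comes from this guide; everything else is an illustrative assumption, not the actual `analyze-background.js` code:

```javascript
// Illustrative application-side TTL check. The blob's metadata is assumed to
// carry an expiresAt timestamp (ms since epoch) written at job creation;
// the status endpoint treats anything past that timestamp as expired.
const TTL_MS = 3600000; // 1 hour

function isExpired(expiresAt, now = Date.now()) {
  return typeof expiresAt === "number" && now > expiresAt;
}
```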
Faster testing (optional):
- Temporarily modify the `TTL` constant in `analyze-background.js` (line 18) from `3600000` (1 hour) to `60000` (1 minute)
- Rebuild and deploy
- Test with 1-minute expiration
- Restore original TTL value
Test 7: Concurrent Analysis

Purpose: Verify that multiple analyses can run simultaneously without conflicts.
Steps:
- Open The Inspector in two separate browser tabs
- In Tab 1:
  - Enter package name: `react`
  - Select model: "Moonshot Kimi K2 Thinking (Recommended)"
  - Click "Inspect"
- In Tab 2 (immediately after starting Tab 1):
  - Enter package name: `vue`
  - Select model: "Claude 3.5 Sonnet"
  - Click "Inspect"
- Observe both analyses:
  - Each receives a unique `jobId` (generated by `crypto.randomUUID()`)
  - Each polls independently
  - Tab 2 (fast model) should complete first (~5 seconds)
  - Tab 1 (slow model) should complete second (~60 seconds)
  - Results should not interfere with each other
- Verify in browser DevTools (Network tab) that each tab polls its own `jobId`
- Verify both analyses complete successfully with correct results for each package
Expected behavior:
- Multiple concurrent analyses work independently
- Each analysis has a unique job ID
- No race conditions or conflicts
- Results are correct for each package
Success criteria:
- ✅ Both analyses complete successfully
- ✅ Results are correct for each package (no cross-contamination)
- ✅ No errors or conflicts
Troubleshooting:
- If results are mixed up, verify unique job IDs in DevTools
- Check React state management in `InspectorForm.jsx`
- Verify blob storage keys are unique per job
Use this checklist to track your testing progress:
- Test 1: Fast Model (Claude 3.5 Sonnet) - Analysis completes in <10 seconds
- Test 2: Slow Model (Moonshot Kimi K2 Thinking) - Analysis completes in 50-70 seconds without timeout
- Test 3: Invalid API Key - Error is caught and displayed with clear message
- Test 4: Network Error - Fallback logic works correctly
- Test 5: Polling Timeout - Timeout is enforced after max attempts
- Test 6: Blob TTL - Results expire after 1 hour and are deleted
- Test 7: Concurrent Analysis - Multiple analyses run independently without conflicts
Notes and Observations:
[Add your testing notes here]
Cause: Background function not properly configured or detected by Netlify.
Solution:
- Verify `analyze-background.js` has the correct export signature: `export default async (req, context) => { ... }` and `export const config = { path: '/analyze-background' }`
- Redeploy to Netlify
- Check Netlify Functions dashboard to confirm function is listed as "Background Function" (not "Serverless Function")
- Review deployment logs for any warnings
Cause: Background function crashed without storing error state, or blob storage is not working.
Solution:
- Check Netlify function logs for errors:
  - Go to Netlify dashboard → Functions → analyze-background → Logs
  - Look for error messages or stack traces
- Verify blob storage is working:
  - Check the Netlify Blobs dashboard
  - Verify blobs are being created with correct keys
- Check API key validity in Settings
- Verify environment variables are set correctly:
  - `OPENROUTER_API_KEY` or `OPENAI_API_KEY`
  - `VITE_AI_PROVIDER`
  - `VITE_SITE_URL`
  - `VITE_SITE_NAME`
Cause: Response validation failing in frontend.
Solution:
- Check browser console for validation errors
- Verify AI response format matches expected structure:
  - Must include `riskLevel`, `concerns`, and `recommendations`
- Check `src/api/ai.js` (lines 102-150) for the validation logic
- Verify AI model is returning properly formatted JSON
- Check for parsing errors in function logs
Cause: Job ID collision (extremely unlikely with UUID) or frontend state management issue.
Solution:
- Verify unique job IDs in DevTools Network tab
- Check React state management in `InspectorForm.jsx`:
  - Ensure each analysis uses separate state
  - Verify `jobId` is stored correctly per analysis
- Check blob storage keys are unique per job
- Verify no global state pollution
Expected performance metrics for different AI models:
| Model Category | Example Models | Expected Time | Polling Attempts |
|---|---|---|---|
| Fast | Claude 3.5 Sonnet, GPT-4o | 3-10 seconds | 1-3 attempts |
| Medium | Gemini Flash, Mistral Large | 10-30 seconds | 3-10 attempts |
| Slow | Moonshot Kimi K2 Thinking | 50-70 seconds | 15-25 attempts |
Function Performance:
- `analyze-start`: <1 second (job creation and background trigger)
- `analyze-status`: <1 second (blob lookup and JSON parsing)
- `analyze-background`: variable (depends on AI model speed)
Polling Overhead:
- Polling interval: 3 seconds
- Maximum polling duration: 5 minutes (100 attempts × 3 seconds)
- Network overhead per poll: ~100-200ms
Before testing, verify your deployment is configured correctly:
- Go to Netlify dashboard → Site → Functions
- Verify 3 functions are deployed:
  - `analyze-start` (Serverless Function)
  - `analyze-status` (Serverless Function)
  - `analyze-background` (Background Function)
- Verify `analyze-background` is listed as a "Background Function" (not a "Serverless Function")
- Go to Netlify dashboard → Site Settings → Environment Variables
- Verify the following variables are set:
  - `OPENROUTER_API_KEY` or `OPENAI_API_KEY` (depending on provider)
  - `VITE_AI_PROVIDER` (e.g., `openrouter` or `openai`)
  - `VITE_SITE_URL` (your Netlify site URL)
  - `VITE_SITE_NAME` (your site name)
- Verify values are correct (no typos, no extra spaces)
Test each endpoint manually to verify they respond correctly:
Test analyze-start:

```bash
curl -X POST https://your-site.netlify.app/.netlify/functions/analyze-start \
  -H "Content-Type: application/json" \
  -d '{"packageName": "lodash", "model": "anthropic/claude-3.5-sonnet"}'
```

Expected response:

```json
{
  "jobId": "550e8400-e29b-41d4-a716-446655440000",
  "status": "pending",
  "message": "Analysis started. Poll /analyze-status?jobId=<jobId> for results."
}
```

Test analyze-status:

```bash
curl "https://your-site.netlify.app/.netlify/functions/analyze-status?jobId=<jobId>"
```

Expected response (pending):

```json
{
  "status": "pending"
}
```

Expected response (completed):

```json
{
  "status": "completed",
  "result": { ... }
}
```

- Go to Netlify dashboard → Functions → Select function → Logs
- Trigger a test analysis
- Verify logs show:
  - `[analyze-start] Starting analysis job`
  - `[analyze-background] Processing job`
  - `[analyze-background] Analysis completed successfully`
  - `[analyze-status] Job status: completed`
- Check for any errors or warnings
- Background Functions Architecture - Detailed architecture documentation
- Deployment Guide - Deployment instructions for Netlify
- Netlify Background Functions - Official documentation
- Netlify Blobs - Blob storage documentation
- OpenRouter API - AI provider API documentation
If you encounter issues not covered in this guide:
- Check Netlify function logs for detailed error messages
- Review browser console for frontend errors
- Verify all environment variables are set correctly
- Consult the BACKGROUND_FUNCTIONS.md architecture document
- Check OpenRouter Status for API availability
Last Updated: 2025-11-09