Harness the power of Azure OpenAI's flagship models (GPT-4o, GPT-4o-mini, GPT-4 Turbo) for semantic code analysis and AI-powered graph extraction in graphify-dotnet.
- Create an Azure OpenAI resource in Azure Portal
- Deploy a model (e.g., `gpt-4o` or `gpt-4o-mini`)
- Grab your endpoint and API key
- Configure graphify-dotnet with `AzureOpenAIClientFactory` or the unified `ChatClientFactory`
- Start analyzing!
- Azure Subscription: Sign up for an Azure free account
- Azure OpenAI Resource: Access to Azure OpenAI service (request access if needed)
- Model Deployment: A deployed model in your Azure OpenAI resource
- Go to Azure Portal
- Click Create a resource → search for "Azure OpenAI"
- Click Create
- Fill in the form:
- Subscription: Select your subscription
- Resource group: Create new or select existing
- Region: Choose a region (e.g., `East US`, `France Central`)
- Name: e.g., `my-graphify-openai`
- Pricing tier: Standard (S0)
- Click Review + Create → Create
- Wait for deployment to complete (2-5 minutes)
```bash
az cognitiveservices account create \
  --name my-graphify-openai \
  --resource-group my-resource-group \
  --kind OpenAI \
  --sku S0 \
  --location eastus
```

- In your Azure OpenAI resource, go to Model deployments
- Click Create new deployment → Deploy model
- Select a model:
- gpt-4o: Latest, most capable model (recommended for code analysis)
- gpt-4o-mini: Faster, cheaper, still powerful
- gpt-4-turbo: Older but stable
- Give it a deployment name: e.g., `gpt-4o` or `gpt-4o-mini`
- Set capacity (the default of 20 units corresponds to 20K tokens/min)
- Click Create
```bash
az cognitiveservices account deployment create \
  --name my-graphify-openai \
  --resource-group my-resource-group \
  --deployment-name gpt-4o \
  --model-name gpt-4o \
  --model-version "2024-08-06" \
  --model-format OpenAI \
  --scale-settings-capacity 20
```

- In your Azure OpenAI resource, go to Keys and Endpoint
- Copy:
  - Endpoint: e.g., `https://my-graphify-openai.openai.azure.com/`
  - Key 1 or Key 2: Use either one
- Store these securely (environment variables or secrets manager):
```bash
# Linux/macOS
export AZURE_OPENAI_ENDPOINT="https://my-graphify-openai.openai.azure.com/"
export AZURE_OPENAI_API_KEY="your-api-key-here"
export AZURE_OPENAI_DEPLOYMENT="gpt-4o"

# Windows (PowerShell)
$env:AZURE_OPENAI_ENDPOINT = "https://my-graphify-openai.openai.azure.com/"
$env:AZURE_OPENAI_API_KEY = "your-api-key-here"
$env:AZURE_OPENAI_DEPLOYMENT = "gpt-4o"
```
Use the new System.CommandLine CLI syntax to configure Azure OpenAI:
```bash
# Run with Azure OpenAI
graphify run ./my-project \
  --provider azureopenai \
  --endpoint https://myresource.openai.azure.com/ \
  --api-key sk-... \
  --deployment gpt-4o

# With custom model
graphify run ./my-project \
  --provider azureopenai \
  --endpoint https://myresource.openai.azure.com/ \
  --api-key sk-... \
  --deployment gpt-4o-mini
```

graphify-dotnet supports a layered configuration system (priority order):
- CLI arguments (highest priority)
- User secrets (.NET user secrets)
- Environment variables
- appsettings.local.json (saved by the `graphify config` wizard)
- appsettings.json (lowest priority)
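This precedence matches how the standard `Microsoft.Extensions.Configuration` builders compose sources: later sources override earlier ones. A minimal sketch of an equivalent stack (illustrative only — not necessarily graphify-dotnet's exact internal wiring):

```csharp
using Microsoft.Extensions.Configuration;

// Sources added later override earlier ones, mirroring the priority list above
// (appsettings.json lowest, CLI arguments highest).
var config = new ConfigurationBuilder()
    .AddJsonFile("appsettings.json", optional: true)
    .AddJsonFile("appsettings.local.json", optional: true)
    .AddEnvironmentVariables()      // GRAPHIFY__AzureOpenAI__Endpoint maps to Graphify:AzureOpenAI:Endpoint
    .AddUserSecrets<Program>(optional: true)
    .AddCommandLine(args)
    .Build();

string? endpoint = config["Graphify:AzureOpenAI:Endpoint"];
```

The `__` separator in environment-variable names maps to `:` in configuration keys, and key lookups are case-insensitive, which is why `GRAPHIFY__...` variables and the `Graphify` JSON section land in the same place.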
Set these for automatic configuration:
```bash
# Linux/macOS
export GRAPHIFY__Provider=AzureOpenAI
export GRAPHIFY__AzureOpenAI__Endpoint=https://myresource.openai.azure.com/
export GRAPHIFY__AzureOpenAI__ApiKey=sk-...
export GRAPHIFY__AzureOpenAI__DeploymentName=gpt-4o

# Windows (PowerShell)
$env:GRAPHIFY__Provider = "AzureOpenAI"
$env:GRAPHIFY__AzureOpenAI__Endpoint = "https://myresource.openai.azure.com/"
$env:GRAPHIFY__AzureOpenAI__ApiKey = "sk-..."
$env:GRAPHIFY__AzureOpenAI__DeploymentName = "gpt-4o"
```

Use .NET user secrets for local development (keeps API keys out of source):

```bash
# Set secrets for your project
dotnet user-secrets set "Graphify:Provider" "AzureOpenAI"
dotnet user-secrets set "Graphify:AzureOpenAI:Endpoint" "https://myresource.openai.azure.com/"
dotnet user-secrets set "Graphify:AzureOpenAI:ApiKey" "sk-..."
dotnet user-secrets set "Graphify:AzureOpenAI:DeploymentName" "gpt-4o"

# List configured secrets
dotnet user-secrets list
```

Configure in your application's appsettings.json (the API key should still come from secrets):
```json
{
  "Graphify": {
    "Provider": "AzureOpenAI",
    "AzureOpenAI": {
      "Endpoint": "https://myresource.openai.azure.com/",
      "DeploymentName": "gpt-4o",
      "ModelId": "gpt-4o"
    }
  }
}
```

Use the `graphify config show` command to verify your configuration:

```bash
graphify config show
```

This displays the active configuration values from all sources (sensitive values like API keys are masked).
For SDK usage in your own applications:
```csharp
using Graphify.Sdk;
using Microsoft.Extensions.AI;

// Use the unified ChatClientFactory
var aiOptions = new AiProviderOptions(
    Provider: AiProvider.AzureOpenAI,
    Endpoint: Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT"),
    ApiKey: Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY"),
    DeploymentName: Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT"),
    ModelId: "gpt-4o"
);

IChatClient client = ChatClientFactory.Create(aiOptions);

// Use the client
var response = await client.GetResponseAsync(
    [new ChatMessage(ChatRole.User, "Analyze this code structure...")]);
Console.WriteLine(response.Text);
```

A complete example:

```csharp
using System;
using System.Threading.Tasks;
using Graphify.Sdk;
using Microsoft.Extensions.AI;

public class CodeAnalyzer
{
    public static async Task Main(string[] args)
    {
        // 1. Create options from environment
        var options = new AiProviderOptions(
            Provider: AiProvider.AzureOpenAI,
            Endpoint: GetEnvOrThrow("AZURE_OPENAI_ENDPOINT"),
            ApiKey: GetEnvOrThrow("AZURE_OPENAI_API_KEY"),
            DeploymentName: GetEnvOrThrow("AZURE_OPENAI_DEPLOYMENT"),
            ModelId: "gpt-4o"
        );

        // 2. Create the chat client
        IChatClient client = ChatClientFactory.Create(options);

        // 3. Analyze code
        string codeSnippet = @"
public class Calculator {
    public int Add(int a, int b) => a + b;
    public int Multiply(int a, int b) => a * b;
}";

        string prompt = $"Analyze this C# code and explain its structure:\n\n{codeSnippet}";

        var response = await client.GetResponseAsync(
            [new ChatMessage(ChatRole.User, prompt)]);

        Console.WriteLine("Analysis:");
        Console.WriteLine(response.Text);
    }

    private static string GetEnvOrThrow(string key)
    {
        return Environment.GetEnvironmentVariable(key)
            ?? throw new InvalidOperationException($"Missing environment variable: {key}");
    }
}
```

| Model | Use Case | Cost | Speed |
|---|---|---|---|
| gpt-4o | Production, complex analysis | Higher | Moderate |
| gpt-4o-mini | Development, testing, cost-sensitive | Low | Fast |
| gpt-4-turbo | Legacy, large context windows | Moderate | Moderate |
Store these securely (not in source code):
| Variable | Description | Example |
|---|---|---|
| `AZURE_OPENAI_ENDPOINT` | Resource endpoint | `https://my-resource.openai.azure.com/` |
| `AZURE_OPENAI_API_KEY` | API key (Key 1 or Key 2) | `xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx` |
| `AZURE_OPENAI_DEPLOYMENT` | Deployment name | `gpt-4o` |
Cause: Invalid API key or endpoint
Solution:
- Double-check your API key in Azure Portal → Keys and Endpoint
- Verify the endpoint URL matches your resource
- Ensure no trailing whitespace in credentials
```csharp
// Debug: Print (masked) credentials
Console.WriteLine($"Endpoint: {options.Endpoint}");
Console.WriteLine($"Deployment: {options.DeploymentName}");
Console.WriteLine($"Key (first 10): {options.ApiKey.Substring(0, 10)}...");
```

Cause: Deployment name doesn't exist in your resource
Solution:
- Go to Azure Portal → Azure OpenAI resource → Model deployments
- Verify the deployment name matches exactly (case-sensitive)
- Ensure the model is actually deployed (status should be "Succeeded")
```csharp
// Verify deployment exists
var deploymentName = "gpt-4o"; // Must match Azure Portal exactly
```

Cause: Invalid endpoint URL or wrong region
Solution:
- Copy the full endpoint from Azure Portal → Keys and Endpoint
- Include the trailing slash: `https://my-resource.openai.azure.com/`
- Ensure your subscription has Azure OpenAI access in that region
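A small guard at startup catches a malformed endpoint or a missing trailing slash before the first API call fails. `NormalizeEndpoint` here is a hypothetical helper, not part of the SDK:

```csharp
using System;

// Hypothetical helper: validate the endpoint shape and ensure the trailing slash.
static string NormalizeEndpoint(string endpoint)
{
    if (!Uri.TryCreate(endpoint, UriKind.Absolute, out var uri) || uri.Scheme != "https")
        throw new ArgumentException($"Invalid Azure OpenAI endpoint: {endpoint}");

    // Re-append the slash if it was dropped when copying from the portal.
    return uri.AbsoluteUri.EndsWith("/") ? uri.AbsoluteUri : uri.AbsoluteUri + "/";
}
```

Calling it with `"https://my-resource.openai.azure.com"` yields the slash-terminated form the SDK expects, and an obviously wrong value (an `http://` URL, a bare hostname) fails fast with a clear message.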
Cause: Exceeded token quota
Solution:
- Increase deployment capacity in Azure Portal
- Add backoff/retry logic:

```csharp
int retries = 3;
while (retries-- > 0)
{
    try
    {
        return await client.GetResponseAsync(
            [new ChatMessage(ChatRole.User, prompt)]);
    }
    catch (Exception ex) when (ex.Message.Contains("429") && retries > 0)
    {
        // Exponential backoff: wait 2s, then 4s, before the next attempt
        await Task.Delay(TimeSpan.FromSeconds(Math.Pow(2, 3 - retries)));
    }
}
```
- Use Managed Identity (if running in Azure):
  - Replace `ApiKeyCredential` with `DefaultAzureCredential`
  - No API keys in code or environment variables
- Store Credentials Securely:
  - Use Azure Key Vault for API keys
  - Use environment variables or a secrets manager in CI/CD
- Implement Retry Logic:
  - Handle transient failures (rate limits, timeouts)
  - Use exponential backoff
- Monitor Usage:
  - Track token consumption in Azure Portal
  - Set up alerts as you approach quota
- Use Deployment Aliases:
  - Deploy multiple model versions
  - Switch between versions without code changes
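If you construct the underlying client yourself rather than going through `ChatClientFactory`, keyless auth with the `Azure.AI.OpenAI` and `Azure.Identity` packages looks roughly like this (a sketch, not graphify-dotnet's own wiring):

```csharp
using System;
using System.Threading.Tasks;
using Azure.AI.OpenAI;   // AzureOpenAIClient
using Azure.Identity;    // DefaultAzureCredential
using OpenAI.Chat;

// DefaultAzureCredential resolves a managed identity when running in Azure,
// or your `az login` / IDE credentials during local development -
// no API key is stored anywhere.
var azureClient = new AzureOpenAIClient(
    new Uri("https://my-graphify-openai.openai.azure.com/"),
    new DefaultAzureCredential());

ChatClient chatClient = azureClient.GetChatClient("gpt-4o");
ChatCompletion completion = await chatClient.CompleteChatAsync(
    new UserChatMessage("Analyze this code structure..."));
Console.WriteLine(completion.Content[0].Text);
```

For this to work, the identity must hold an Azure OpenAI data-plane role (e.g. "Cognitive Services OpenAI User") on the resource.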
- Using graphify-dotnet with Ollama (Local Models)
- Using graphify-dotnet with GitHub Copilot SDK
- Azure OpenAI Documentation
- Azure OpenAI Models
- API Reference: AzureOpenAIClientFactory
Once configured:
- Run your first code analysis with `ChatClientFactory.Create(options)`
- Explore the README for full SDK capabilities
- Check out example projects in the repository
Need help? Open an issue on GitHub or check the documentation.