3 changes: 3 additions & 0 deletions .github/workflows/issue_notification.yml
Original file line number Diff line number Diff line change
@@ -47,6 +47,9 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
OPENAI_TOKEN: ${{ secrets.OPENAI_TOKEN }}
OPENAI_SYSTEM_PROMPT: ${{ vars.OPENAI_SYSTEM_PROMPT }}
OPENAI_MODEL: ${{ vars.OPENAI_MODEL }}
OPENAI_TEMPERATURE: ${{ vars.OPENAI_TEMPERATURE }}
OPENAI_MAX_COMPLETION_TOKENS: ${{ vars.OPENAI_MAX_COMPLETION_TOKENS }}
Member commented:
nit: we could have kept the env var name the same and only updated the parameter sent when calling the API

@sbarrio (Contributor Author) commented on Mar 6, 2026:

I initially did that, yes. But I think it would be very easy to forget why we made that change down the line, so I opted to properly rename it all.

SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
SLACK_CHANNEL_ID: ${{ secrets.SLACK_CHANNEL_ID }}
GITHUB_REPOSITORY: ${{ github.repository }}
8 changes: 4 additions & 4 deletions tools/issue_handler/README.md
@@ -47,9 +47,9 @@ This creates a `.env` file that you will need to fill with the required tokens a
- `GITHUB_REPOSITORY` - Repository in format `owner/repo`

**Optional variables** (override defaults):
- `OPENAI_MODEL` - Model to use (default: `chatgpt-4o-latest`)
- `OPENAI_MODEL` - Model to use (default: `gpt-5.2-chat-latest`)
- `OPENAI_TEMPERATURE` - Response creativity 0.0-1.0 (default: `0.4`)
- `OPENAI_MAX_RESPONSE_TOKENS` - Max response length (default: `500`)
- `OPENAI_MAX_COMPLETION_TOKENS` - Max response length (default: `500`)
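
The override mechanics behind these variables can be sketched as follows (a minimal sketch of the documented defaults, not the tool's exact code):

```python
import os

# Each optional variable falls back to its documented default when unset.
model = os.environ.get("OPENAI_MODEL", "gpt-5.2-chat-latest")
temperature = float(os.environ.get("OPENAI_TEMPERATURE", "0.4"))
max_completion_tokens = int(os.environ.get("OPENAI_MAX_COMPLETION_TOKENS", "500"))

print(model, temperature, max_completion_tokens)
```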

## Usage

@@ -80,9 +80,9 @@ The tool runs:
- `OPENAI_SYSTEM_PROMPT` - OpenAI analysis prompt (stored as variable for easier updates)

**Optional GitHub Variables** (override defaults if needed):
- `OPENAI_MODEL` - Model to use (default: `chatgpt-4o-latest`)
- `OPENAI_MODEL` - Model to use (default: `gpt-5.2-chat-latest`)
- `OPENAI_TEMPERATURE` - Response creativity 0.0-1.0 (default: `0.4`)
- `OPENAI_MAX_RESPONSE_TOKENS` - Max response length (default: `500`)
- `OPENAI_MAX_COMPLETION_TOKENS` - Max response length (default: `500`)

**Automatically Provided**:
- `GITHUB_TOKEN` - Provided by GitHub Actions
4 changes: 2 additions & 2 deletions tools/issue_handler/setup_env.sh
@@ -30,9 +30,9 @@ SLACK_CHANNEL_ID=
GITHUB_REPOSITORY=DataDog/dd-sdk-ios

# Optional: Override OpenAI defaults
# OPENAI_MODEL=chatgpt-4o-latest
# OPENAI_MODEL=gpt-5.2-chat-latest
# OPENAI_TEMPERATURE=0.4
# OPENAI_MAX_RESPONSE_TOKENS=500
# OPENAI_MAX_COMPLETION_TOKENS=500
EOL

echo "✨ Created .env file"
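
Loading the generated `.env` into a shell session can be sketched like this (an illustrative snippet using a throwaway `/tmp/demo.env`, not part of the script above; commented-out variables stay unset, so the documented default applies):

```shell
# Create a demo .env mirroring the structure written by setup_env.sh.
cat > /tmp/demo.env <<'EOL'
# OPENAI_MODEL=gpt-5.2-chat-latest
OPENAI_TEMPERATURE=0.4
EOL

unset OPENAI_MODEL
set -a            # export every variable assigned while sourcing
. /tmp/demo.env
set +a

echo "${OPENAI_MODEL:-gpt-5.2-chat-latest}"  # unset, so the default is used
echo "$OPENAI_TEMPERATURE"
```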
8 changes: 4 additions & 4 deletions tools/issue_handler/src/openai_handler.py
@@ -54,7 +54,7 @@ class OpenAIHandler:

# Content limits to prevent abuse
MAX_CONTENT_LENGTH = 4000
MAX_RESPONSE_TOKENS = int(os.environ.get("OPENAI_MAX_RESPONSE_TOKENS", "500"))
MAX_COMPLETION_TOKENS = int(os.environ.get("OPENAI_MAX_COMPLETION_TOKENS", "500"))

def __init__(self, api_key: str):
"""
@@ -71,7 +71,7 @@ def __init__(self, api_key: str):
raise EnvironmentError("OPENAI_SYSTEM_PROMPT environment variable must be set")

# Model can be overridden via env
self.model = os.environ.get("OPENAI_MODEL", "chatgpt-4o-latest")
self.model = os.environ.get("OPENAI_MODEL", "gpt-5.2-chat-latest")

def analyze_issue(self, issue: GithubIssue) -> AnalysisResult:
"""
@@ -117,8 +117,8 @@ def analyze_issue(self, issue: GithubIssue) -> AnalysisResult:
response = self.client.chat.completions.create(
model=self.model,
messages=messages,
temperature=float(os.environ.get("OPENAI_TEMPERATURE", "0.4")),
max_tokens=self.MAX_RESPONSE_TOKENS,
# temperature=float(os.environ.get("OPENAI_TEMPERATURE", "0.4")),
max_completion_tokens=self.MAX_COMPLETION_TOKENS,
response_format={"type": "json_object"}
)
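
In isolation, the new request shape looks like this (a sketch; `request_kwargs` is an illustrative name, not the handler's code). `max_completion_tokens` replaces the older `max_tokens` parameter, and `temperature` is left out to match the commented-out line above, presumably because the newer model only accepts its default temperature:

```python
import os

# Illustrative request parameters after the rename.
request_kwargs = {
    "model": os.environ.get("OPENAI_MODEL", "gpt-5.2-chat-latest"),
    # "temperature" is intentionally omitted, mirroring the commented-out line.
    "max_completion_tokens": int(os.environ.get("OPENAI_MAX_COMPLETION_TOKENS", "500")),
    "response_format": {"type": "json_object"},
}
print(sorted(request_kwargs))
```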

2 changes: 1 addition & 1 deletion tools/issue_handler/tests/test_openai_handler.py
@@ -138,7 +138,7 @@ def test_analyze_issue_success(self, mock_openai):
# Verify OpenAI call
mock_client.chat.completions.create.assert_called_once()
call_args = mock_client.chat.completions.create.call_args
assert call_args[1]["max_tokens"] == 500
assert call_args[1]["max_completion_tokens"] == 500
assert call_args[1]["response_format"] == {"type": "json_object"}
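
The updated assertion can be exercised against a stand-in client with `unittest.mock`, outside the full test suite (a sketch; `client` here is a bare mock, not the repository's fixture):

```python
from unittest.mock import MagicMock

# Stand-in for the OpenAI client; records whatever kwargs it is called with.
client = MagicMock()
client.chat.completions.create(
    model="gpt-5.2-chat-latest",
    messages=[],
    max_completion_tokens=500,
    response_format={"type": "json_object"},
)

call_kwargs = client.chat.completions.create.call_args[1]
assert call_kwargs["max_completion_tokens"] == 500   # the renamed parameter
assert "max_tokens" not in call_kwargs               # the old name is gone
```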

@patch('src.openai_handler.openai.OpenAI')