[Cursor] fix: Resolve test failures on master branch#131
Merged
Conversation
Corrected three test failures in tests/test_llm_api.py identified when running the tests locally on the master branch:

1. test_create_openai_client: updated the assertion to include the `base_url` parameter, which is now passed during client creation.
2. test_query_anthropic: updated the expected model name in the assertion from `claude-3-sonnet-20240229` to the current default, `claude-3-7-sonnet-20250219`.
3. test_query_gemini: refactored the mock setup and assertions to correctly reflect the use of the chat session (`start_chat` and `send_message`) instead of the previous `generate_content` method.
Pull Request Overview
This pull request fixes three test failures in the LLM API by updating assertion parameters and refactoring the mocking strategy to support the new chat session flow.
- Updates the OpenAI client test to include `base_url` in its assertion.
- Adjusts the expected model name in the Anthropic client test.
- Refactors the Gemini client test to use the chat session (`start_chat` and `send_message`) instead of the obsolete `generate_content` method.
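The refactored mocking flow for the Gemini test can be sketched as follows. This is a minimal illustration using `unittest.mock`; the variable names (`mock_model`, `mock_chat_session`) are chosen for the sketch and are not taken verbatim from the test file:

```python
from unittest.mock import MagicMock

# Mock the chat session returned by start_chat(); send_message() yields a
# response object exposing a .text attribute, mirroring the chat-session API.
mock_chat_session = MagicMock()
mock_chat_session.send_message.return_value = MagicMock(text="mocked reply")

# Mock the GenerativeModel instance so start_chat() returns the session above.
mock_model = MagicMock()
mock_model.start_chat.return_value = mock_chat_session

# The code under test would call these in sequence:
chat = mock_model.start_chat(history=[])
response = chat.send_message("hello")

assert response.text == "mocked reply"
mock_model.start_chat.assert_called_once_with(history=[])
mock_chat_session.send_message.assert_called_once_with("hello")
```

This mirrors the shape of the refactor described above: assertions move from `generate_content` onto `start_chat` and `send_message`.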
Comments suppressed due to low confidence (1)
tests/test_llm_api.py:299
- [nitpick] Consider extracting the string literal `"gemini-2.0-flash-exp"` to a named constant to improve readability and ease future maintenance.

    self.mock_gemini_client.GenerativeModel.assert_called_once_with("gemini-2.0-flash-exp")
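Applied to the assertion above, the nitpick might look like the following sketch; the constant name `GEMINI_MODEL_NAME` is hypothetical, not something defined in the repository:

```python
from unittest.mock import MagicMock

# Hypothetical module-level constant: one definition of the model name that
# both the setup and the assertion reference.
GEMINI_MODEL_NAME = "gemini-2.0-flash-exp"

mock_gemini_client = MagicMock()
mock_gemini_client.GenerativeModel(GEMINI_MODEL_NAME)

# The assertion now uses the constant instead of repeating the literal.
mock_gemini_client.GenerativeModel.assert_called_once_with(GEMINI_MODEL_NAME)
```

If the default model ever changes, only the constant needs updating rather than every literal scattered through the tests.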
Comment on lines +114 to +115

    self.mock_gemini_model = MagicMock()  # Mock for the GenerativeModel
    self.mock_gemini_model.start_chat.return_value = self.mock_gemini_chat_session  # Mock start_chat
[nitpick] Consider initializing self.mock_gemini_model only once instead of reassigning it after configuring the chat session, to maintain clarity and reduce potential confusion.
Suggested change

    - self.mock_gemini_model = MagicMock()  # Mock for the GenerativeModel
    - self.mock_gemini_model.start_chat.return_value = self.mock_gemini_chat_session  # Mock start_chat
    + self.mock_gemini_model = MagicMock(  # Mock for the GenerativeModel
    +     start_chat=MagicMock(return_value=self.mock_gemini_chat_session)  # Mock start_chat
    + )
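The two styles are behaviorally equivalent: keyword arguments passed to `MagicMock` configure child attributes at construction time, so the single-expression form avoids the reassignment the reviewer flagged. A minimal sketch, with illustrative names:

```python
from unittest.mock import MagicMock

mock_chat_session = MagicMock()

# Single-expression construction from the suggestion: the start_chat child
# attribute is configured via a MagicMock keyword argument instead of being
# assigned after the fact.
mock_model = MagicMock(
    start_chat=MagicMock(return_value=mock_chat_session)
)

# Calling start_chat() returns the pre-configured chat session.
assert mock_model.start_chat() is mock_chat_session
```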