fix(tests): update imports from kaizen to evolve module #126

Open
visahak wants to merge 1 commit into AgentToolkit:main from visahak:fix/update-test-imports-kaizen-to-evolve

Conversation


@visahak visahak commented Apr 1, 2026

Errors were observed when running uv run pytest:

  • Updated test_filesystem_guidelines.py to use evolve module instead of kaizen
  • Updated test_full_flow.py to use evolve module instead of kaizen
  • Added pytest.importorskip for psycopg in test_postgres_backend.py to skip tests when optional dependency is not installed

Fixes 3 test collection errors that were preventing pytest from running successfully.
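The importorskip guard described above can be sketched in isolation (a minimal illustration assuming pytest is installed; some_missing_package is a placeholder name, not a real dependency):

```python
import pytest

# importorskip returns the module when the import succeeds, so tests can
# use the result directly:
json_mod = pytest.importorskip("json")
assert json_mod.dumps({"ok": True}) == '{"ok": true}'

# When the import fails, importorskip raises pytest's Skipped exception.
# Placed at module level in a test file, pytest turns this into a clean
# skip of the whole file at collection time instead of a collection error.
try:
    pytest.importorskip("some_missing_package")
except pytest.skip.Exception:
    print("skipped: optional dependency unavailable")
```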

Summary by CodeRabbit

  • Tests
    • Improved test infrastructure to gracefully skip database-related tests when optional dependencies are unavailable, ensuring test suite stability across different installation configurations.


coderabbitai bot commented Apr 1, 2026

Warning

Rate limit exceeded

@visahak has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 18 minutes and 38 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 18 minutes and 38 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f7eb061e-d3f2-4329-8c40-bdf7c702e5ad

📥 Commits

Reviewing files that changed from the base of the PR and between 7b0f2db and 317217d.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (1)
  • pyproject.toml
📝 Walkthrough

Walkthrough

A module-level pytest.importorskip("psycopg") call was added to tests/unit/test_postgres_backend.py to skip the entire test file when the optional psycopg dependency is unavailable.

Changes

Cohort: Test Skip Configuration
File(s): tests/unit/test_postgres_backend.py
Summary: Added module-level pytest.importorskip("psycopg") call to gracefully skip the test file when the optional dependency is unavailable.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

🐰 Hop, hop! When dependencies play hide,
Our tests skip with grace and with pride,
No psycopg? No frown, just skip on,
pytest.importorskip keeps testing strong! 🌟

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Title check: ⚠️ Warning
Explanation: The PR title claims to update imports from 'kaizen' to 'evolve' module, but the raw summary shows only a pytest.importorskip addition with no import changes. The title should reflect the actual changes in this specific file (test_postgres_backend.py).
Resolution: Consider using 'fix(tests): add pytest.importorskip for psycopg dependency' or clarify that this PR addresses multiple test files with different changes.

✅ Passed checks (2 passed)

Description Check: ✅ Passed
Explanation: Check skipped - CodeRabbit’s high-level summary is enabled.

Docstring Coverage: ✅ Passed
Explanation: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.



@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (4)
test_full_flow.py (2)

53-54: Content slicing assumes string type.

Same concern as in test_filesystem_guidelines.py: EntityUpdate.content can be str | list | dict, but [:80] will fail for dict types.

Proposed fix
     for update in updates:
-        print(f"   - {update.event}: {update.content[:80]}...")
+        content_preview = str(update.content)[:80]
+        print(f"   - {update.event}: {content_preview}...")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test_full_flow.py` around lines 53 - 54, The test prints assume
update.content is a string and slices with update.content[:80], which will crash
for dict/list types; update references are instances with the field
EntityUpdate.content. Fix by normalizing content to a string before
slicing—e.g., convert non-str content to a reliable string representation (repr
or json.dumps) and then take the first 80 chars—so replace the direct slice on
update.content with a safe stringification of update.content in the loop over
updates.
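The hazard this comment describes can be demonstrated in a few lines (the sample values are illustrative, not taken from the actual tests):

```python
samples = ["plain text", ["a", "b"], {"rule": "prefer evolve"}]

# Slicing works for str and list, but indexing a dict with [:80] raises
# TypeError: unhashable type: 'slice'.
try:
    samples[2][:80]
except TypeError as exc:
    print(f"direct slice failed: {exc}")

# Stringifying first is safe for all three shapes:
previews = [str(s)[:80] for s in samples]
assert previews[0] == "plain text"
```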

16-50: Consider ensuring the namespace exists before use.

Unlike test_filesystem_guidelines.py, this script doesn't verify the namespace exists before calling update_entities. Based on the base backend implementation in evolve/backend/base.py, update_entities calls _validate_namespace which raises NamespaceNotFoundException if the namespace doesn't exist.

If this is an intentional design choice (assuming setup is done elsewhere), you could document that assumption. Otherwise, adding a namespace check similar to test_filesystem_guidelines.py would make the test more robust for standalone execution.

Proposed fix
+from evolve.schema.exceptions import NamespaceNotFoundException
+
 def test_full_flow():
     """Test creating and retrieving guidelines."""
     print("=" * 60)
     print("Testing Full Evolve Flow")
     print("=" * 60)
 
     client = EvolveClient()
+
+    # Ensure namespace exists
+    namespace_id = evolve_config.namespace_id
+    try:
+        ns = client.get_namespace_details(namespace_id)
+        print(f"Using namespace: {ns.id} (entities: {ns.num_entities})")
+    except NamespaceNotFoundException:
+        print(f"Creating namespace: {namespace_id}")
+        client.create_namespace(namespace_id)
 
     # Step 1: Create some sample guidelines
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test_full_flow.py` around lines 16 - 50, The test calls
EvolveClient.update_entities without ensuring the namespace exists, which can
trigger NamespaceNotFoundException from the backend's _validate_namespace; add a
pre-check similar to test_filesystem_guidelines.py by calling
EvolveClient.get_namespace or a provided namespace-existence helper (or create
the namespace via EvolveClient.create_namespace) for evolve_config.namespace_id
before invoking update_entities, and handle or fail clearly if the namespace is
missing so the test is robust for standalone execution.
test_filesystem_guidelines.py (2)

29-34: Consider catching the specific exception.

Using bare except Exception is broad. Based on the relevant code snippet in evolve/backend/filesystem.py, the get_namespace_details method raises NamespaceNotFoundException when the namespace doesn't exist.

Proposed fix
+from evolve.schema.exceptions import NamespaceNotFoundException
+
 # ... later in the function ...
 
     try:
         ns = client.get_namespace_details(namespace_id)
         print(f"Namespace: {ns.id} (entities: {ns.num_entities})")
-    except Exception:
+    except NamespaceNotFoundException:
         print(f"Creating namespace: {namespace_id}")
         client.create_namespace(namespace_id)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test_filesystem_guidelines.py` around lines 29 - 34, Replace the broad except
block around client.get_namespace_details(namespace_id) with a specific handler
for NamespaceNotFoundException: catch NamespaceNotFoundException to detect a
missing namespace and call client.create_namespace(namespace_id) in that branch,
and let other exceptions propagate (or handle them separately) so only the
intended "namespace not found" case triggers creation; reference
get_namespace_details, NamespaceNotFoundException, and create_namespace on the
client to locate the change.

59-60: Content slicing assumes string type.

EntityUpdate.content is typed as str | list | dict (per evolve/schema/conflict_resolution.py). Slicing with [:60] will fail with TypeError for dict content. While the test data here is all strings, consider a defensive approach:

Proposed fix
     for update in updates:
-        print(f"  - {update.event}: {update.content[:60]}...")
+        content_preview = str(update.content)[:60]
+        print(f"  - {update.event}: {content_preview}...")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test_filesystem_guidelines.py` around lines 59 - 60, The loop that prints
updates assumes update.content is a string and uses slicing (for update in
updates -> print(f"  - {update.event}: {update.content[:60]}...")), but
EntityUpdate.content can be str | list | dict; update.content[:60] will raise
TypeError for dict/list. Fix by normalizing content before slicing: convert
non-str values (list/dict) to a string representation (e.g., json.dumps or
str()) then safely take the first 60 characters, and use that normalized
variable in the print; ensure you handle None if applicable. Reference the loop
variable update, the updates iterable, and EntityUpdate.content when applying
the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 8247da80-72c8-43af-b13f-cbadbca85d31

📥 Commits

Reviewing files that changed from the base of the PR and between 824e4d9 and ea22529.

📒 Files selected for processing (3)
  • test_filesystem_guidelines.py
  • test_full_flow.py
  • tests/unit/test_postgres_backend.py

@visahak visahak force-pushed the fix/update-test-imports-kaizen-to-evolve branch 3 times, most recently from 7b0f2db to 07545b6 on April 1, 2026 at 20:40
Added psycopg[binary]>=3.1 and pgvector>=0.3 to dev dependency group to ensure all unit tests can run during development and CI.

This fixes the test collection error for test_postgres_backend.py while keeping postgres support optional for end users (via the pgvector optional dependency group).
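As a sketch, the layout this commit describes might look like the following in pyproject.toml. Only the psycopg[binary]>=3.1 and pgvector>=0.3 constraints come from the commit message; the table names, group names, and pytest entry are assumptions about this project's configuration.

```toml
# Hypothetical pyproject.toml fragment, not the project's actual file.

# Optional dependency group for end users who want postgres support:
[project.optional-dependencies]
pgvector = [
    "psycopg[binary]>=3.1",
    "pgvector>=0.3",
]

# PEP 735 dependency group, installed by uv during development and CI:
[dependency-groups]
dev = [
    "pytest",
    "psycopg[binary]>=3.1",
    "pgvector>=0.3",
]
```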
@visahak visahak force-pushed the fix/update-test-imports-kaizen-to-evolve branch from 07545b6 to 317217d on April 1, 2026 at 20:42