
Fix star import completions missing in Interpreter mode#2092

Closed
worksbyfriday wants to merge 3 commits into davidhalter:master from worksbyfriday:fix-interpreter-star-imports-2087

Conversation

@worksbyfriday

Summary

Interpreter.complete() returns no results for star imports (from json import *) while Script.complete() works correctly.

Root cause: MixedModuleContext.get_filters() was not delegating to the underlying module value's filter chain. This meant star import filters (from ModuleValue.iter_star_filters()) were never included in the completion results.

Fix: Mirror ModuleContext.get_filters() — get the value's filters, skip the first one (replaced by MixedParserTreeFilter), then yield the remaining filters which include star imports, sub-modules, and module attributes.
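The delegation pattern described above can be sketched with stand-ins. The filter names and `MixedParserTreeFilter` follow the PR text; `ModuleValueStub` and `mixed_get_filters` are simplified illustrations of the idea, not jedi's actual implementation:

```python
from itertools import islice

class ModuleValueStub:
    """Stand-in for jedi's ModuleValue: its filter chain starts with a
    parser-tree filter, followed by star-import, sub-module, and module
    attribute filters."""
    def get_filters(self, origin_scope=None):
        yield "ParserTreeFilter"        # replaced in mixed (Interpreter) mode
        yield "StarImportFilter"        # from ModuleValue.iter_star_filters()
        yield "SubModuleFilter"
        yield "ModuleAttributeFilter"

def mixed_get_filters(value, origin_scope=None):
    """Mirrors the fixed MixedModuleContext.get_filters(): yield the
    mixed-mode tree filter first, then every filter of the underlying
    module value except its (replaced) first one."""
    yield "MixedParserTreeFilter"
    yield from islice(value.get_filters(origin_scope), 1, None)

print(list(mixed_get_filters(ModuleValueStub())))
# ['MixedParserTreeFilter', 'StarImportFilter', 'SubModuleFilter', 'ModuleAttributeFilter']
```

Before the fix, only the first filter was emitted, so the star-import filter (and the completions it carries) never reached `Interpreter.complete()`.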

Before:

```python
>>> jedi.Interpreter("from json import *\ndum", []).complete(2, 3)
[]
```

After:

```python
>>> jedi.Interpreter("from json import *\ndum", []).complete(2, 3)
[<Completion: dump>, <Completion: dumps>]
```

Test plan

  • Added test_star_import_completions to verify star imports work in Interpreter
  • All 153 interpreter tests pass
  • All 144 completion tests pass

Fixes #2087.

🤖 Generated with Claude Code

worksbyfriday and others added 3 commits February 17, 2026 10:40
When completing `object. \n` at position (1, 8), jedi gets the leaf
at the cursor position, which is a newline node. The code only
checked for `endmarker` as a special case where the leaf isn't the
dot, but newline nodes need the same treatment.

Without this fix, the newline leaf gets passed through to
`_complete_trailer`, which eventually calls `_infer_node` on the
dot operator, triggering an AssertionError ("unhandled operator '.'").

The fix adds `newline` to the type check alongside `endmarker`.

Fixes davidhalter#1954

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
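The check this commit describes can be sketched as follows. The `Leaf` class and `leaf_for_completion` are hypothetical stand-ins for parso's tree leaves and jedi's completion code, kept minimal for illustration:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Leaf:
    """Minimal stand-in for a parso tree leaf (hypothetical)."""
    type: str
    value: str
    prev: Optional["Leaf"] = None

    def get_previous_leaf(self):
        return self.prev

def leaf_for_completion(leaf):
    # Before the fix only 'endmarker' was special-cased; a 'newline' leaf
    # slipped through to _complete_trailer and the dot operator eventually
    # reached _infer_node, raising AssertionError ("unhandled operator '.'").
    if leaf.type in ('endmarker', 'newline'):
        return leaf.get_previous_leaf()
    return leaf

# Completing `object. \n` at (1, 8): the leaf at the cursor is the newline.
dot = Leaf('operator', '.')
newline = Leaf('newline', '\n', prev=dot)
print(leaf_for_completion(newline).value)  # '.'
```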
When a function like ``def f(x, **kwargs): return f((x,), **kwargs)``
is analyzed, ``_iter_nodes_for_param`` follows the ``**kwargs`` usage
back to the same function, triggering infinite recursion through
``process_params`` → ``_iter_nodes_for_param`` → ``_goes_to_param_name``
→ back into the inference engine.

Add a module-level ``_processing_params`` set to track which
(function_node, param_name) pairs are currently being processed and
break the cycle.

Fixes davidhalter#2085

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
MixedModuleContext.get_filters() was not delegating to the underlying
module value's filters, causing star imports (from x import *) to be
missing from Interpreter completions while working correctly in Script.

The fix mirrors ModuleContext.get_filters(): get the value's filters,
skip the first one (replaced by MixedParserTreeFilter), then yield the
remaining filters which include star imports, sub-modules, and module
attributes.

Fixes davidhalter#2087.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@davidhalter
Owner

Hi, I'm sorry to say this, but I decided to not work at all with AI generated pull requests/content. There are multiple reasons for this:

  1. I feel like at this point it's pretty subpar compared to what a good engineer can create. I have received many AI generated pull requests and they all have massive problems compared to what I received from actual people. People sometimes can't fix the problems they have without AI, but I regard this as a feature: If you don't understand the problem, I don't have to review your code.
  2. I love people. And I would like to interact with all of you; learn from you; teach you. But if you are simply a bridge to an LLM, I'm just wasting my time. I cannot build any form of relationship.
  3. LLMs are the antithesis to what I like about programming. I like the struggle of programming. Of writing code. I love how things come together after a good coding session. This is all missing. And while some form of AI might make that skill useless in the future, it's not useless now and I would really recommend anyone to avoid LLMs for generating complex code.
  4. Even for written English I generally prefer non-LLM text, because it shows something about the person. I understand that some people are incredibly non-fluent, but that's fine. Just do your best and try to learn English. Otherwise we won't have the ability to talk if we ever meet, which would be a pity.
  5. LLMs are extremely good at generating code/text that looks reasonable. I use them a lot for brainstorming, they are incredible at putting out ideas. They are bad at facts. They are extremely bad at thinking. The positive thing here is that you have a brain and you can use it to think. Think hard first.

Thank you anyway for trying to contribute to Open Source. I would really appreciate if you avoid the usage of LLMs in my projects. It is obviously fine to use LLMs as a search/brainstorming tool with my projects, just don't use it to interact with me.

PS: As a side note I want to mention that I'm probably not the only maintainer that is annoyed by LLM content. So you probably should not use LLMs for other repositories either. My initial emotions towards LLM content were way more negative than this message is, and I assume other maintainers frequently have very negative feelings as well.



Development

Successfully merging this pull request may close these issues.

Wildcard imports work in Script.complete but not Interpreter.complete
