Conversation
Pre-process t-strings with `tokenize` before parsing them to intercept the literal vs expression parts. Convert them into `py2v_t_string(...)` function calls for the downstream transpiler. Translate these specific calls directly into explicit V `strings.Builder` blocks to satisfy the requirement to map them to a custom builder mechanism. Co-authored-by: yaskhan <3676373+yaskhan@users.noreply.github.com>
👋 Jules, reporting for duty! I'm here to lend a hand with this pull request. When you start a review, I'll add a 👀 emoji to each comment to let you know I've read it. I'll focus on feedback directed at me and will do my best to stay out of conversations between you and other bots or reviewers to keep the noise down. I'll push a commit with your requested changes shortly after. Please note there might be a delay between these steps, but rest assured I'm on the job! For more direct control, you can switch me to Reactive Mode: when this mode is on, I will only act on comments where you specifically mention me. New to Jules? Learn more at jules.google/docs. For security, I will only act on instructions from the user who triggered this task.
Pull request #133 has conflicts that need to be resolved.
The `tokenize` module raises `TokenError` and `IndentationError` when processing incomplete code segments or files with invalid syntax during the tokenizer phase. The `_preprocess_t_strings` step now safely catches these exceptions and returns the original, unmodified source, allowing the main `ast.parse` step to raise the appropriate `SyntaxError` exactly as in normal parsing. Co-authored-by: yaskhan <3676373+yaskhan@users.noreply.github.com>
The `tokenize` module raises several exception types for incomplete code or invalid syntax, including `IndentationError` as well as `TokenError`. We now catch the common base class to safely ignore broken files and let standard parsing take over. This fixes the CI failure in `test_parse_file_invalid_syntax`. Co-authored-by: yaskhan <3676373+yaskhan@users.noreply.github.com>
The `tokenize` module raises several exception types for incomplete code or invalid syntax, including `IndentationError`, `TokenError`, and potentially others. We now catch the base `Exception` class to safely ignore broken files and return the unmodified source, allowing the main `ast.parse` step to raise the appropriate `SyntaxError` exactly as in normal parsing. This fixes the CI failure in `test_parse_file_invalid_syntax`. Co-authored-by: yaskhan <3676373+yaskhan@users.noreply.github.com>
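The resulting control flow can be sketched as below. Only `ast.parse` and the broad `except Exception` fallback are taken from the commit messages above; the `parse_source` wrapper name and its `preprocess` parameter are illustrative.

```python
import ast

def parse_source(source: str, preprocess) -> ast.Module:
    # Any tokenizer-phase failure (TokenError, IndentationError, ...)
    # must not mask the real diagnostic: fall back to the unmodified
    # source so ast.parse raises the same SyntaxError as normal parsing.
    try:
        source = preprocess(source)
    except Exception:
        pass
    return ast.parse(source)
```

With this shape, a file the tokenizer cannot handle produces exactly the `SyntaxError` that plain `ast.parse` would have produced, which is what `test_parse_file_invalid_syntax` asserts.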
Implement PEP 750 support using `tokenize` preprocessing and mapping to V's `strings.Builder`.

PR created automatically by Jules for task 11191209047608774483 started by @yaskhan
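The `strings.Builder` mapping on the V side can be sketched as a small Python emitter. The V calls used (`strings.new_builder`, `write_string`, `str()`) are real V standard-library APIs, but the `emit_t_string_builder` function, its `(kind, value)` parts format, and the `sb` variable name are assumptions for illustration, not the PR's actual emitter.

```python
def emit_t_string_builder(parts, var="sb"):
    # parts: list of ("lit", text) or ("expr", v_code) pairs, as would be
    # extracted from a py2v_t_string(...) call. Literal chunks are written
    # verbatim; expression chunks are written via V string interpolation.
    lines = [f"mut {var} := strings.new_builder(0)"]
    for kind, value in parts:
        if kind == "lit":
            lines.append(f"{var}.write_string('{value}')")
        else:
            lines.append(f"{var}.write_string('${{{value}}}')")
    lines.append(f"{var}.str()")
    return "\n".join(lines)
```

For example, `py2v_t_string('Hello ', name, '!')` would render as an explicit builder block ending in `sb.str()`, satisfying the requirement to map t-strings onto a custom builder mechanism rather than plain V interpolation.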