
Update section 4 #1

Merged
blengerich merged 8 commits into AdaptInfer:main from LZYEIL:main
May 4, 2026

Conversation


LZYEIL commented Mar 5, 2026

  1. Update section 4 with motivation and introduction
  2. Merge to check that the rendered PDF/HTML displays the content and references as expected

Author

LZYEIL commented Apr 13, 2026

I've read the requirements of several venues (NMI Review, ACM Computing Surveys): roughly 4k words / 35 pages in length with ~100 references.

Current Draft

  • Motivation + Prompting-based Adaptation + External Knowledge Injection
  • 33 references

What is Next

  • PEFT + Instruction Tuning + Tool Use
  • Expect to have 30+ references

Questions

  • I'd like your feedback on the structure of my section (length / number of references)
  • Should I continue in the current style, or shorten it? (I believe this one section could easily exceed 4k words with 60+ references)
  • Any advice on balancing word count against depth? (e.g., trim the introduction of foundational work in favor of up-to-date work, or vice versa?)

Comment thread on content/04.adapting.md (Outdated), lines +152 to +153

A core assumption of RAG is that the retrieved documents are inherently helpful. However, biomedical literature and clinical records are fraught with conflicting studies. When general-purpose LLMs are fed contradictory context, often termed "retrieval noise", their reasoning can be seriously compromised: the noise actively disrupts the model's causal reasoning and leads to confusion. Recent work has proposed a 'self-reflection' mechanism, in which the model evaluates the relevance of retrieved documents before generation [@doi:10.1093/bioinformatics/btae238]. However, such approaches remain preliminary.
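The self-reflection idea can be sketched as a filtering step between retrieval and generation. In the minimal sketch below, `score_relevance` is a toy lexical-overlap stand-in for the LLM's self-assessment prompt; all names and the threshold are illustrative assumptions, not details from the cited paper:

```python
# Sketch of a self-reflection RAG step: score each retrieved passage for
# relevance to the query before it reaches the generator, and drop
# low-scoring passages as likely retrieval noise.

def score_relevance(query: str, passage: str) -> float:
    """Toy lexical-overlap score; a real system would prompt the LLM here."""
    q_terms = set(query.lower().split())
    p_terms = set(passage.lower().split())
    if not q_terms:
        return 0.0
    return len(q_terms & p_terms) / len(q_terms)

def filter_retrieved(query: str, passages: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only passages the self-reflection step judges relevant enough."""
    return [p for p in passages if score_relevance(query, p) >= threshold]

passages = [
    "metformin lowers hba1c in type 2 diabetes",
    "the stadium opened in 1997 after renovations",
]
kept = filter_retrieved("metformin type 2 diabetes", passages)
# Only the on-topic passage survives the relevance filter.
```

In practice the scoring call is itself an LLM judgment, which is why such pipelines remain preliminary: the filter inherits the model's own biases about what counts as relevant evidence.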
Contributor

It might be helpful to extend this beyond 'retrieval noise' to highlight a more structural issue: RAG systems typically do not distinguish between different levels of medical evidence (e.g., RCTs vs. case reports). This lack of an 'evidence hierarchy' could be a fundamental reason why conflicting information disrupts the model's reasoning, as it cannot prioritize more rigorous studies over anecdotal ones.

Author

I've revised the structure of the section, making it more compact and perspective-driven, and I've briefly incorporated this idea.

@blengerich blengerich merged commit dac3b9e into AdaptInfer:main May 4, 2026
github-actions Bot pushed a commit that referenced this pull request May 4, 2026


3 participants