Conversation
- …ython >= 3.12; (fix): removed incorrect Pinecone client init in get_rag.py; (fix): centralized the Groq model setting in llm_config.py
- …arch Tool; (fix): implemented new fact-check subgraph; (enhancement): sentiment analysis and fact check run in parallel (clean_text → extract_claims → plan_searches → execute_searches → verify_facts)
- …odes, i.e. sentiment node and fact-check node; (chore): /process route awaits the LangGraph build/compile
- Updated chunk_rag_data.py to support both the new (claim, status) and old (original_claim, verdict) key formats from the fact-checker.
- Added logic to correctly parse a perspective whether it is a Pydantic model or a dict.
- Implemented skipping of malformed facts instead of raising ValueError.
- Ensured compatibility with the new parallel DuckDuckGo fact-checking workflow.
- Updated the 'PerspectiveOutput' Pydantic model to handle reasoning as a list for claims.
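As a sketch of the dual-format support described above (the function name and surrounding logic are hypothetical, not the actual chunk_rag_data.py code):

```python
# Hypothetical sketch of the dual-format normalization described above;
# only the (claim, status) / (original_claim, verdict) key pairs come from
# the commit notes.
from typing import Optional

def normalize_fact(fact: dict) -> Optional[dict]:
    """Map old- or new-style fact dicts to one shape, skipping malformed ones."""
    claim = fact.get("claim") or fact.get("original_claim")
    status = fact.get("status") or fact.get("verdict")
    if not claim or status is None:
        return None  # skip malformed facts instead of raising ValueError
    return {"claim": claim, "status": status}

facts = [
    {"claim": "X happened", "status": "True"},             # new format
    {"original_claim": "Y happened", "verdict": "False"},  # old format
    {"reason": "missing keys"},                            # malformed -> skipped
]
normalized = [n for f in facts if (n := normalize_fact(f)) is not None]
```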
📝 Walkthrough

This PR introduces configurable LLM model selection via environment variables, implements an asynchronous four-step fact-checking pipeline with claim extraction, search planning, and verification nodes, restructures the LangGraph workflow to support parallel sentiment and fact-checking analysis, converts the pipeline to async/await execution, updates Pinecone client instantiation, and adds a complete landing page and perspective page frontend with supporting components and data-fetching hooks.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User as Frontend User
participant SearchBar as SearchBar Component
participant PerspectivePage as Perspective Page
participant API as /api/process & /api/bias
participant LangGraph as LangGraph Workflow
participant ParallelNode as Parallel Analysis
participant SentimentPipeline as Sentiment Analysis
participant FactCheckPipeline as Fact-Check Pipeline
User->>SearchBar: Enter URL & Select Provider
SearchBar->>SearchBar: Validate URL
SearchBar->>SearchBar: Store in sessionStorage
SearchBar->>PerspectivePage: Navigate to /perspective
PerspectivePage->>PerspectivePage: Read sessionStorage
PerspectivePage->>API: POST /api/process {url}
API->>LangGraph: invoke(state)
LangGraph->>ParallelNode: parallel_analysis(state)
par Parallel Execution
ParallelNode->>SentimentPipeline: run_sentiment_sdk
SentimentPipeline->>SentimentPipeline: Analyze sentiment
SentimentPipeline-->>ParallelNode: sentiment result
and
ParallelNode->>FactCheckPipeline: run_fact_check_pipeline
FactCheckPipeline->>FactCheckPipeline: extract_claims_node
FactCheckPipeline->>FactCheckPipeline: plan_searches_node
FactCheckPipeline->>FactCheckPipeline: execute_searches_node
FactCheckPipeline->>FactCheckPipeline: verify_facts_node
FactCheckPipeline-->>ParallelNode: claims, search_results, facts
end
ParallelNode-->>LangGraph: merged state
LangGraph->>LangGraph: generate_perspective
LangGraph->>LangGraph: judge & store
LangGraph-->>API: result with short_title, reasoning_steps
API-->>PerspectivePage: analysis data
PerspectivePage->>PerspectivePage: Render perspective + bias
PerspectivePage-->>User: Display analysis
```
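The parallel branch above can be read as a straightforward fan-out/merge. The sketch below only illustrates that shape, assuming plain async functions: the function names come from the diagram, but the bodies are placeholders, not the repository's actual implementation.

```python
# Hedged sketch of the fan-out above: both analyses run concurrently and
# their partial results are merged back into one state dict.
import asyncio

async def run_sentiment_sdk(state: dict) -> dict:
    return {"sentiment": "neutral"}  # placeholder branch result

async def run_fact_check_pipeline(state: dict) -> dict:
    return {"facts": [], "fact_check_done": True}  # placeholder branch result

async def parallel_analysis(state: dict) -> dict:
    sentiment, fact_check = await asyncio.gather(
        run_sentiment_sdk(state), run_fact_check_pipeline(state)
    )
    return {**state, **sentiment, **fact_check}

print(asyncio.run(parallel_analysis({"cleaned_text": "..."})))
```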
```mermaid
sequenceDiagram
participant FactCheckPipeline as Fact-Check Pipeline
participant ExtractNode as extract_claims_node
participant PlanNode as plan_searches_node
participant ExecuteNode as execute_searches_node
participant VerifyNode as verify_facts_node
participant Groq as Groq LLM
participant DuckDuckGo as DuckDuckGo Search
FactCheckPipeline->>ExtractNode: state with cleaned_text
ExtractNode->>Groq: Request claim extraction (LLM_MODEL)
Groq-->>ExtractNode: List of claims
ExtractNode-->>FactCheckPipeline: {claims: [...]}
FactCheckPipeline->>PlanNode: state with claims
PlanNode->>Groq: Request search queries per claim
Groq-->>PlanNode: JSON search queries
PlanNode-->>FactCheckPipeline: {search_queries: [...]}
FactCheckPipeline->>ExecuteNode: state with search_queries
ExecuteNode->>DuckDuckGo: Execute parallel searches
DuckDuckGo-->>ExecuteNode: Results per query
ExecuteNode-->>FactCheckPipeline: {search_results: [...]}
FactCheckPipeline->>VerifyNode: state with claims & search_results
VerifyNode->>Groq: Verify claims against evidence
Groq-->>VerifyNode: JSON facts with status/reason
VerifyNode-->>FactCheckPipeline: {facts: [...], fact_check_done: true}
```
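Read as code, this second diagram is a linear chain of state-updating nodes. A minimal sketch under that assumption follows; the real pipeline wires these as LangGraph nodes calling Groq and DuckDuckGo, so the placeholder bodies here are illustrative only.

```python
# Minimal sketch of the four-node chain above; each node returns a partial
# state update that is merged forward into the shared state dict.
import asyncio

async def extract_claims_node(state):
    return {"claims": ["example claim"]}

async def plan_searches_node(state):
    return {"search_queries": [{"claim_id": 0, "query": "example claim"}]}

async def execute_searches_node(state):
    return {"search_results": [{"claim_id": 0, "result": "evidence snippet"}]}

async def verify_facts_node(state):
    return {"facts": [{"claim": "example claim", "status": "Unverified"}],
            "fact_check_done": True}

async def run_fact_check_pipeline(state: dict) -> dict:
    for node in (extract_claims_node, plan_searches_node,
                 execute_searches_node, verify_facts_node):
        state = {**state, **await node(state)}
    return state

print(asyncio.run(run_fact_check_pipeline({"cleaned_text": "..."})))
```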
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
backend/app/modules/facts_check/llm_processing.py (2)
144-149: ⚠️ Potential issue | 🔴 Critical

Unhandled JSON parse failure will cause NameError.

If json.loads(content) fails, parsed is never assigned, but line 149 still attempts to append it. This will raise a NameError: name 'parsed' is not defined.

🐛 Proposed fix

```diff
      # Try parsing the JSON response
      try:
          parsed = json.loads(content)
+         results_list.append(parsed)
      except Exception as parse_err:
          logger.error(f"LLM JSON parse error: {parse_err}")
-
-     results_list.append(parsed)
+         results_list.append({
+             "verdict": "Unknown",
+             "explanation": f"Failed to parse response: {parse_err}",
+             "original_claim": claim,
+             "source_link": source,
+         })
```
151-155: ⚠️ Potential issue | 🟡 Minor

Return value uses only the last claim from the loop.

The returned dictionary includes "claim": claim, but claim is the variable from the last iteration of the loop. If multiple claims are verified, this doesn't accurately represent the results. Consider removing this field or restructuring the return value.

🔧 Suggested fix

```diff
  return {
-     "claim": claim,
      "verifications": results_list,
      "status": "success",
  }
```

backend/app/utils/prompt_templates.py (1)
48-60: ⚠️ Potential issue | 🟡 Minor

Missing closing backticks for JSON code block.

The JSON code block opens with ```json on line 48 but is not closed with ``` before the template ends. This could confuse the LLM about where the JSON format example ends.

🔧 Proposed fix

````diff
      "reasoning_steps": [
          "<step 1>",
          "<step 2>",
          "<step 3>",
          "...",
          "<final reasoning>"
      ]
  }}
+```
  """)
````
🤖 Fix all issues with AI agents
In `@backend/app/modules/fact_check_tool.py`:
- Around line 101-110: In run_one_search, remove the unused exception binding by
changing the except clause from "except Exception as e:" to "except Exception:"
so the variable e is not declared; locate the async def run_one_search(q)
function and update only the except line to drop the unused exception variable
while preserving the existing return behavior.
- Around line 125-143: The prompt asks the model to return a top-level JSON list
but the call to client.chat.completions.create uses
response_format={"type":"json_object"}, which cannot parse bare arrays; update
the prompt or the response_format so they match: either change the system/user
prompt (the context string built in this block and sent to
client.chat.completions.create) to request an object like {"facts": [...]}
instead of a bare list, or switch response_format to a schema-based JSON format
(e.g., json_schema) that allows an array top-level; make this change where the
completion is invoked (client.chat.completions.create / LLM_MODEL /
response_format) so the model output and parser align.
In `@backend/app/modules/vector_store/chunk_rag_data.py`:
- Around line 108-110: The broad except in the chunking flow (inside
chunk_rag_data) swallows errors and redundantly formats the exception; narrow
the handler to only expected exceptions (e.g., ValueError, ParsingError) or, if
failures should be visible, log via logger.exception("Failed to chunk the data")
and re-raise the exception instead of returning [] so callers can distinguish
catastrophic failure from “no chunks”; alternatively change the function
signature to return a result tuple (chunks, error_message) and on error return
([], "Chunking failed") while logging with logger.exception("Failed to chunk the
data") — also remove the redundant {e} interpolation because logger.exception
already includes the traceback.
In `@backend/pyproject.toml`:
- Around line 18-21: The pyproject comment suggesting removal of "nltk" is
incorrect because backend/app/modules/scraper/cleaner.py directly uses nltk
(e.g., calls like nltk.download(...) and accesses stopwords/punkt via
stopwords.words or punkt tokenizers); either keep "nltk>=3.9.1" in
pyproject.toml and remove the inline “should remove this” comment, or refactor
cleaner.py to eliminate all nltk usage (replace nltk.download and stopwords/
punkt usage with an alternative library or bundled resources) before removing
the dependency—locate usages in cleaner.py and update pyproject accordingly.
In `@frontend/app/perspective/page.tsx`:
- Around line 16-18: Update the import in page.tsx to match the actual component
filename by replacing the incorrect "RightSideBar" import with "RightSidebar"
(i.e., import { RightSidebar } from "@/components/perspective/RightSidebar");
also search for and fix any other occurrences where "RightSideBar" is used to
ensure all imports match the component's filename casing.
In `@frontend/components/landing/Button.tsx`:
- Around line 5-20: Extend ButtonProps to include standard button attributes by
using React.ButtonHTMLAttributes<HTMLButtonElement> (or intersecting it) so the
Button component accepts type, aria-*, disabled, className, etc.; set a safe
default prop type="button" inside the Button component; merge incoming className
with the computed sizeClasses and existing classes; preserve and forward onClick
and all other native props (spread the remaining props onto the <button>).
Update references to ButtonProps, Button, and sizeClasses accordingly.
In `@frontend/components/landing/FeatureCard.tsx`:
- Around line 29-31: The title h3 in FeatureCard (the element rendering {title})
currently collapses `\n` to spaces; update the className on that h3 (in the
FeatureCard component) to include the Tailwind utility `whitespace-pre-line` so
embedded newlines are preserved; if classNames are composed or merged (e.g., via
a helper or variable), add `whitespace-pre-line` to that composition so the
change applies regardless of how the classes are built.
- Around line 4-25: The FeatureCard component's image prop currently allows
string which can cause Next.js runtime errors because the <Image> usage lacks
width/height/fill; change FeatureCardProps.image to only StaticImageData and add
the corresponding type import (e.g., import type { StaticImageData } from
'next/image') so callers must pass static imports, and keep the existing <Image>
usage unchanged in the FeatureCard function.
In `@frontend/components/landing/SearchBar.tsx`:
- Around line 63-69: The URL input in SearchBar (the input using value={url} and
onChange={handleUrlChange}) lacks an accessible label and a programmatic
relationship to its error text; add a visible or visually-hidden <label> tied to
the input via an id/htmlFor on the input, and give the error message element a
unique id (e.g., urlError) then set aria-describedby on the input to that id so
screen readers announce the error; also ensure the error element uses an
appropriate live region (role="alert" or aria-live="assertive") so updates are
announced.
In `@frontend/components/landing/StatsSection.tsx`:
- Line 27: The displayed rating value in the StatsSection component is formatted
as "4.3stars"; update the value prop on the Stat usage (the JSX line with Stat
value="4.3stars" label="Ratings") to a properly spaced and capitalized string
such as "4.3 Stars" (or "4.3 ★" if you prefer a symbol) so the UI shows a
polished rating; adjust only the value prop passed to the Stat component.
In `@frontend/hooks/use-perspective.ts`:
- Around line 36-66: Replace the hard-coded "http://127.0.0.1:5555" API base in
use-perspective.ts with a configurable base taken from
process.env.NEXT_PUBLIC_API_BASE_URL (with a sensible fallback like
"http://127.0.0.1:7860"); update the fetch calls that reference the bias and
process endpoints (the POST to "/api/bias" and "/api/process" inside the blocks
guarded by storedBias and storedAnalysis) to build their URLs using that base;
ensure existing logic around setLoading, setBiasData and sessionStorage remains
unchanged and that the hook reads NEXT_PUBLIC_API_BASE_URL at module load or at
hook start so deployments can override it via .env.local/.env.production.
In `@frontend/tailwind.config.ts`:
- Around line 71-73: The Tailwind fontFamily defines sora but the font isn't
loaded; in frontend/app/layout.tsx import the Sora loader from next/font/google
(Sora) and instantiate it (e.g., const sora = Sora(...)), then apply the
resulting sora.className to the document body or add the loaded font to global
CSS so the Tailwind fontFamily.sora is actually used; update references like
fontFamily.sora and components (BiasGauge.tsx, RightSidebar.tsx,
perspective/page.tsx) rely on this applied className.
🧹 Nitpick comments (9)
frontend/components/landing/Footer.tsx (1)
5-12: Make the copyright year dynamic to avoid staleness.

Low-impact improvement to prevent manual updates each year.

♻️ Proposed change

```diff
- <p className="font-medium text-sm text-gray-400 text-center md:text-right">
-   © 2026 AOSSIE. Combating bias through AI-powered perspective analysis.
- </p>
+ <p className="font-medium text-sm text-gray-400 text-center md:text-right">
+   © {new Date().getFullYear()} AOSSIE. Combating bias through AI-powered perspective analysis.
+ </p>
```

frontend/components/landing/StatsSection.tsx (1)
1-1: Unused React import with React 19 JSX transform.

With React 19 and the modern JSX transform, explicit import React from "react" is no longer required for JSX syntax. Since React isn't directly referenced elsewhere in this file, this import can be removed.

🧹 Suggested cleanup

```diff
-import React from "react";
-
```

backend/app/modules/vector_store/chunk_rag_data.py (1)
52-57: Check model_dump() before dict() for Pydantic v2 compatibility.

In Pydantic v2, models have both .dict() (deprecated) and .model_dump(). The current order will use the deprecated method on v2 models since hasattr will find .dict() first.

♻️ Proposed fix to prefer Pydantic v2 method

```diff
- if hasattr(perspective_data, "dict"):
-     p_data = perspective_data.dict()
- elif hasattr(perspective_data, "model_dump"):
+ if hasattr(perspective_data, "model_dump"):
      p_data = perspective_data.model_dump()
+ elif hasattr(perspective_data, "dict"):
+     p_data = perspective_data.dict()
  elif isinstance(perspective_data, dict):
      p_data = perspective_data
```

frontend/tailwind.config.ts (2)
74-81: Consider using rem units for font sizes to improve accessibility.

Hardcoded px values for font sizes don't scale with user browser font preferences, which can impact accessibility for users who need larger text. Consider converting these to rem units (e.g., hero: "4rem" instead of "64px").

Additionally, the decimal values like 14.361px and 11.967px appear to be directly exported from Figma; rounding to whole pixels (e.g., 14px, 12px) would be cleaner without visible difference.
92-93: Consider extracting the large inline SVG to a separate file.

The hero-gradient data URI is quite lengthy. For maintainability, consider moving this SVG to a separate file (e.g., public/gradients/hero-gradient.svg) and referencing it as url('/gradients/hero-gradient.svg').

backend/app/llm_config.py (1)
1-4: Good centralization of model configuration.

The pattern of extracting configuration to a dedicated module is clean and enables consistent model selection across the codebase.

However, the LLM_MODEL environment variable is not documented in the README's backend setup section (only HF_TOKEN was added). Consider adding it to the .env example:

LLM_MODEL=llama-3.3-70b-versatile  # optional, defaults to llama-3.3-70b-versatile
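For reference, the centralized module this comment praises can be as small as a one-line environment lookup; a minimal sketch, assuming the default named above (the actual llm_config.py may differ):

```python
# llm_config.py - hedged sketch; assumes the module reads LLM_MODEL from the
# environment and falls back to the default named in this review comment.
import os

LLM_MODEL = os.getenv("LLM_MODEL", "llama-3.3-70b-versatile")
```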
55-57: Cap fallback claims to avoid unbounded searches.

If raw_output is verbose, this can trigger many outbound searches. Consider limiting to a small max.

♻️ Proposed tweak (limit fallback claims)

```diff
- if not claims and raw_output:
-     claims = [line.strip() for line in raw_output.split('\n') if len(line.strip()) > 10]
+ if not claims and raw_output:
+     claims = [line.strip() for line in raw_output.split('\n') if len(line.strip()) > 10][:10]
```

frontend/components/landing/FeaturesSection.tsx (1)
45-48: Prefer a stable key over index.

Using index as a key can cause reconciliation issues if the list ever changes.

♻️ Suggested tweak

```diff
- {features.map((feature, index) => (
+ {features.map((feature) => (
    <FeatureCard
-     key={index}
+     key={feature.title}
      image={feature.image}
      title={feature.title}
      description={feature.description}
    />
  ))}
```

frontend/components/perspective/BiasGauge.tsx (1)
1-37: Avoid gradient ID collisions and clamp score range.

Using the score to build the gradient id can collide when multiple gauges share the same score, and out-of-range scores can yield invalid dash arrays. Consider clamping and using useId() for a stable unique id.

♻️ Suggested refactor

```diff
-import React from "react";
+import React, { useId } from "react";

 export function BiasGauge({ score, gradientColors, textColor, label }: BiasGaugeProps) {
+  const clampedScore = Math.min(100, Math.max(0, score));
+  const gradientId = useId();
   return (
     <div className="flex flex-col items-center justify-center py-4">
       <div className="relative w-[180px] h-[100px] mb-2">
         <svg viewBox="0 0 100 55" className="w-full h-full overflow-visible">
           {/* Background Track */}
           <path
             d="M 10 50 A 40 40 0 0 1 90 50"
             fill="none"
             stroke="#2A2E35"
             strokeWidth="8"
             strokeLinecap="round"
           />
           {/* Progress Track */}
           <path
             d="M 10 50 A 40 40 0 0 1 90 50"
             fill="none"
-            stroke={`url(#biasGradient-${score})`}
+            stroke={`url(#${gradientId})`}
             strokeWidth="8"
             strokeLinecap="round"
-            strokeDasharray={`${(score / 100) * 126} 126`}
+            strokeDasharray={`${(clampedScore / 100) * 126} 126`}
             className="transition-all duration-1000 ease-out"
           />
           <defs>
-            <linearGradient id={`biasGradient-${score}`} x1="0%" y1="0%" x2="100%" y2="0%">
+            <linearGradient id={gradientId} x1="0%" y1="0%" x2="100%" y2="0%">
               <stop offset="0%" stopColor={gradientColors[0]} />
               <stop offset="100%" stopColor={gradientColors[1]} />
             </linearGradient>
           </defs>
         </svg>
       </div>
       <div className="text-center">
-        <div className={`text-3xl font-bold font-sora ${textColor}`}>{Math.round(score)}%</div>
+        <div className={`text-3xl font-bold font-sora ${textColor}`}>{Math.round(clampedScore)}%</div>
         <div className="text-sm text-gray-400 font-sora">{label}</div>
       </div>
     </div>
   );
 }
```
```python
async def run_one_search(q):
    try:
        query_str = q.get("query")
        c_id = q.get("claim_id")

        res = await asyncio.to_thread(search_tool.invoke, query_str)
        logger.info(f"Search Result for Claim {c_id}: {res[:200]}...")
        return {"claim_id": c_id, "result": res}
    except Exception as e:
        return {"claim_id": q.get("claim_id"), "result": "Search failed"}
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if file exists and read lines 101-110
if [ -f "backend/app/modules/fact_check_tool.py" ]; then
echo "=== Lines 101-110 ==="
sed -n '101,110p' backend/app/modules/fact_check_tool.py
echo ""
echo "=== Context around lines 101-110 (95-115) ==="
sed -n '95,115p' backend/app/modules/fact_check_tool.py
else
echo "File not found at backend/app/modules/fact_check_tool.py"
find . -name "fact_check_tool.py" -type f
fi

Repository: AOSSIE-Org/Perspective
Length of output: 1382
Remove unused exception variable in search execution.
The exception variable e is bound but never used in the except clause. Drop the binding:
Minimal fix
- except Exception as e:
+ except Exception:
return {"claim_id": q.get("claim_id"), "result": "Search failed"}🧰 Tools
🪛 Ruff (0.14.14)
[warning] 108-108: Consider moving this statement to an else block
(TRY300)
[warning] 109-109: Do not catch blind exception: Exception
(BLE001)
[error] 109-109: Local variable e is assigned to but never used
Remove assignment to unused variable e
(F841)
| context = "Verify these claims based on the search results:\n" | ||
| for item in results: | ||
| c_id = item["claim_id"] | ||
| if c_id < len(claims): | ||
| context += f"\nClaim: {claims[c_id]}\nEvidence: {item['result']}\n" | ||
|
|
||
| try: | ||
| response = await asyncio.to_thread( | ||
| client.chat.completions.create, | ||
| messages=[ | ||
| { | ||
| "role": "system", | ||
| "content": "You are a strict fact checker. Return a JSON list of objects with keys: 'claim', 'status' (True/False/Unverified), and 'reason'." | ||
| }, | ||
| {"role": "user", "content": context} | ||
| ], | ||
| model=LLM_MODEL, | ||
| temperature=0.0, | ||
| response_format={"type": "json_object"} |
🧩 Analysis chain
🏁 Script executed:
# Check which client is being imported in the file
head -50 backend/app/modules/fact_check_tool.py | grep -E "^import|^from"

Repository: AOSSIE-Org/Perspective
Length of output: 302
🏁 Script executed:
# Look for the client initialization to confirm it's OpenAI
grep -n "client\s*=" backend/app/modules/fact_check_tool.py | head -5Repository: AOSSIE-Org/Perspective
Length of output: 117
🏁 Script executed:
# Check the imports at the top of the file to see which library is used
cat backend/app/modules/fact_check_tool.py | head -30

Repository: AOSSIE-Org/Perspective
Length of output: 805
🌐 Web query:
OpenAI API response_format json_object top-level array support JSON schema
💡 Result:
-
response_format: { "type": "json_object" }(“JSON mode”) is the older mode that OpenAI documents as a JSON object response format and discusses in terms of producing/validating a “JSON object” (including edge cases where it may be an incomplete JSON object). Practically: don’t rely on it for top‑level arrays; treat it as “top-level object expected.” [1], [2] -
If you need a top-level array, use Structured Outputs:
response_format: { "type": "json_schema", "json_schema": { ... } }. With this you can set your schema’s root to{ "type": "array", ... }(or anything else allowed), and withstrict: truethe model is constrained to match the supplied schema (noting only a subset of JSON Schema is supported in strict mode). [1], [3]
Sources:
[1] OpenAI API Reference — Responses: response_format (json_object vs json_schema) (platform.openai.com)
[2] OpenAI Help Center — JSON mode notes (response_format: json_object) (help.openai.com)
[3] OpenAI — “Introducing Structured Outputs in the API” (JSON Schema via response_format, strict: true, limitations) (openai.com)
Citations:
- 1: https://platform.openai.com/docs/api-reference/responses/list?utm_source=openai
- 2: https://help.openai.com/en/articles/8555517-function-calling-in-the-openai-api%23.eps?utm_source=openai
- 3: https://openai.com/index/introducing-structured-outputs-in-the-api/?utm_source=openai
🌐 Web query:
Groq API response_format json_object specification documentation
💡 Result:
Groq API response_format: {"type":"json_object"} (JSON Object Mode)
Where it’s documented
- Groq Chat Completions API (
POST /openai/v1/chat/completions) supports aresponse_formatobject; setting it to{"type":"json_object"}enables the “older JSON mode,” which ensures the model’s message content is valid JSON syntax. [1] - Groq’s “Structured Outputs” docs describe JSON Object Mode as JSON-syntax validation only (no schema enforcement) and recommend
json_schema when you need schema guarantees. [2] - Groq Community FAQ reiterates usage:
response_format={"type":"json_object"}. [3]
Specification / behavior
- Request shape:
response_format is an object with at least:
- Guarantee: valid JSON syntax (not schema adherence). [1][2]
- Recommendation: prefer
{"type":"json_schema", "json_schema": {...}}on supported models for schema-conformant outputs. [1][2] - Important requirement: include explicit instructions in your prompt telling the model to output JSON (e.g., “Respond with a JSON object only”). [2]
Minimal example (Chat Completions)
{
"model": "llama-3.3-70b-versatile",
"messages": [
{ "role": "system", "content": "Return ONLY a JSON object with keys: answer (string)." },
{ "role": "user", "content": "What is 2+2?" }
],
"response_format": { "type": "json_object" }
}

Sources
[1] Groq API Reference (Chat Completions response_format) (console.groq.com)
[2] Groq Docs — Structured Outputs (JSON Object Mode section) (console.groq.com)
[3] Groq Community FAQ — JSON mode (community.groq.com)
Citations:
- 1: https://console.groq.com/docs/api-reference
- 2: https://console.groq.com/docs/structured-outputs?utm_source=openai
- 3: https://community.groq.com/t/what-is-json-mode-and-how-do-i-use-it/67?utm_source=openai
🌐 Web query:
Groq API json_object top-level array response support
💡 Result:
Groq’s response_format: { "type": "json_object" } (“JSON mode”) is intended to produce a top-level JSON object, not a bare array—so a response like [...] is not supported/guaranteed in json_object mode. (console.groq.com)
If you need a top-level array, use Structured Outputs (response_format.type = "json_schema") and set the root schema to { "type": "array", ... } (arrays are supported by Structured Outputs). (console.groq.com)
Workaround if you must stay on json_object: wrap the array in an object, e.g. { "items": [ ... ] }. (console.groq.com)
Citations:
- 1: https://console.groq.com/docs/api-reference?utm_source=openai
- 2: https://console.groq.com/docs/structured-outputs?utm_source=openai
- 3: https://console.groq.com/docs/structured-outputs
Align prompt with response_format to avoid invalid JSON.
The prompt asks for a top‑level list, but response_format={"type":"json_object"} requires an object. Groq's JSON mode does not support bare arrays at the top level. Wrap the array in an object with a facts key, or use json_schema format if available.
✅ Prompt aligned to object response
{
"role": "system",
- "content": "You are a strict fact checker. Return a JSON list of objects with keys: 'claim', 'status' (True/False/Unverified), and 'reason'."
+ "content": "You are a strict fact checker. Return a JSON object with a 'facts' array. Each item must have keys: 'claim', 'status' (True/False/Unverified), and 'reason'."
},
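A small sketch of the wrapped-object parsing this fix implies, assuming the model is instructed to return {"facts": [...]}; the helper name is hypothetical:

```python
# Hedged sketch: request an object with a "facts" array (Groq JSON mode
# guarantees a top-level object), then unwrap it after parsing.
import json

def parse_facts(content: str) -> list:
    """Unwrap {"facts": [...]} returned under response_format={"type": "json_object"}."""
    parsed = json.loads(content)
    if isinstance(parsed, list):  # defensive: bare array despite instructions
        return parsed
    return parsed.get("facts", [])
```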
```diff
     except Exception as e:
         logger.exception(f"Failed to chunk the data: {e}")
-        raise
+        return []
```
Broad exception handling may mask critical failures.
Catching all exceptions and returning an empty list could silently swallow serious errors (e.g., generate_id raising ValueError for invalid input). Callers cannot distinguish between "no chunks to generate" and "processing failed catastrophically."
Consider either:
- Re-raising after logging, or
- Returning a result type that indicates success/failure, or
- Narrowing the catch to expected exceptions only.
Also, per TRY401: logger.exception() already includes exception details, so e in the message is redundant.
🛡️ Proposed fix to preserve error visibility
except Exception as e:
- logger.exception(f"Failed to chunk the data: {e}")
- return []
+ logger.exception("Failed to chunk the data")
+ raise

Alternatively, if callers should handle failures gracefully:
```python
from typing import Optional

def chunk_rag_data(data) -> tuple[list[dict], Optional[str]]:
    """Returns (chunks, error_message). error_message is None on success."""
    ...
    return chunks, None
    ...
    except Exception:
        logger.exception("Failed to chunk the data")
        return [], "Chunking failed due to an internal error"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
     except Exception as e:
-        logger.exception(f"Failed to chunk the data: {e}")
-        raise
-        return []
+        logger.exception("Failed to chunk the data")
+        raise
```
🧰 Tools
🪛 Ruff (0.14.14)
[warning] 109-109: Redundant exception object included in logging.exception call
(TRY401)
| "newspaper4k>=0.9.4.1", | ||
| "nltk>=3.9.1", #should remove this in next PR as this is not needed | ||
| "pinecone>=7.3.0", | ||
| "rake-nltk>=1.0.6", |
🧩 Analysis chain
🏁 Script executed:
rg -l "import nltk|from nltk" --type pyRepository: AOSSIE-Org/Perspective
Length of output: 104
🏁 Script executed:
cd backend && pip list 2>/dev/null | grep -E "rake-nltk|nltk" || echo "Checking package info..."Repository: AOSSIE-Org/Perspective
Length of output: 90
🏁 Script executed:
# Check rake-nltk package info to confirm nltk dependency
cd backend && python3 << 'EOF'
import subprocess
import json
try:
result = subprocess.run(['pip', 'show', 'rake-nltk'], capture_output=True, text=True)
print(result.stdout)
except Exception as e:
print(f"Error: {e}")
EOF

Repository: AOSSIE-Org/Perspective
Length of output: 48
🏁 Script executed:
cat -n backend/app/modules/scraper/cleaner.py | head -50

Repository: AOSSIE-Org/Perspective
Length of output: 1832
🏁 Script executed:
# Check rake-nltk's dependencies (documented information)
python3 << 'EOF'
# rake-nltk is known to depend on nltk
# Let's verify the actual usage of nltk in cleaner.py
import subprocess
result = subprocess.run(['rg', '-A 3 -B 3', 'import nltk|from nltk', 'backend/app/modules/scraper/cleaner.py', '--type', 'py'],
capture_output=True, text=True)
print(result.stdout)
EOF

Repository: AOSSIE-Org/Perspective
Length of output: 48
🏁 Script executed:
# Check all nltk usage in the codebase
rg "nltk\." --type py -A 2 -B 2Repository: AOSSIE-Org/Perspective
Length of output: 626
🏁 Script executed:
# Check if rake-nltk is actually used in the codebase
rg -l "rake_nltk|from rake_nltk|import.*rake" --type pyRepository: AOSSIE-Org/Perspective
Length of output: 105
🏁 Script executed:
cat -n backend/app/modules/scraper/keywords.py

Repository: AOSSIE-Org/Perspective
Length of output: 2343
🏁 Script executed:
# Check if stopwords or punkt_tab from nltk are used elsewhere
rg "stopwords|punkt_tab" --type py -lRepository: AOSSIE-Org/Perspective
Length of output: 104
Good upgrade to newspaper4k.
The comment about removing nltk in a future PR needs reconsideration. While rake-nltk>=1.0.6 does depend on nltk, the real issue is that nltk is directly used in backend/app/modules/scraper/cleaner.py for downloading and accessing the stopwords and punkt_tab corpora. Removing nltk would require refactoring the cleaner module, not just removing rake-nltk.
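For context, a hedged sketch of the kind of direct nltk usage described above; this is illustrative, not the actual cleaner.py code:

```python
# Illustrative sketch (not the real cleaner.py): direct nltk usage like this
# is why dropping the dependency needs a refactor, not just removing
# rake-nltk from pyproject.toml.
import nltk
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize

nltk.download("stopwords", quiet=True)
nltk.download("punkt_tab", quiet=True)  # tokenizer data used by word_tokenize

STOPWORDS = set(stopwords.words("english"))

def strip_stopwords(text: str) -> str:
    tokens = word_tokenize(text)
    return " ".join(t for t in tokens if t.lower() not in STOPWORDS)
```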
```tsx
import { usePerspective } from "@/hooks/use-perspective";
import { RightSidebar } from "@/components/perspective/RightSideBar";
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the actual RightSidebar component file
fd -a 'RightSidebar' frontend/
# Check the exact import statement in the file
rg -n 'RightSideBar|RightSidebar' frontend/app/perspective/ -t tsx -t ts

Repository: AOSSIE-Org/Perspective
Length of output: 178
🏁 Script executed:
#!/bin/bash
# Check the exact import statement in page.tsx
cat -n frontend/app/perspective/page.tsx | head -30
# Also search for RightSideBar and RightSidebar occurrences across the codebase
rg 'RightSideBar|RightSidebar' frontend/ --follow

Repository: AOSSIE-Org/Perspective
Length of output: 1759
Fix RightSidebar import casing to avoid build failures on case-sensitive systems.
The import path uses RightSideBar but the actual file is RightSidebar.tsx. This will fail on Linux/CI environments with case-sensitive filesystems.
🛠️ Proposed fix
-import { RightSidebar } from "@/components/perspective/RightSideBar";
+import { RightSidebar } from "@/components/perspective/RightSidebar";

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 import { usePerspective } from "@/hooks/use-perspective";
-import { RightSidebar } from "@/components/perspective/RightSideBar";
+import { RightSidebar } from "@/components/perspective/RightSidebar";
```
```tsx
<h3 className="font-bold text-lg md:text-xl text-white leading-tight">
  {title}
</h3>
```
Preserve line breaks in titles.
FeaturesSection uses \n in titles, but they will collapse to spaces. Add whitespace-pre-line so the intended line breaks render.
🎨 Suggested tweak
- <h3 className="font-bold text-lg md:text-xl text-white leading-tight">
+ <h3 className="font-bold text-lg md:text-xl text-white leading-tight whitespace-pre-line">
{title}
  </h3>
```tsx
<input
  type="text"
  value={url}
  onChange={handleUrlChange}
  placeholder="https://example.com/article"
  className="flex-1 bg-transparent border-none outline-none text-white placeholder-white/50 font-light text-[15px] mr-2"
/>
```
Add accessible labeling for the URL input and its error message.
Placeholder text isn’t a label; add an explicit label and tie the error text for screen readers.
♿ Proposed a11y tweak
<input
type="text"
value={url}
onChange={handleUrlChange}
placeholder="https://example.com/article"
+ aria-label="Article URL"
+ aria-invalid={url && !isValidUrl ? "true" : "false"}
+ aria-describedby={url && !isValidUrl ? "url-error" : undefined}
className="flex-1 bg-transparent border-none outline-none text-white placeholder-white/50 font-light text-[15px] mr-2"
/>
@@
- {url && !isValidUrl && (
- <p className="text-red-400 text-xs ml-4">
+ {url && !isValidUrl && (
+ <p id="url-error" className="text-red-400 text-xs ml-4">
Please enter a valid URL
</p>
  )}

Also applies to: 146-150
| <Stat value="10k+" label="Articles Analyzed" /> | ||
| <Stat value="95%" label="Bias Detected" /> | ||
| <Stat value="98%" label="Fast Accuracy" /> | ||
| <Stat value="4.3stars" label="Ratings" /> |
Formatting issue in ratings value.
The value "4.3stars" is missing a space and proper capitalization. This will display as "4.3stars" in the UI, which appears unpolished.
✏️ Suggested fix
- <Stat value="4.3stars" label="Ratings" />
+ <Stat value="4.3 ★" label="Ratings" />Alternatively, use "4.3 Stars" if a star symbol isn't preferred.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| <Stat value="4.3stars" label="Ratings" /> | |
| <Stat value="4.3 ★" label="Ratings" /> |
```tsx
if (!storedBias) {
  setLoading((prev) => ({ ...prev, bias: true }));
  try {
    const res = await fetch("http://127.0.0.1:5555/api/bias", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ url: storedUrl }),
    });
    if (res.ok) {
      const data = await res.json();
      const score = parseInt(data.bias_score, 10);
      if (!isNaN(score)) {
        setBiasData({ score });
        sessionStorage.setItem("BiasScore", JSON.stringify({ score }));
      }
    }
  } catch (e) {
    console.error("Bias error:", e);
  } finally {
    setLoading((prev) => ({ ...prev, bias: false }));
  }
}

if (!storedAnalysis) {
  setLoading((prev) => ({ ...prev, process: true }));
  try {
    const res = await fetch("http://127.0.0.1:5555/api/process", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ url: storedUrl }),
    });
```
🧩 Analysis chain
🏁 Script executed:
# First, check if the file exists and read it
wc -l frontend/hooks/use-perspective.ts

Repository: AOSSIE-Org/Perspective
Length of output: 102
🏁 Script executed:
# Read the file to see the actual code
cat -n frontend/hooks/use-perspective.ts

Repository: AOSSIE-Org/Perspective
Length of output: 3910
🏁 Script executed:
# Search for environment variables used in the codebase
rg "process\.env" --type ts --type tsx --type js --type jsxRepository: AOSSIE-Org/Perspective
Length of output: 93
🏁 Script executed:
# Search for API base URL or similar configuration patterns
rg "http://" --type ts --type tsx --type js --type jsx | head -20Repository: AOSSIE-Org/Perspective
Length of output: 93
🏁 Script executed:
# Check for NEXT_PUBLIC_ environment variables
rg "NEXT_PUBLIC_" --type ts --type tsx --type js --type jsxRepository: AOSSIE-Org/Perspective
Length of output: 93
🏁 Script executed:
# Look for .env files
fd -e ".env*" -type fRepository: AOSSIE-Org/Perspective
Length of output: 236
🏁 Script executed:
# Search for environment variables with correct syntax
rg "process\.env" -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx"Repository: AOSSIE-Org/Perspective
Length of output: 253
🏁 Script executed:
# Search for all HTTP URLs to see hardcoding patterns
rg "http://" -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx"Repository: AOSSIE-Org/Perspective
Length of output: 1433
🏁 Script executed:
# Look for .env files
fd "env" -type fRepository: AOSSIE-Org/Perspective
Length of output: 236
🏁 Script executed:
# Check for next.config files
fd "next.config" -type fRepository: AOSSIE-Org/Perspective
Length of output: 236
🏁 Script executed:
# Search for API_BASE or similar configuration patterns
rg "API|BASE_URL|ENDPOINT" -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx"Repository: AOSSIE-Org/Perspective
Length of output: 572
🏁 Script executed:
# Check the overall project structure to understand if this is Next.js
ls -la | head -20

Repository: AOSSIE-Org/Perspective
Length of output: 961
🏁 Script executed:
# Search for .env files more broadly
find . -name ".env*" -type f

Repository: AOSSIE-Org/Perspective
Length of output: 48
🏁 Script executed:
# Check if there's a .env.example or similar
fd -e "env" -type fRepository: AOSSIE-Org/Perspective
Length of output: 236
🏁 Script executed:
# Check package.json to confirm Next.js
cat frontend/package.json | head -50

Repository: AOSSIE-Org/Perspective
Length of output: 1743
🏁 Script executed:
# Search for where the commented-out backend_url is used
rg "backend_url|NEXT_PUBLIC_API" -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx" -B 2 -A 2Repository: AOSSIE-Org/Perspective
Length of output: 623
🏁 Script executed:
# Check how the backend is set up - check if there's a .env file at the root or backend
find . -type f -name ".env*"

Repository: AOSSIE-Org/Perspective
Length of output: 48
🏁 Script executed:
# Let's also check the backend API to understand the expected port
cat backend/main.py | grep -i "5555\|port" | head -20

Repository: AOSSIE-Org/Perspective
Length of output: 443
🏁 Script executed:
# Check if there are any other references to API configuration
rg "5555" -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx" -g "*.py"Repository: AOSSIE-Org/Perspective
Length of output: 276
Make the API base URL configurable via environment variables.
The fetch calls are hard-coded to http://127.0.0.1:5555, which will break in deployed builds and prevents configuration across environments. Additionally, the backend's default port is 7860 (via the PORT environment variable), not 5555, creating a mismatch even in development.
Use NEXT_PUBLIC_API_BASE_URL environment variable (which other files in the codebase attempt but leave incomplete) and create .env.local and .env.production files to configure endpoints per environment.
🔧 Suggested config-based base URL
const storedUrl = sessionStorage.getItem("articleUrl");
const storedAnalysis = sessionStorage.getItem("analysisResult");
const storedBias = sessionStorage.getItem("BiasScore");
+ const apiBaseUrl = process.env.NEXT_PUBLIC_API_BASE_URL ?? "";
@@
- const res = await fetch("http://127.0.0.1:5555/api/bias", {
+ const res = await fetch(`${apiBaseUrl}/api/bias`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ url: storedUrl }),
});
@@
- const res = await fetch("http://127.0.0.1:5555/api/process", {
+ const res = await fetch(`${apiBaseUrl}/api/process`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ url: storedUrl }),
});📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```tsx
const storedUrl = sessionStorage.getItem("articleUrl");
const storedAnalysis = sessionStorage.getItem("analysisResult");
const storedBias = sessionStorage.getItem("BiasScore");
const apiBaseUrl = process.env.NEXT_PUBLIC_API_BASE_URL ?? "";

if (!storedBias) {
  setLoading((prev) => ({ ...prev, bias: true }));
  try {
    const res = await fetch(`${apiBaseUrl}/api/bias`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ url: storedUrl }),
    });
    if (res.ok) {
      const data = await res.json();
      const score = parseInt(data.bias_score, 10);
      if (!isNaN(score)) {
        setBiasData({ score });
        sessionStorage.setItem("BiasScore", JSON.stringify({ score }));
      }
    }
  } catch (e) {
    console.error("Bias error:", e);
  } finally {
    setLoading((prev) => ({ ...prev, bias: false }));
  }
}

if (!storedAnalysis) {
  setLoading((prev) => ({ ...prev, process: true }));
  try {
    const res = await fetch(`${apiBaseUrl}/api/process`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ url: storedUrl }),
    });
```
```ts
fontFamily: {
  sora: ["Sora", "sans-serif"],
},
```
🧩 Analysis chain
🏁 Script executed:
# First, check the tailwind.config.ts file and its context
cat -n frontend/tailwind.config.ts | head -80 | tail -20

Repository: AOSSIE-Org/Perspective
Length of output: 686
🏁 Script executed:
# Search for Sora font imports in CSS files
rg -i "sora" --type=css --type=scssRepository: AOSSIE-Org/Perspective
Length of output: 94
🏁 Script executed:
# Search for next/font usage with Sora or font imports
rg -i "sora|from.*next/font" --glob "*.{tsx,ts,jsx,js}"Repository: AOSSIE-Org/Perspective
Length of output: 1539
🏁 Script executed:
# Check for Google Fonts link in layout files
rg "fonts.googleapis.com|@import.*fonts.google" --glob "*.{tsx,ts,jsx,js,html}"Repository: AOSSIE-Org/Perspective
Length of output: 48
🏁 Script executed:
# Search for local font files that might contain Sora
fd "sora|fonts" -t f frontend/Repository: AOSSIE-Org/Perspective
Length of output: 48
🏁 Script executed:
# Check the main layout file for font loading
find frontend -name "layout.tsx" -o -name "layout.ts" -o -name "_app.tsx" -o -name "_app.ts" | head -5

Repository: AOSSIE-Org/Perspective
Length of output: 89
🏁 Script executed:
# Check the layout.tsx file to see how fonts are loaded
cat -n frontend/app/layout.tsx | head -50

Repository: AOSSIE-Org/Perspective
Length of output: 1403
🏁 Script executed:
# Search for any `@import` statements that might load Sora font
rg "@import.*font|@import.*sora" --type=css --type=tsx --type=tsRepository: AOSSIE-Org/Perspective
Length of output: 93
🏁 Script executed:
# Check if there's a globals.css or similar file with font imports
fd "globals|style|fonts" --type f frontend/ | grep -E "\.(css|scss)$"Repository: AOSSIE-Org/Perspective
Length of output: 118
🏁 Script executed:
# Check the globals.css files for Sora font imports
cat -n frontend/app/globals.css

Repository: AOSSIE-Org/Perspective
Length of output: 3501
🏁 Script executed:
# Also check the styles/globals.css file
cat -n frontend/styles/globals.css

Repository: AOSSIE-Org/Perspective
Length of output: 3151
Load the Sora font from Google Fonts or via next/font.
The sora font is defined in the Tailwind config and used extensively throughout the application (e.g., BiasGauge.tsx, RightSidebar.tsx, perspective/page.tsx), but it is not loaded anywhere. Without loading the font via next/font/google, a @import statement, or a local font file, the browser will fall back to the generic sans-serif, making the font-sora class ineffective.
Add an import in frontend/app/layout.tsx:
import { Sora } from "next/font/google"
const sora = Sora({ subsets: ["latin"] })

And include it in the body className or apply it globally in CSS.
Overview
This PR implements the new frontend design language which I sent in Discord, replacing the previous placeholder UI. It builds upon the architectural foundation laid in PR #124 to fully integrate the React frontend with the FastAPI backend.
Key Changes
UI Redesign
- HeroSection, BiasGauge, and isometric FeatureCard components.

🔌 Backend Integration
Real Data: Replaced mock JSON with the usePerspective hook to fetch live data from localhost:5555.

Tested full user flow: Search -> Loading State -> Results -> Bias Analysis.
Confirmed compatibility with the logic introduced in Async LangGraph with DuckDuckGo fact checking and config cleanup #124.
Related to #130