
feat: Implement Figma Design #131

Open
khushal1512 wants to merge 19 commits into AOSSIE-Org:main from khushal1512:fix/frontend-ui-refactor

Conversation

@khushal1512 khushal1512 commented Jan 31, 2026

Overview

This PR implements the new frontend design language that I shared on Discord, replacing the previous placeholder UI. It builds upon the architectural foundation laid in PR #124 to fully integrate the React frontend with the FastAPI backend.

Key Changes

UI Redesign

  • Identity: Removed the old UI; implemented the custom dark-mode aesthetic from Figma.
  • Components: Built new HeroSection, BiasGauge, and isometric FeatureCard components.
  • Responsiveness: Fixed layout issues on mobile devices and made the sidebars collapsible.

🔌 Backend Integration


Related to #130

Summary by CodeRabbit

Release Notes

  • New Features

    • Fact-checking and claim verification system with parallel processing
    • Configurable AI model selection via environment variables
    • Landing page with hero section, features showcase, and call-to-action
    • Perspective analysis page displaying bias detection and gauges
    • Search bar with provider selection (Gemini, Groq)
  • Improvements

    • Enhanced error handling and data validation throughout
    • Parallel sentiment and fact-check analysis for improved performance
    • Robust perspective and facts extraction with fallback handling


…ython >= 3.12

(fix): removed incorrect pinecone client init in get_rag.py

(fix): centralize groq model change from llm_config.py
…arch Tool

(fix): Implement new fact check subgraph

(enhancement): sentiment analysis and fact check now run in parallel; clean_text -> extract_claims -> plan_searches -> execute_searches -> verify_facts
…odes i.e. sentiment node and fact check node

(chore): /process route awaits langgraph build compile
- Updated chunk_rag_data.py to support both new (claim, status) and old (original_claim, verdict) key formats from the fact-checker.

- Added logic to correctly parse perspective whether it is a Pydantic model or a dict.

- Implemented skipping of malformed facts instead of raising ValueError.

- Maintained compatibility with the new parallel DuckDuckGo fact-checking workflow.
- Updated the 'PerspectiveOutput' Pydantic model to handle reasoning as a list for claims.
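The dual key formats mentioned above (claim/status vs. original_claim/verdict) can be normalized with a small helper. This is an illustrative sketch, not the project's actual chunk_rag_data.py code; the function and field names are assumptions:

```python
# Hypothetical normalization of fact-checker output that may use either the
# new (claim, status) keys or the legacy (original_claim, verdict) keys.
import logging

logger = logging.getLogger(__name__)

def normalize_fact(raw: dict):
    """Return a {claim, status} dict, or None if the entry is malformed."""
    claim = raw.get("claim") or raw.get("original_claim")
    status = raw.get("status") or raw.get("verdict")
    if not claim or status is None:
        # Skip malformed facts instead of raising ValueError.
        logger.warning("Skipping malformed fact entry: %r", raw)
        return None
    return {"claim": claim, "status": status}

facts = [
    {"claim": "Water boils at 100C", "status": "True"},
    {"original_claim": "The moon is cheese", "verdict": "False"},
    {"unrelated": "garbage"},  # malformed: silently skipped, not fatal
]
normalized = [f for f in map(normalize_fact, facts) if f is not None]
```

Skipping rather than raising matches the commit note above: one bad entry no longer aborts the whole chunking pass.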

coderabbitai bot commented Jan 31, 2026

📝 Walkthrough

This PR introduces configurable LLM model selection via environment variables, implements an asynchronous four-step fact-checking pipeline with claim extraction, search planning, and verification nodes, restructures the LangGraph workflow to support parallel sentiment and fact-checking analysis, converts the pipeline to async/await execution, updates Pinecone client instantiation, and adds a complete landing page and perspective page frontend with supporting components and data fetching hooks.

Changes

  • LLM Model Configuration — README.md, backend/app/llm_config.py
    Added HF_TOKEN environment variable documentation and introduced a new llm_config.py module exposing an LLM_MODEL constant that reads from the environment, defaulting to "llama-3.3-70b-versatile".
  • Model Selection Updates Across Analysis Modules — backend/app/modules/bias_detection/check_bias.py, backend/app/modules/chat/llm_processing.py, backend/app/modules/facts_check/llm_processing.py, backend/app/modules/langgraph_nodes/judge.py
    Replaced hardcoded Groq model strings ("gemma2-9b-it") with configurable LLM_MODEL imports; model selection is now controlled via the environment without altering function signatures or error handling.
  • Fact-Checking Pipeline — backend/app/modules/fact_check_tool.py
    New 166-line module implementing an async four-step workflow: extract_claims_node (claim extraction), plan_searches_node (query generation), execute_searches_node (parallel web searches), and verify_facts_node (claim verification); includes structured logging and fallback error handling.
  • LangGraph Workflow Orchestration — backend/app/modules/langgraph_builder.py, backend/app/modules/langgraph_nodes/sentiment.py
    Replaced the sequential sentiment + fact-check flow with a parallel_analysis node; added run_parallel_analysis to coordinate the sentiment and fact-check pipelines concurrently; extended MyState with short_title, claims, search_queries, and search_results fields; updated graph routing and compilation.
  • Async Pipeline & Route Execution — backend/app/modules/pipeline.py, backend/app/routes/routes.py
    Converted run_langgraph_workflow to async, replacing synchronous invoke with await ainvoke; removed the asyncio.to_thread wrapper in the /process route in favor of direct async invocation.
  • Data Processing & Output Schema — backend/app/modules/vector_store/chunk_rag_data.py, backend/app/utils/fact_check_utils.py, backend/app/modules/chat/get_rag_data.py, backend/app/utils/prompt_templates.py
    Enhanced chunk_rag_data.py with flexible fact/perspective handling, safe field extraction, and non-fatal error recovery; updated fact_check_utils.py with result-based status checks and fallback claim extraction; modified Pinecone instantiation to use a keyword argument; added a short_title field to the prompt output schema.
  • Generate Perspective Node Enhancement — backend/app/modules/langgraph_nodes/generate_perspective.py
    Updated PerspectiveOutput to include a short_title field and changed reasoning from a string to List[str] with a reasoning_steps alias; replaced the hardcoded LLM string with LLM_MODEL; improved fact field extraction with safe defaults and warning logging.
  • Frontend Landing Page Components — frontend/components/landing/Button.tsx, CTASection.tsx, FeatureCard.tsx, FeaturesSection.tsx, Footer.tsx, HeroSection.tsx, Navbar.tsx, SearchBar.tsx, StatsSection.tsx
    Added 9 new landing page components: a reusable Button, CTA section, feature cards with images, features grid, footer, hero section with SearchBar, navbar, and stats display; SearchBar includes URL validation, provider selection, and sessionStorage integration.
  • Frontend Perspective Page & Components — frontend/app/perspective/page.tsx, frontend/components/perspective/BiasGauge.tsx, frontend/components/perspective/RightSidebar.tsx, frontend/hooks/use-perspective.ts
    Introduced a PerspectivePage component using the usePerspective hook for data fetching; added a BiasGauge SVG component for visual bias scoring; created RightSidebar with collapsible bias score, summary, and citations sections; the hook manages sessionStorage caching and dual API calls (/api/process and /api/bias).
  • Dependencies & Styling — backend/pyproject.toml, frontend/tailwind.config.ts
    Replaced newspaper3k with newspaper4k>=0.9.4.1 and removed google-search-results; restructured the Tailwind config with extended theme tokens for colors, typography, spacing, shadows, and gradients.
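The LLM Model Configuration cohort above boils down to a one-line environment lookup. A minimal sketch of that pattern (the default mirrors the model name quoted in the summary; treat the exact file contents as illustrative):

```python
# llm_config.py-style sketch: centralize model selection in one module so
# every analysis node imports the same constant instead of hardcoding a string.
import os

# Reads LLM_MODEL from the environment, falling back to the default named
# in the PR summary.
LLM_MODEL = os.getenv("LLM_MODEL", "llama-3.3-70b-versatile")
```

Consumers then write `from app.llm_config import LLM_MODEL`, so swapping models becomes a deployment concern rather than a code change.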

Sequence Diagram(s)

sequenceDiagram
    participant User as Frontend User
    participant SearchBar as SearchBar Component
    participant PerspectivePage as Perspective Page
    participant API as /api/process & /api/bias
    participant LangGraph as LangGraph Workflow
    participant ParallelNode as Parallel Analysis
    participant SentimentPipeline as Sentiment Analysis
    participant FactCheckPipeline as Fact-Check Pipeline
    
    User->>SearchBar: Enter URL & Select Provider
    SearchBar->>SearchBar: Validate URL
    SearchBar->>SearchBar: Store in sessionStorage
    SearchBar->>PerspectivePage: Navigate to /perspective
    
    PerspectivePage->>PerspectivePage: Read sessionStorage
    PerspectivePage->>API: POST /api/process {url}
    
    API->>LangGraph: invoke(state)
    LangGraph->>ParallelNode: parallel_analysis(state)
    
    par Parallel Execution
        ParallelNode->>SentimentPipeline: run_sentiment_sdk
        SentimentPipeline->>SentimentPipeline: Analyze sentiment
        SentimentPipeline-->>ParallelNode: sentiment result
    and
        ParallelNode->>FactCheckPipeline: run_fact_check_pipeline
        FactCheckPipeline->>FactCheckPipeline: extract_claims_node
        FactCheckPipeline->>FactCheckPipeline: plan_searches_node
        FactCheckPipeline->>FactCheckPipeline: execute_searches_node
        FactCheckPipeline->>FactCheckPipeline: verify_facts_node
        FactCheckPipeline-->>ParallelNode: claims, search_results, facts
    end
    
    ParallelNode-->>LangGraph: merged state
    LangGraph->>LangGraph: generate_perspective
    LangGraph->>LangGraph: judge & store
    LangGraph-->>API: result with short_title, reasoning_steps
    
    API-->>PerspectivePage: analysis data
    PerspectivePage->>PerspectivePage: Render perspective + bias
    PerspectivePage-->>User: Display analysis
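The "Parallel Execution" block in the diagram above can be sketched with asyncio.gather; the function bodies here are stand-ins for the real sentiment and fact-check calls, and the names are illustrative:

```python
# Hedged sketch of a parallel-analysis node: sentiment analysis and the
# fact-check pipeline run concurrently and their results merge into one
# state update. Stubs only; the real pipelines call out to LLMs and search.
import asyncio

async def run_sentiment(state: dict) -> dict:
    await asyncio.sleep(0)  # stand-in for the real sentiment call
    return {"sentiment": "neutral"}

async def run_fact_check(state: dict) -> dict:
    await asyncio.sleep(0)  # stand-in for the four-step fact-check pipeline
    return {"facts": [], "fact_check_done": True}

async def run_parallel_analysis(state: dict) -> dict:
    # Both branches start immediately; gather awaits them together.
    sentiment, fact_check = await asyncio.gather(
        run_sentiment(state), run_fact_check(state)
    )
    return {**state, **sentiment, **fact_check}

result = asyncio.run(run_parallel_analysis({"cleaned_text": "..."}))
```

Because both coroutines are awaited via gather, total latency is roughly the slower of the two branches rather than their sum, which is the point of the refactor.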
sequenceDiagram
    participant FactCheckPipeline as Fact-Check Pipeline
    participant ExtractNode as extract_claims_node
    participant PlanNode as plan_searches_node
    participant ExecuteNode as execute_searches_node
    participant VerifyNode as verify_facts_node
    participant Groq as Groq LLM
    participant DuckDuckGo as DuckDuckGo Search
    
    FactCheckPipeline->>ExtractNode: state with cleaned_text
    ExtractNode->>Groq: Request claim extraction (LLM_MODEL)
    Groq-->>ExtractNode: List of claims
    ExtractNode-->>FactCheckPipeline: {claims: [...]}
    
    FactCheckPipeline->>PlanNode: state with claims
    PlanNode->>Groq: Request search queries per claim
    Groq-->>PlanNode: JSON search queries
    PlanNode-->>FactCheckPipeline: {search_queries: [...]}
    
    FactCheckPipeline->>ExecuteNode: state with search_queries
    ExecuteNode->>DuckDuckGo: Execute parallel searches
    DuckDuckGo-->>ExecuteNode: Results per query
    ExecuteNode-->>FactCheckPipeline: {search_results: [...]}
    
    FactCheckPipeline->>VerifyNode: state with claims & search_results
    VerifyNode->>Groq: Verify claims against evidence
    Groq-->>VerifyNode: JSON facts with status/reason
    VerifyNode-->>FactCheckPipeline: {facts: [...], fact_check_done: true}
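The four nodes in the diagram above form a simple sequential chain inside the fact-check pipeline: each takes a state dict and returns an updated one. A stubbed sketch (the real nodes call Groq and DuckDuckGo; these bodies are placeholders):

```python
# Illustrative chaining of the four fact-check nodes. Only the control flow
# is real here; node bodies are stubs standing in for LLM and search calls.
import asyncio

async def extract_claims_node(state):
    return {**state, "claims": ["claim A"]}

async def plan_searches_node(state):
    return {**state, "search_queries": [{"claim_id": 0, "query": "claim A evidence"}]}

async def execute_searches_node(state):
    return {**state, "search_results": [{"claim_id": 0, "result": "stub result"}]}

async def verify_facts_node(state):
    return {**state,
            "facts": [{"claim": "claim A", "status": "Unverified"}],
            "fact_check_done": True}

async def run_fact_check_pipeline(state):
    # Each node consumes the fields the previous one produced.
    for node in (extract_claims_node, plan_searches_node,
                 execute_searches_node, verify_facts_node):
        state = await node(state)
    return state

final = asyncio.run(run_fact_check_pipeline({"cleaned_text": "..."}))
```

Note the data dependency: only the searches within execute_searches_node fan out in parallel; the four stages themselves must stay sequential.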

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

  • PR #103: Updates LangGraph pipeline nodes and prompt/LLM configuration (langgraph_builder, generate_perspective, judge) with the same architectural patterns and state structures introduced in this PR.
  • PR #113: Modifies bias_detection.check_bias, chat/llm_processing, and chat/get_rag_data modules; directly relates to LLM_MODEL introduction and Pinecone client integration in this PR.
  • PR #112: Updates bias-checking implementation to use configurable Groq model; directly related to the model selection refactoring introduced here.

Poem

🐰 Hop, hop—a fact-check hop,
Parallel paths that never stop,
Claims extracted, searches dance,
LLM models wear new pants!
Frontend blooms with landing cheer,
Perspectives crystal, bias clear!

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 5.88%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title Check — ❓ Inconclusive: The title 'feat: Implement Figma Design' is vague and does not clearly convey the scope of changes, which includes significant backend refactoring (LLM configuration, fact-checking pipeline, async workflow) alongside the frontend implementation. Resolution: consider a more specific title that captures both frontend and backend changes, such as 'feat: Implement Figma design with async LLM processing and fact-checking pipeline', or split into multiple commits/PRs by concern.
✅ Passed checks (1 passed)
  • Description Check — ✅ Passed: Check skipped; CodeRabbit’s high-level summary is enabled.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 12

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
backend/app/modules/facts_check/llm_processing.py (2)

144-149: ⚠️ Potential issue | 🔴 Critical

Unhandled JSON parse failure will cause NameError.

If json.loads(content) fails, parsed is never assigned, but line 149 still attempts to append it. This will raise a NameError: name 'parsed' is not defined.

🐛 Proposed fix
             # Try parsing the JSON response
             try:
                 parsed = json.loads(content)
+                results_list.append(parsed)
             except Exception as parse_err:
                 logger.error(f"LLM JSON parse error: {parse_err}")
-
-            results_list.append(parsed)
+                results_list.append({
+                    "verdict": "Unknown",
+                    "explanation": f"Failed to parse response: {parse_err}",
+                    "original_claim": claim,
+                    "source_link": source,
+                })

151-155: ⚠️ Potential issue | 🟡 Minor

Return value uses only the last claim from the loop.

The returned dictionary includes "claim": claim, but claim is the variable from the last iteration of the loop. If multiple claims are verified, this doesn't accurately represent the results. Consider removing this field or restructuring the return value.

🔧 Suggested fix
         return {
-            "claim": claim,
             "verifications": results_list,
             "status": "success",
         }
backend/app/utils/prompt_templates.py (1)

48-60: ⚠️ Potential issue | 🟡 Minor

Missing closing backticks for JSON code block.

The JSON code block opens with ```json on line 48 but is not closed with ``` before the template ends. This could confuse the LLM about where the JSON format example ends.

🔧 Proposed fix
   "reasoning_steps": [
     "<step 1>",
     "<step 2>",
     "<step 3>",
     "...",
     "<final reasoning>"
   ]
 }}
+```
 """)
🤖 Fix all issues with AI agents
In `@backend/app/modules/fact_check_tool.py`:
- Around line 101-110: In run_one_search, remove the unused exception binding by
changing the except clause from "except Exception as e:" to "except Exception:"
so the variable e is not declared; locate the async def run_one_search(q)
function and update only the except line to drop the unused exception variable
while preserving the existing return behavior.
- Around line 125-143: The prompt asks the model to return a top-level JSON list
but the call to client.chat.completions.create uses
response_format={"type":"json_object"}, which cannot parse bare arrays; update
the prompt or the response_format so they match: either change the system/user
prompt (the context string built in this block and sent to
client.chat.completions.create) to request an object like {"facts": [...]}
instead of a bare list, or switch response_format to a schema-based JSON format
(e.g., json_schema) that allows an array top-level; make this change where the
completion is invoked (client.chat.completions.create / LLM_MODEL /
response_format) so the model output and parser align.
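One way to reconcile the two, as the prompt suggests, is to request a wrapping object such as {"facts": [...]} and unwrap it during parsing. A sketch of the parsing side using a canned response string — the wrapper key "facts" is an assumption, not the project's confirmed schema:

```python
# Hypothetical parser tolerant of both the object-wrapped form (what
# json_object mode can guarantee) and a bare top-level list.
import json

def parse_facts(content: str) -> list:
    """Parse the model reply, unwrapping {"facts": [...]} when present."""
    parsed = json.loads(content)
    if isinstance(parsed, dict):
        return parsed.get("facts", [])
    return parsed  # model returned a bare list anyway

wrapped = '{"facts": [{"claim": "X", "status": "True", "reason": "..."}]}'
facts = parse_facts(wrapped)
```

The system prompt would then ask for "a JSON object with a single key 'facts' holding the list", keeping the prompt and response_format consistent.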

In `@backend/app/modules/vector_store/chunk_rag_data.py`:
- Around line 108-110: The broad except in the chunking flow (inside
chunk_rag_data) swallows errors and redundantly formats the exception; narrow
the handler to only expected exceptions (e.g., ValueError, ParsingError) or, if
failures should be visible, log via logger.exception("Failed to chunk the data")
and re-raise the exception instead of returning [] so callers can distinguish
catastrophic failure from “no chunks”; alternatively change the function
signature to return a result tuple (chunks, error_message) and on error return
([], "Chunking failed") while logging with logger.exception("Failed to chunk the
data") — also remove the redundant {e} interpolation because logger.exception
already includes the traceback.
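The log-and-re-raise variant suggested above can be sketched as follows; the real chunk_rag_data body is more involved, so treat this as a minimal illustration of the error-handling shape only:

```python
# Sketch of narrowed error handling: catch only the expected exception type,
# log with logger.exception (which appends the traceback itself, so no need
# to interpolate {e}), and re-raise so callers can distinguish a crash
# from a legitimate "no chunks" result. Names are illustrative.
import logging

logger = logging.getLogger(__name__)

def chunk_rag_data(data):
    try:
        if not isinstance(data, list):
            raise ValueError("expected a list of entries")
        # Placeholder for the real chunking logic.
        return [str(item) for item in data]
    except ValueError:
        logger.exception("Failed to chunk the data")
        raise
```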

In `@backend/pyproject.toml`:
- Around line 18-21: The pyproject comment suggesting removal of "nltk" is
incorrect because backend/app/modules/scraper/cleaner.py directly uses nltk
(e.g., calls like nltk.download(...) and accesses stopwords/punkt via
stopwords.words or punkt tokenizers); either keep "nltk>=3.9.1" in
pyproject.toml and remove the inline “should remove this” comment, or refactor
cleaner.py to eliminate all nltk usage (replace nltk.download and stopwords/
punkt usage with an alternative library or bundled resources) before removing
the dependency—locate usages in cleaner.py and update pyproject accordingly.

In `@frontend/app/perspective/page.tsx`:
- Around line 16-18: Update the import in page.tsx to match the actual component
filename by replacing the incorrect "RightSideBar" import with "RightSidebar"
(i.e., import { RightSidebar } from "@/components/perspective/RightSidebar");
also search for and fix any other occurrences where "RightSideBar" is used to
ensure all imports match the component's filename casing.

In `@frontend/components/landing/Button.tsx`:
- Around line 5-20: Extend ButtonProps to include standard button attributes by
using React.ButtonHTMLAttributes<HTMLButtonElement> (or intersecting it) so the
Button component accepts type, aria-*, disabled, className, etc.; set a safe
default prop type="button" inside the Button component; merge incoming className
with the computed sizeClasses and existing classes; preserve and forward onClick
and all other native props (spread the remaining props onto the <button>).
Update references to ButtonProps, Button, and sizeClasses accordingly.

In `@frontend/components/landing/FeatureCard.tsx`:
- Around line 29-31: The title h3 in FeatureCard (the element rendering {title})
currently collapses `\n` to spaces; update the className on that h3 (in the
FeatureCard component) to include the Tailwind utility `whitespace-pre-line` so
embedded newlines are preserved; if classNames are composed or merged (e.g., via
a helper or variable), add `whitespace-pre-line` to that composition so the
change applies regardless of how the classes are built.
- Around line 4-25: The FeatureCard component's image prop currently allows
string which can cause Next.js runtime errors because the <Image> usage lacks
width/height/fill; change FeatureCardProps.image to only StaticImageData and add
the corresponding type import (e.g., import type { StaticImageData } from
'next/image') so callers must pass static imports, and keep the existing <Image>
usage unchanged in the FeatureCard function.

In `@frontend/components/landing/SearchBar.tsx`:
- Around line 63-69: The URL input in SearchBar (the input using value={url} and
onChange={handleUrlChange}) lacks an accessible label and a programmatic
relationship to its error text; add a visible or visually-hidden <label> tied to
the input via an id/htmlFor on the input, and give the error message element a
unique id (e.g., urlError) then set aria-describedby on the input to that id so
screen readers announce the error; also ensure the error element uses an
appropriate live region (role="alert" or aria-live="assertive") so updates are
announced.

In `@frontend/components/landing/StatsSection.tsx`:
- Line 27: The displayed rating value in the StatsSection component is formatted
as "4.3stars"; update the value prop on the Stat usage (the JSX line with Stat
value="4.3stars" label="Ratings") to a properly spaced and capitalized string
such as "4.3 Stars" (or "4.3 ★" if you prefer a symbol) so the UI shows a
polished rating; adjust only the value prop passed to the Stat component.

In `@frontend/hooks/use-perspective.ts`:
- Around line 36-66: Replace the hard-coded "http://127.0.0.1:5555" API base in
use-perspective.ts with a configurable base taken from
process.env.NEXT_PUBLIC_API_BASE_URL (with a sensible fallback like
"http://127.0.0.1:7860"); update the fetch calls that reference the bias and
process endpoints (the POST to "/api/bias" and "/api/process" inside the blocks
guarded by storedBias and storedAnalysis) to build their URLs using that base;
ensure existing logic around setLoading, setBiasData and sessionStorage remains
unchanged and that the hook reads NEXT_PUBLIC_API_BASE_URL at module load or at
hook start so deployments can override it via .env.local/.env.production.

In `@frontend/tailwind.config.ts`:
- Around line 71-73: The Tailwind fontFamily defines sora but the font isn't
loaded; in frontend/app/layout.tsx import the Sora loader from next/font/google
(Sora) and instantiate it (e.g., const sora = Sora(...)), then apply the
resulting sora.className to the document body or add the loaded font to global
CSS so the Tailwind fontFamily.sora token is actually used; components that
reference font-sora (BiasGauge.tsx, RightSidebar.tsx, perspective/page.tsx)
rely on this applied className.
🧹 Nitpick comments (9)
frontend/components/landing/Footer.tsx (1)

5-12: Make the copyright year dynamic to avoid staleness.

Low-impact improvement to prevent manual updates each year.

♻️ Proposed change
-      <p className="font-medium text-sm text-gray-400 text-center md:text-right">
-        © 2026 AOSSIE. Combating bias through AI-powered perspective analysis.
-      </p>
+      <p className="font-medium text-sm text-gray-400 text-center md:text-right">
+        © {new Date().getFullYear()} AOSSIE. Combating bias through AI-powered perspective analysis.
+      </p>
frontend/components/landing/StatsSection.tsx (1)

1-1: Unused React import with React 19 JSX transform.

With React 19 and the modern JSX transform, explicit import React from "react" is no longer required for JSX syntax. Since React isn't directly referenced elsewhere in this file, this import can be removed.

🧹 Suggested cleanup
-import React from "react";
-
backend/app/modules/vector_store/chunk_rag_data.py (1)

52-57: Check model_dump() before dict() for Pydantic v2 compatibility.

In Pydantic v2, models have both .dict() (deprecated) and .model_dump(). The current order will use the deprecated method on v2 models since hasattr will find .dict() first.

♻️ Proposed fix to prefer Pydantic v2 method
-        if hasattr(perspective_data, "dict"):
-            p_data = perspective_data.dict()
-        elif hasattr(perspective_data, "model_dump"):
+        if hasattr(perspective_data, "model_dump"):
             p_data = perspective_data.model_dump()
+        elif hasattr(perspective_data, "dict"):
+            p_data = perspective_data.dict()
         elif isinstance(perspective_data, dict):
             p_data = perspective_data
frontend/tailwind.config.ts (2)

74-81: Consider using rem units for font sizes to improve accessibility.

Hardcoded px values for font sizes don't scale with user browser font preferences, which can impact accessibility for users who need larger text. Consider converting these to rem units (e.g., hero: "4rem" instead of "64px").

Additionally, the decimal values like 14.361px and 11.967px appear to be directly exported from Figma—rounding to whole pixels (e.g., 14px, 12px) would be cleaner without visible difference.


92-93: Consider extracting the large inline SVG to a separate file.

The hero-gradient data URI is quite lengthy. For maintainability, consider moving this SVG to a separate file (e.g., public/gradients/hero-gradient.svg) and referencing it as url('/gradients/hero-gradient.svg').

backend/app/llm_config.py (1)

1-4: Good centralization of model configuration.

The pattern of extracting configuration to a dedicated module is clean and enables consistent model selection across the codebase.

However, the LLM_MODEL environment variable is not documented in the README's backend setup section (only HF_TOKEN was added). Consider adding it to the .env example:

LLM_MODEL = llama-3.3-70b-versatile  # optional, defaults to llama-3.3-70b-versatile
backend/app/utils/fact_check_utils.py (1)

55-57: Cap fallback claims to avoid unbounded searches.

If raw_output is verbose, this can trigger many outbound searches. Consider limiting to a small max.

♻️ Proposed tweak (limit fallback claims)
-    if not claims and raw_output:
-        claims = [line.strip() for line in raw_output.split('\n') if len(line.strip()) > 10]
+    if not claims and raw_output:
+        claims = [line.strip() for line in raw_output.split('\n') if len(line.strip()) > 10][:10]
frontend/components/landing/FeaturesSection.tsx (1)

45-48: Prefer a stable key over index.

Using index as key can cause reconciliation issues if the list ever changes.

♻️ Suggested tweak
-        {features.map((feature, index) => (
+        {features.map((feature) => (
           <FeatureCard
-            key={index}
+            key={feature.title}
             image={feature.image}
             title={feature.title}
             description={feature.description}
           />
         ))}
frontend/components/perspective/BiasGauge.tsx (1)

1-37: Avoid gradient ID collisions and clamp score range.
Using the score to build the gradient id can collide when multiple gauges share the same score, and out‑of‑range scores can yield invalid dash arrays. Consider clamping and using useId() for a stable unique id.

♻️ Suggested refactor
-import React from "react";
+import React, { useId } from "react";

 export function BiasGauge({ score, gradientColors, textColor, label }: BiasGaugeProps) {
+  const clampedScore = Math.min(100, Math.max(0, score));
+  const gradientId = useId();
   return (
     <div className="flex flex-col items-center justify-center py-4">
       <div className="relative w-[180px] h-[100px] mb-2">
         <svg viewBox="0 0 100 55" className="w-full h-full overflow-visible">
           {/* Background Track */}
           <path d="M 10 50 A 40 40 0 0 1 90 50" fill="none" stroke="#2A2E35" strokeWidth="8" strokeLinecap="round" />
           {/* Progress Track */}
           <path
             d="M 10 50 A 40 40 0 0 1 90 50"
             fill="none"
-            stroke={`url(#biasGradient-${score})`}
+            stroke={`url(#${gradientId})`}
             strokeWidth="8"
             strokeLinecap="round"
-            strokeDasharray={`${(score / 100) * 126} 126`}
+            strokeDasharray={`${(clampedScore / 100) * 126} 126`}
             className="transition-all duration-1000 ease-out"
           />
           <defs>
-            <linearGradient id={`biasGradient-${score}`} x1="0%" y1="0%" x2="100%" y2="0%">
+            <linearGradient id={gradientId} x1="0%" y1="0%" x2="100%" y2="0%">
               <stop offset="0%" stopColor={gradientColors[0]} />
               <stop offset="100%" stopColor={gradientColors[1]} />
             </linearGradient>
           </defs>
         </svg>
       </div>
       <div className="text-center">
-        <div className={`text-3xl font-bold font-sora ${textColor}`}>{Math.round(score)}%</div>
+        <div className={`text-3xl font-bold font-sora ${textColor}`}>{Math.round(clampedScore)}%</div>
         <div className="text-sm text-gray-400 font-sora">{label}</div>
       </div>
     </div>
   );
 }

Comment on lines +101 to +110
    async def run_one_search(q):
        try:
            query_str = q.get("query")
            c_id = q.get("claim_id")

            res = await asyncio.to_thread(search_tool.invoke, query_str)
            logger.info(f"Search Result for Claim {c_id}: {res[:200]}...")
            return {"claim_id": c_id, "result": res}
        except Exception as e:
            return {"claim_id": q.get("claim_id"), "result": "Search failed"}

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check if file exists and read lines 101-110
if [ -f "backend/app/modules/fact_check_tool.py" ]; then
  echo "=== Lines 101-110 ==="
  sed -n '101,110p' backend/app/modules/fact_check_tool.py
  echo ""
  echo "=== Context around lines 101-110 (95-115) ==="
  sed -n '95,115p' backend/app/modules/fact_check_tool.py
else
  echo "File not found at backend/app/modules/fact_check_tool.py"
  find . -name "fact_check_tool.py" -type f
fi

Repository: AOSSIE-Org/Perspective

Length of output: 1382


Remove unused exception variable in search execution.

The exception variable e is bound but never used in the except clause. Drop the binding:

Minimal fix
-        except Exception as e:
+        except Exception:
             return {"claim_id": q.get("claim_id"), "result": "Search failed"}
🧰 Tools
🪛 Ruff (0.14.14)

[warning] 108-108: Consider moving this statement to an else block

(TRY300)


[warning] 109-109: Do not catch blind exception: Exception

(BLE001)


[error] 109-109: Local variable e is assigned to but never used

Remove assignment to unused variable e

(F841)

🤖 Prompt for AI Agents
In `@backend/app/modules/fact_check_tool.py` around lines 101 - 110, In
run_one_search, remove the unused exception binding by changing the except
clause from "except Exception as e:" to "except Exception:" so the variable e is
not declared; locate the async def run_one_search(q) function and update only
the except line to drop the unused exception variable while preserving the
existing return behavior.

Comment on lines +125 to +143
    context = "Verify these claims based on the search results:\n"
    for item in results:
        c_id = item["claim_id"]
        if c_id < len(claims):
            context += f"\nClaim: {claims[c_id]}\nEvidence: {item['result']}\n"

    try:
        response = await asyncio.to_thread(
            client.chat.completions.create,
            messages=[
                {
                    "role": "system",
                    "content": "You are a strict fact checker. Return a JSON list of objects with keys: 'claim', 'status' (True/False/Unverified), and 'reason'."
                },
                {"role": "user", "content": context}
            ],
            model=LLM_MODEL,
            temperature=0.0,
            response_format={"type": "json_object"}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Check which client is being imported in the file
head -50 backend/app/modules/fact_check_tool.py | grep -E "^import|^from"

Repository: AOSSIE-Org/Perspective

Length of output: 302


🏁 Script executed:

# Look for the client initialization to confirm it's OpenAI
grep -n "client\s*=" backend/app/modules/fact_check_tool.py | head -5

Repository: AOSSIE-Org/Perspective

Length of output: 117


🏁 Script executed:

# Check the imports at the top of the file to see which library is used
cat backend/app/modules/fact_check_tool.py | head -30

Repository: AOSSIE-Org/Perspective

Length of output: 805


🌐 Web query:

OpenAI API response_format json_object top-level array support JSON schema

💡 Result:

  • response_format: { "type": "json_object" } (“JSON mode”) is the older mode that OpenAI documents as a JSON object response format and discusses in terms of producing/validating a “JSON object” (including edge cases where it may be an incomplete JSON object). Practically: don’t rely on it for top‑level arrays; treat it as “top-level object expected.” [1], [2]

  • If you need a top-level array, use Structured Outputs:
    response_format: { "type": "json_schema", "json_schema": { ... } }. With this you can set your schema’s root to { "type": "array", ... } (or anything else allowed), and with strict: true the model is constrained to match the supplied schema (noting only a subset of JSON Schema is supported in strict mode). [1], [3]

Sources:
[1] OpenAI API Reference — Responses: response_format (json_object vs json_schema) (platform.openai.com)
[2] OpenAI Help Center — JSON mode notes (response_format: json_object) (help.openai.com)
[3] OpenAI — “Introducing Structured Outputs in the API” (JSON Schema via response_format, strict: true, limitations) (openai.com)
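Either route lands on a concrete request payload. Below is a sketch of the json_schema variant that wraps the results in an object with a facts array; the field names follow the docs quoted above, and the claim/status/reason shape mirrors this PR's fact-check prompt, but nothing here is taken from the PR itself. Keeping the root an object also leaves the payload valid for plain JSON mode on models where json_schema is unavailable.

```python
# Sketch of a schema-based response_format for the fact-check call.
# Root is an object (not a bare array) so it works in both json_schema
# and json_object modes; "facts" carries the list of verdicts.
fact_check_schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "fact_check_results",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "facts": {
                    "type": "array",
                    "items": {
                        "type": "object",
                        "properties": {
                            "claim": {"type": "string"},
                            "status": {
                                "type": "string",
                                "enum": ["True", "False", "Unverified"],
                            },
                            "reason": {"type": "string"},
                        },
                        "required": ["claim", "status", "reason"],
                        "additionalProperties": False,
                    },
                }
            },
            "required": ["facts"],
            "additionalProperties": False,
        },
    },
}
```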


🌐 Web query:

Groq API response_format json_object specification documentation

💡 Result:

Groq API response_format: {"type":"json_object"} (JSON Object Mode)

Where it’s documented

  • Groq Chat Completions API (POST /openai/v1/chat/completions) supports a response_format object; setting it to {"type":"json_object"} enables the “older JSON mode,” which ensures the model’s message content is valid JSON syntax. [1]
  • Groq’s “Structured Outputs” docs describe JSON Object Mode as JSON-syntax validation only (no schema enforcement) and recommend json_schema when you need schema guarantees. [2]
  • Groq Community FAQ reiterates usage: response_format={"type":"json_object"}. [3]

Specification / behavior

  • Request shape:
    response_format is an object with at least:
    • type: "json_object" [1][2]
  • Guarantee: valid JSON syntax (not schema adherence). [1][2]
  • Recommendation: prefer {"type":"json_schema", "json_schema": {...}} on supported models for schema-conformant outputs. [1][2]
  • Important requirement: include explicit instructions in your prompt telling the model to output JSON (e.g., “Respond with a JSON object only”). [2]

Minimal example (Chat Completions)

{
  "model": "llama-3.3-70b-versatile",
  "messages": [
    { "role": "system", "content": "Return ONLY a JSON object with keys: answer (string)." },
    { "role": "user", "content": "What is 2+2?" }
  ],
  "response_format": { "type": "json_object" }
}

Sources
[1] Groq API Reference (Chat Completions response_format) (console.groq.com)
[2] Groq Docs — Structured Outputs (JSON Object Mode section) (console.groq.com)
[3] Groq Community FAQ — JSON mode (community.groq.com)


🌐 Web query:

Groq API json_object top-level array response support

💡 Result:

Groq’s response_format: { "type": "json_object" } (“JSON mode”) is intended to produce a top-level JSON object, not a bare array—so a response like [...] is not supported/guaranteed in json_object mode. (console.groq.com)

If you need a top-level array, use Structured Outputs (response_format.type = "json_schema") and set the root schema to { "type": "array", ... } (arrays are supported by Structured Outputs). (console.groq.com)

Workaround if you must stay on json_object: wrap the array in an object, e.g. { "items": [ ... ] }. (console.groq.com)


Align prompt with response_format to avoid invalid JSON.

The prompt asks for a top‑level list, but response_format={"type":"json_object"} requires an object. Groq's JSON mode does not support bare arrays at the top level. Wrap the array in an object with a facts key, or use json_schema format if available.

✅ Prompt aligned to object response
                 {
                     "role": "system", 
-                    "content": "You are a strict fact checker. Return a JSON list of objects with keys: 'claim', 'status' (True/False/Unverified), and 'reason'."
+                    "content": "You are a strict fact checker. Return a JSON object with a 'facts' array. Each item must have keys: 'claim', 'status' (True/False/Unverified), and 'reason'."
                 },
🤖 Prompt for AI Agents
In `@backend/app/modules/fact_check_tool.py` around lines 125 - 143, The prompt
asks the model to return a top-level JSON list but the call to
client.chat.completions.create uses response_format={"type":"json_object"},
which cannot parse bare arrays; update the prompt or the response_format so they
match: either change the system/user prompt (the context string built in this
block and sent to client.chat.completions.create) to request an object like
{"facts": [...]} instead of a bare list, or switch response_format to a
schema-based JSON format (e.g., json_schema) that allows an array top-level;
make this change where the completion is invoked (client.chat.completions.create
/ LLM_MODEL / response_format) so the model output and parser align.
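Once the prompt is aligned to an object response, the parsing side should tolerate malformed output as well, since JSON mode only guarantees syntax, not shape. A minimal sketch of a defensive parser for the {"facts": [...]} contract; the function name and fallback behavior are illustrative, not taken from the PR:

```python
import json


def parse_fact_checks(raw: str) -> list[dict]:
    """Parse JSON-mode model output under the {"facts": [...]} contract.

    Returns an empty list when the payload is not the expected shape,
    so a malformed model response degrades gracefully instead of raising.
    """
    try:
        payload = json.loads(raw)
    except json.JSONDecodeError:
        return []
    facts = payload.get("facts") if isinstance(payload, dict) else None
    if not isinstance(facts, list):
        return []
    # Keep only well-formed entries carrying all three required keys.
    return [
        f for f in facts
        if isinstance(f, dict) and {"claim", "status", "reason"} <= f.keys()
    ]
```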

Comment on lines 108 to +110
except Exception as e:
logger.exception(f"Failed to chunk the data: {e}")
raise
return []

⚠️ Potential issue | 🟠 Major

Broad exception handling may mask critical failures.

Catching all exceptions and returning an empty list could silently swallow serious errors (e.g., generate_id raising ValueError for invalid input). Callers cannot distinguish between "no chunks to generate" and "processing failed catastrophically."

Consider either:

  1. Re-raising after logging, or
  2. Returning a result type that indicates success/failure, or
  3. Narrowing the catch to expected exceptions only.

Also, per TRY401: logger.exception() already includes exception details, so e in the message is redundant.

🛡️ Proposed fix to preserve error visibility
     except Exception as e:
-        logger.exception(f"Failed to chunk the data: {e}")
-        return []
+        logger.exception("Failed to chunk the data")
+        raise

Alternatively, if callers should handle failures gracefully:

from typing import Optional

def chunk_rag_data(data) -> tuple[list[dict], Optional[str]]:
    """Returns (chunks, error_message). error_message is None on success."""
    try:
        ...  # chunking logic
        return chunks, None
    except Exception:
        logger.exception("Failed to chunk the data")
        return [], "Chunking failed due to an internal error"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     except Exception as e:
-        logger.exception(f"Failed to chunk the data: {e}")
-        return []
+        logger.exception("Failed to chunk the data")
+        raise
🧰 Tools
🪛 Ruff (0.14.14)

[warning] 109-109: Redundant exception object included in logging.exception call

(TRY401)

🤖 Prompt for AI Agents
In `@backend/app/modules/vector_store/chunk_rag_data.py` around lines 108 - 110,
The broad except in the chunking flow (inside chunk_rag_data) swallows errors
and redundantly formats the exception; narrow the handler to only expected
exceptions (e.g., ValueError, ParsingError) or, if failures should be visible,
log via logger.exception("Failed to chunk the data") and re-raise the exception
instead of returning [] so callers can distinguish catastrophic failure from “no
chunks”; alternatively change the function signature to return a result tuple
(chunks, error_message) and on error return ([], "Chunking failed") while
logging with logger.exception("Failed to chunk the data") — also remove the
redundant {e} interpolation because logger.exception already includes the
traceback.
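If the result-tuple route is taken, the call site is where the distinction pays off. A sketch with a stub body (the real chunking logic lives in chunk_rag_data.py; the stub here only illustrates the signature):

```python
import logging
from typing import Optional

logger = logging.getLogger(__name__)


def chunk_rag_data(data: list[str]) -> tuple[list[dict], Optional[str]]:
    """Illustrative stub of the result-tuple variant discussed above."""
    try:
        chunks = [{"id": i, "text": t} for i, t in enumerate(data)]
        return chunks, None
    except Exception:
        logger.exception("Failed to chunk the data")
        return [], "Chunking failed due to an internal error"


# Callers can now tell "no chunks" apart from "processing failed":
chunks, err = chunk_rag_data(["a", "b"])
if err is not None:
    ...  # surface the failure (retry, 500 response, etc.)
```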

Comment on lines +18 to 21
"newspaper4k>=0.9.4.1",
"nltk>=3.9.1", #should remove this in next PR as this is not needed
"pinecone>=7.3.0",
"rake-nltk>=1.0.6",

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

rg -l "import nltk|from nltk" --type py

Repository: AOSSIE-Org/Perspective

Length of output: 104


🏁 Script executed:

cd backend && pip list 2>/dev/null | grep -E "rake-nltk|nltk" || echo "Checking package info..."

Repository: AOSSIE-Org/Perspective

Length of output: 90


🏁 Script executed:

# Check rake-nltk package info to confirm nltk dependency
cd backend && python3 << 'EOF'
import subprocess
import json
try:
    result = subprocess.run(['pip', 'show', 'rake-nltk'], capture_output=True, text=True)
    print(result.stdout)
except Exception as e:
    print(f"Error: {e}")
EOF

Repository: AOSSIE-Org/Perspective

Length of output: 48


🏁 Script executed:

cat -n backend/app/modules/scraper/cleaner.py | head -50

Repository: AOSSIE-Org/Perspective

Length of output: 1832


🏁 Script executed:

# Check rake-nltk's dependencies (documented information)
python3 << 'EOF'
# rake-nltk is known to depend on nltk
# Let's verify the actual usage of nltk in cleaner.py
import subprocess
result = subprocess.run(['rg', '-A 3 -B 3', 'import nltk|from nltk', 'backend/app/modules/scraper/cleaner.py', '--type', 'py'], 
                       capture_output=True, text=True)
print(result.stdout)
EOF

Repository: AOSSIE-Org/Perspective

Length of output: 48


🏁 Script executed:

# Check all nltk usage in the codebase
rg "nltk\." --type py -A 2 -B 2

Repository: AOSSIE-Org/Perspective

Length of output: 626


🏁 Script executed:

# Check if rake-nltk is actually used in the codebase
rg -l "rake_nltk|from rake_nltk|import.*rake" --type py

Repository: AOSSIE-Org/Perspective

Length of output: 105


🏁 Script executed:

cat -n backend/app/modules/scraper/keywords.py

Repository: AOSSIE-Org/Perspective

Length of output: 2343


🏁 Script executed:

# Check if stopwords or punkt_tab from nltk are used elsewhere
rg "stopwords|punkt_tab" --type py -l

Repository: AOSSIE-Org/Perspective

Length of output: 104


Good upgrade to newspaper4k.

The comment about removing nltk in a future PR needs reconsideration. While rake-nltk>=1.0.6 does depend on nltk, the real issue is that nltk is directly used in backend/app/modules/scraper/cleaner.py for downloading and accessing the stopwords and punkt_tab corpora. Removing nltk would require refactoring the cleaner module, not just removing rake-nltk.

🤖 Prompt for AI Agents
In `@backend/pyproject.toml` around lines 18 - 21, The pyproject comment
suggesting removal of "nltk" is incorrect because
backend/app/modules/scraper/cleaner.py directly uses nltk (e.g., calls like
nltk.download(...) and accesses stopwords/punkt via stopwords.words or punkt
tokenizers); either keep "nltk>=3.9.1" in pyproject.toml and remove the inline
“should remove this” comment, or refactor cleaner.py to eliminate all nltk usage
(replace nltk.download and stopwords/ punkt usage with an alternative library or
bundled resources) before removing the dependency—locate usages in cleaner.py
and update pyproject accordingly.
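If the refactor route is chosen instead, the stopword piece of cleaner.py can be covered without nltk. A rough sketch with a tiny illustrative stopword set; a real replacement would need a vetted list matching what nltk's corpus provided, and the punkt tokenizer usage would need a separate substitute:

```python
# Illustrative stand-in for nltk's stopwords corpus; not a drop-in
# replacement for the full English list cleaner.py currently downloads.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is"}


def strip_stopwords(text: str) -> str:
    """Remove stopwords using a plain whitespace split (no punkt tokenizer)."""
    return " ".join(w for w in text.split() if w.lower() not in STOPWORDS)
```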

Comment on lines +16 to +18
import { usePerspective } from "@/hooks/use-perspective";
import { RightSidebar } from "@/components/perspective/RightSideBar";


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find the actual RightSidebar component file
fd -a 'RightSidebar' frontend/

# Check the exact import statement in the file
rg -n 'RightSideBar|RightSidebar' frontend/app/perspective/ -t tsx -t ts

Repository: AOSSIE-Org/Perspective

Length of output: 178


🏁 Script executed:

#!/bin/bash
# Check the exact import statement in page.tsx
cat -n frontend/app/perspective/page.tsx | head -30

# Also search for RightSideBar and RightSidebar occurrences across the codebase
rg 'RightSideBar|RightSidebar' frontend/ --follow

Repository: AOSSIE-Org/Perspective

Length of output: 1759


Fix RightSidebar import casing to avoid build failures on case-sensitive systems.

The import path uses RightSideBar but the actual file is RightSidebar.tsx. This will fail on Linux/CI environments with case-sensitive filesystems.

🛠️ Proposed fix
-import { RightSidebar } from "@/components/perspective/RightSideBar";
+import { RightSidebar } from "@/components/perspective/RightSidebar";
📝 Committable suggestion


Suggested change
 import { usePerspective } from "@/hooks/use-perspective";
-import { RightSidebar } from "@/components/perspective/RightSideBar";
+import { RightSidebar } from "@/components/perspective/RightSidebar";
🤖 Prompt for AI Agents
In `@frontend/app/perspective/page.tsx` around lines 16 - 18, Update the import in
page.tsx to match the actual component filename by replacing the incorrect
"RightSideBar" import with "RightSidebar" (i.e., import { RightSidebar } from
"@/components/perspective/RightSidebar"); also search for and fix any other
occurrences where "RightSideBar" is used to ensure all imports match the
component's filename casing.

Comment on lines +29 to +31
<h3 className="font-bold text-lg md:text-xl text-white leading-tight">
{title}
</h3>

⚠️ Potential issue | 🟡 Minor

Preserve line breaks in titles.

FeaturesSection uses \n in titles, but they will collapse to spaces. Add whitespace-pre-line so the intended line breaks render.

🎨 Suggested tweak
-        <h3 className="font-bold text-lg md:text-xl text-white leading-tight">
+        <h3 className="font-bold text-lg md:text-xl text-white leading-tight whitespace-pre-line">
             {title}
         </h3>
🤖 Prompt for AI Agents
In `@frontend/components/landing/FeatureCard.tsx` around lines 29 - 31, The title
h3 in FeatureCard (the element rendering {title}) currently collapses `\n` to
spaces; update the className on that h3 (in the FeatureCard component) to
include the Tailwind utility `whitespace-pre-line` so embedded newlines are
preserved; if classNames are composed or merged (e.g., via a helper or
variable), add `whitespace-pre-line` to that composition so the change applies
regardless of how the classes are built.

Comment on lines +63 to +69
<input
type="text"
value={url}
onChange={handleUrlChange}
placeholder="https://example.com/article"
className="flex-1 bg-transparent border-none outline-none text-white placeholder-white/50 font-light text-[15px] mr-2"
/>

⚠️ Potential issue | 🟡 Minor

Add accessible labeling for the URL input and its error message.

Placeholder text isn’t a label; add an explicit label and tie the error text for screen readers.

♿ Proposed a11y tweak
         <input
           type="text"
           value={url}
           onChange={handleUrlChange}
           placeholder="https://example.com/article"
+          aria-label="Article URL"
+          aria-invalid={url && !isValidUrl ? "true" : "false"}
+          aria-describedby={url && !isValidUrl ? "url-error" : undefined}
           className="flex-1 bg-transparent border-none outline-none text-white placeholder-white/50 font-light text-[15px] mr-2"
         />
@@
-      {url && !isValidUrl && (
-        <p className="text-red-400 text-xs ml-4">
+      {url && !isValidUrl && (
+        <p id="url-error" className="text-red-400 text-xs ml-4">
           Please enter a valid URL
         </p>
       )}

Also applies to: 146-150

🤖 Prompt for AI Agents
In `@frontend/components/landing/SearchBar.tsx` around lines 63 - 69, The URL
input in SearchBar (the input using value={url} and onChange={handleUrlChange})
lacks an accessible label and a programmatic relationship to its error text; add
a visible or visually-hidden <label> tied to the input via an id/htmlFor on the
input, and give the error message element a unique id (e.g., urlError) then set
aria-describedby on the input to that id so screen readers announce the error;
also ensure the error element uses an appropriate live region (role="alert" or
aria-live="assertive") so updates are announced.

<Stat value="10k+" label="Articles Analyzed" />
<Stat value="95%" label="Bias Detected" />
<Stat value="98%" label="Fast Accuracy" />
<Stat value="4.3stars" label="Ratings" />

⚠️ Potential issue | 🟡 Minor

Formatting issue in ratings value.

The value "4.3stars" is missing a space and proper capitalization. This will display as "4.3stars" in the UI, which appears unpolished.

✏️ Suggested fix
-      <Stat value="4.3stars" label="Ratings" />
+      <Stat value="4.3 ★" label="Ratings" />

Alternatively, use "4.3 Stars" if a star symbol isn't preferred.

📝 Committable suggestion


Suggested change
-      <Stat value="4.3stars" label="Ratings" />
+      <Stat value="4.3 ★" label="Ratings" />
🤖 Prompt for AI Agents
In `@frontend/components/landing/StatsSection.tsx` at line 27, The displayed
rating value in the StatsSection component is formatted as "4.3stars"; update
the value prop on the Stat usage (the JSX line with Stat value="4.3stars"
label="Ratings") to a properly spaced and capitalized string such as "4.3 Stars"
(or "4.3 ★" if you prefer a symbol) so the UI shows a polished rating; adjust
only the value prop passed to the Stat component.

Comment on lines +36 to +66
if (!storedBias) {
setLoading((prev) => ({ ...prev, bias: true }));
try {
const res = await fetch("http://127.0.0.1:5555/api/bias", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ url: storedUrl }),
});
if (res.ok) {
const data = await res.json();
const score = parseInt(data.bias_score, 10);
if (!isNaN(score)) {
setBiasData({ score });
sessionStorage.setItem("BiasScore", JSON.stringify({ score }));
}
}
} catch (e) {
console.error("Bias error:", e);
} finally {
setLoading((prev) => ({ ...prev, bias: false }));
}
}

if (!storedAnalysis) {
setLoading((prev) => ({ ...prev, process: true }));
try {
const res = await fetch("http://127.0.0.1:5555/api/process", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({ url: storedUrl }),
});

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, check if the file exists and read it
wc -l frontend/hooks/use-perspective.ts

Repository: AOSSIE-Org/Perspective

Length of output: 102


🏁 Script executed:

# Read the file to see the actual code
cat -n frontend/hooks/use-perspective.ts

Repository: AOSSIE-Org/Perspective

Length of output: 3910


🏁 Script executed:

# Search for environment variables used in the codebase
rg "process\.env" --type ts --type tsx --type js --type jsx

Repository: AOSSIE-Org/Perspective

Length of output: 93


🏁 Script executed:

# Search for API base URL or similar configuration patterns
rg "http://" --type ts --type tsx --type js --type jsx | head -20

Repository: AOSSIE-Org/Perspective

Length of output: 93


🏁 Script executed:

# Check for NEXT_PUBLIC_ environment variables
rg "NEXT_PUBLIC_" --type ts --type tsx --type js --type jsx

Repository: AOSSIE-Org/Perspective

Length of output: 93


🏁 Script executed:

# Look for .env files
fd -e ".env*" -type f

Repository: AOSSIE-Org/Perspective

Length of output: 236


🏁 Script executed:

# Search for environment variables with correct syntax
rg "process\.env" -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx"

Repository: AOSSIE-Org/Perspective

Length of output: 253


🏁 Script executed:

# Search for all HTTP URLs to see hardcoding patterns
rg "http://" -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx"

Repository: AOSSIE-Org/Perspective

Length of output: 1433


🏁 Script executed:

# Look for .env files
fd "env" -type f

Repository: AOSSIE-Org/Perspective

Length of output: 236


🏁 Script executed:

# Check for next.config files
fd "next.config" -type f

Repository: AOSSIE-Org/Perspective

Length of output: 236


🏁 Script executed:

# Search for API_BASE or similar configuration patterns
rg "API|BASE_URL|ENDPOINT" -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx"

Repository: AOSSIE-Org/Perspective

Length of output: 572


🏁 Script executed:

# Check the overall project structure to understand if this is Next.js
ls -la | head -20

Repository: AOSSIE-Org/Perspective

Length of output: 961


🏁 Script executed:

# Search for .env files more broadly
find . -name ".env*" -type f

Repository: AOSSIE-Org/Perspective

Length of output: 48


🏁 Script executed:

# Check if there's a .env.example or similar
fd -e "env" -type f

Repository: AOSSIE-Org/Perspective

Length of output: 236


🏁 Script executed:

# Check package.json to confirm Next.js
cat frontend/package.json | head -50

Repository: AOSSIE-Org/Perspective

Length of output: 1743


🏁 Script executed:

# Search for where the commented-out backend_url is used
rg "backend_url|NEXT_PUBLIC_API" -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx" -B 2 -A 2

Repository: AOSSIE-Org/Perspective

Length of output: 623


🏁 Script executed:

# Check how the backend is set up - check if there's a .env file at the root or backend
find . -type f -name ".env*"

Repository: AOSSIE-Org/Perspective

Length of output: 48


🏁 Script executed:

# Let's also check the backend API to understand the expected port
cat backend/main.py | grep -i "5555\|port" | head -20

Repository: AOSSIE-Org/Perspective

Length of output: 443


🏁 Script executed:

# Check if there are any other references to API configuration
rg "5555" -g "*.ts" -g "*.tsx" -g "*.js" -g "*.jsx" -g "*.py"

Repository: AOSSIE-Org/Perspective

Length of output: 276


Make the API base URL configurable via environment variables.

The fetch calls are hard-coded to http://127.0.0.1:5555, which will break in deployed builds and prevents configuration across environments. Additionally, the backend's default port is 7860 (via the PORT environment variable), not 5555, creating a mismatch even in development.

Use NEXT_PUBLIC_API_BASE_URL environment variable (which other files in the codebase attempt but leave incomplete) and create .env.local and .env.production files to configure endpoints per environment.

🔧 Suggested config-based base URL
     const storedUrl = sessionStorage.getItem("articleUrl");
     const storedAnalysis = sessionStorage.getItem("analysisResult");
     const storedBias = sessionStorage.getItem("BiasScore");
+    const apiBaseUrl = process.env.NEXT_PUBLIC_API_BASE_URL ?? "";
@@
-          const res = await fetch("http://127.0.0.1:5555/api/bias", {
+          const res = await fetch(`${apiBaseUrl}/api/bias`, {
             method: "POST",
             headers: { "Content-Type": "application/json" },
             body: JSON.stringify({ url: storedUrl }),
           });
@@
-          const res = await fetch("http://127.0.0.1:5555/api/process", {
+          const res = await fetch(`${apiBaseUrl}/api/process`, {
             method: "POST",
             headers: { "Content-Type": "application/json" },
             body: JSON.stringify({ url: storedUrl }),
           });
📝 Committable suggestion


Suggested change
-    if (!storedBias) {
-      setLoading((prev) => ({ ...prev, bias: true }));
-      try {
-        const res = await fetch("http://127.0.0.1:5555/api/bias", {
-          method: "POST",
-          headers: { "Content-Type": "application/json" },
-          body: JSON.stringify({ url: storedUrl }),
-        });
-        if (res.ok) {
-          const data = await res.json();
-          const score = parseInt(data.bias_score, 10);
-          if (!isNaN(score)) {
-            setBiasData({ score });
-            sessionStorage.setItem("BiasScore", JSON.stringify({ score }));
-          }
-        }
-      } catch (e) {
-        console.error("Bias error:", e);
-      } finally {
-        setLoading((prev) => ({ ...prev, bias: false }));
-      }
-    }
-
-    if (!storedAnalysis) {
-      setLoading((prev) => ({ ...prev, process: true }));
-      try {
-        const res = await fetch("http://127.0.0.1:5555/api/process", {
-          method: "POST",
-          headers: { "Content-Type": "application/json" },
-          body: JSON.stringify({ url: storedUrl }),
-        });
+    const storedUrl = sessionStorage.getItem("articleUrl");
+    const storedAnalysis = sessionStorage.getItem("analysisResult");
+    const storedBias = sessionStorage.getItem("BiasScore");
+    const apiBaseUrl = process.env.NEXT_PUBLIC_API_BASE_URL ?? "";
+
+    if (!storedBias) {
+      setLoading((prev) => ({ ...prev, bias: true }));
+      try {
+        const res = await fetch(`${apiBaseUrl}/api/bias`, {
+          method: "POST",
+          headers: { "Content-Type": "application/json" },
+          body: JSON.stringify({ url: storedUrl }),
+        });
+        if (res.ok) {
+          const data = await res.json();
+          const score = parseInt(data.bias_score, 10);
+          if (!isNaN(score)) {
+            setBiasData({ score });
+            sessionStorage.setItem("BiasScore", JSON.stringify({ score }));
+          }
+        }
+      } catch (e) {
+        console.error("Bias error:", e);
+      } finally {
+        setLoading((prev) => ({ ...prev, bias: false }));
+      }
+    }
+
+    if (!storedAnalysis) {
+      setLoading((prev) => ({ ...prev, process: true }));
+      try {
+        const res = await fetch(`${apiBaseUrl}/api/process`, {
+          method: "POST",
+          headers: { "Content-Type": "application/json" },
+          body: JSON.stringify({ url: storedUrl }),
+        });
🤖 Prompt for AI Agents
In `@frontend/hooks/use-perspective.ts` around lines 36 - 66, Replace the
hard-coded "http://127.0.0.1:5555" API base in use-perspective.ts with a
configurable base taken from process.env.NEXT_PUBLIC_API_BASE_URL (with a
sensible fallback like "http://127.0.0.1:7860"); update the fetch calls that
reference the bias and process endpoints (the POST to "/api/bias" and
"/api/process" inside the blocks guarded by storedBias and storedAnalysis) to
build their URLs using that base; ensure existing logic around setLoading,
setBiasData and sessionStorage remains unchanged and that the hook reads
NEXT_PUBLIC_API_BASE_URL at module load or at hook start so deployments can
override it via .env.local/.env.production.
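The same configurability concern applies on the backend side, where the review notes the default port is 7860 via the PORT environment variable. A sketch of a tolerant port resolver with that default; the function name is illustrative and not taken from the backend code:

```python
import os


def get_port(default: int = 7860) -> int:
    """Resolve the serving port from the PORT env var, falling back to 7860.

    Non-numeric values fall back to the default rather than crashing startup.
    """
    raw = os.environ.get("PORT", "")
    try:
        return int(raw)
    except ValueError:
        return default
```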

Comment on lines +71 to +73
fontFamily: {
sora: ["Sora", "sans-serif"],
},

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# First, check the tailwind.config.ts file and its context
cat -n frontend/tailwind.config.ts | head -80 | tail -20

Repository: AOSSIE-Org/Perspective

Length of output: 686


🏁 Script executed:

# Search for Sora font imports in CSS files
rg -i "sora" --type=css --type=scss

Repository: AOSSIE-Org/Perspective

Length of output: 94


🏁 Script executed:

# Search for next/font usage with Sora or font imports
rg -i "sora|from.*next/font" --glob "*.{tsx,ts,jsx,js}"

Repository: AOSSIE-Org/Perspective

Length of output: 1539


🏁 Script executed:

# Check for Google Fonts link in layout files
rg "fonts.googleapis.com|@import.*fonts.google" --glob "*.{tsx,ts,jsx,js,html}"

Repository: AOSSIE-Org/Perspective

Length of output: 48


🏁 Script executed:

# Search for local font files that might contain Sora
fd "sora|fonts" -t f frontend/

Repository: AOSSIE-Org/Perspective

Length of output: 48


🏁 Script executed:

# Check the main layout file for font loading
find frontend -name "layout.tsx" -o -name "layout.ts" -o -name "_app.tsx" -o -name "_app.ts" | head -5

Repository: AOSSIE-Org/Perspective

Length of output: 89


🏁 Script executed:

# Check the layout.tsx file to see how fonts are loaded
cat -n frontend/app/layout.tsx | head -50

Repository: AOSSIE-Org/Perspective

Length of output: 1403


🏁 Script executed:

# Search for any `@import` statements that might load Sora font
rg "@import.*font|@import.*sora" --type=css --type=tsx --type=ts

Repository: AOSSIE-Org/Perspective

Length of output: 93


🏁 Script executed:

# Check if there's a globals.css or similar file with font imports
fd "globals|style|fonts" --type f frontend/ | grep -E "\.(css|scss)$"

Repository: AOSSIE-Org/Perspective

Length of output: 118


🏁 Script executed:

# Check the globals.css files for Sora font imports
cat -n frontend/app/globals.css

Repository: AOSSIE-Org/Perspective

Length of output: 3501


🏁 Script executed:

# Also check the styles/globals.css file
cat -n frontend/styles/globals.css

Repository: AOSSIE-Org/Perspective

Length of output: 3151


Load the Sora font from Google Fonts or via next/font.

The sora font is defined in the Tailwind config and used extensively throughout the application (e.g., BiasGauge.tsx, RightSidebar.tsx, perspective/page.tsx), but it is not loaded anywhere. Without loading the font via next/font/google, an @import statement, or a local font file, the browser will fall back to the generic sans-serif, making the font-sora class ineffective.

Add an import in frontend/app/layout.tsx:

import { Sora } from "next/font/google"

const sora = Sora({ subsets: ["latin"] })

And include it in the body className or apply it globally in CSS.

🤖 Prompt for AI Agents
In `@frontend/tailwind.config.ts` around lines 71 - 73, The Tailwind fontFamily
defines sora but the font isn't loaded; in frontend/app/layout.tsx import the
Sora loader from next/font/google (Sora) and instantiate it (e.g., const sora =
Sora(...)), then apply the resulting sora.className to the document body or add
the loaded font to global CSS so the Tailwind fontFamily.sora is actually used;
update references like fontFamily.sora and components (BiasGauge.tsx,
RightSidebar.tsx, perspective/page.tsx) rely on this applied className.
