
Feature/sentiment analysis#62

Open
pexp13 wants to merge 3 commits into OpenLake:main from pexp13:feature/sentiment-analysis

Conversation


@pexp13 pexp13 commented May 14, 2026

Summary

Closes #42

Added sentiment analysis for course/professor reviews.

Changes

  • Added `src/lib/sentiment.ts`: a sentiment utility built on the `sentiment` npm package.
  • Added `scripts/add_sentiment.sql`: a migration that adds a `sentiment_label` column to the `ratings` table.

How it works

Combines text analysis with an optional numeric rating (weighted 60/40) and returns a positive, negative, or neutral label.

Integration

Call `analyzeSentiment(comment, ratingMetrics.overall)` in the `POST /api/ratings` handler before the insert, and store the result in `sentiment_label`.
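One way to picture that integration point (a hedged sketch only: `analyzeSentiment` and the insert are injected here so the handler logic stays testable, and the row shape and return fields are assumptions based on the PR description; the real handler would import `analyzeSentiment` from `src/lib/sentiment` and insert via the Supabase client):

```typescript
type SentimentLabel = "positive" | "negative" | "neutral";

interface RatingBody {
  comment: string;
  ratingMetrics: { overall: number };
}

// Simplified, synchronous stand-in for the POST /api/ratings handler logic.
function handlePostRating(
  body: RatingBody,
  analyze: (text: string, overall?: number) => { label: SentimentLabel },
  insert: (row: Record<string, unknown>) => { id: number }
): { id: number; sentimentLabel: SentimentLabel } {
  // Compute the sentiment label before the insert, as described above.
  const { label } = analyze(body.comment, body.ratingMetrics.overall);
  const { id } = insert({
    comment: body.comment,
    rating_metrics: body.ratingMetrics,
    sentiment_label: label, // persisted in the new column
  });
  return { id, sentimentLabel: label };
}
```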

Notes

Thresholds and weights are heuristic. As a higher-accuracy alternative, `@huggingface/inference` with the `distilbert-base-uncased-finetuned-sst-2-english` model could be used (this requires adding `HF_API_TOKEN` to the environment).
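If that alternative were pursued, the classifier output would still need mapping onto this PR's three labels. A sketch of that mapping, under stated assumptions: with `@huggingface/inference` the `results` array would come from `hf.textClassification({ model: "distilbert-base-uncased-finetuned-sst-2-english", inputs: text })`, but the results are passed in directly here so the mapping stays runnable offline, and the 0.6 confidence cutoff for falling back to neutral is an assumption, not anything the PR specifies.

```typescript
type SentimentLabel = "positive" | "negative" | "neutral";

interface HfClassification {
  label: string; // "POSITIVE" or "NEGATIVE" for SST-2 fine-tuned models
  score: number; // confidence in [0, 1]
}

function mapHfOutput(results: HfClassification[]): SentimentLabel {
  if (results.length === 0) return "neutral";
  // Take the highest-confidence class.
  const top = results.reduce((a, b) => (b.score > a.score ? b : a));
  if (top.score < 0.6) return "neutral"; // low confidence -> treat as neutral (assumed cutoff)
  return top.label === "POSITIVE" ? "positive" : "negative";
}
```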

Summary by CodeRabbit

  • New Features

    • Ratings now auto-classify comments as positive, negative, or neutral.
    • Rating API responses include sentiment labels along with rating details.
  • Chores

    • Database schema updated to store sentiment labels and existing ratings backfilled as neutral.



coderabbitai Bot commented May 14, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: f66cdedd-76f2-4a02-872d-e80c40871a0f

📥 Commits

Reviewing files that changed from the base of the PR and between 1f1a28c and 0c9b7c3.

📒 Files selected for processing (1)
  • src/lib/sentiment.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/lib/sentiment.ts

Walkthrough

This PR adds automatic sentiment classification to user ratings. A new sentiment analysis library computes sentiment labels from review text and optional ratings, the database schema adds a sentiment_label column with validation, and the ratings API endpoints now compute and persist sentiment during POST requests and return sentiment data in GET responses.

Changes

Sentiment Analysis Feature

| Layer / File(s) | Summary |
| --- | --- |
| Sentiment dependency setup<br>`package.json` | Adds the `sentiment` npm package and `@types/sentiment` type definitions to support sentiment analysis. |
| Sentiment analysis library<br>`src/lib/sentiment.ts` | Defines the `SentimentLabel` type and `SentimentResult` interface; exports `analyzeSentiment(text, overallRating?)`, which analyzes text using the `sentiment` library, optionally blends an overall rating, and maps the final score to a positive/negative/neutral label using thresholds. |
| Database schema extension<br>`scripts/add_sentiment.sql` | Adds a nullable `sentiment_label` column to the `ratings` table with a check constraint restricting values to positive, negative, neutral, or NULL; backfills existing rows to neutral. |
| Supabase type definitions<br>`src/types/supabase.ts` | Updates the `ratings` table `Row`, `Insert`, and `Update` type shapes to include `sentiment_label` with the matching union/null type. |
| API route integration<br>`src/pages/api/ratings/route.ts` | POST handler computes sentiment via `analyzeSentiment` and stores `sentiment_label` in the inserted row, returning it alongside `id` and `displayDate`; GET handler includes `sentiment_label` in the ratings query results. |

Sequence Diagram

```mermaid
sequenceDiagram
  participant Client
  participant POST_api_ratings as "POST /api/ratings"
  participant analyzeSentiment
  participant Supabase

  Client->>POST_api_ratings: POST {comment, rating}
  POST_api_ratings->>analyzeSentiment: analyzeSentiment(text, overallRating)
  analyzeSentiment-->>POST_api_ratings: SentimentResult {label, score}
  POST_api_ratings->>Supabase: INSERT ratings {sentiment_label, ...}
  Supabase-->>POST_api_ratings: Created rating row
  POST_api_ratings-->>Client: {id, sentimentLabel, displayDate}
```

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 I sniff the words, I count the tone,
I blend a score with stars well-known,
Positive hops, neutral sits still,
Negative thumps keep me chill.
I tuck a label in each review's hill.

🚥 Pre-merge checks | ✅ 5
✅ Passed checks (5 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title 'Feature/sentiment analysis' directly reflects the main feature addition in the changeset. |
| Linked Issues check | ✅ Passed | The PR successfully implements sentiment analysis to classify reviews as Positive, Negative, or Neutral, fulfilling issue #42 requirements. |
| Out of Scope Changes check | ✅ Passed | All changes are directly scoped to implementing the sentiment analysis feature; no unrelated modifications detected. |
| Docstring Coverage | ✅ Passed | Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%. |



Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ESLint

If the error stems from missing dependencies, add them to the package.json file. For unrecoverable errors (e.g., due to private dependencies), disable the tool in the CodeRabbit configuration.

src/lib/sentiment.ts

Parsing error: The keyword 'import' is reserved





@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (2)
src/lib/sentiment.ts (1)

13-13: 💤 Low value

Handle missing text more explicitly.

Using text || '' as a fallback silently treats null, undefined, or empty strings the same way. When text is missing, the sentiment will be based solely on the rating (if provided), which may be intentional. However, making this explicit would improve clarity.

💡 Consider making the fallback more explicit

```diff
-  const { comparative } = analyzer.analyze(text || '');
+  // When no text is provided, sentiment is based on rating only
+  const { comparative } = analyzer.analyze(text ?? '');
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@src/lib/sentiment.ts` at line 13, The line using analyzer.analyze(text || '')
should be made explicit about missing text: check whether text is
null/undefined/empty before calling analyzer.analyze and only call
analyzer.analyze when there is actual text; otherwise set comparative (or
sentiment score) to a clear default (e.g., 0) or derive it solely from rating.
Update the code around analyzer.analyze and the variable comparative to branch
on a hasText check so the behavior is explicit (use analyzer.analyze(text) when
hasText, else assign the chosen default comparative and document the branch).
scripts/add_sentiment.sql (1)

13-15: ⚖️ Poor tradeoff

Consider leaving historical ratings as NULL instead of 'neutral'.

The backfill sets all existing ratings to 'neutral' without analyzing their actual sentiment. Since the constraint permits NULL values, consider leaving historical ratings as NULL to distinguish between "analyzed as neutral" vs "not yet analyzed."

Alternatively, if you want to backfill with analyzed values, you could write a script to recompute sentiment for existing ratings based on their comment and rating_metrics.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@scripts/add_sentiment.sql` around lines 13 - 15, The current backfill
unconditionally sets existing rows to 'neutral' (UPDATE ratings SET
sentiment_label = 'neutral' WHERE sentiment_label IS NULL;), which hides whether
a rating was actually analyzed; either remove this UPDATE so historical
sentiment_label values remain NULL, or replace it with a proper recomputation
step that derives labels from the rating content (e.g., call your sentiment
computation routine against comment and rating_metrics and UPDATE ratings SET
sentiment_label = <computed_label> WHERE sentiment_label IS NULL); locate the
UPDATE in scripts/add_sentiment.sql and implement the chosen approach.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 8fda3d0b-beeb-4839-a2e3-80e2d5b028ec

📥 Commits

Reviewing files that changed from the base of the PR and between 5ec9043 and 1f1a28c.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (5)
  • package.json
  • scripts/add_sentiment.sql
  • src/lib/sentiment.ts
  • src/pages/api/ratings/route.ts
  • src/types/supabase.ts


pexp13 commented May 16, 2026

@CodeRabbit review


coderabbitai Bot commented May 16, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.



Development

Successfully merging this pull request may close these issues.

[FOSSOVERFLOW-25] Sentiment Analysis
