A local, privacy-first resume evaluator that scores your CV against a job description the way a senior technical recruiter and an enterprise ATS actually would — then rewrites it, highlights every change, and produces a tailored cover letter. 100% offline, 100% on your machine, powered by Ollama + DeepSeek-R1.
- What it does
- Why it's different
- Prerequisites
- Installation
- Running the app
- Environment variables
- How to use the app
- How the analysis works
- How matching is scored (full rubric)
- How the rewrite works
- How the cover letter is written
- How the before/after comparison works
- Architecture
- API reference
- Project structure
- Error handling
- Performance & streaming
- Troubleshooting
- Privacy
- License
Upload your resume as a PDF (or paste its text), paste a job description, and the app:
- Analyses the resume against the job description using a professional ATS rubric and produces a detailed, recruiter-grade report.
- Displays a Summary dashboard: score (0-100), five-category breakdown, recruiter's verdict, job-fit assessment, ATS parseability verdict, must-do changes, weak points, missing keywords, and improvement suggestions.
- Lets you tick checkboxes to choose what to generate next:
- Rewrite my resume — produces an ATS-optimised rewrite with inline yellow highlights on every changed segment and per-section change cards (Added / Modified / Removed / Reordered).
- Generate a cover letter — 280-380 words, European business format, tailored to the job.
- Lets you download both the improved resume and the cover letter as Word (.doc) files that open natively in Microsoft Word, LibreOffice, and Google Docs — fully editable.
- Provides a Before / After comparison table with one-sentence excerpts from the original vs. the improved version and a recruiter-style reason for every change.
- No cloud, no account, no upload. The resume and job description never leave your computer. The LLM runs locally via Ollama.
- Recruiter-grade rubric, not a toy keyword counter. The scoring prompt encodes the exact norms used by enterprise ATS platforms (Workday, Taleo, Greenhouse, iCIMS): Tier-1 vs Tier-2 keyword coverage, action-verb + quantified-result bullets, reverse-chronological structure, consistent date formatting, etc.
- Two-stage pipeline with opt-in generation. Analyse first (fast), then pick whether you want a rewrite, a cover letter, or both. You don't pay the 5-minute cost of a full rewrite just to see your score.
- Inline highlighted rewrite. You see exactly which segments of your new resume changed, and for each highlight a recruiter-style reason is attached.
- Streaming + idle timeout. As long as the model is producing tokens the request cannot be killed by a fixed timer — so the rewrite works even on slow CPUs.
- Word download that is actually editable. The export is Word-flavored HTML with proper headings, bullets, spacing, and A4 page margins.
| Requirement | Version | Check | Notes |
|---|---|---|---|
| Node.js | 18.17+ (20+ recommended) | node -v | Next.js 14 minimum |
| npm | 9+ | npm -v | Ships with Node |
| Ollama | 0.1.34+ | ollama -v | Required for local inference |
| Disk | ~6 GB free | — | For the 8B model |
| RAM | 16 GB recommended | — | 8 GB works but is slow on CPU |
| GPU | optional | — | 6 GB+ VRAM massively accelerates Ollama |
```bash
ollama pull deepseek-r1:8b
```

Alternative models:

```bash
ollama pull deepseek-r1:1.5b   # fastest, lower quality — good for smoke tests
ollama pull deepseek-r1:14b    # best quality, needs ~10 GB VRAM or 24 GB RAM
```

Check that Ollama is reachable:

```bash
curl http://localhost:11434/api/tags
```

You should see JSON listing your installed models. If the command errors, start Ollama — on macOS/Windows it runs in the system tray; on Linux: ollama serve.
cd "Resume Checker"
npm installThis installs Next.js 14, React 18, Tailwind CSS, pdf-parse, and TypeScript. No other runtime dependencies.
npm run devOpen http://localhost:3000.
npm run build
npm run startSame URL.
# macOS / Linux
OLLAMA_LOG_PROGRESS=1 npm run dev
# Windows PowerShell
$env:OLLAMA_LOG_PROGRESS="1"; npm run devThe server will print a [ollama] streaming… N chars so far heartbeat every 10 seconds, so you can see the model is working and not frozen.
All are optional. Defaults are shown.
| Variable | Default | Purpose |
|---|---|---|
| OLLAMA_URL | http://localhost:11434/api/generate | Endpoint of your local Ollama instance |
| OLLAMA_MODEL | deepseek-r1:8b | Model tag to use for every call |
| OLLAMA_KEEP_ALIVE | 30m | How long Ollama keeps the model loaded between calls (prevents reloading between the four endpoints) |
| OLLAMA_IDLE_TIMEOUT_MS | 120000 | Fail a request only after this many ms without a new token. Raise on very slow CPUs. |
| OLLAMA_HARD_TIMEOUT_MS | 1800000 | Absolute ceiling per request. Protects against runaway jobs. |
| OLLAMA_LOG_PROGRESS | (unset) | Set to 1 to print a 10-second progress heartbeat during streaming |
Per-endpoint limits (in each route file) override the defaults where appropriate — e.g. rewrite uses a 3 min idle + 30 min hard limit and a larger num_predict.
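For orientation, here is a minimal sketch of how a route can read these variables with the defaults above (illustrative only; the exact config shape in lib/ollama.ts may differ):

```ts
// Illustrative only: defaults mirror the table above.
const config = {
  url: process.env.OLLAMA_URL ?? 'http://localhost:11434/api/generate',
  model: process.env.OLLAMA_MODEL ?? 'deepseek-r1:8b',
  keepAlive: process.env.OLLAMA_KEEP_ALIVE ?? '30m',
  idleTimeoutMs: Number(process.env.OLLAMA_IDLE_TIMEOUT_MS ?? 120_000),
  hardTimeoutMs: Number(process.env.OLLAMA_HARD_TIMEOUT_MS ?? 1_800_000),
  logProgress: process.env.OLLAMA_LOG_PROGRESS === '1',
};
```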
On the home page:
- Resume: either drag-and-drop a PDF (up to 5 MB) or switch to the Paste Text tab and paste plain text.
- Job description: paste the full JD including the responsibilities, requirements, and "nice-to-have" sections.
Click Analyze Resume. If you uploaded a PDF it's sent to /api/parse-pdf which extracts the text; then the app navigates to the results page.
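Under the hood this is an ordinary multipart POST. A minimal client-side sketch, assuming a File object from the drop zone (the function name is illustrative, not the exact code in FileUpload.tsx):

```ts
// Illustrative sketch: send the dropped PDF to /api/parse-pdf and get back plain text.
async function extractPdfText(file: File): Promise<string> {
  const form = new FormData();
  form.append('file', file); // application/pdf, max 5 MB
  const res = await fetch('/api/parse-pdf', { method: 'POST', body: form });
  if (!res.ok) throw new Error(`parse-pdf failed with status ${res.status}`);
  const { text } = (await res.json()) as { text: string };
  return text; // stored in sessionStorage, then the app navigates to /results
}
```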
Analysis runs automatically. Typical times:
- GPU: 15-40 seconds.
- Fast CPU (M-series / recent Ryzen): 60-120 seconds.
- Older CPU: 2-4 minutes.
When it finishes you see:
- Score (0-100) with a circular gauge coloured by band (Excellent / Good / Needs Work / Poor).
- Breakdown bars for Grammar / Keywords / Skills / Experience / Format with max values 20 / 30 / 25 / 15 / 10.
- Recruiter's Verdict — 3-4 sentences of professional first-read impression.
- Job Fit — explicit mapping of what the JD needs vs what your resume proves.
- Format / ATS Parseability — verdict on whether an ATS can parse your layout, plus specific issues.
- Must-Do Changes — 3-5 numbered, priority-ordered fixes that are non-negotiable for this specific job.
- Weak Points and Missing Keywords — as side-by-side cards.
- Feedback tabs — the same data re-organised into Missing Keywords / Weak Points / Improvements / Format Issues for tabbed browsing.
Below the summary, two checkboxes let you choose:
- ☐ Rewrite my resume — runs /api/rewrite then /api/compare.
- ☐ Generate a cover letter — runs /api/cover-letter.
Both are ticked by default. A progress stepper shows which step is active.
After generation, two (or three) more tabs appear:
- Improved Resume: the rewritten CV with yellow-highlighted segments marking every change. Hover a highlight to see the reason. Toggle Highlighted / Edit if you want to tweak the text yourself. Buttons: Copy, .txt, Download Word.
- Cover Letter: the tailored letter, with the same download options.
- Before / After: a table of short excerpts — original text on the left, improved text on the right, and a recruiter-style reason for each change.
Downloaded .doc files are valid Word documents (Word-XML HTML inside) and open in:
- Microsoft Word (native),
- LibreOffice Writer,
- Google Docs (upload and open).
They include A4 margins, section headings, bulleted lists, and a serif title — ready to edit.
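The export works by wrapping the content in Word-flavoured HTML and downloading it as a blob. A simplified sketch of that approach (illustrative; the real generator in lib/word-export.ts adds richer styling):

```ts
// Simplified sketch of the Word-HTML export approach used for the .doc downloads.
function downloadAsWord(title: string, bodyHtml: string, filename: string) {
  const html = `
    <html xmlns:w="urn:schemas-microsoft-com:office:word">
      <head>
        <meta charset="utf-8"><title>${title}</title>
        <style>@page { size: A4; margin: 2cm; } body { font-family: Calibri, sans-serif; }</style>
      </head>
      <body>${bodyHtml}</body>
    </html>`;
  const blob = new Blob(['\ufeff', html], { type: 'application/msword' });
  const url = URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = filename.endsWith('.doc') ? filename : `${filename}.doc`;
  a.click();
  URL.revokeObjectURL(url);
}
```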
The analyse step sends one prompt to Ollama. The prompt has three parts:
- Persona — sets the model as "a senior technical recruiter and ATS specialist with 15+ years screening resumes at Fortune 500 and European multinationals, using the rubric of Workday / Taleo / Greenhouse / iCIMS."
- Evaluation norms — an explicit list of what recruiters and ATSes actually look for in 2024-era hiring (see How matching is scored below).
- Strict JSON output schema — the model must return exactly one JSON object with no commentary, no markdown, no <think> blocks.
The server then:
- Strips any residual <think>…</think> blocks DeepSeek-R1 sometimes leaks.
- Removes fenced-code wrappers.
- Extracts the outer {…} if the model wrote commentary around the JSON.
- Runs a two-pass JSON repair (trailing commas, unquoted keys) before parsing.
- Sanitises and clamps every field: grammar to 0-20, keywords to 0-30, etc., so a hallucinated 42-point format can't break the UI.
- Recomputes score as the sum of the five sub-scores if the model's own score disagrees (see the sketch below).
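A minimal sketch of that clamp-and-recompute step (illustrative; names and shape are assumptions, not the exact code in the analyze route):

```ts
// Illustrative: cap each sub-score at its rubric maximum and recompute the total,
// so a hallucinated 42-point "format" can never reach the UI.
const MAX = { grammar: 20, keywords: 30, skills: 25, experience: 15, format: 10 } as const;

function clampBreakdown(raw: Record<string, unknown>) {
  const clamp = (value: unknown, max: number) =>
    Math.min(Math.max(Math.round(Number(value) || 0), 0), max);

  const breakdown = {
    grammar: clamp(raw.grammar, MAX.grammar),
    keywords: clamp(raw.keywords, MAX.keywords),
    skills: clamp(raw.skills, MAX.skills),
    experience: clamp(raw.experience, MAX.experience),
    format: clamp(raw.format, MAX.format),
  };
  // Authoritative score: the sum of the five sub-scores (0-100),
  // regardless of what the model claimed its own "score" was.
  const score = Object.values(breakdown).reduce((sum, v) => sum + v, 0);
  return { breakdown, score };
}
```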
The total is 100. Categories and what moves points within each:
Weighted towards language quality, not just spelling.
- Active voice vs passive. "Built a system that reduced latency by 40%" beats "A system was built that reduced latency."
- Tense consistency. Past tense for prior roles, present tense for the current role, consistent throughout.
- No first-person pronouns. Standard résumé register — no "I", "my", or article-led phrases.
- Professional register. No colloquialisms, no emoji, no filler ("responsible for", "helped with").
- Zero spelling and punctuation errors. Any typo in the first 10 lines is an instant mark-down — real recruiters reject on this.
Tier-1 keywords (the job title itself, the top 5-8 hard skills explicitly named as requirements, the core technologies) must appear verbatim with the exact casing the JD uses. These drive ATS keyword matching.
Tier-2 keywords (industry terminology, methodologies such as Agile/Scrum, required certifications, required qualifications) add breadth.
Placement matters. Keywords in a generic "Skills" list score less than the same keywords demonstrated in Experience bullets — recruiters look for evidence of use, not just declaration. An ATS that ranks by TF-IDF will also reward placement in multiple sections.
Density without stuffing. Natural integration, not a keyword salad. Repeating the same keyword six times in a bullet is penalised.
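To make the Tier-1 rule concrete, here is a toy coverage check. It is purely illustrative of the norm the prompt encodes; the actual scoring is done by the model, not by code like this:

```ts
// Toy illustration: Tier-1 keywords must appear verbatim, with the exact casing the JD uses.
function tier1Coverage(resumeText: string, tier1Keywords: string[]) {
  const missing = tier1Keywords.filter((keyword) => !resumeText.includes(keyword));
  return { covered: tier1Keywords.length - missing.length, missing };
}

// Example (hypothetical keywords):
// tier1Coverage(resumeText, ['React', 'TypeScript', 'AWS']) → { covered: 2, missing: ['AWS'] }
```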
- Every required hard skill named in the JD must appear explicitly in the resume.
- Proficiency / years signal (e.g. "5+ years of Python") when the JD requests one.
- Soft skills must be backed by evidence in experience bullets — "strong communicator" alone scores zero; "presented quarterly roadmap to 3 VPs" scores.
- Tool-chain completeness: if the JD says "React, TypeScript, Node, AWS", all four should be findable, not just two.
- Years match. Does the candidate meet or exceed the JD's minimum years?
- Seniority signals. Verbs like led, owned, architected, managed X people, drove are expected at mid/senior level.
- Every bullet follows the formula: strong action verb + what was done + how + a quantified result (%, €, $, time saved, users, scale). A bullet without numbers loses points.
- Progression. Increasing responsibility across roles should be visible chronologically.
- Relevance. Experience in unrelated industries is worth less than directly-applicable experience, regardless of seniority.
ATS parseability is make-or-break — a beautiful PDF with text in columns can score zero on real ATSes.
- Clean, standard section headers: SUMMARY / EXPERIENCE / EDUCATION / SKILLS / CERTIFICATIONS / LANGUAGES.
- Reverse-chronological order within each section.
- Consistent date format (MMM YYYY – MMM YYYY or Present).
- A contact block at the top with email, phone, city/country, LinkedIn.
- No tables for layout, no text columns, no headers/footers, no images, no text boxes, no icons — all of these confuse ATS parsers.
- Appropriate length (one page for <10 YoE, two for senior).
| Range | Label | Meaning |
|---|---|---|
| 80-100 | Excellent | Strong match; small polish would help |
| 60-79 | Good | Competitive; apply missing keywords and metrics |
| 40-59 | Needs Work | Parseable but failing on keyword density or experience framing |
| 0-39 | Poor | Major rewrite required before this resume would pass screening |
The server clamps the score to this range; the UI colours the gauge accordingly (components/ScoreCard.tsx).
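A minimal sketch of that band mapping (illustrative; the colour names are placeholders, not the exact classes in ScoreCard.tsx):

```ts
// Illustrative band mapping: thresholds match the table above.
function scoreBand(score: number): { label: string; colour: string } {
  if (score >= 80) return { label: 'Excellent', colour: 'green' };
  if (score >= 60) return { label: 'Good', colour: 'lime' };
  if (score >= 40) return { label: 'Needs Work', colour: 'amber' };
  return { label: 'Poor', colour: 'red' };
}
```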
When the user picks Rewrite my resume, the app sends the original resume + job description to /api/rewrite. The prompt instructs the model to:
- Produce a full rewrite using Europass-compatible sections: Contact · Professional Summary · Key Skills · Professional Experience · Education · Certifications · Languages.
- Use plain-text, ATS-safe formatting only — no tables, columns, graphics, or emoji.
- Every experience bullet follows: action verb + what + how + quantified result.
- Integrate Tier-1 keywords from the JD into the Summary, Skills, and Experience sections (not just one).
- Preserve all truthful facts — never invent employers, dates, titles, or skills.
- Tailor the Professional Summary (3-4 lines) specifically to the target role.
The model returns two things in the same response:
- improved_resume — full plain text of the rewrite.
- highlighted_changes — an array of per-section change objects:

```
{
  section: "Summary" | "Skills" | "Experience" | "Education"
         | "Certifications" | "Languages" | "Contact" | "Format",
  type: "added" | "modified" | "removed" | "reordered",
  original: string,   // short excerpt
  updated: string,    // short excerpt
  reason: string      // ATS / recruiter benefit
}
```
The UI uses highlighted_changes[*].updated to find those segments in improved_resume and wraps them in <mark> tags — giving you inline yellow highlights over the actual new resume, not a separate diff.
Below the resume, the Highlighted Changes card lists every change as a tagged card (colour-coded by type) with a "Why" line.
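A simplified sketch of the wrapping step (illustrative; the real renderer in ResumeEditor.tsx also escapes HTML and copes with excerpts that do not match verbatim):

```ts
// Illustrative: wrap each "updated" excerpt found in the rewritten resume in <mark>,
// carrying the recruiter-style reason as a hover tooltip.
function highlightResume(
  improvedResume: string,
  changes: { updated: string; reason: string }[]
): string {
  let html = improvedResume;
  for (const change of changes) {
    if (!change.updated || !html.includes(change.updated)) continue; // skip non-verbatim excerpts
    html = html.replace(change.updated, `<mark title="${change.reason}">${change.updated}</mark>`);
  }
  return html;
}
```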
The server validates the response:
- section must be one of the eight allowed values (falls back to Format).
- type must be one of the four allowed values (falls back to modified).
- Entries without a reason are discarded.
When the user picks Generate a cover letter, /api/cover-letter sends the resume + JD to the model with a strict four-paragraph template:
- Opening (2-3 sentences): names the role, where it was seen, and a specific hook tied to the company or product.
- Why you (3-5 sentences): ties 2-3 concrete achievements from the resume (with numbers) to the top requirements in the JD.
- Why this company (2-3 sentences): references something from the JD — mission, product, value — and explains alignment.
- Closing (2-3 sentences): explicit call to interview, thanks, sign-off.
Rules enforced by the prompt:
- Salutation exactly "Dear Hiring Manager,".
- Sign-off "Sincerely," followed by the candidate's real name as it appears in the resume.
- 280-380 words.
- 4-6 Tier-1 JD keywords integrated naturally.
- No clichés ("I am writing to apply", "detail-oriented team player" without evidence).
- European business tone — confident, specific, not arrogant.
/api/compare takes the original resume and the rewritten resume and asks the model to produce a separate, compact comparison table — 6 to 12 of the most impactful changes, each with:
- original — one-sentence excerpt.
- updated — one-sentence excerpt.
- reason — what ATS or recruiter benefit the change delivers (keyword added, metric added, passive → active, format fix, etc.).
This is deliberately a second pass rather than reusing highlighted_changes, because a dedicated prompt is better at picking the most impactful transformations rather than every small edit. The two views complement each other: highlighted_changes is exhaustive; the Before/After table is curated.
┌────────────┐ PDF / text + JD ┌───────────────────────┐
│ Home Page │ ───────────────────▶ │ /api/parse-pdf │
└────────────┘ └───────────────────────┘
│
▼
sessionStorage → /results page
│
▼
/api/analyze (auto)
│
▼
┌─ SUMMARY TAB shown to the user ─┐
│ · score + breakdown │
│ · recruiter's verdict │
│ · job-fit assessment │
│ · must-do changes │
│ · weak points + missing kw │
│ · checkboxes + Generate │
└──────────────┬───────────────────┘
│ (user picks)
┌────────────────┼────────────────┐
▼ ▼ ▼
/api/rewrite /api/cover-letter (nothing)
│
▼
/api/compare
│
▼
Ollama (deepseek-r1:8b, streaming)
- Stage 1 is a single /api/analyze call so the user sees the recruiter-grade summary quickly.
- Stage 2 is opt-in: only the endpoints the user ticks will run. This is crucial on CPU-only machines where a rewrite alone can take 5 minutes.
- Every call uses streaming (stream: true) so the idle timeout resets on every token; a fixed total timeout is never used. A sketch of this streaming loop follows this list.
- Every call sets keep_alive: '30m' so the model stays resident in memory between the 2-4 sequential endpoints (otherwise each call would pay the cold-start cost).
- Every call uses format: 'json' (Ollama's JSON mode) plus a server-side <think>-stripping, JSON-repair, and schema-clamping pipeline.
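A condensed sketch of that streaming call with both timers. This is a simplification, not the exact code in lib/ollama.ts; it assumes Ollama's /api/generate with stream: true, which returns one JSON object per line, each carrying a response fragment. Timer cleanup and buffering of partial lines are omitted for brevity:

```ts
// Simplified streaming client: each read is raced against the idle timer, so the
// request only fails when no new chunk arrives for idleMs; the hard ceiling is
// checked per chunk and fires even if tokens keep arriving.
async function generateStreaming(prompt: string, idleMs = 120_000, hardMs = 1_800_000): Promise<string> {
  const started = Date.now();
  let output = '';

  const res = await fetch(process.env.OLLAMA_URL ?? 'http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: process.env.OLLAMA_MODEL ?? 'deepseek-r1:8b',
      prompt,
      stream: true,
      format: 'json',
      keep_alive: process.env.OLLAMA_KEEP_ALIVE ?? '30m',
    }),
  });

  const reader = res.body!.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const idle = new Promise<never>((_, reject) =>
      setTimeout(() => reject(new Error('Ollama stalled with no new tokens')), idleMs)
    );
    const { done, value } = await Promise.race([reader.read(), idle]);
    if (done) break;
    if (Date.now() - started > hardMs) throw new Error('Ollama exceeded the hard ceiling');

    // Ollama streams newline-delimited JSON objects, each with a "response" fragment.
    for (const line of decoder.decode(value, { stream: true }).split('\n')) {
      if (!line.trim()) continue;
      let chunk: { response?: string };
      try { chunk = JSON.parse(line); } catch { continue; } // partial line; real code buffers it
      output += chunk.response ?? '';
    }
  }
  return output;
}
```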
All endpoints are POST with JSON bodies unless stated otherwise.
POST /api/parse-pdf

```
Request  : multipart/form-data with field "file" (application/pdf, max 5 MB)
Response : { text: string }
Errors   : 400 INVALID_TYPE | 400 NO_FILE | 413 FILE_TOO_LARGE | 422 NO_TEXT
```

POST /api/analyze

```
Request  : { resume_text: string; job_description: string }
Response : {
  score: number;                  // 0-100
  breakdown: {
    grammar: number;              // 0-20
    keywords: number;             // 0-30
    skills: number;               // 0-25
    experience: number;           // 0-15
    format: number;               // 0-10
  };
  summary: string;                // 3-4 sentence recruiter verdict
  job_fit: string;                // match vs gap
  format_assessment: string;      // ATS parseability verdict
  missing_keywords: string[];     // up to 12
  weak_points: string[];          // 4-6
  improvements: string[];         // 4-6 actionable rewrites
  format_issues: string[];        // up to 6
  must_do_changes: string[];      // 3-5 priority-ordered
}
```

POST /api/rewrite

```
Request  : { resume_text: string; job_description: string }
Response : {
  improved_resume: string;        // full plain-text resume
  changes_made: string[];         // 5-10 short summary bullets
  highlighted_changes: {
    section: "Summary" | "Skills" | "Experience" | "Education"
           | "Certifications" | "Languages" | "Contact" | "Format";
    type: "added" | "modified" | "removed" | "reordered";
    original: string;
    updated: string;
    reason: string;
  }[];                            // 6-12 per-section changes
}
```

POST /api/cover-letter

```
Request  : { resume_text: string; job_description: string }
Response : { cover_letter: string }   // 280-380 words, 4 paragraphs
```

POST /api/compare

```
Request  : { original_resume: string; improved_resume: string }
Response : {
  changes: {
    original: string;
    updated: string;
    reason: string;
  }[];                            // 6-12 curated impactful changes
}
```

```
resume-ats-analyzer/
├── app/
│ ├── api/
│ │ ├── analyze/route.ts # ATS scoring + recruiter summary
│ │ ├── rewrite/route.ts # ATS-optimised rewrite + highlights
│ │ ├── cover-letter/route.ts # Cover letter generator
│ │ ├── compare/route.ts # Curated before/after table
│ │ └── parse-pdf/route.ts # PDF text extraction
│ ├── results/page.tsx # Two-stage results dashboard
│ ├── layout.tsx
│ ├── page.tsx # Upload + paste page
│ └── globals.css
├── components/
│ ├── FileUpload.tsx # Drag-drop PDF uploader (5MB cap)
│ ├── ScoreCard.tsx # SVG radial gauge + bar breakdown
│ ├── SummaryReport.tsx # Recruiter verdict, job-fit, must-dos
│ ├── GenerationOptions.tsx # Checkboxes + Generate button
│ ├── FeedbackSection.tsx # 4-tab feedback explorer
│ ├── ResumeEditor.tsx # Inline-highlighted rewrite + Word DL
│ ├── HighlightedChangesList.tsx # Per-section change cards
│ ├── CoverLetterDisplay.tsx # Letter display + Word DL
│ ├── ComparisonTable.tsx # Before/after curated table
│ ├── ProgressStepper.tsx # Dynamic per-run step indicator
│ └── Toast.tsx
├── lib/
│ ├── ollama.ts # Streaming client, JSON repair, retries
│ ├── pdf-parser.ts # pdf-parse wrapper + size/format guards
│ ├── prompts.ts # All prompts + ATS norms
│ └── word-export.ts # Client-side Word (.doc) generator
├── types/index.ts # Shared TypeScript interfaces
├── package.json
├── tsconfig.json
├── tailwind.config.ts
├── postcss.config.js
├── next.config.js
└── README.md
```
User-facing messages:
| Condition | Message |
|---|---|
| Ollama not running | "Ollama is not running. Please start Ollama and try again." |
| Model not pulled | "Model "deepseek-r1:8b" not found. Run: ollama pull deepseek-r1:8b" |
| Invalid/corrupt PDF | "Unable to read PDF. Please upload a valid resume file." |
| Scanned/image-only PDF | "PDF contains no extractable text. It may be an image-based scan." |
| PDF > 5 MB | "PDF exceeds the 5MB size limit." |
| AI returned non-JSON | "AI response was not valid JSON" (server auto-retries) |
| Model stalls (no tokens) | "Ollama stalled with no new tokens for N s." |
| Hard ceiling exceeded | "Ollama exceeded the hard ceiling of N s." |
All errors are surfaced as a red Toast in the UI with a Start Over option on the results page.
The backend uses token-level streaming for every Ollama call. Two independent timers protect the request:
| Timer | Default | Behaviour |
|---|---|---|
| Idle | 120000 ms (2 min) | Resets on every new token from Ollama. Fires only if the model truly stalls. |
| Hard | 1800000 ms (30 min) | Absolute ceiling. Fires even if tokens are arriving. |
The rewrite endpoint gets a larger idle timeout (3 min) and the full 30-minute hard ceiling because it emits the longest output. With these, a rewrite that genuinely takes 7 minutes on an old CPU completes successfully, whereas a fixed 3-minute total timeout would have killed it.
Typical wall-clock times per endpoint with deepseek-r1:8b:
| Endpoint | GPU (RTX 3060) | Fast CPU (M2 / 7950X) | Older CPU |
|---|---|---|---|
| /api/analyze | 15-30 s | 45-90 s | 90-180 s |
| /api/rewrite | 40-90 s | 2-4 min | 4-8 min |
| /api/compare | 20-40 s | 60-120 s | 2-3 min |
| /api/cover-letter | 15-30 s | 30-60 s | 60-120 s |
The OLLAMA_KEEP_ALIVE=30m setting keeps the model resident between endpoints, avoiding a cold-start penalty on each call.
"AI response was not valid JSON"
The server auto-retries (twice for analyze, once for the heavier endpoints) with exponential backoff. If it still fails, the model is either confused by an unusually long input or genuinely hallucinating. Try a larger model (deepseek-r1:14b) or trim the job description.
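A minimal sketch of the retry-with-backoff idea (illustrative; attempt counts and delays are placeholders for the real logic in lib/ollama.ts):

```ts
// Illustrative retry with exponential backoff around a generation + JSON-parse attempt.
async function generateJsonWithRetry<T>(run: () => Promise<T>, retries = 2): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await run();                        // throws if the output is not valid JSON
    } catch (err) {
      lastError = err;
      if (attempt === retries) break;
      const delay = 1000 * 2 ** attempt;         // 1 s, 2 s, …
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
  throw lastError;
}
```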
Rewrite appears to hang.
Set OLLAMA_LOG_PROGRESS=1 to see a token heartbeat in the server logs. If tokens are arriving, just wait — the 3-minute idle timer will reset on every one. If tokens aren't arriving at all, check whether another process has loaded a different model and evicted yours.
"Ollama is not running."
Check curl http://localhost:11434/api/tags. On Linux run ollama serve; on macOS/Windows check the tray icon.
"Model "deepseek-r1:8b" not found."
Run ollama pull deepseek-r1:8b. To use a different tag set OLLAMA_MODEL=... before npm run dev.
PDF extraction returns empty text.
Your PDF is almost certainly a scanned image rather than actual text. Either OCR it first (e.g. ocrmypdf input.pdf output.pdf) or switch to the Paste Text tab.
Scores look random or wildly inconsistent.
DeepSeek-R1 is a reasoning model — small prompt differences can shift scores a few points across runs. The temperature is already set to 0.2 for consistency. For truly stable evaluations upgrade to deepseek-r1:14b.
Word download opens as raw HTML in Pages.
Use Microsoft Word, LibreOffice, or upload to Google Docs. Apple Pages is known to occasionally render Word-HTML as raw source. Or use the .txt download and paste into your preferred editor.
Everything is slow.
You're running on CPU without an accelerator. Expected. Install a GPU build of Ollama if you have an NVIDIA card, or switch to deepseek-r1:1.5b for rapid iteration (quality drops noticeably).
- No telemetry, no analytics, no tracking.
- The resume text and job description are held only in the browser's sessionStorage and are sent only to your local Ollama instance.
- No outbound network calls are made other than to OLLAMA_URL (which defaults to localhost).
- When you click Analyze Another Resume, all stored data is wiped from sessionStorage (see the sketch below).
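A minimal sketch of that lifecycle (illustrative; the storage keys are placeholders, not the exact names the pages use):

```ts
// Illustrative sessionStorage lifecycle; key names are placeholders.
function storeInputs(resumeText: string, jobDescription: string) {
  sessionStorage.setItem('resume_text', resumeText);         // set on the home page
  sessionStorage.setItem('job_description', jobDescription); // read by /results
}

function startOver() {
  sessionStorage.clear(); // "Analyze Another Resume": wipe everything, then go home
  window.location.href = '/';
}
```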
MIT. Use it, fork it, ship it.