feat: restructure monorepo, add shared package, and background remover #17

gaureshpai wants to merge 52 commits into `main` from
Conversation
- Introduce `package/` directory for shared CLI and core utilities.
- Centralize dependency management using pnpm workspace catalogs.
- Add ImageBackgroundRemover component to the frontend.
- Update backend and frontend to align with the new workspace structure.
- Replace markdownlint with Biome and refine Turbo tasks.
- Perform a global refactor of routes, middleware, and UI components.
> **Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. This behavior is configurable, and reviews can be managed with CodeRabbit commands.
📝 Walkthrough

Backend: module aliasing, Jimp-based image handling, DNS/IP hardening, Supabase cleanup gating (cron secret + admin JWT), TotalUsage key + migration, filename sanitization, and widespread route validations. Frontend: migration to alias imports, a new ImageBackgroundRemover component, tightened validations/timeouts, and many formatting/linting updates. CI: always-run release bumps and split website/package versions.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant CI as Workflow (trigger)
    participant Caller as External Cron/Caller
    participant API as Backend API
    participant Secret as CronSecretMiddleware
    participant Auth as JWTAuthMiddleware
    participant Cleaner as cleanSupabaseStorage
    participant Storage as Supabase Storage
    CI->>API: POST /api/clean-supabase/trigger (x-cron-secret)
    API->>Secret: validate x-cron-secret
    Secret-->>API: allowed / rejected
    Caller->>API: POST /api/clean-supabase (x-auth-token)
    API->>Auth: verify JWT & admin role
    Auth-->>API: authorized / unauthorized
    API->>Cleaner: invoke cleanSupabaseStorage()
    Cleaner->>Storage: list objects, filter by age, remove old files
    Storage-->>Cleaner: deletion results
    Cleaner-->>API: report status
    API-->>Caller: 200 / 401 / 403 / 500
```
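The x-cron-secret validation step in the diagram above can be sketched as an Express-style middleware. This is a minimal sketch only: `requireSecret` matches the middleware name cited elsewhere in this review, but its body and the `CRON_SECRET` environment variable name are assumptions, not the repository's actual code.

```javascript
// Sketch of the CronSecretMiddleware step from the diagram.
// The middleware name (requireSecret) is taken from the review comments;
// the body and the CRON_SECRET env var name are illustrative assumptions.
const requireSecret = (req, res, next) => {
  const provided = req.get("x-cron-secret");
  // A production version should prefer a constant-time comparison
  // (e.g. crypto.timingSafeEqual) to avoid timing side channels.
  if (!provided || provided !== process.env.CRON_SECRET) {
    return res.status(401).json({ msg: "Invalid or missing cron secret" });
  }
  return next();
};
```

Mounted as `router.post("/api/clean-supabase/trigger", requireSecret, handler)`, this yields the 401 path shown in the diagram when the header is absent or wrong.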
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
- Security:
  - Implemented robust filename sanitization for all file uploads.
  - Added UI warnings for insecure MD5 hash generation.
  - Added validation/length limits for analytics tracking inputs.
  - Switched API tracking to use req.path to prevent query string leakage.
- Reliability:
  - Added 5s timeouts and 1MB size limits to outbound redirect checks.
  - Improved frontend state management to clear file selections on validation failure.
  - Added try/finally blocks to ensure loading states are always reset.
  - Improved error handling for image resizing and base64 conversion.
- Media Processing:
  - Integrated @jimp/wasm-webp and @jimp/wasm-avif for improved format support.
  - Fixed background removal configuration for node environments.
  - Updated Jimp usage to use modern buffer reading patterns.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
- Add SSRF protection to redirect checker with DNS validation
- Add auth middleware to clean-supabase endpoint
- Fix TotalUsage model key field
- Update Jimp library usage with defaultFormats/plugins
- Fix PDF compression options
- Various frontend component fixes (error handling, quality clamping, image loading check, diff display, timeout cleanup)
Docstrings generation was requested by @gaureshpai.

The following files were modified:
* `backend/routes/redirectChecker.js`

These files were kept as they were:
* `frontend/src/App.jsx`

These file types are not supported:
* `.github/workflows/publish.yml`
* `.github/workflows/trigger-supabase-cleanup.yml`
* `README.md`
* `backend/docs/api-documentation.json`
* `backend/package.json`
* `biome.json`
* `frontend/package.json`
* `frontend/public/manifest.json`
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Fixed 1 file based on 2 unresolved review comments. Co-authored-by: CodeRabbit <noreply@coderabbit.ai>
Docstrings generation was requested by @gaureshpai.

The following files were modified:
* `backend/routes/redirectChecker.js`

These files were kept as they were:
* `frontend/src/App.jsx`

These file types are not supported:
* `.github/workflows/publish.yml`
* `.github/workflows/trigger-supabase-cleanup.yml`
* `README.md`
* `backend/docs/README.md`
* `backend/docs/api-documentation.json`
* `backend/package.json`
* `biome.json`
* `frontend/package.json`
* `frontend/public/manifest.json`
Docstrings generation was requested by @gaureshpai.

The following files were modified:
* `frontend/src/App.jsx`

These files were kept as they were:
* `backend/routes/redirectChecker.js`

These file types are not supported:
* `.github/workflows/publish.yml`
* `.github/workflows/trigger-supabase-cleanup.yml`
* `README.md`
* `backend/docs/README.md`
* `backend/docs/api-documentation.json`
* `backend/package.json`
* `biome.json`
* `frontend/package.json`
* `frontend/public/manifest.json`
Actionable comments posted: 2
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
♻️ Duplicate comments (6)
frontend/src/components/HashGenerator.jsx (1)
**1-1: ⚠️ Potential issue | 🔴 Critical**

Remove the UTF-8 BOM at Line 1. Line 1 still contains a leading BOM character (`\uFEFF`) before the `import`, which can break parsing/tooling in some environments.

```bash
#!/bin/bash
python - <<'PY'
from pathlib import Path
target = Path("frontend/src/components/HashGenerator.jsx")
data = target.read_bytes()
print(f"{target}: starts_with_bom={data.startswith(b'\\xef\\xbb\\xbf')}")
PY
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/HashGenerator.jsx` at line 1: the file contains a UTF-8 BOM before the first token (the leading \uFEFF), which can break parsers. Open frontend/src/components/HashGenerator.jsx and remove the BOM so the first character is the "i" of the import (remove the invisible byte and re-save the file as UTF-8 without BOM), then verify by confirming the import line `import useAnalytics from "@frontend/utils/useAnalytics";` starts at the first column and by re-running the provided BOM-check script.

frontend/src/components/ImageCropper.jsx (1)
**129-130: ⚠️ Potential issue | 🟡 Minor**

Derive the extension from `blob.type`, not the requested MIME. This still has the old filename/MIME mismatch edge case in a smaller form: `toBlob(..., mimeType)` may fall back to another format, so the extension should come from the resulting `blob.type` instead of `mimeType`.

💡 Minimal fix

```diff
- const extension = mimeType.split("/")[1] || "png";
+ const extension = blob.type.split("/")[1] || "png";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageCropper.jsx` around lines 129-130: the extension is being derived from the requested mimeType instead of the actual resulting blob, which can differ; update the code where the Blob is available (the toBlob callback) to derive the extension from blob.type (e.g., const extension = (blob.type || "image/png").split("/")[1] || "png") and then call handleDownload(objectUrl, `dkutils-cropped-image-${Date.now()}.${extension}`) so the filename matches the actual blob format; ensure you reference blob.type, not mimeType, when building the extension.

backend/docs/README.md (1)
**441-450: ⚠️ Potential issue | 🟡 Minor**

Fix the malformed fenced block in the 401 response example. At Lines 441-450, the block is still mixed/imbalanced (indented fence + extra closing fence), which keeps markdownlint noisy and can render incorrectly.

Suggested patch

````diff
 - **401** - Unauthorized - Content-Type: `application/json` - Schema:
-   ```json
-   {
-
-   "msg": "Invalid or missing cron secret"
-   }
-
-   ```
-
-   ```
+   ```json
+   {
+     "msg": "Invalid or missing cron secret"
+   }
+   ```
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/docs/README.md` around lines 441-450: fix the malformed fenced block in the 401 response example by replacing the mixed/indented fences and extra closing fence with a single proper JSON fenced block; consolidate the opening fence to "```json", ensure the JSON object is properly formatted with "msg": "Invalid or missing cron secret", and close with a single "```" fence so the 401 response example matches the suggested patch.

backend/index.js (1)
**27-34: ⚠️ Potential issue | 🟠 Major**

Re-throw the Supabase readiness failure. `startServer()` awaits this check, but the catch block only logs and returns. That means invalid Supabase credentials or a missing bucket still let the API boot and report healthy until a storage-backed route fails.

Suggested fix

```diff
 const testSupabaseConnection = async () => {
   try {
     const { data: bucket, error: getBucketError } = await supabase.storage.getBucket("utilityhub");
     if (getBucketError) throw getBucketError;
     console.log(`Supabase Storage connected!\nBucket '${bucket.name}' found.`);
   } catch (error) {
     console.error("Supabase Storage connection failed:", error.message);
+    throw error;
   }
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/index.js` around lines 27-34: the testSupabaseConnection function currently catches Supabase readiness errors and only logs them, allowing startServer() to continue; update testSupabaseConnection (or its catch block) to re-throw the caught error after logging (or remove the catch so the promise rejects) so that startServer() receives the rejection and aborts startup when Supabase credentials/bucket are invalid; ensure the references to supabase.storage.getBucket and the existing console.error remain, but follow with throw error to propagate the failure back to startServer.

backend/routes/imageConverter.js (1)
**133-145: ⚠️ Potential issue | 🟠 Major**

Keep the `_dkutils_` shortcut basename-only. This still lets nested storage keys bypass the prefix allowlist as long as the basename contains `_dkutils_` (for example, `private/path/foo_dkutils_bar.png`). The shortcut should only apply when the requested key is itself just the generated basename.

Suggested fix (the new consts are declared after `allowedPrefixes` so they can reference it):

```diff
 const baseName = path.basename(filename);
 const allowedPrefixes = [
   "dkutils_",
   "dkutils-",
   "screenshot-",
   "screenshots/screenshot-",
   "favicons/",
 ];
+const isWhitelistedPath = allowedPrefixes.some((prefix) => filename.startsWith(prefix));
+const isGeneratedArtifact = filename === baseName && baseName.includes("_dkutils_");
 if (
   filename.includes("..") ||
-  (!allowedPrefixes.some((prefix) => filename.startsWith(prefix)) &&
-    !baseName.includes("_dkutils_"))
+  (!isWhitelistedPath && !isGeneratedArtifact)
 ) {
   return res.status(403).json({ msg: "Invalid filename." });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/imageConverter.js` around lines 133-145: the current check uses baseName.includes("_dkutils_"), which permits nested keys like "private/path/foo_dkutils_bar.png"; change the shortcut so it only applies when the requested key is exactly the generated basename. In the filename validation inside imageConverter.js (look for the variables filename, baseName, allowedPrefixes and the condition using startsWith and baseName.includes), replace the baseName.includes("_dkutils_") allowance with a stricter check that the filename equals the baseName (or has no directory component) AND the baseName contains "_dkutils_".

frontend/src/components/ImageToPdfConverter.jsx (1)
**18-20: ⚠️ Potential issue | 🟠 Major**

File changes still don't invalidate the in-flight conversion. `conversionIdRef` only advances on submit. If the user starts conversion A and then changes the file picker before the response returns, `currentConversionId === conversionIdRef.current` still holds, so A is downloaded and shown against the new selection.

Suggested fix

```diff
 const onFileChange = (e) => {
+  conversionIdRef.current += 1;
   setConvertedFile(null);
   const files = Array.from(e.target.files);
```

```diff
 <input
   accept="image/*"
   className="block w-full text-sm text-foreground border border-input rounded-lg cursor-pointer bg-background focus:outline-none file:mr-4 file:py-2 file:px-4 file:rounded-md file:border-0 file:text-sm file:font-semibold file:bg-primary/10 file:text-primary hover:file:bg-primary/10"
   id="multiple_files"
   type="file"
   multiple
+  disabled={loading}
   onChange={onFileChange}
 />
```

Also applies to: 65-67, 89-105, 140-147
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageToPdfConverter.jsx` around lines 18 - 20, The bug is that conversionIdRef is only incremented on submit so an in-flight conversion response can still match the currentConversionId after the user changes files; update the logic in onFileChange (and symmetrically in handlers around the submit/response code paths that check conversionIdRef.current, e.g., where currentConversionId is used) to advance or reset conversionIdRef when the file input changes (or explicitly mark/cancel the in-flight conversion) so any incoming response with the old currentConversionId is ignored; locate and modify conversionIdRef, onFileChange, and the submit/response handlers (the functions that compare currentConversionId === conversionIdRef.current) to ensure the ref is bumped on file change and responses validate against the newest ref.
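The ref-token guard this comment asks for can be demonstrated framework-free. This is a hedged sketch, not the component's code: `makeLatestOnly` and every name in it are invented for illustration of the "latest request wins" idea behind bumping `conversionIdRef` on file change.

```javascript
// Generic "latest request wins" guard: every call bumps a shared counter,
// and a stale async result is dropped when its captured token no longer
// matches the counter. All names here are illustrative.
function makeLatestOnly() {
  let latest = 0;
  return function run(task, onResult) {
    const token = ++latest; // analogous to conversionIdRef.current += 1
    Promise.resolve()
      .then(task)
      .then((result) => {
        if (token !== latest) return; // superseded: ignore the stale result
        onResult(result);
      });
  };
}
```

In the component, the existing `currentConversionId === conversionIdRef.current` comparison already implements the `token !== latest` check; the fix is simply to make file changes bump the counter too, so a pending conversion's token goes stale.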
🟠 Major comments (23)
frontend/src/components/ImageResizer.jsx-104-107 (1)
**104-107: ⚠️ Potential issue | 🟠 Major**

Do not preserve the original extension for canvas output. `canvas.toDataURL(originalImage.type)` may silently fall back to PNG when that encoder is unsupported, but the download name still keeps the original extension. That can produce files like `resized-photo.gif` whose bytes are actually PNG. Use a known output format and derive the filename extension from the actual MIME type.

Suggested fix

```diff
- const dataUrl = canvas.toDataURL(originalImage.type);
+ const outputType =
+   originalImage.type === "image/jpeg" || originalImage.type === "image/png"
+     ? originalImage.type
+     : "image/png";
+ const dataUrl = canvas.toDataURL(outputType);
+ const extension = outputType === "image/jpeg" ? "jpg" : "png";
  setResizedImageSrc(dataUrl);
- handleDownload(dataUrl, `resized-${originalImage ? originalImage.name : "image"}`);
+ handleDownload(dataUrl, `resized-image.${extension}`);
  setLoading(false);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageResizer.jsx` around lines 104-107: canvas.toDataURL should use a known output MIME (e.g., "image/png" or "image/jpeg"), and the download filename must be derived from the actual dataUrl MIME, not originalImage.type; change the call in ImageResizer.jsx to generate the dataUrl with a fixed MIME (canvas.toDataURL("image/png") or the chosen output), then parse the resulting dataUrl to extract its MIME (from the "data:...;base64" prefix), map that MIME to a file extension (png, jpg, etc.), construct the download name using originalImage.name without its original extension (or a fallback base name) plus the derived extension, and pass that filename to handleDownload while still calling setResizedImageSrc(dataUrl).

frontend/src/components/QrCodeScanner.jsx-10-10 (1)
**10-10: ⚠️ Potential issue | 🟠 Major**

Prevent stale async scan results from overwriting newer uploads. Concurrent uploads can resolve out of order and show the wrong QR data.

Suggested fix

```diff
   const [qrData, setQrData] = useState("");
   const lastTrackedQrDataRef = useRef("");
+  const scanIdRef = useRef(0);
 @@
   const handleImageUpload = async (e) => {
+    const scanId = ++scanIdRef.current;
     const file = e.target.files[0];
 @@
     const reader = new FileReader();
     reader.onload = (event) => {
+      if (scanId !== scanIdRef.current) return;
       const img = new Image();
       img.onload = () => {
+        if (scanId !== scanIdRef.current) return;
         const MAX_DIMENSION = 1000;
```

Also applies to: 26-86
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/QrCodeScanner.jsx` at line 10: the scan handler is allowing stale async scan/upload results to overwrite newer QR data; use the existing lastTrackedQrDataRef (or add a scanCounterRef) to ignore out-of-order resolutions: when a scan is initiated (e.g., in the QR scan callback such as handleScan/onDetected), set lastTrackedQrDataRef.current to the scanned value or a new incremental token, then when the async upload or processing promise resolves check that the ref/token still matches before updating state or UI; if it doesn't match, discard the result. Ensure this guard is applied to all async scan flows referenced in the file (lines ~26-86) so only the latest scan result is accepted.

frontend/src/components/QrCodeScanner.jsx-35-38 (1)
**35-38: ⚠️ Potential issue | 🟠 Major**

Add `FileReader`/`Image` error paths to avoid a stuck "Scanning..." state. If the file read or image decode fails, there's no handler, so users can be left without a terminal state.

Suggested fix

```diff
     const reader = new FileReader();
+    reader.onerror = () => {
+      setQrData("");
+      toast.error("Failed to read the selected image.");
+    };
     reader.onload = (event) => {
+      const result = event?.target?.result;
+      if (typeof result !== "string") {
+        setQrData("");
+        toast.error("Unsupported image data.");
+        return;
+      }
       const img = new Image();
+      img.onerror = () => {
+        setQrData("");
+        toast.error("Unable to load the selected image.");
+      };
       img.onload = () => {
         const MAX_DIMENSION = 1000;
 @@
-      img.src = event.target.result;
+      img.src = result;
     };
```

Also applies to: 82-85
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/QrCodeScanner.jsx` around lines 35-38: the FileReader/Image flow in QrCodeScanner uses reader.onload and img.onload but lacks error handlers; add reader.onerror and img.onerror in both places where FileReader/Image are used (the reader and img created in reader.onload and the similar block around lines 82-85) to clear the "scanning" state and surface a user-friendly error; specifically, set reader.onerror = (e) => { setScanning(false); setError("Failed to read file"); } and img.onerror = () => { setScanning(false); setError("Failed to decode image"); } (or call the component's existing error/cleanup functions) so the component always reaches a terminal state on read or decode failure.

backend/utils/pdfErrorHandler.js-74-85 (1)
**74-85: ⚠️ Potential issue | 🟠 Major**

Avoid hard-failing on `%PDF-` only at byte 0. Lines 74-85 assume the PDF header starts at offset 0, but the function later checks header presence in the first 1KB. This mismatch can reject valid files with a small preamble and also parses the version from a fixed offset.

🛠️ Proposed fix

```diff
-  // Magic bytes validation for PDF files
-  const pdfSignature = Buffer.from([0x25, 0x50, 0x44, 0x46, 0x2d]); // %PDF-
-  const fileSignature = file.buffer.slice(0, 5);
-
-  if (!pdfSignature.equals(fileSignature)) {
-    throw new Error("Invalid PDF file signature. The file is not a valid PDF.");
-  }
-
-  // Additional validation: check for PDF version
-  const pdfVersion = file.buffer.slice(5, 8).toString();
-  const validVersions = ["1.0", "1.1", "1.2", "1.3", "1.4", "1.5", "1.6", "1.7", "2.0"];
-
-  if (!validVersions.includes(pdfVersion)) {
+  // Header/version validation in first 1KB window
+  const headerWindow = file.buffer.toString("latin1", 0, Math.min(file.buffer.length, 1024));
+  const headerMatch = headerWindow.match(/%PDF-(1\.[0-7]|2\.0)/);
+  if (!headerMatch) {
+    throw new Error("Invalid PDF file signature. The file is not a valid PDF.");
+  }
+  const pdfVersion = headerMatch[1];
+  const validVersions = ["1.0", "1.1", "1.2", "1.3", "1.4", "1.5", "1.6", "1.7", "2.0"];
+  if (!validVersions.includes(pdfVersion)) {
     throw new Error(
       `Unsupported PDF version: ${pdfVersion}. Please use a PDF with version 1.0-2.0.`,
     );
   }
 @@
-  // First validate PDF header
-  if (!bufferStr.includes("%PDF")) {
+  // First validate PDF header presence
+  if (!bufferStr.includes("%PDF")) {
     throw new Error("Invalid PDF file. Missing PDF header.");
   }
```

Also applies to: 92-94
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/utils/pdfErrorHandler.js` around lines 74-85: the code currently compares pdfSignature to fileSignature and slices file.buffer at fixed offsets (e.g., pdfVersion = file.buffer.slice(5, 8)), which assumes the PDF header starts at byte 0; update the logic to locate the PDF header ("%PDF-") within the first 1KB (use the same header scan used elsewhere), reject only if no header is found, and compute the version slice relative to the found header index (e.g., headerIndex + 5 through headerIndex + 8) before checking against validVersions; apply the same relative-offset approach to the other checks around pdfSignature/pdfVersion referenced later (lines 92-94) and keep the same error messages when validation fails.

frontend/src/components/ImageCropper.jsx-113-131 (1)
**113-131: ⚠️ Potential issue | 🟠 Major**

Block file input changes while a crop is in flight. The file input is not disabled while `loading` is true, allowing users to select a new file before the previous crop completes. The pending `toBlob()` callback will then set and download the old crop result with the new image state, producing the wrong output. Additionally, if the component unmounts before the callback returns, the newly created blob URL is set to state (ignored post-unmount) but the object URL remains alive and unreferenced in state, leaking the blob URL indefinitely.

Disable the file input when `loading` is true, or use a mounted ref/request token to invalidate stale callbacks before state updates.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageCropper.jsx` around lines 113-131: disable the file input while a crop is in flight and guard the canvas.toBlob callback against stale/unmounted state; add a loading-based disable on the file input (use the existing loading state) so users cannot pick a new file during an in-progress crop, and also introduce a mounted/ref token (e.g., isMountedRef or currentCropId) checked inside the canvas.toBlob callback before calling setCroppedImageSrc, handleDownload, or setLoading; if the token indicates stale or unmounted, revoke the created object URL immediately and skip state updates to avoid leaking blob URLs (also revoke any previous croppedImageSrc that startsWith("blob:") before creating the new one).

frontend/src/components/PasswordStrengthChecker.jsx-13-35 (1)
**13-35: ⚠️ Potential issue | 🟠 Major**

Fix stale response handling and debounce cleanup. The component has two concurrent issues:

1. Race condition on async responses (line 22): overlapping requests can resolve out of order, with older responses overwriting newer password strength results.
2. Debounce timer leak on unmount (lines 58-60): the `useEffect` that triggers the debounced check has no cleanup function to clear the pending timeout if the component unmounts while a timer is active.

Suggested fix

```diff
 const PasswordStrengthChecker = () => {
+  const latestRequestIdRef = useRef(0);
+  const debounceTimeoutRef = useRef(null);
   const checkStrength = useCallback(async (pwd) => {
     if (pwd.length === 0) {
+      latestRequestIdRef.current += 1;
       setStrengthScore(0);
       setFeedback([]);
       return;
     }
+    const requestId = ++latestRequestIdRef.current;
     setLoading(true);
     try {
       const res = await axios.post(`${import.meta.env.VITE_API_BASE_URL}/api/password-strength`, {
         password: pwd,
       });
+      if (requestId !== latestRequestIdRef.current) return;
       setStrengthScore(res.data.score);
       setFeedback(res.data.feedback);
     } catch (err) {
+      if (requestId !== latestRequestIdRef.current) return;
       console.error("Error checking password strength:", err);
       setStrengthScore(0);
       setFeedback(["Error checking strength."]);
       toast.error("Failed to check password strength.");
     } finally {
-      setLoading(false);
+      if (requestId === latestRequestIdRef.current) setLoading(false);
     }
   }, []);

   useEffect(() => {
-    debouncedCheckStrengthRef.current(password);
+    clearTimeout(debounceTimeoutRef.current);
+    debounceTimeoutRef.current = setTimeout(() => {
+      checkStrengthRef.current(password);
+    }, 500);
+    return () => clearTimeout(debounceTimeoutRef.current);
   }, [password]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PasswordStrengthChecker.jsx` around lines 13-35: fix the race and the debounce cleanup by making checkStrength cancel or ignore stale requests and by clearing the debounce timeout on unmount; keep a mutable ref (e.g., currentRequestRef or currentAbortRef) and, before starting a new axios call in checkStrength, cancel the prior request (using AbortController or an axios CancelToken) or increment a requestId and capture it locally so you only call setStrengthScore/setFeedback if the response matches the latest id; also store the debounce timer id in a ref used by the useEffect that schedules checkStrength and return a cleanup function that clears the timeout (clearTimeout(timerRef.current)) to avoid timer leaks on unmount.

frontend/src/components/QrCodeGenerator.jsx-3-3 (1)
**3-3: ⚠️ Potential issue | 🟠 Major**

Change the import to named-export syntax for jsPDF compatibility. Line 3 (and line 62) uses the default import `import jsPDF from "jspdf"`, but jsPDF v4.0.0 exports `jsPDF` as a named export. This will cause a runtime error when attempting to create a document instance. Change to:

```js
import { jsPDF } from "jspdf";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/QrCodeGenerator.jsx` at line 3: the file imports jsPDF using a default import, which is incompatible with jsPDF v4; update the import in QrCodeGenerator.jsx to use the named export { jsPDF } and ensure all usages (e.g., the place that constructs the document with new jsPDF(...) in the QrCodeGenerator component) continue to reference jsPDF as before so instantiation works at runtime.

backend/docs/api-documentation.json-935-983 (1)
**935-983: ⚠️ Potential issue | 🟠 Major**

API documentation is out of sync with the route definitions for the clean-supabase endpoints. The verification confirms `/api/clean-supabase/trigger` is incorrectly documented as `GET` when the actual route uses `POST` (line 54 of cleanSupabase.js). Additionally, both endpoints have empty middleware arrays in the JSON documentation, but the actual routes define `requireAuth` and `requireSecret` middleware respectively. Regenerate the API documentation to reflect the correct HTTP methods and middleware requirements.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/docs/api-documentation.json` around lines 935-983: the OpenAPI JSON for the clean-supabase routes is out of sync; update the "/api/clean-supabase/trigger" operation to use POST (not GET) to match the actual route in cleanSupabase.js, and populate the "middleware" arrays to reflect the real middleware attached to each route (ensure "/api/clean-supabase" lists requireAuth and "/api/clean-supabase/trigger" lists requireSecret as defined in cleanSupabase.js); regenerate or edit the API documentation so the HTTP methods and middleware entries match the route definitions.

frontend/src/components/JsonFormatterValidator.jsx-45-48 (1)
**45-48: ⚠️ Potential issue | 🟠 Major**

Handle clipboard write errors with an async/await pattern. `navigator.clipboard.writeText` is Promise-based; the current code shows a success toast without awaiting, so users can see false success notifications. This is inconsistent with all other components in the codebase that properly use async/await with error handling.

🛠️ Suggested fix

```diff
- const copyToClipboard = (textToCopy) => {
-   navigator.clipboard.writeText(textToCopy);
-   toast.success("Copied to clipboard!");
- };
+ const copyToClipboard = async (textToCopy) => {
+   try {
+     await navigator.clipboard.writeText(textToCopy);
+     toast.success("Copied to clipboard!");
+   } catch (error) {
+     console.error("Failed to copy to clipboard:", error);
+     toast.error("Failed to copy to clipboard. Please try again.");
+   }
+ };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/JsonFormatterValidator.jsx` around lines 45-48: the copyToClipboard function currently calls navigator.clipboard.writeText without awaiting or handling errors; modify copyToClipboard to be async, await navigator.clipboard.writeText(textToCopy) inside a try/catch, call toast.success only on a successful await, and call toast.error (with a helpful message) in the catch block to surface failures; ensure the function signature and any callers are updated to handle the async function if necessary.

frontend/src/components/PasswordGenerator.jsx-24-42 (1)
**24-42: ⚠️ Potential issue | 🟠 Major**

Selected character classes are not actually guaranteed. With this approach, a password generated with "Include Numbers" and "Include Symbols" checked can still contain neither if random sampling misses those buckets. For a password generator, those toggles usually mean "must contain at least one." Seed the result with one character from each enabled set, fill the remaining length from the merged charset, then shuffle.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PasswordGenerator.jsx` around lines 24-42: the generator currently builds a merged charset and samples entirely from it, which does not guarantee inclusion of the selected classes; change PasswordGenerator.jsx so that when includeUppercase/includeLowercase/includeNumbers/includeSymbols are true you first push one random character from each corresponding class into newPassword (using their specific strings like "ABCDEFGHIJKLMNOPQRSTUVWXYZ", etc.), then compute the remaining characters (validLength minus the seeded count) and fill those from the combined charset as currently done, then perform a cryptographically secure shuffle of the resulting character array before joining and calling setPassword; keep using crypto.getRandomValues for randomness and still enforce the length bounds (validLength) and the empty-selection check.

frontend/src/components/Base64TextConverter.jsx-16-43 (1)
16-43:⚠️ Potential issue | 🟠 Major
btoa/atobcannot encode or decode non-ASCII text.This converter will fail with a
TypeErrorfor emojis and throw for many Unicode inputs (accented characters, CJK), and will silently corrupt non-Latin-1 output during decoding, because the native APIs only operate on Latin-1 byte strings. UseTextEncoder/TextDecoderto safely round-trip UTF-8:Suggested change
const encodeBase64 = () => { setLoading(true); trackToolUsage("Base64TextConverter", "text"); setTimeout(() => { try { - setConvertedText(btoa(text)); + const bytes = new TextEncoder().encode(text); + const binary = Array.from(bytes, (byte) => String.fromCharCode(byte)).join(""); + setConvertedText(btoa(binary)); } catch (error) { setConvertedText(""); toast.error("Failed to encode text."); } finally { setLoading(false); } }, 500); }; const decodeBase64 = () => { setLoading(true); trackToolUsage("Base64TextConverter", "text"); setTimeout(() => { try { - setConvertedText(atob(text)); + const binary = atob(text); + const bytes = Uint8Array.from(binary, (char) => char.charCodeAt(0)); + setConvertedText(new TextDecoder().decode(bytes)); } catch (error) { setConvertedText(""); toast.error("Failed to decode text."); } finally { setLoading(false); } }, 500); };🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/Base64TextConverter.jsx` around lines 16 - 43, the current encodeBase64 and decodeBase64 use btoa/atob, which fail on non-ASCII; replace their implementations with UTF-8-safe variants: for encodeBase64, use TextEncoder to get a Uint8Array from text, convert that byte array to a binary string, then base64-encode it (preserving the existing setLoading, trackToolUsage, and try/catch/toast error handling); for decodeBase64, reverse the process: base64-decode to a binary string with atob, convert it to a Uint8Array, then use TextDecoder to get the UTF-8 string and setConvertedText, keeping the same error handling and setLoading behavior.

backend/routes/shortener.js-23-25 (1)

23-25: ⚠️ Potential issue | 🟠 Major
Normalize `originalUrl` before saving and redirecting.

At Line 23, protocol-less URLs are accepted by the regex, but at Line 69 those values are redirected as-is, which can yield broken relative redirects (e.g., `www.example.com`). Normalize once and use the normalized value for lookup/save.

Suggested patch

```diff
 if (!urlRegex.test(originalUrl)) {
   return res.status(400).json({ msg: "Please enter a valid URL." });
 }
+const normalizedOriginalUrl =
+  originalUrl.startsWith("http://") || originalUrl.startsWith("https://")
+    ? originalUrl
+    : `https://${originalUrl}`;

 try {
-  let url = await Url.findOne({ originalUrl });
+  let url = await Url.findOne({ originalUrl: normalizedOriginalUrl });
@@
   url = new Url({
-    originalUrl,
+    originalUrl: normalizedOriginalUrl,
     shortUrl,
     urlCode,
     date: new Date(),
   });
```

Also applies to: 28-29, 46-48, 69-69
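The normalize-once idea can be isolated into a small helper. This is a sketch, not the route's code (the `normalizeUrl` name is illustrative): it adds a scheme when one is missing, then lets the `URL` parser canonicalize the value that gets stored and later redirected to.

```javascript
// Prepend https:// when no scheme is present, then canonicalize via URL.
// URL(...) throws on garbage input, which a caller can map to a 400 response.
function normalizeUrl(input) {
  const withScheme = /^https?:\/\//i.test(input) ? input : `https://${input}`;
  return new URL(withScheme).href;
}
```

For example, `normalizeUrl("www.example.com")` yields `"https://www.example.com/"`, which redirects correctly, while the raw input would be treated as a relative path by `res.redirect`.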
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/shortener.js` around lines 23 - 25, the route validates protocol-less inputs with urlRegex but later saves and redirects the unnormalized originalUrl, causing broken relative redirects; normalize originalUrl (e.g., prepend "https://" when the scheme is missing) immediately after the urlRegex check, use the normalized value for the subsequent lookup/save (including the code that generates and saves the short URL), and make sure the redirect handler's DB lookup and res.redirect also use the normalized value.

backend/routes/officeConverter.js-17-20 (1)

17-20: ⚠️ Potential issue | 🟠 Major
Add strict PDF MIME validation in both PDF routes.

These handlers assume `req.file` is a PDF, but shared upload filtering allows other types. Non-PDF uploads can fall through and surface as server errors instead of a client validation error.

🛡️ Suggested fix

```diff
 const { file } = req;
 if (!file) {
   return res.status(400).json({ msg: "No PDF file uploaded." });
 }
+if (file.mimetype !== "application/pdf") {
+  return res.status(400).json({ msg: "Only PDF files are supported." });
+}
@@
 const { file } = req;
 if (!file) {
   return res.status(400).json({ msg: "No PDF file uploaded." });
 }
+if (file.mimetype !== "application/pdf") {
+  return res.status(400).json({ msg: "Only PDF files are supported." });
+}
```

Also applies to: 124-127
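A note on the fix above: `file.mimetype` comes from the client and can be spoofed. A cheap extra guard is to check the PDF magic bytes of the uploaded buffer. This is a sketch with a hypothetical helper name, not code from the route:

```javascript
// Every well-formed PDF starts with the ASCII bytes "%PDF-".
// This does not prove the file parses, but it rejects obvious mislabels.
function looksLikePdf(buffer) {
  return buffer.length >= 5 && buffer.subarray(0, 5).toString("ascii") === "%PDF-";
}
```

Running it after the MIME check catches renamed uploads (e.g., a ZIP with a `.pdf` extension) before they reach the converter.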
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/officeConverter.js` around lines 17 - 20, the handlers currently assume req.file is a PDF; add strict MIME-type validation in both PDF endpoints by checking file.mimetype (e.g., === "application/pdf") after const { file } = req and returning res.status(400).json({ msg: "Uploaded file must be a PDF." }) when it fails; apply the same check in the second PDF route block (around lines 124-127) so non-PDF uploads produce a clear 400 validation response rather than causing server errors.

backend/routes/auth.js-28-40 (1)

28-40: ⚠️ Potential issue | 🟠 Major
Return sanitized validation errors instead of raw payload values.

You sanitize errors (including password redaction) but respond with `errors.array()`. This can echo sensitive input values back in API responses.

🔒 Suggested fix

```diff
 if (!errors.isEmpty()) {
   // Sanitize errors before logging to prevent password exposure
   const sanitizedErrors = errors.array().map((error) => {
     const { value, ...sanitizedError } = error;
     if (error.path === "password") {
       return { ...sanitizedError, value: "[REDACTED]" };
     }
     return sanitizedError;
   });
@@
-  return res.status(400).json({ errors: errors.array() });
+  return res.status(400).json({ errors: sanitizedErrors });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/auth.js` around lines 28 - 40, the handler currently builds sanitizedErrors but still logs and responds with errors.array(), potentially leaking sensitive values; update the response and the non-production console log to use sanitizedErrors (keeping the existing password-redaction logic) so that res.status(400).json(...) returns sanitizedErrors instead of errors.array(), and ensure any console.log("Validation errors:", ...) also prints sanitizedErrors.

backend/routes/textToPdf.js-10-15 (1)

10-15: ⚠️ Potential issue | 🟠 Major
Add an upper bound for input size before in-memory PDF generation.

This endpoint buffers the full PDF in memory; without a max text length guard, oversized payloads can cause avoidable memory spikes.

🔧 Suggested fix

```diff
 const router = require("express").Router();
 const PDFDocument = require("pdfkit");
+const MAX_TEXT_LENGTH = 200_000;

 // @route  POST /api/convert/text-to-pdf
 // @desc   Convert text to PDF and send for direct download
 // @access Public
 router.post("/text-to-pdf", async (req, res) => {
   const { text } = req.body;

   // Validate input: ensure text is a non-empty string
-  if (typeof text !== "string" || text.trim().length === 0) {
+  if (
+    typeof text !== "string" ||
+    text.trim().length === 0 ||
+    text.length > MAX_TEXT_LENGTH
+  ) {
     return res.status(400).json({
-      msg: "Text is required and must be a non-empty string",
+      msg: `Text is required and must be a non-empty string up to ${MAX_TEXT_LENGTH} characters.`,
     });
   }
```

Also applies to: 21-33
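The same guard can also live in a small Express-style middleware, which keeps the route handler focused on conversion. This is a sketch under assumptions: the `textSizeGuard` name is hypothetical, and it uses 413 (Payload Too Large) as the AI prompt below suggests, rather than the 400 in the diff above.

```javascript
const MAX_TEXT_LENGTH = 200_000;

// Rejects oversized text bodies before any PDF buffering happens.
function textSizeGuard(req, res, next) {
  const { text } = req.body ?? {};
  if (typeof text === "string" && text.length > MAX_TEXT_LENGTH) {
    return res
      .status(413)
      .json({ msg: `Text exceeds ${MAX_TEXT_LENGTH} characters.` });
  }
  next();
}
```

Mounted as `router.post("/text-to-pdf", textSizeGuard, handler)`, oversized payloads never reach the in-memory PDF generation path.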
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/textToPdf.js` around lines 10 - 15, add a hard upper bound on the incoming text size before buffering/generating the PDF: introduce a constant such as MAX_TEXT_LENGTH (e.g., 50k-200k characters depending on expected usage) and, in the existing validation block that checks the text variable, reject requests that exceed that length with a 413 (Payload Too Large) and a clear message; apply the same validation before the in-memory PDF generation so oversized payloads are rejected early and cannot cause large memory spikes.

backend/routes/screenshot.js-35-39 (1)

35-39: ⚠️ Potential issue | 🟠 Major
Avoid exposing raw upstream error details in logs and responses.

Full error object logging and returning raw `err.message` can leak sensitive integration details (including request config context).

🛡️ Suggested fix

```diff
 if (error) {
-  console.error("Supabase upload error:", error);
+  console.error("Supabase upload error:", error.message);
   return res.status(500).json({
     msg: "Failed to upload screenshot to Supabase",
-    error: error.message,
   });
 }
@@
 } catch (err) {
-  console.error("Error generating screenshot:", err);
+  console.error("Error generating screenshot:", {
+    message: err.message,
+    code: err.code,
+    status: err.response?.status,
+  });
   // Handle timeout errors specifically
   if (err.code === "ECONNABORTED") {
     return res.status(408).json({
       msg: "Request timeout. The screenshot API took too long to respond.",
-      error: err.message,
     });
   }
   return res.status(500).json({
     msg: "Failed to generate screenshot. Please check the URL and API key.",
-    error: err.message,
   });
 }
```

Also applies to: 46-59
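The "sanitized shape" in the catch branch above can be factored into a reusable helper so every handler logs the same safe subset of fields. A sketch (the helper name is hypothetical, not from the codebase):

```javascript
// Reduce an arbitrary error to a small, safe shape before logging.
// Deliberately omits err.config / err.request, which axios attaches and
// which can contain user-supplied URLs, headers, and API keys.
function toSafeLogShape(err) {
  return {
    message: err?.message,
    code: err?.code,
    status: err?.response?.status, // upstream HTTP status, if present
  };
}
```

Calling `console.error("Error generating screenshot:", toSafeLogShape(err))` keeps logs useful for debugging while dropping the request-config context the reviewer warns about.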
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/screenshot.js` around lines 35 - 39, do not log or return the raw upstream error object or error.message from the Supabase upload; in the screenshot upload route, replace console.error("Supabase upload error:", error) and the res.status(...).json(...) payload with a sanitized log and a generic client-facing message: log only a short contextual string plus a non-sensitive identifier (e.g., a request id or upload id) and optionally a truncated/hashed error code, and return generic JSON such as { msg: "Failed to upload screenshot to storage" } with no raw error.message. Apply the same change to the other error-handling block in this file (around lines 46-59) so neither console.error nor response bodies expose upstream details.

backend/index.js-126-130 (1)

126-130: ⚠️ Potential issue | 🟠 Major
Avoid logging the raw error object in the global handler.

`console.error(err)` will dump full stacks and any attached request/upstream metadata. For public routes that proxy external calls, that can leak user-supplied URLs or other sensitive context into logs. Log a sanitized shape instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/index.js` around lines 126 - 130, the global error handler (the app.use((err, req, res, next) => { ... }) middleware) currently calls console.error(err), which can leak stacks and request/upstream metadata; replace that raw dump with a sanitized log object containing only safe fields (e.g., err.message, err.status, and a request id or method+path if available), conditionally include the stack only outside production (process.env.NODE_ENV !== "production"), and keep building the response from err.status and err.message without ever logging the full err object.

frontend/src/App.jsx-53-63 (1)

53-63: ⚠️ Potential issue | 🟠 Major
Treat tokens without a numeric `exp` as invalid.

`decoded.exp < currentTime` only catches expired tokens. If `exp` is missing or malformed, this branch falls through to `LOGIN` and the client keeps using a bad token until the API rejects it. Guard with `typeof decoded.exp !== "number"` before accepting the token.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/App.jsx` around lines 53 - 63, the token-acceptance logic only checks decoded.exp < currentTime and can accept tokens with a missing or malformed exp; in the jwtDecode handling, first verify typeof decoded.exp === "number", and only if that holds and decoded.exp >= currentTime call setAuthToken(token) and dispatch({ type: "LOGIN", payload: { token, user: decoded.user } }); otherwise treat the token as invalid by calling setAuthToken(null), removing the token from localStorage, and dispatching { type: "LOGOUT" }, so malformed or missing exp values are rejected.

backend/utils/filenameSanitizer.js-9-11 (1)

9-11: ⚠️ Potential issue | 🟠 Major
Sanitize the fallback before returning it.

Both error paths return `fallback` verbatim, so a caller can pass something like `../../foo` and bypass the entire sanitization pipeline. That breaks the utility's "safe filename" contract exactly when the primary input is invalid or sanitizes to empty.

Also applies to: 42-45
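The "one pipeline for both inputs" shape can be sketched like this. This is an illustration of the contract, not the utility's actual rules: the function names and the specific replacement steps are assumptions.

```javascript
// Single sanitization routine applied to every candidate name.
function sanitizeOnce(name) {
  return String(name)
    .replace(/[/\\]/g, "_")   // strip path separators
    .replace(/\.\.+/g, ".")   // collapse dot runs, blocking ".." traversal
    .replace(/[^\w.-]/g, "_") // replace anything else unsafe
    .slice(0, 255);           // typical filesystem name limit
}

// The fallback goes through the same routine, so "../../foo" as a fallback
// cannot bypass sanitization; a hard-coded literal is the last resort.
function sanitizeFilename(name, fallback = "file") {
  const cleaned = sanitizeOnce(name ?? "");
  return cleaned.length > 0 ? cleaned : sanitizeOnce(fallback) || "file";
}
```

The key property is that no caller-supplied string, primary or fallback, reaches the return value without passing through `sanitizeOnce`.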
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/utils/filenameSanitizer.js` around lines 9 - 11, sanitizeFilename currently returns the fallback value verbatim when the primary filename is invalid; update the logic so any fallback is passed through the same sanitization routine as normal input (reusing the internal sanitization steps applied to valid filenames) before being returned, and apply this change to both early-return spots (the initial invalid-type/empty branch and the later branch around lines 42-45) so callers cannot bypass sanitization by supplying a malicious fallback.

backend/routes/seoTools.js-40-55 (1)

40-55: ⚠️ Potential issue | 🟠 Major
Handle `options.all === true` in the custom DNS lookup resolver.

The callback currently always returns a single `(address, family)` pair. When Node.js invokes a custom `lookup` with `options.all === true`, it expects an array of `{ address, family }` objects as the second callback argument. Without this, requests will fail when family auto-selection attempts to use the pinned resolver with `all: true`. Mirror the pattern from `backend/routes/redirectChecker.js`:

Correct pattern

```js
const returnAll = typeof options === "object" && options?.all === true;
if (returnAll) {
  const validatedAddressesFiltered = family
    ? validatedAddresses.filter((entry) => entry.family === family)
    : validatedAddresses;
  if (validatedAddressesFiltered.length === 0) {
    return callback(new Error("No validated DNS records available"));
  }
  return callback(
    null,
    validatedAddressesFiltered.map((e) => ({ address: e.address, family: e.family })),
  );
} else {
  // existing single-pair logic
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/seoTools.js` around lines 40 - 55, the custom DNS resolver pinnedLookup always returns a single (address, family) pair and fails to handle options.all === true; update pinnedLookup (used with parsedUrl and validatedAddresses) to detect typeof options === "object" && options.all === true, filter validatedAddresses by family when one is given, return an error if the filtered list is empty, and otherwise call callback(null, mappedArray) where mappedArray is an array of { address, family } objects; keep the existing single-address branch for the non-all case so the callback still returns (address, family) as before.

frontend/src/components/ImageCompressor.jsx-84-125 (1)

84-125: ⚠️ Potential issue | 🟠 Major
Block mid-flight file changes during compression.

This request has no stale-response guard, and the picker stays enabled while `loading` is true. If the user starts compressing one batch and then changes the selection before the response returns, the old result is still downloaded and the newer selection gets cleared.

Suggested fix

```diff
 const onSubmit = async (e) => {
   e.preventDefault();
+  if (loading) return;
   if (selectedFiles.length === 0) {
     toast.error("Please select at least one image file.");
     return;
@@
 <input
   ref={fileInputRef}
   accept="image/*"
   className="block w-full text-sm text-foreground border border-input rounded-lg cursor-pointer bg-background focus:outline-none file:mr-4 file:py-2 file:px-4 file:rounded-md file:border-0 file:text-sm file:font-semibold file:bg-primary/10 file:text-primary hover:file:bg-primary/10"
   id="multiple_files"
   type="file"
   multiple
+  disabled={loading}
   onChange={onFileChange}
 />
```

Also applies to: 139-147
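The request-identifier variant of the stale-response guard (also relevant to the PdfRotator finding below) can be reduced to a tiny framework-free helper. A sketch with hypothetical names:

```javascript
// Tracks the most recent request id; responses whose id no longer matches
// are stale and should be ignored instead of applied to newer state.
function createLatestGuard() {
  let current = 0;
  return {
    next() { return ++current; },          // call when a request starts
    isCurrent(id) { return id === current; }, // call when its response arrives
  };
}
```

In a component, `const id = guard.next()` is taken before the axios call, and the success handler bails out early with `if (!guard.isCurrent(id)) return;` so a response for file A can never download A or clear a newly selected file B.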
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageCompressor.jsx` around lines 84 - 125, the submit flow (onSubmit) needs a stale-response guard, and the file picker must be disabled while loading: add a request identifier (e.g., a currentRequestId ref) or an Axios cancel token set when onSubmit starts, and disable the picker/input whenever loading is true (via fileInputRef or the input's disabled prop). When the response arrives, verify it belongs to the latest request (or was not cancelled) before calling handleDownload or clearing selectedFiles/fileInputRef; if it does not match, ignore the response. Also cancel any previous in-flight request when a new onSubmit starts to avoid races.

frontend/src/components/PdfRotator.jsx-39-85 (1)

39-85: ⚠️ Potential issue | 🟠 Major
Prevent stale rotate responses from applying after the user changes files.

While `loading` is true, the file input stays active and `onSubmit` has no re-entry guard. If the user submits file A, then picks file B before the request finishes, the success path still downloads A and clears B from the UI.

Suggested fix

```diff
 const onSubmit = async (e) => {
   e.preventDefault();
+  if (loading) return;
   if (!selectedFile) {
     toast.error("Please select a PDF file first.");
     return;
   }

 <input
   ref={fileInputRef}
   className="block w-full text-sm text-foreground border border-input rounded-lg cursor-pointer bg-muted/30 focus:outline-none file:mr-4 file:py-2 file:px-4 file:rounded-md file:border-0 file:text-sm file:font-semibold file:bg-primary/10 file:text-primary hover:file:bg-primary/10"
   id="pdf_file"
   type="file"
+  disabled={loading}
   onChange={onFileChange}
   accept=".pdf"
 />
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PdfRotator.jsx` around lines 39 - 85, the submit handler onSubmit can apply a stale response to a newly selected file because there is no re-entry guard or response validation; fix it by returning early from onSubmit when loading is true (or capture a unique request token, such as a generated id stored in a ref/currentRequestId, when the request starts), and on success/error compare the saved token against the latest selectedFile (or ref) before performing the download, clearing selectedFile, resetting fileInputRef, or calling setRotationAngle; also disable the file input while setLoading(true) so the user cannot change files during the request.

backend/routes/pdfConverter.js-295-305 (1)

295-305: ⚠️ Potential issue | 🟠 Major
`high` compression level is misleading and uses a redundant setting.

`objectsPerTick` only controls how serialization is scheduled (object batching before event-loop yields), not output file size. Since both `medium` and `high` enable `useObjectStreams: true`, they produce identical compression. Additionally, setting `objectsPerTick: 50` in the `high` case is redundant, since it's already the default. Users expecting stronger compression from `high` won't see any difference in output file size compared to `medium`.

Either remove the `high` level and rename `medium` to reflect the actual behavior, or implement real compression options (e.g., image downsampling, content stream optimization).

🤖 Prompt for AI Agents
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 49986eb0-3392-4eb2-ab55-d1ba795d0292
⛔ Files ignored due to path filters (1)

- pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (138)

.github/workflows/publish.yml, .github/workflows/trigger-supabase-cleanup.yml, .markdownlint.json, .markdownlintignore, README.md, backend/docs/README.md, backend/docs/api-documentation.json, backend/index.js, backend/middleware/apiActivityTracker.js, backend/middleware/auth.js, backend/middleware/authorize.js, backend/middleware/uploadLimiter.js, backend/models/ApiActivity.js, backend/models/ServiceUsage.js, backend/models/ToolUsage.js, backend/models/Url.js, backend/models/User.js, backend/package.json, backend/routes/analytics.js, backend/routes/auth.js, backend/routes/cleanSupabase.js, backend/routes/favicon.js, backend/routes/imageConverter.js, backend/routes/jsonXmlConverter.js, backend/routes/keepAlive.js, backend/routes/officeConverter.js, backend/routes/passwordStrength.js, backend/routes/pdfConverter.js, backend/routes/redirectChecker.js, backend/routes/screenshot.js, backend/routes/seoTools.js, backend/routes/shortener.js, backend/routes/textConverter.js, backend/routes/textToPdf.js, backend/scripts/migrateTotalUsageKey.js, backend/utils/filenameSanitizer.js, backend/utils/ipValidation.js, backend/utils/pdfErrorHandler.js, backend/utils/supabaseCleaner.js, backend/utils/supabaseClient.js, biome.json, frontend/package.json, frontend/public/manifest.json, frontend/src/App.jsx, frontend/src/components/Base64TextConverter.jsx, frontend/src/components/CsvToJsonConverter.jsx, frontend/src/components/ExcelToPdfConverter.jsx, frontend/src/components/FaviconExtractor.jsx, frontend/src/components/Footer.jsx, frontend/src/components/HashGenerator.jsx, frontend/src/components/HtmlToMarkdownConverter.jsx, frontend/src/components/ImageBackgroundRemover.jsx, frontend/src/components/ImageCompressor.jsx, frontend/src/components/ImageCropper.jsx, frontend/src/components/ImageFlipper.jsx, frontend/src/components/ImageFormatConverter.jsx, frontend/src/components/ImageGrayscaler.jsx, frontend/src/components/ImageResizer.jsx, frontend/src/components/ImageToBase64Converter.jsx, frontend/src/components/ImageToPdfConverter.jsx, frontend/src/components/JsonFormatterValidator.jsx, frontend/src/components/JsonXmlConverter.jsx, frontend/src/components/LinkShortener.jsx, frontend/src/components/MarkdownToHtmlConverter.jsx, frontend/src/components/Navbar.jsx, frontend/src/components/PasswordGenerator.jsx, frontend/src/components/PasswordStrengthChecker.jsx, frontend/src/components/PdfCompressor.jsx, frontend/src/components/PdfMerger.jsx, frontend/src/components/PdfPageDeleter.jsx, frontend/src/components/PdfRotator.jsx, frontend/src/components/PdfSplitter.jsx, frontend/src/components/PdfToExcelConverter.jsx, frontend/src/components/PdfToTextConverter.jsx, frontend/src/components/PdfToWordConverter.jsx, frontend/src/components/PngToJpgConverter.jsx, frontend/src/components/QrCodeGenerator.jsx, frontend/src/components/QrCodeScanner.jsx, frontend/src/components/SeoTools.jsx, frontend/src/components/TextCaseConverter.jsx, frontend/src/components/TextDifferenceChecker.jsx, frontend/src/components/TextToPdfGenerator.jsx, frontend/src/components/ToolCard.jsx, frontend/src/components/UrlRedirectChecker.jsx, frontend/src/components/WebsiteScreenshotGenerator.jsx, frontend/src/components/auth/Login.jsx, frontend/src/components/auth/Register.jsx, frontend/src/components/mode-toggle.jsx, frontend/src/components/theme-provider.jsx, frontend/src/components/ui/badge.jsx, frontend/src/components/ui/button.jsx, frontend/src/components/ui/card.jsx, frontend/src/components/ui/input.jsx, frontend/src/components/ui/label.jsx, frontend/src/components/ui/textarea.jsx, frontend/src/context/AuthContext.jsx, frontend/src/index.css, frontend/src/index.jsx, frontend/src/lib/utils.js, frontend/src/pages/HomePage.jsx, frontend/src/pages/ImageToolsPage.jsx, frontend/src/pages/PdfToolsPage.jsx, frontend/src/pages/TextToolsPage.jsx, frontend/src/pages/WebToolsPage.jsx, frontend/src/utils/setAuthToken.js, frontend/src/utils/useAnalytics.js, frontend/src/utils/useSortedTools.js, frontend/tailwind.config.js, frontend/vercel.json, frontend/vite.config.js, package.json, package/README.md, package/package.json, package/rslib.config.ts, package/scripts/postbuild.mjs, package/src/branding.ts, package/src/cli.ts, package/src/constants/index.ts, package/src/image/index.ts, package/src/index.ts, package/src/interactive.ts, package/src/interfaces/common.ts, package/src/interfaces/index.ts, package/src/pdf/index.ts, package/src/types/common.ts, package/src/types/shims.d.ts, package/src/utils/branding.ts, package/src/utils/config.ts, package/src/utils/errors.ts, package/src/utils/ffmpeg.ts, package/src/utils/files.ts, package/src/utils/index.ts, package/src/utils/watermark.ts, package/src/video/index.ts, package/tests/dist.test.js, package/tsconfig.json, pnpm-workspace.yaml, turbo.json
💤 Files with no reviewable changes (2)
- .markdownlintignore
- .markdownlint.json
📜 Review details
🧰 Additional context used
🪛 markdownlint-cli2 (0.22.0)
backend/docs/README.md
[warning] 441-441: Code block style
Expected: fenced; Actual: indented
(MD046, code-block-style)
[warning] 447-447: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```diff
 const validateUrl = async (url) => {
-  try {
-    const urlObj = new URL(url);
-
-    // Only allow http and https protocols
-    if (!["http:", "https:"].includes(urlObj.protocol)) {
-      throw new Error("Only HTTP and HTTPS protocols are allowed");
-    }
-
-    // Extract hostname and resolve to check IP addresses (dual-stack support)
-    const hostname = urlObj.hostname;
-    const addresses = await new Promise((resolve, reject) => {
-      // Run both IPv4 and IPv6 lookups in parallel for dual-stack support
-      const ipv4Promise = new Promise((res) => {
-        dns.resolve4(hostname, (err, addresses) => {
-          res(err ? [] : addresses);
-        });
-      });
-
-      const ipv6Promise = new Promise((res) => {
-        dns.resolve6(hostname, (err, addresses) => {
-          res(err ? [] : addresses);
-        });
-      });
-
-      Promise.all([ipv4Promise, ipv6Promise])
-        .then(([ipv4Addresses, ipv6Addresses]) => {
-          const allAddresses = [...ipv4Addresses, ...ipv6Addresses];
-          if (allAddresses.length === 0) {
-            reject(new Error("DNS resolution failed for both IPv4 and IPv6"));
-          } else {
-            resolve(allAddresses);
-          }
-        })
-        .catch((err) =>
-          reject(new Error(`DNS resolution failed: ${err.message}`)),
-        );
-    });
-
-    // Check if any resolved IP is private
-    for (const ip of addresses) {
-      if (isPrivateIP(ip)) {
-        throw new Error("Private IP ranges not allowed");
-      }
-    }
-
-    return true;
-  } catch (error) {
-    throw new Error(`URL validation failed: ${error.message}`);
-  }
+  try {
+    const urlObj = new URL(url);
+
+    // Only allow http and https protocols
+    if (!["http:", "https:"].includes(urlObj.protocol)) {
+      throw new Error("Only HTTP and HTTPS protocols are allowed");
+    }
+
+    // Extract hostname and resolve to check IP addresses using system resolver
+    const hostname = urlObj.hostname;
+    const lookupResults = await dns.lookup(hostname, { all: true });
+
+    if (!lookupResults || lookupResults.length === 0) {
+      throw new Error("DNS resolution failed - no addresses returned");
+    }
+
+    const addresses = lookupResults.map((result) => result.address);
+
+    // Check if any resolved IP is private
+    for (const ip of addresses) {
+      if (isPrivateIP(ip)) {
+        throw new Error("Private IP ranges not allowed");
+      }
+    }
+
+    return true;
+  } catch (error) {
+    throw new Error(`URL validation failed: ${error.message}`);
+  }
```
The DNS safety check is still TOCTOU and can be bypassed.
validateUrl() checks one DNS resolution result, but the later axios.get(...) calls resolve the hostname again at connect time. A rebinding/dual-answer host can pass this preflight and then send the real request to a private address. To close the SSRF gap, bind the validated address into the actual request path instead of validating and then re-resolving later.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/favicon.js` around lines 11 - 40, validateUrl() is vulnerable
to TOCTOU because you validate DNS with dns.lookup but later re-resolve when
calling axios.get(...); instead, resolve and pick a validated IP inside
validateUrl (using dns.lookup with { all: true }), ensure none are private via
isPrivateIP, then return the chosen IP and original hostname; in the request
path (where axios.get is used) use the validated IP in the URL and set the Host
header to the original hostname and configure the HTTPS agent (servername) to
the original hostname so SNI/TLS still match—update the axios.get call to use
the returned IP/hostname pair rather than re-resolving the hostname.
```js
router.post("/", async (req, res) => {
  try {
    const fileSizeInBytes = 1024;
    const randomText = generateRandomString(fileSizeInBytes);
    const fileName = `keep-alive-${Date.now()}.txt`;

    const { data, error } = await supabase.storage
      .from("utilityhub")
      .upload(`keep-alive/${fileName}`, randomText, {
        contentType: "text/plain",
        upsert: true,
      });

    if (error) {
      console.error("Supabase upload error:", error);
      return res.status(500).json({
        msg: "Failed to upload file to Supabase",
        error: error.message,
      });
    }

    return res
      .status(200)
      .json({ msg: "Keep-alive file uploaded successfully", path: data.path });
  } catch (err) {
    console.error("Server error in keep-alive endpoint:", err);
    return res.status(500).json({ msg: "Server Error" });
  }
});
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
# Where keep-alive route is mounted
rg -n --type=js -C4 'keepAlive|/api/keep-alive|routes/keepAlive'
# Nearby auth/secret middleware patterns
rg -n --type=js -C4 'requireAuth|auth|cron secret|SUPABASE_CLEANUP_CRON_SECRET|x-cron-secret|authorization'
Repository: gaureshpai/dkutils
Length of output: 17190
Add cron secret validation to prevent unauthorized storage writes.
The route is mounted with authMiddleware at line 83 of backend/index.js, but this middleware does not require authentication—it only validates tokens if present. Unauthenticated requests proceed unchecked. For an internal-only endpoint that performs storage writes, add explicit secret validation following the pattern used in cleanSupabase.js:
Suggested guard
router.post("/", async (req, res) => {
+ const expectedSecret = process.env.KEEP_ALIVE_CRON_SECRET;
+ if (!expectedSecret || req.get("x-cron-secret") !== expectedSecret) {
+ return res.status(401).json({ msg: "Unauthorized" });
+ }
+
try {
  const fileSizeInBytes = 1024;

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

router.post("/", async (req, res) => {
  try {
    const fileSizeInBytes = 1024;
    const randomText = generateRandomString(fileSizeInBytes);
    const fileName = `keep-alive-${Date.now()}.txt`;
    const { data, error } = await supabase.storage
      .from("utilityhub")
      .upload(`keep-alive/${fileName}`, randomText, {
        contentType: "text/plain",
        upsert: true,
      });
    if (error) {
      console.error("Supabase upload error:", error);
      return res.status(500).json({
        msg: "Failed to upload file to Supabase",
        error: error.message,
      });
    }
    return res
      .status(200)
      .json({ msg: "Keep-alive file uploaded successfully", path: data.path });
  } catch (err) {
    console.error("Server error in keep-alive endpoint:", err);
    return res.status(500).json({ msg: "Server Error" });
  }
});

Suggested:

router.post("/", async (req, res) => {
  const expectedSecret = process.env.KEEP_ALIVE_CRON_SECRET;
  if (!expectedSecret || req.get("x-cron-secret") !== expectedSecret) {
    return res.status(401).json({ msg: "Unauthorized" });
  }
  try {
    const fileSizeInBytes = 1024;
    const randomText = generateRandomString(fileSizeInBytes);
    const fileName = `keep-alive-${Date.now()}.txt`;
    const { data, error } = await supabase.storage
      .from("utilityhub")
      .upload(`keep-alive/${fileName}`, randomText, {
        contentType: "text/plain",
        upsert: true,
      });
    if (error) {
      console.error("Supabase upload error:", error);
      return res.status(500).json({
        msg: "Failed to upload file to Supabase",
        error: error.message,
      });
    }
    return res.status(200).json({ msg: "Keep-alive file uploaded successfully", path: data.path });
  } catch (err) {
    console.error("Server error in keep-alive endpoint:", err);
    return res.status(500).json({ msg: "Server Error" });
  }
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/keepAlive.js` around lines 17 - 43, The keep-alive POST
handler (router.post in backend/routes/keepAlive.js) allows unauthenticated
writes because authMiddleware only validates tokens if present; add an explicit
cron secret guard like cleanSupabase.js uses: read the expected secret from an
env variable (e.g., process.env.CRON_SECRET), read the incoming secret from a
header or body (e.g., req.headers['x-cron-secret'] or req.body.cronSecret),
compare them at the top of the router.post handler and immediately return
401/403 JSON if they don’t match, so only requests with the valid secret proceed
to the Supabase upload logic.
…edirectResponse function
Actionable comments posted: 1
♻️ Duplicate comments (2)
frontend/src/components/ImageCropper.jsx (1)
110-130: ⚠️ Potential issue | 🟡 Minor

Use blob.type for the download extension.

Line 128 still derives the filename from the requested mimeType, but canvas.toBlob() can fall back to PNG when that encoder is unavailable. That reintroduces the mislabeled-download issue. Build the extension from blob.type || mimeType instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageCropper.jsx` around lines 110 - 130, The filename extension is derived from the requested mimeType but canvas.toBlob may produce a different format; inside the canvas.toBlob callback (the function that handles blob, where croppedImageSrc is revoked, objectUrl created with URL.createObjectURL, setCroppedImageSrc, and handleDownload is called), derive the extension from blob.type first (use blob.type || mimeType) and split that to compute the extension before calling handleDownload so the downloaded filename matches the real blob format.

frontend/src/components/ImageToPdfConverter.jsx (1)
18-20: ⚠️ Potential issue | 🟠 Major

Invalidate the active conversion when the selection changes.

Line 66 only rotates conversionIdRef on submit. Because the picker at Line 140 stays active, a user can choose a new set of files while the previous request is still running; when the older response returns, Lines 90-93 still treat it as current and auto-download the stale PDF against the new selection. Increment the request id in onFileChange as well, or disable the file input while loading.

Also applies to: 65-68, 89-93, 140-147
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageToPdfConverter.jsx` around lines 18 - 20, The bug is that old conversion responses can still be treated as current because conversionIdRef is only incremented on submit; update onFileChange to invalidate active conversions by incrementing conversionIdRef (the same token used in the submit/response handling) whenever the user picks new files and/or set loading state to prevent new picks while a request is in flight; ensure the same conversionIdRef check used in the response handler (the block that auto-downloads the PDF) will ignore responses whose id no longer matches, and also consider disabling the file input when loading is true to prevent race conditions.
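The invalidation the comments describe boils down to a latest-request-wins token. A framework-free sketch of that pattern (the name `createLatestOnly` is illustrative, not the component's code):

```javascript
// Minimal stale-response guard: only the latest request's result is applied.
function createLatestOnly() {
  let currentId = 0;
  return {
    // Call on file-selection change: any in-flight request becomes stale.
    invalidate() {
      currentId += 1;
    },
    // Call on submit: capture the id, run the async task, and only apply
    // the result if no newer request or invalidation happened meanwhile.
    async run(task, apply) {
      currentId += 1;
      const myId = currentId;
      const result = await task();
      if (myId === currentId) apply(result);
    },
  };
}
```

In the component, `invalidate()` maps to `onFileChange` and `run()` to `onSubmit`, so an old conversion response can no longer auto-download against a new selection.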
🧹 Nitpick comments (3)
backend/routes/analytics.js (2)
250-252: Make /popular ordering stable for equal usage counts.

When multiple tools tie on usageCount, MongoDB can return them in arbitrary order, so the top-10 list may flap between requests. A secondary sort keeps rankings predictable.

Suggested change

 const tools = await ToolUsage.find(query)
-  .sort({ usageCount: -1 })
+  .sort({ usageCount: -1, toolName: 1 })
   .limit(10)
   .select("toolName usageCount");

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/analytics.js` around lines 250 - 252, The popularity query using ToolUsage.find(...).sort({ usageCount: -1 }).limit(10) is unstable for ties; update the sort to include a deterministic secondary key (e.g., .sort({ usageCount: -1, _id: 1 }) or .sort({ usageCount: -1, toolName: 1 })) so tied usageCount entries are consistently ordered—locate the sort call on ToolUsage in the `/popular` handler and add the secondary sort field.
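The effect of the tie-breaker can be illustrated outside MongoDB with a plain comparator; the field names mirror the route, but the sample data is invented:

```javascript
const tools = [
  { toolName: "b", usageCount: 5 },
  { toolName: "a", usageCount: 5 },
  { toolName: "c", usageCount: 9 },
];

// Comparator mirroring .sort({ usageCount: -1, toolName: 1 }):
// primary key descending, deterministic secondary key ascending.
const byUsageThenName = (x, y) =>
  y.usageCount - x.usageCount || x.toolName.localeCompare(y.toolName);

const ranked = [...tools].sort(byUsageThenName).map((t) => t.toolName);
// ranked is ["c", "a", "b"] regardless of the input order of the tied items
```

Without the secondary key, the relative order of "a" and "b" depends on the engine's treatment of equal elements, which is exactly the instability the review flags for paginated results.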
210-212: Add a deterministic tie-breaker to paginated stats sorting.

Sorting only by usageCount makes pagination unstable when several tools share the same count; items can shift between pages or appear twice across requests. Add a secondary sort key so page boundaries stay consistent.

Suggested change

 const stats = await ToolUsage.find(query)
-  .sort({ usageCount: -1 })
+  .sort({ usageCount: -1, toolName: 1, category: 1 })
   .skip((pageNum - 1) * limitNum)
   .limit(limitNum)
   .select("toolName category usageCount lastUsed");

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/analytics.js` around lines 210 - 212, The pagination is unstable because the query sorts only by usageCount; update the sort to include a deterministic tie-breaker (e.g., _id or tool identifier) so results stay consistent across pages — change the call using ToolUsage.find(...).sort({ usageCount: -1 }) to .sort({ usageCount: -1, _id: 1 }) (or .sort({ usageCount: -1, toolId: 1 }) if a stable tool key exists) while keeping the existing skip/limit logic.

frontend/src/components/ImageToPdfConverter.jsx (1)
71-73: Track usage after a successful conversion.

Line 72 records usage before the backend call finishes, so failed conversions are counted the same as completed ones. If these analytics are meant to represent successful tool runs, move trackToolUsage into the success branch near Line 93.

Also applies to: 89-93
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageToPdfConverter.jsx` around lines 71 - 73, The analytics call trackToolUsage in ImageToPdfConverter is invoked before the backend conversion completes, causing failed runs to be counted; move the trackToolUsage("ImageToPdfConverter", "image") call out of the pre-request area (where setLoading(true) and FormData are prepared) into the success branch that handles the successful conversion response (the block around the current success handling near the code that processes the backend response, e.g., the handler after the fetch/axios success), and ensure any existing failure/exception branches do not call it so only successful conversions are tracked.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@frontend/src/components/ImageCropper.jsx`:
- Around line 72-79: The crop flow can apply stale blob callbacks when a new
image is picked; update handleCrop (and the canvas.toBlob callback) to ignore
stale callbacks by adding an operation id (e.g., a useRef counter like
cropOpIdRef) that you increment at the start of handleCrop and capture as a
local const inside the toBlob callback, then before any state updates or
downloads verify the captured id matches cropOpIdRef.current; alternatively (or
in addition) disable the file input element used to pick images while
loading/cropping by binding its disabled prop to the existing loading state so
the user cannot pick a new image until the current toBlob callback completes
(apply changes in the handleCrop function, the canvas.toBlob callback block, and
the file input JSX referenced around the file input lines).
---
Duplicate comments:
In `@frontend/src/components/ImageCropper.jsx`:
- Around line 110-130: The filename extension is derived from the requested
mimeType but canvas.toBlob may produce a different format; inside the
canvas.toBlob callback (the function that handles blob, where croppedImageSrc is
revoked, objectUrl created with URL.createObjectURL, setCroppedImageSrc, and
handleDownload is called), derive the extension from blob.type first (use
blob.type || mimeType) and split that to compute the extension before calling
handleDownload so the downloaded filename matches the real blob format.
In `@frontend/src/components/ImageToPdfConverter.jsx`:
- Around line 18-20: The bug is that old conversion responses can still be
treated as current because conversionIdRef is only incremented on submit; update
onFileChange to invalidate active conversions by incrementing conversionIdRef
(the same token used in the submit/response handling) whenever the user picks
new files and/or set loading state to prevent new picks while a request is in
flight; ensure the same conversionIdRef check used in the response handler (the
block that auto-downloads the PDF) will ignore responses whose id no longer
matches, and also consider disabling the file input when loading is true to
prevent race conditions.
---
Nitpick comments:
In `@backend/routes/analytics.js`:
- Around line 250-252: The popularity query using ToolUsage.find(...).sort({
usageCount: -1 }).limit(10) is unstable for ties; update the sort to include a
deterministic secondary key (e.g., .sort({ usageCount: -1, _id: 1 }) or .sort({
usageCount: -1, toolName: 1 })) so tied usageCount entries are consistently
ordered—locate the sort call on ToolUsage in the `/popular` handler and add the
secondary sort field.
- Around line 210-212: The pagination is unstable because the query sorts only
by usageCount; update the sort to include a deterministic tie-breaker (e.g., _id
or tool identifier) so results stay consistent across pages — change the call
using ToolUsage.find(...).sort({ usageCount: -1 }) to .sort({ usageCount: -1,
_id: 1 }) (or .sort({ usageCount: -1, toolId: 1 }) if a stable tool key exists)
while keeping the existing skip/limit logic.
In `@frontend/src/components/ImageToPdfConverter.jsx`:
- Around line 71-73: The analytics call trackToolUsage in ImageToPdfConverter is
invoked before the backend conversion completes, causing failed runs to be
counted; move the trackToolUsage("ImageToPdfConverter", "image") call out of the
pre-request area (where setLoading(true) and FormData are prepared) into the
success branch that handles the successful conversion response (the block around
the current success handling near the code that processes the backend response,
e.g., the handler after the fetch/axios success), and ensure any existing
failure/exception branches do not call it so only successful conversions are
tracked.
🪄 Autofix (Beta)
✅ Autofix completed
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 3543b317-4976-4e79-bc9d-2f9ee3b7c5d2
📒 Files selected for processing (8)
backend/routes/analytics.jsbackend/routes/favicon.jsbackend/routes/redirectChecker.jsbackend/scripts/migrateTotalUsageKey.jsbackend/utils/ipValidation.jsfrontend/src/components/ImageCropper.jsxfrontend/src/components/ImageToPdfConverter.jsxfrontend/src/components/PasswordGenerator.jsx
✅ Files skipped from review due to trivial changes (1)
- frontend/src/components/PasswordGenerator.jsx
🚧 Files skipped from review as they are similar to previous changes (4)
- backend/utils/ipValidation.js
- backend/scripts/migrateTotalUsageKey.js
- backend/routes/favicon.js
- backend/routes/redirectChecker.js
Docstrings generation was requested by @gaureshpai. The following files were modified:

* `backend/routes/redirectChecker.js`

These files were kept as they were:

* `frontend/src/App.jsx`

These file types are not supported:

* `.github/workflows/publish.yml`
* `.github/workflows/trigger-supabase-cleanup.yml`
* `README.md`
* `backend/docs/README.md`
* `backend/docs/api-documentation.json`
* `backend/package.json`
* `biome.json`
* `frontend/package.json`
* `frontend/public/manifest.json`
This comment was marked as resolved.
Fixed 1 file(s) based on 1 unresolved review comment. Co-authored-by: CodeRabbit <noreply@coderabbit.ai>
♻️ Duplicate comments (1)
backend/routes/redirectChecker.js (1)
64-76: ⚠️ Potential issue | 🔴 Critical

Block expanded IPv6 loopback literals in safety checks.

isLoopback() only flags the exact ::1 string for IPv6. Equivalent loopback forms (for example 0:0:0:0:0:0:0:1) can pass checkIPSafety() as “safe” and be requested, which reopens a loopback SSRF path. Please normalize/canonicalize IPv6 loopback representations before comparison (preferably in shared IP validation so every caller benefits).
Also applies to: 107-115
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/redirectChecker.js` around lines 64 - 76, The IPv6 loopback check in isLoopback only matches the literal "::1" and misses expanded/alternate forms; update isLoopback to canonicalize the address before comparing by parsing the input (reuse or add a shared IPv6 normalization helper, e.g., normalizeIPv6) and compare the parsed/canonical form to the canonical loopback (e.g., "::1") or use a library parse method that can detect loopback; update callers (and the similar block referenced at lines 107-115) to use the same normalization helper so all IP safety checks consistently recognize all IPv6 loopback representations (reference functions isLoopback and normalizeIPv4Mapped in the diff).
🧹 Nitpick comments (1)
backend/routes/redirectChecker.js (1)
285-352: Extract shared request+redirect handling to reduce drift risk.

HEAD/GET request config, chain push, and redirect continuation logic are duplicated across multiple branches. This makes future security fixes easy to miss in one path.
Refactor sketch
+async function requestWithFallback(url, agents) {
+  let response = await axios.head(url, {
+    maxRedirects: 0,
+    validateStatus: () => true,
+    timeout: TIMEOUT_MS,
+    maxContentLength: 1024 * 1024,
+    httpAgent: agents.httpAgent,
+    httpsAgent: agents.httpsAgent,
+  });
+  if (response.status === 405 || response.status === 501) {
+    response = await axios.get(url, {
+      maxRedirects: 0,
+      validateStatus: () => true,
+      timeout: TIMEOUT_MS,
+      maxContentLength: 1024 * 1024,
+      httpAgent: agents.httpAgent,
+      httpsAgent: agents.httpsAgent,
+    });
+  }
+  return response;
+}

Also applies to: 364-392
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/redirectChecker.js` around lines 285 - 352, The HEAD/GET request, redirectChain.push, and handleRedirectResponse logic is duplicated; extract a single helper (e.g., performRequestAndHandleRedirect or fetchAndProcessRedirect) that accepts (method, currentUrl, currentHostname, currentSafeAddresses) and internally creates agents via createPinnedAgents, issues axios request with the shared config (timeout, maxContentLength, maxRedirects:0, validateStatus), pushes to redirectChain, and calls handleRedirectResponse to return {shouldContinue,nextUrl,nextHostname,nextSafeAddresses} or throw; then replace the duplicated HEAD, GET-fallback, and the final GET branches to call this helper (try HEAD then call helper with GET on 405/501 or on catch) so all request/redirect handling is centralized.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@backend/routes/redirectChecker.js`:
- Around line 64-76: The IPv6 loopback check in isLoopback only matches the
literal "::1" and misses expanded/alternate forms; update isLoopback to
canonicalize the address before comparing by parsing the input (reuse or add a
shared IPv6 normalization helper, e.g., normalizeIPv6) and compare the
parsed/canonical form to the canonical loopback (e.g., "::1") or use a library
parse method that can detect loopback; update callers (and the similar block
referenced at lines 107-115) to use the same normalization helper so all IP
safety checks consistently recognize all IPv6 loopback representations
(reference functions isLoopback and normalizeIPv4Mapped in the diff).
---
Nitpick comments:
In `@backend/routes/redirectChecker.js`:
- Around line 285-352: The HEAD/GET request, redirectChain.push, and
handleRedirectResponse logic is duplicated; extract a single helper (e.g.,
performRequestAndHandleRedirect or fetchAndProcessRedirect) that accepts
(method, currentUrl, currentHostname, currentSafeAddresses) and internally
creates agents via createPinnedAgents, issues axios request with the shared
config (timeout, maxContentLength, maxRedirects:0, validateStatus), pushes to
redirectChain, and calls handleRedirectResponse to return
{shouldContinue,nextUrl,nextHostname,nextSafeAddresses} or throw; then replace
the duplicated HEAD, GET-fallback, and the final GET branches to call this
helper (try HEAD then call helper with GET on 405/501 or on catch) so all
request/redirect handling is centralized.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: af1e79f9-aaad-4ad0-99cd-eaf93a1e3d71
📒 Files selected for processing (1)
backend/routes/redirectChecker.js
📜 Review details
🔇 Additional comments (1)
backend/routes/redirectChecker.js (1)
212-245: Good lookup(..., { all: true }) contract handling.

Nice update here: returning an address array for options.all === true and (address, family) for single-result mode is the right behavior for pinned DNS lookup callbacks.
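For reference, that dual-return contract can be reproduced in a standalone pinned-lookup factory; `makePinnedLookup` is a hypothetical name for illustration, not code from this PR:

```javascript
// A dns.lookup-compatible callback that always answers from a pinned
// address list, honoring the `all` option the way Node's dns.lookup does:
// an array of { address, family } objects when options.all is true,
// otherwise (address, family) positional arguments.
function makePinnedLookup(pinnedAddresses) {
  return (hostname, options, callback) => {
    // dns.lookup allows the options argument to be omitted.
    if (typeof options === "function") {
      callback = options;
      options = {};
    }
    if (options.all) {
      callback(
        null,
        pinnedAddresses.map((a) => ({ address: a.address, family: a.family })),
      );
    } else {
      const [first] = pinnedAddresses;
      callback(null, first.address, first.family);
    }
  };
}
```

Getting both shapes right matters because http.Agent, tls.connect, and other consumers pass different `all` settings to the same callback.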
Docstrings generation was requested by @gaureshpai. The following files were modified:

* `backend/routes/redirectChecker.js`
* `frontend/src/App.jsx`

These file types are not supported:

* `.github/workflows/publish.yml`
* `.github/workflows/trigger-supabase-cleanup.yml`
* `README.md`
* `backend/docs/README.md`
* `backend/docs/api-documentation.json`
* `backend/package.json`
* `biome.json`
* `frontend/package.json`
* `frontend/public/manifest.json`
@coderabbitai full review

✅ Actions performed

Full review triggered.
Actionable comments posted: 18
♻️ Duplicate comments (8)
frontend/src/components/PasswordGenerator.jsx (1)
39-40: ⚠️ Potential issue | 🟠 Major

Use unbiased index selection for password characters.
Line 40 uses modulo mapping, which can skew character distribution. For a password generator, prefer rejection sampling for uniform selection.
Proposed fix
+const getUniformIndex = (max) => {
+  const maxUint32 = 0x100000000;
+  const maxAccept = maxUint32 - (maxUint32 % max);
+  const buf = new Uint32Array(1);
+  do {
+    crypto.getRandomValues(buf);
+  } while (buf[0] >= maxAccept);
+  return buf[0] % max;
+};
 for (let i = 0; i < validLength; i++) {
-  newPassword += charset.charAt(randomValues[i] % charset.length);
+  newPassword += charset.charAt(getUniformIndex(charset.length));
 }

Does using `randomValue % charsetLength` introduce modulo bias in cryptographic password generation, and is rejection sampling the recommended mitigation?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PasswordGenerator.jsx` around lines 39 - 40, The current loop in PasswordGenerator.jsx uses randomValues[i] % charset.length which causes modulo bias; change it to rejection sampling: compute const charsetLen = charset.length and const maxValid = Math.floor(256 / charsetLen) * charsetLen (assuming you use Uint8 bytes from crypto.getRandomValues), then for each character (validLength) draw a byte and while (byte >= maxValid) redraw via crypto.getRandomValues (or refill randomValues) to reject out-of-range values, then use index = byte % charsetLen to append to newPassword; update code that fills/uses randomValues and references newPassword, validLength and charset accordingly.

frontend/src/components/ImageToPdfConverter.jsx (1)
18-20:⚠️ Potential issue | 🟠 Major
conversionIdRef doesn't invalidate on file-selection changes.

The guard only advances on submit. If conversion A is running and the user picks a new set of files before it resolves, A still passes the equality check and auto-downloads against the new UI state.
🛡️ Minimal safeguard
 const onFileChange = (e) => {
+  if (loading) {
+    e.target.value = "";
+    return;
+  }
   setConvertedFile(null);
@@
 const onSubmit = async (e) => {
   e.preventDefault();
+  if (loading) return;
@@
 <input
   accept="image/*"
   className="block w-full text-sm text-foreground border border-input rounded-lg cursor-pointer bg-background focus:outline-none file:mr-4 file:py-2 file:px-4 file:rounded-md file:border-0 file:text-sm file:font-semibold file:bg-primary/10 file:text-primary hover:file:bg-primary/10"
   id="multiple_files"
   type="file"
   multiple
+  disabled={loading}
   onChange={onFileChange}
 />

Also applies to: 57-107, 140-147
backend/utils/ipValidation.js (1)
17-24: ⚠️ Potential issue | 🔴 Critical

Expanded IPv4-mapped IPv6 literals still bypass isPrivateIP().

Only the compressed ::ffff: form is normalized right now. Valid mapped literals like 0:0:0:0:0:ffff:7f00:1 and 0:0:0:0:0:ffff:127.0.0.1 fall through unchanged, so loopback/private targets can still evade the SSRF filter.

🔒 Suggested fix

-  if (normalized.startsWith("::ffff:")) {
-    const suffix = ip.substring(7);
+  const mappedPrefix = ["::ffff:", "0:0:0:0:0:ffff:"].find((prefix) =>
+    normalized.startsWith(prefix),
+  );
+  if (mappedPrefix) {
+    const suffix = ip.substring(mappedPrefix.length);

Also applies to: 87-90
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/utils/ipValidation.js` around lines 17 - 24, The IPv6-to-IPv4 mapping normalization only handles the compressed "::ffff:" prefix so expanded mapped IPv6 forms (e.g., "0:0:0:0:0:ffff:7f00:1" or "0:0:0:0:0:ffff:127.0.0.1") bypass isPrivateIP(); update the normalization logic that uses isIP(ip) and normalized to detect IPv4-mapped IPv6 in both hexadecimal-word and dotted-decimal suffix forms, convert those mapped addresses to their IPv4 equivalent before returning (so isPrivateIP() gets the IPv4), and ensure both branches that handle "::ffff:" and expanded "ffff:" segments are covered (refer to isIP, normalized, and isPrivateIP identifiers to locate the code).

backend/docs/README.md (1)
441-450: ⚠️ Potential issue | 🟡 Minor

Fix malformed fenced JSON block in the 401 response example.
The 401 schema block still mixes indentation/fencing and leaves an extra fence, so markdown lint/rendering stays broken.
🛠️ Suggested patch
-  ```json
-  {
-
-  "msg": "Invalid or missing cron secret"
-  }
-
-  ```
-
-  ```
+  ```json
+  {
+    "msg": "Invalid or missing cron secret"
+  }
+  ```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/docs/README.md` around lines 441 - 450, The JSON fenced block for the 401 response example is malformed (extra/misaligned fences and indentation) — fix the example by replacing the broken fences and content so it becomes a single proper ```json fenced block containing a compact JSON object with a "msg" key set to "Invalid or missing cron secret". Update the section that currently shows multiple fences/indented braces to use one opening ```json fence, the JSON object with correct indentation and quotation, then a single closing ``` fence to restore proper Markdown rendering.

backend/index.js (1)
27-34: ⚠️ Potential issue | 🟠 Major

Still re-throw the Supabase readiness failure.

startServer() awaits this helper, but the catch only logs and resolves. A bad bucket/credential state still lets the API boot and /health report healthy until the first Supabase-backed route fails.

💡 Proposed fix

 const testSupabaseConnection = async () => {
   try {
     const { data: bucket, error: getBucketError } = await supabase.storage.getBucket("utilityhub");
     if (getBucketError) throw getBucketError;
     console.log(`Supabase Storage connected!\nBucket '${bucket.name}' found.`);
   } catch (error) {
     console.error("Supabase Storage connection failed:", error.message);
+    throw error;
   }
 };

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/index.js` around lines 27 - 34, The testSupabaseConnection helper swallows readiness errors (it catches and only logs) allowing startServer() to proceed and the service to report healthy; modify testSupabaseConnection so that after logging the error it re-throws the error (or simply remove the try/catch so the promise rejects) when supabase.storage.getBucket("utilityhub") fails, ensuring startServer() receives the rejection and the process fails fast instead of booting with bad Supabase credentials.

frontend/src/components/ImageCropper.jsx (1)
112-136: ⚠️ Potential issue | 🟡 Minor

Use blob.type for the downloaded extension.

Browsers can fall back from canvas.toBlob(..., mimeType) to PNG when the requested encoder is unsupported. This still saves .gif/.bmp filenames for PNG payloads.

💡 Proposed fix

 const objectUrl = URL.createObjectURL(blob);
 setCroppedImageSrc(objectUrl);
-const extension = mimeType.split("/")[1] || "png";
+const actualMimeType = blob.type || mimeType;
+const extension = actualMimeType.split("/")[1] || "png";
 handleDownload(objectUrl, `dkutils-cropped-image-${Date.now()}.${extension}`);
 setLoading(false);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageCropper.jsx` around lines 112 - 136, The downloaded file extension should be derived from the actual blob payload, not the requested mimeType: inside the canvas.toBlob callback (the anonymous function using currentOpId/cropOpIdRef), read the produced blob.type and use that to compute the extension passed to handleDownload (fall back to original mimeType or "image/png" only if blob.type is empty), so change the extension determination to use blob.type instead of mimeType when calling handleDownload with objectUrl and the filename.

frontend/src/components/HashGenerator.jsx (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Strip the UTF-8 BOM from the file header.

The first byte is still U+FEFF. I can see the same leading byte on several touched JSX files, so it’s worth removing them in the same batch to avoid parser/tooling edge cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/HashGenerator.jsx` at line 1, The file HashGenerator.jsx (the HashGenerator component) contains a UTF-8 BOM (U+FEFF) at the top of the file; remove the invisible BOM character from the file header so the first character is the 'i' of the import statement, save the file without BOM, and repeat this cleanup for the other touched JSX files to avoid parser/tooling issues.

backend/routes/favicon.js (1)
11-40: ⚠️ Potential issue | 🔴 Critical

The DNS safety check is still TOCTOU.

validateUrl() screens one DNS resolution result, but Lines 48 and 107 create a fresh connection using the hostname again. A rebinding host can pass the preflight and then resolve to a private address at connect time. Reuse the pinned-lookup approach from backend/routes/redirectChecker.js (or connect to the validated IP while preserving Host/SNI) for both the page fetch and favicon downloads.

Also applies to: 48-52, 103-111
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/favicon.js` around lines 11 - 40, validateUrl() performs a DNS check with dns.lookup but later code (the page fetch and favicon download) re-resolves the hostname, creating a TOCTOU risk; fix by performing a single pinned lookup in validateUrl (using dns.lookup(..., { all: true }) to collect allowed IPs) and propagate that pinned address (or list) to the fetch/path that downloads the page and favicon instead of re-resolving the hostname, or alternatively connect directly to a validated IP while preserving the original Host header and SNI; update the call sites that perform the fetch/download (the favicon download and page fetch code referenced in the same module) to accept and use the pinned lookup results and ensure isPrivateIP is only checked against those pinned IPs to prevent rebinding.
🧹 Nitpick comments (14)
frontend/src/components/ImageGrayscaler.jsx (1)
44-55: Track usage after successful conversion (if metric is “successful uses”).
trackToolUsage is called before the conversion request, so failed conversions are counted. If your KPI is successful completions, move tracking into the success path.

Suggested refactor

 setLoading(true);
-trackToolUsage("ImageGrayscaler", "image");
 const formData = new FormData();
 formData.append("image", selectedFile);
 try {
   const res = await axios.post(
     `${import.meta.env.VITE_API_BASE_URL}/api/convert/image-grayscale`,
     formData,
     { timeout: 30000 },
   );
+  trackToolUsage("ImageGrayscaler", "image");

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageGrayscaler.jsx` around lines 44 - 55, The call to trackToolUsage("ImageGrayscaler", "image") is happening before the network request so failures are counted; update the ImageGrayscaler component to call trackToolUsage only after a successful axios.post returns (inside the try block after receiving a successful response), leaving setLoading(true) before the request and keeping error handling in the catch; ensure you reference the existing trackToolUsage, setLoading, and the axios.post to move the metric call into the success path (or alternatively add a separate failure metric in the catch if you need both).

frontend/src/components/PdfToTextConverter.jsx (1)
48-49: Consider moving analytics tracking after successful conversion.
`trackToolUsage` is called at line 49 before the `axios.post` completes. If the request fails, the tool usage is still recorded. For consistency with `PdfToWordConverter` (which tracks after success) and to avoid counting failed attempts, consider moving `trackToolUsage` inside the `try` block after receiving the response.

♻️ Suggested change
```diff
   setLoading(true);
-  trackToolUsage("PdfToTextConverter", "pdf");
   const formData = new FormData();
   formData.append("pdf", selectedFile);
   try {
     const res = await axios.post(
       `${import.meta.env.VITE_API_BASE_URL}/api/convert/pdf-to-text`,
       formData,
       {
         headers: {
           "Content-Type": "multipart/form-data",
         },
       },
     );
     const extractedTextContent = res.data;
     setExtractedText(extractedTextContent);
+    trackToolUsage("PdfToTextConverter", "pdf");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PdfToTextConverter.jsx` around lines 48 - 49, The analytics call trackToolUsage in PdfToTextConverter is invoked before the axios.post completes, causing failed conversions to be counted; move the trackToolUsage("PdfToTextConverter", "pdf") call from its current position (near setLoading(true)) into the try block after the axios.post successful response is received (the same place PdfToWordConverter records usage) so it only runs on success; locate the conversion function in the PdfToTextConverter component and place the tracking call immediately after the await axios.post response handling.

backend/utils/filenameSanitizer.js (1)
9-12: Sanitize `fallback` before returning it.

Both early-return paths hand `fallback` back untouched, so this helper stops being a sanitizer as soon as a caller provides a custom fallback. That makes later storage/download call sites easy to accidentally regress.

♻️ Suggested adjustment
```diff
+const sanitizeSegment = (value) =>
+  path
+    .basename(value)
+    .normalize("NFC")
+    .replace(/[^a-zA-Z0-9.\-_]/g, "_")
+    .replace(/\.\./g, ".")
+    .replace(/[_-]{2,}/g, "_")
+    .replace(/^[.\-_]+|[.\-_]+$/g, "");
+
 const sanitizeFilename = (filename, fallback = `file_${Date.now()}`) => {
+  const safeFallback = sanitizeSegment(String(fallback || "")) || `file_${Date.now()}`;
   if (!filename || typeof filename !== "string") {
-    return fallback;
+    return safeFallback;
   }
@@
   if (!sanitized || sanitized === "." || sanitized === "..") {
-    return fallback;
+    return safeFallback;
   }
```

Also applies to: 42-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/utils/filenameSanitizer.js` around lines 9 - 12, The sanitizeFilename function currently returns the provided fallback unmodified in its early-return branches (and similarly at the other early-return on lines 42-45); update sanitizeFilename so any fallback value is passed through the same sanitation logic before being returned (e.g., normalize/strip invalid chars, trim length, enforce extension rules). Locate the existing sanitation steps inside sanitizeFilename, extract or reuse them (or call a helper like sanitizeCore) and apply them to fallback in both early returns so callers’ custom fallback values can’t bypass validation.

frontend/src/components/PdfMerger.jsx (1)
12-17: `maxUploadSize` is not actually coming from context here.

`frontend/src/context/AuthContext.jsx` currently exposes only `{ state, dispatch }`, so `auth.limits?.maxUploadSizeBytes` and `auth.uploadLimit` are always `undefined`. If the real cap becomes configurable, this component will silently keep using the 50MB/10MB fallback.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PdfMerger.jsx` around lines 12 - 17, The code reads auth.limits and auth.uploadLimit directly from the context object but AuthContext only exposes { state, dispatch }, so auth.limits and auth.uploadLimit are always undefined; update the logic to read limits from auth.state (e.g., use auth.state.limits?.maxUploadSizeBytes and auth.state.uploadLimit) when computing maxUploadSize in PdfMerger.jsx and keep the existing fallback (isAuthenticated ? 50MB : 10MB); ensure you reference the existing symbols maxUploadSize, auth, useContext(AuthContext) and auth.state so the component will honor any configurable caps exposed on state.

backend/middleware/apiActivityTracker.js (1)
14-24: Parallelize independent usage-counter updates to reduce request latency.
`TotalUsage` and `ServiceUsage` updates are independent and can run concurrently.

⚡ Proposed refactor
```diff
-  await TotalUsage.findOneAndUpdate(
-    { key: "global" },
-    { $inc: { totalCount: 1 }, $setOnInsert: { key: "global" } },
-    { upsert: true, new: true },
-  );
-
-  await ServiceUsage.findOneAndUpdate(
-    { endpoint: req.path },
-    { $inc: { count: 1 } },
-    { upsert: true, new: true },
-  );
+  await Promise.all([
+    TotalUsage.findOneAndUpdate(
+      { key: "global" },
+      { $inc: { totalCount: 1 }, $setOnInsert: { key: "global" } },
+      { upsert: true, new: true },
+    ),
+    ServiceUsage.findOneAndUpdate(
+      { endpoint: req.path },
+      { $inc: { count: 1 } },
+      { upsert: true, new: true },
+    ),
+  ]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/middleware/apiActivityTracker.js` around lines 14 - 24, TotalUsage.findOneAndUpdate and ServiceUsage.findOneAndUpdate are independent and should be executed concurrently to reduce latency; replace the sequential awaits with a single await Promise.all([...]) that runs both findOneAndUpdate calls (preserving their current query, update and options including upsert/new/$inc/$setOnInsert) so both updates happen in parallel while still awaiting their completion before continuing (keep using req.path for the ServiceUsage query).

backend/middleware/auth.js (1)
12-13: Remove redundant role reassignment.

Line 13 duplicates data already present after Line 12 and can be safely removed.
Proposed change
```diff
 const decoded = jwt.verify(token, process.env.JWT_SECRET);
 req.user = decoded.user;
-req.user.role = decoded.user.role;
 return next();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/middleware/auth.js` around lines 12 - 13, Remove the redundant role reassignment: when setting req.user from decoded.user (the assignment in the middleware that sets req.user = decoded.user), delete the subsequent line that reassigns req.user.role = decoded.user.role since decoded.user already contains role; keep only the single assignment to req.user to avoid duplication.

frontend/src/components/Base64TextConverter.jsx (1)
16-44: Track usage only after successful conversion.

Line 18 and Line 33 fire analytics before conversion succeeds, so failed encode/decode attempts are counted as successful usage events. Move tracking into the success path in each `try` block.

Proposed change
```diff
 const encodeBase64 = () => {
   setLoading(true);
-  trackToolUsage("Base64TextConverter", "text");
   setTimeout(() => {
     try {
       setConvertedText(btoa(text));
+      trackToolUsage("Base64TextConverter", "text");
     } catch (error) {
       setConvertedText("");
       toast.error("Failed to encode text.");
     } finally {
@@
 const decodeBase64 = () => {
   setLoading(true);
-  trackToolUsage("Base64TextConverter", "text");
   setTimeout(() => {
     try {
       setConvertedText(atob(text));
+      trackToolUsage("Base64TextConverter", "text");
     } catch (error) {
       setConvertedText("");
       toast.error("Failed to decode text.");
     } finally {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/Base64TextConverter.jsx` around lines 16 - 44, The tracking call is currently executed before conversion and should only run on success; remove trackToolUsage from the top of both encodeBase64 and decodeBase64, and call trackToolUsage("Base64TextConverter","text") inside each try block after a successful conversion (after setConvertedText(btoa(text)) in encodeBase64 and after setConvertedText(atob(text)) in decodeBase64) so failed attempts are not counted; keep the existing error handling (toast.error, setConvertedText("")) and finally block that clears setLoading(false).

frontend/src/components/ImageResizer.jsx (2)
78-79: Consider tracking analytics after successful resize.
`trackToolUsage` is called at line 79 before the resize operation completes. If the `FileReader`, `Image` decode, or canvas operations fail (handled by the new error handlers), the usage is still tracked despite the operation failing.

Per PR comments, moving the tracking after successful processing would ensure only completed operations are recorded.
Proposed change
```diff
 setLoading(true);
-trackToolUsage("ImageResizer", "image");
 const reader = new FileReader();
 // ... existing code ...
 ctx.drawImage(img, 0, 0, width, height);
 const dataUrl = canvas.toDataURL(originalImage.type);
 setResizedImageSrc(dataUrl);
+trackToolUsage("ImageResizer", "image");
 handleDownload(dataUrl, `resized-${originalImage ? originalImage.name : "image"}`);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageResizer.jsx` around lines 78 - 79, Move the analytics call so it only fires after a successful resize: currently trackToolUsage("ImageResizer", "image") is called immediately after setLoading(true) which records usage even if FileReader, Image decode, or canvas operations fail; instead, call trackToolUsage(...) inside the success path after the image has been fully processed (e.g., after the canvas/export step or where the existing success callback completes) and do not call it in error handlers, referencing the setLoading, trackToolUsage, FileReader, Image decode, and canvas/export code paths to locate where to relocate the call.
70-71: Add radix parameter to `Number.parseInt` calls.
`Number.parseInt` without a radix can lead to unexpected parsing behavior: legacy engines treated strings with leading zeros as octal, and a `0x` prefix is still parsed as hexadecimal. Always specify radix 10 for decimal parsing.

Proposed fix
```diff
-const width = Number.parseInt(newWidth);
-const height = Number.parseInt(newHeight);
+const width = Number.parseInt(newWidth, 10);
+const height = Number.parseInt(newHeight, 10);
```
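A quick demonstration of why the radix matters (behavior follows the ECMAScript `parseInt` specification; the variable names are just for illustration):

```javascript
// Without a radix, parseInt infers one from the string prefix:
// a "0x" prefix silently switches it to base 16.
const hexInferred = Number.parseInt("0x10");     // 16 (hex inferred)
const decimalOnly = Number.parseInt("0x10", 10); // 0  (parses "0", stops at "x")
const leadingZero = Number.parseInt("08", 10);   // 8  (explicit decimal)
console.log(hexInferred, decimalOnly, leadingZero);
```

With user-entered width/height fields, an explicit radix 10 guarantees the value is always read as decimal regardless of how the string starts.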
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageResizer.jsx` around lines 70 - 71, In ImageResizer.jsx update the two Number.parseInt calls to explicitly pass radix 10 so parsing is deterministic: change the calls that compute width and height (currently using Number.parseInt(newWidth) and Number.parseInt(newHeight)) to include the second argument 10 (e.g., Number.parseInt(newWidth, 10) and Number.parseInt(newHeight, 10)).

frontend/src/components/JsonXmlConverter.jsx (1)
30-35: Use the non-blocking analytics wrapper here too.

Both handlers set `loading` and then call `trackToolUsage` outside the request `try`. If analytics throws, conversion can abort after `setLoading(true)` and leave the UI stuck. `frontend/src/components/ExcelToPdfConverter.jsx` already uses a fire-and-forget wrapper that avoids that failure mode.

♻️ Suggested change
```diff
 const convertJsonToXml = async () => {
   setLoading(true);
-  trackToolUsage("JsonXmlConverter", "web");
+  void Promise.resolve()
+    .then(() => trackToolUsage("JsonXmlConverter", "web"))
+    .catch((analyticsError) => {
+      console.error("Failed to track tool usage", analyticsError);
+    });
   setOutputData("");
   setError(null);
   try {
```

Apply the same wrapper in `convertXmlToJson`.

Also applies to: 52-57
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/JsonXmlConverter.jsx` around lines 30 - 35, Both convertJsonToXml and convertXmlToJson call trackToolUsage directly which can throw and leave loading stuck; update both functions (convertJsonToXml and convertXmlToJson) to call analytics via the same non-blocking "fire-and-forget" wrapper used in ExcelToPdfConverter.jsx (i.e., invoke trackToolUsage asynchronously without awaiting and swallow errors) immediately after setLoading(true) so analytics cannot abort the conversion flow. Ensure you mirror the wrapper behavior and error-swallowing logic from ExcelToPdfConverter.jsx when replacing direct trackToolUsage calls.

backend/routes/cleanSupabase.js (1)
38-65: Consider extracting shared cleanup handler logic.

Both route handlers (`POST /` and `POST /trigger`) have identical implementation bodies. While this works correctly, extracting the shared logic into a helper function would reduce duplication and simplify future maintenance.

```js
const runCleanup = async (req, res) => {
  try {
    await cleanSupabaseStorage();
    res.status(200).json({ msg: "Supabase cleanup triggered successfully." });
  } catch (error) {
    console.error("Error triggering Supabase cleanup:", error);
    res.status(500).json({ msg: "Failed to trigger Supabase cleanup.", error: error.message });
  }
};
```
Verify each finding against the current code and only fix it if needed. In `@backend/routes/cleanSupabase.js` around lines 38 - 65, Duplicate try/catch response logic in both router.post handlers should be extracted into a single helper; create an async function (e.g., runCleanup(req, res)) that calls cleanSupabaseStorage() and handles the success/500 responses and logging, then replace the bodies of both router.post("/", requireAuth, ...) and router.post("/trigger", requireSecret, ...) to simply call await runCleanup(req, res). Keep the same error logging message and JSON shape so behavior is unchanged.

frontend/src/components/PdfCompressor.jsx (1)
16-24: Potential access of `file` after it's undefined in edge case.

Line 17 accesses `e.target.files[0]` which could be `undefined`, then line 20 checks `!file`. However, on line 18, `state.isAuthenticated` is accessed before the early return. If `state` were ever `undefined` (edge case during context initialization), this would throw. Consider adding defensive access:

```diff
 const onFileChange = (e) => {
   const file = e.target.files[0];
-  const maxFileSize = state.isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024;
+  const maxFileSize = state?.isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024;
```

Note: Other PDF tools like `PdfMerger.jsx` use `auth.state?.isAuthenticated ?? false` for safer access. Consider aligning with that pattern for consistency across components.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PdfCompressor.jsx` around lines 16 - 24, The onFileChange handler may access state.isAuthenticated before ensuring file/ files exist; update onFileChange to defensively read authentication and files: first check e.target?.files and length and return early if no files, then compute file and maxFileSize, or compute maxFileSize using safe access like state?.isAuthenticated ?? false; ensure you reference the same handler name onFileChange and DOM access e.target.files and keep setSelectedFile/e.target.value clearing behavior intact.

backend/routes/seoTools.js (1)
99-107: Fallback conditions check for redirect errors but redirects are disabled.

The fallback from HTTPS to HTTP (lines 99-107) checks for redirect-related errors (`"redirect"`, `"302"`, `"301"`), but `fetchContent` sets `maxRedirects: 0` (line 59), which means redirects result in an error rather than following them. This is correct behavior for SSRF prevention, but the condition checking for redirect strings in the error message may not trigger as expected since Axios throws a different error when `maxRedirects` is exceeded.

Consider simplifying the fallback condition or verifying the actual error message format when a redirect response is received with `maxRedirects: 0`.
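A sketch of what an explicit, status-based check could look like, assuming the Axios-style error shape where a rejected request carries the 3xx response on `error.response`; `isRedirectError` is a hypothetical helper name, not existing project code:

```javascript
// Hypothetical helper: classify an Axios-style rejection as a blocked
// redirect by status code instead of matching substrings in error.message.
const REDIRECT_STATUSES = new Set([301, 302, 307, 308]);

const isRedirectError = (error) =>
  Boolean(error && error.response && REDIRECT_STATUSES.has(error.response.status));

// With maxRedirects: 0, the 3xx response may be attached to the rejection.
console.log(isRedirectError({ response: { status: 301 } }));    // true
console.log(isRedirectError(new Error("connect ECONNREFUSED"))); // false
```

Whether the installed Axios version attaches the 3xx response or surfaces a dedicated error code for exceeded redirects should be verified empirically before relying on either branch.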
Verify each finding against the current code and only fix it if needed. In `@backend/routes/seoTools.js` around lines 99 - 107, The fallback from HTTPS to HTTP is relying on string-matching of result.error for "redirect"/"301"/"302" even though fetchContent sets maxRedirects: 0; update the fallback check around result.exists/result.error so it detects redirects reliably by inspecting the Axios-style error shape: check for error.response?.status being 301 or 302 (or 301/302/307/308 as needed) and/or error.code === 'ERR_FR_TOO_MANY_REDIRECTS' (or the runtime-specific redirect error code) before calling fetchContent(url, validatedAddresses) again; adjust logic in the block that references result.exists and result.error to use these explicit checks instead of substring matching.

backend/routes/analytics.js (1)
23-115: Prefer a single metadata source for approved tools.

These two objects now encode the same catalog twice. The next tool addition can easily update `/track` but miss `/stats`/`/popular`, which silently drops that tool from public analytics. Consider a single metadata map and derive the public subset from it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/analytics.js` around lines 23 - 115, APPROVED_TOOL_CATEGORY_PAIRS and APPROVED_PUBLIC_TOOL_CATEGORY_PAIRS duplicate the same catalog, leading to drift; replace them with a single source (keep APPROVED_TOOL_CATEGORY_PAIRS as the canonical map) and derive the public subset used by /track, /stats and /popular by filtering APPROVED_TOOL_CATEGORY_PAIRS (e.g. exclude internal keys like "Login", "Register" and keys starting with "SeoTools:") so additions only require updating one map and the public view is computed from it.
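A minimal sketch of the derived-subset approach; the tool names and filter rules below are illustrative placeholders, not the project's real catalog:

```javascript
// One canonical catalog; the public view is derived, not duplicated.
// Keys and the exclusion rules here are placeholders for illustration.
const APPROVED_TOOL_CATEGORY_PAIRS = {
  ImageGrayscaler: "image",
  PdfToTextConverter: "pdf",
  Login: "internal",
  Register: "internal",
  "SeoTools:MetaTags": "web",
};

const APPROVED_PUBLIC_TOOL_CATEGORY_PAIRS = Object.fromEntries(
  Object.entries(APPROVED_TOOL_CATEGORY_PAIRS).filter(
    ([tool]) =>
      tool !== "Login" && tool !== "Register" && !tool.startsWith("SeoTools:"),
  ),
);

console.log(Object.keys(APPROVED_PUBLIC_TOOL_CATEGORY_PAIRS));
// → ["ImageGrayscaler", "PdfToTextConverter"]
```

Adding a new tool then touches only the canonical map, and the public analytics subset stays in sync automatically.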
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@.github/workflows/publish.yml`:
- Around line 41-108: The job currently reads only the root website version
(WEBSITE_VERSION) but then applies it to frontend/backend files; update the Get
versions step (and subsequent env exports used by the node script) to also read
frontend and backend package.json versions (e.g., FRONTEND_VERSION and
BACKEND_VERSION) using the same node -p approach, and in the node script
pass/consume those separate env vars and call updateReadme with the correct
version for each path (use FRONTEND_VERSION for frontend/README.md and
BACKEND_VERSION for backend/docs/README.md); additionally add a simple guard
that compares WEBSITE_VERSION to FRONTEND_VERSION and BACKEND_VERSION and fails
the job if they diverge to prevent stamping inconsistent releases.
In `@backend/docs/api-documentation.json`:
- Around line 960-963: The OpenAPI entry for "/api/clean-supabase/trigger" is
documented as "get" but the route implementation uses router.post("/trigger") in
backend/routes/cleanSupabase.js; update the backend/docs/api-documentation.json
entry by replacing the "get" operation with "post" and change the "summary" from
"GET /api/clean-supabase/trigger" to "POST /api/clean-supabase/trigger" so the
documentation matches the router.post implementation.
In `@backend/routes/auth.js`:
- Around line 27-39: The response currently returns raw validation details
(errors.array()) which can expose sensitive fields; update the handler to return
the sanitizedErrors variable instead by replacing the returned payload in the
res.status(400).json call so it sends { errors: sanitizedErrors } (keep existing
logging of sanitizedErrors and the sanitization logic that redacts password).
Locate the block where sanitizedErrors is built and the return statement that
uses errors.array() and change only the response to use sanitizedErrors.
In `@backend/routes/favicon.js`:
- Around line 48-52: The favicon fetch currently disables redirects
(maxRedirects: 0) and treats non-2xx as fatal; change this to follow redirects
manually: when axios.get(fileUrl, {responseType: "arraybuffer", maxRedirects: 0,
...}) returns a 3xx, read the Location header, resolve relative URLs against
fileUrl, validate the redirected URL with the same validation used for the
initial URL, and repeat the fetch up to a small redirect limit before failing;
apply the same manual-redirect logic to the other favicon/download axios block
around lines 107-111 so http→https, apex→www and CDN redirects succeed while
preserving your SSRF/host validation.
In `@backend/routes/pdfConverter.js`:
- Around line 45-47: The current code calls merger.add concurrently via
Promise.all(files.map(...)) which can merge pages out of upload order; change
this to add buffers to the same merger sequentially in the original files order
(e.g., iterate files in order and await merger.add(file.buffer) inside a
for/for-of loop or use files.reduce with await) so each merger.add completes
before the next begins, preserving upload order for the merged PDF.
In `@backend/routes/textConverter.js`:
- Around line 17-21: Add a round-trip Base64 validation before decoding so
malformed input returns 400 instead of producing corrupted output: implement an
isValidBase64 helper that attempts to decode and re-encode (e.g.,
Buffer.from(str, 'base64').toString('base64') === str.trim()) and use it in the
branch handling type === "decode" (where result is currently set via
Buffer.from(text, "base64").toString("utf8")). If validation fails return
res.status(400).json({ msg: "Invalid Base64 input." }); otherwise proceed to
decode into result.
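The round-trip helper described above can be sketched as follows, assuming Node's `Buffer`. Note that this strict variant also rejects unpadded Base64, which may or may not be desired:

```javascript
// Round-trip check: decode, re-encode, and compare with the trimmed input.
// Buffer.from(..., "base64") silently drops invalid characters, so only the
// strict equality round trip actually rejects malformed input.
const isValidBase64 = (str) => {
  if (typeof str !== "string" || str.trim() === "") return false;
  const trimmed = str.trim();
  return Buffer.from(trimmed, "base64").toString("base64") === trimmed;
};

console.log(isValidBase64("aGVsbG8="));    // true  (decodes to "hello")
console.log(isValidBase64("not base64!!")); // false
```

A route using it would return `res.status(400).json({ msg: "Invalid Base64 input." })` when the check fails, as the prompt above describes.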
In `@frontend/src/App.jsx`:
- Around line 27-37: PrivateRoute currently redirects immediately because it
checks state.isAuthenticated before the auth bootstrap finishes; update the
guard to wait for an auth-initialized flag from AuthContext (e.g.
state.isInitialized or state.authReady set when token restore finishes/when the
LOGIN action from the bootstrap completes) and only run the useEffect navigate
to "/login" when that flag is true and state.isAuthenticated is false; likewise
only render children when initialized and authenticated so protected routes are
gated until the token-restore/bootstrap logic completes (adjust references to
PrivateRoute, AuthContext, state.isAuthenticated and the bootstrap that
dispatches LOGIN).
In `@frontend/src/components/CsvToJsonConverter.jsx`:
- Line 54: The analytics call in CsvToJsonConverter.jsx uses an inconsistent
event key: trackToolUsage("JsonToCsvConverter", "web") while the component and
opposite path use "CsvToJsonConverter"; update the call in
CsvToJsonConverter.jsx to use the same key ("CsvToJsonConverter") or extract a
single constant and use that in both paths to avoid splitting metrics (refer to
the trackToolUsage invocation and the CsvToJsonConverter component name to
locate the change).
- Around line 39-48: In the Papa.parse completion handler (the complete callback
passed into Papa.parse), add a check for results.errors (e.g., results.errors &&
results.errors.length) before calling setConvertedOutput with results.data; if
there are parse errors, call setConvertedOutput with a descriptive error string
that includes the first/aggregated results.errors messages and still call
setLoading(false); otherwise continue to JSON.stringify(results.data, null, 2)
and setLoading(false). This change touches the complete callback where
Papa.parse is invoked and uses setConvertedOutput and setLoading to report parse
failures.
In `@frontend/src/components/FaviconExtractor.jsx`:
- Around line 13-32: The analytics call trackToolUsage("FaviconExtractor",
"web") is fired before the async axios.post completes, causing failures to be
tracked as usage; move the trackToolUsage invocation into the try block after
the axios.post resolves (e.g., immediately after const { path, originalname } =
res.data or after handleDownload(path, originalname)) so it's only called on
successful extraction; update handleSubmit to call trackToolUsage there and keep
error handling (setError/toast) unchanged in the catch/finally.
In `@frontend/src/components/ImageBackgroundRemover.jsx`:
- Around line 37-49: The code calls
trackToolUsage("ImageBackgroundRemover","image") before removeBackground()
completes, which records attempts even on failure; move the trackToolUsage call
to after the await removeBackground(selectedFile) succeeds (for example
immediately after creating objectUrl/blob and before initiating the download and
toast.success) so only successful removals are tracked; update the try block
around removeBackground, objectUrl/link creation, and toast.success to ensure
trackToolUsage is invoked only on success and not in the catch/finally paths
(refer to removeBackground, trackToolUsage, toast.success, URL.createObjectURL
and link.click).
In `@frontend/src/components/ImageFlipper.jsx`:
- Around line 17-35: The onFileChange handler currently treats a cancelled file
dialog (file undefined) as a non-image and shows an error; update onFileChange
to first check for no file and simply clear selection/state and return early
(e.g., if (!file) { setSelectedFile(null); e.target.value = ""; return; })
before the image type/size checks so cancelling does not trigger toast.error;
preserve the existing size-limit and non-image branches (references:
onFileChange, setSelectedFile, toast, isAuthenticated).
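A framework-free sketch of the guarded handler; the injected `setSelectedFile`/`showError` callbacks are stand-ins for the component's React state setter and `toast.error`, and the messages are placeholders:

```javascript
// Guarded file-change handler: a cancelled dialog yields no file and should
// clear state silently, never raise an error toast.
const onFileChange = (event, { setSelectedFile, showError, maxSize }) => {
  const file = event.target.files[0];
  if (!file) {
    // User cancelled the picker: reset selection, no error.
    setSelectedFile(null);
    event.target.value = "";
    return;
  }
  if (!file.type.startsWith("image/")) return showError("Not an image.");
  if (file.size > maxSize) return showError("File too large.");
  setSelectedFile(file);
};
```

The early `!file` return is the whole fix: the type and size branches below it stay exactly as they are in the component.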
In `@frontend/src/components/ImageFormatConverter.jsx`:
- Around line 48-50: The oversize toast currently always suggests "Login for a
higher limit", which is incorrect for authenticated users; update the toast in
ImageFormatConverter.jsx (the file/size check where file, maxSize and
toast.error are used—likely inside the file upload handler such as
handleFileChange/handleFileUpload) to be auth-aware: detect the authentication
flag used in the component (e.g., isAuthenticated or user) and, when the user is
authenticated (or maxSize equals the authenticated cap constant), show a
different message that does not prompt to login (e.g., mention the current
authenticated limit or suggest upgrading the account) and only include the
"Login for a higher limit (50MB)" suggestion when the user is not authenticated.
In `@frontend/src/components/ImageGrayscaler.jsx`:
- Around line 16-34: The onFileChange handler currently treats a canceled file
picker as a non-image and shows an error; update onFileChange to first check for
no file selected (e.g., if (!file) { setSelectedFile(null); e.target.value =
null; return; }) before checking file.type so cancellation does not trigger
toast.error. Keep the existing logic for image type and size checks (references:
onFileChange, setSelectedFile, isAuthenticated, toast, e.target.files) unchanged
apart from this early-return guard.
In `@frontend/src/components/ImageToBase64Converter.jsx`:
- Around line 42-74: The POST response handling can apply to a stale selection
if the user changes selectedFile while a request is in flight; fix by adding a
request-id guard: create a mutable ref (e.g., latestRequestIdRef) updated
whenever selectedFile changes and incremented at the start of onSubmit, capture
the current requestId in the async closure, and before calling setBase64String,
handleDownload, or trackToolUsage verify that requestId ===
latestRequestIdRef.current; ignore responses when they don't match. Update the
same guard on selection change to invalidate in-flight requests so only the most
recent selection (symbols: onSubmit, selectedFile, setBase64String,
handleDownload, trackToolUsage) is allowed to update state or trigger downloads.
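The request-id guard can be reduced to a tiny framework-free helper; `makeLatestGuard` is a hypothetical name, and in the component the counter would live in a `useRef` rather than a closure:

```javascript
// "Latest request wins" guard: each new request (or selection change) bumps
// the counter, and only responses carrying the current id may apply state.
const makeLatestGuard = () => {
  let latest = 0;
  return {
    begin() {
      latest += 1;
      return latest; // capture this id in the async closure
    },
    isCurrent(id) {
      return id === latest; // stale responses fail this check and are ignored
    },
  };
};

const guard = makeLatestGuard();
const first = guard.begin();  // request for the first selected file
const second = guard.begin(); // user picked a new file, invalidating `first`
console.log(guard.isCurrent(first), guard.isCurrent(second)); // false true
```

In `onSubmit`, the async handler would check `guard.isCurrent(id)` before calling `setBase64String`, `handleDownload`, or `trackToolUsage`.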
In `@frontend/src/components/JsonFormatterValidator.jsx`:
- Around line 45-48: The copyToClipboard function currently calls
navigator.clipboard.writeText(textToCopy) without awaiting or handling failures,
then immediately shows toast.success; change copyToClipboard to await the
Promise returned by navigator.clipboard.writeText (or use .then/.catch), wrap
the call in a try/catch, call toast.success only on success, and call
toast.error (or a suitable error toast) in the catch block; reference the
copyToClipboard function, navigator.clipboard.writeText, toast.success and
toast.error when making the fix so failures (permission denied, insecure
context, etc.) are handled and users see correct feedback.
In `@frontend/src/components/PasswordStrengthChecker.jsx`:
- Around line 43-51: The debounce returned in debouncedCheckStrengthRef can
schedule a timeout that survives unmount; modify the debounce so its ref exposes
a cancel/clear method (e.g., assign the timeout id to
debouncedCheckStrengthRef.current.timeout or provide
debouncedCheckStrengthRef.current.cancel) and then add a useEffect with a
cleanup that calls that cancel/clear on unmount to clear any pending timer;
update the debounce factory around debouncedCheckStrengthRef and add a
useEffect(() => () => { /* clear via debouncedCheckStrengthRef.current.cancel or
clearTimeout(debouncedCheckStrengthRef.current.timeout) */ }) to prevent delayed
calls after teardown while keeping checkStrengthRef usage unchanged.
- Around line 13-35: The checkStrength function can be overwritten by
out-of-order responses; to fix it, track and validate request identity before
applying results (e.g., add a request counter or use AbortController stored in a
ref). On each invocation of checkStrength, increment a requestId (or abort
previous controller) and include it with the axios call; when the response/error
resolves, only call setStrengthScore, setFeedback, toast, and setLoading if the
response's requestId matches the latestRequestId (or if the request wasn't
aborted). Update the checkStrength closure to read/write the ref so stale
responses are ignored and the UI state is only updated for the latest check.
---
Duplicate comments:
In `@backend/docs/README.md`:
- Around line 441-450: The JSON fenced block for the 401 response example is
malformed (extra/misaligned fences and indentation) — fix the example by
replacing the broken fences and content so it becomes a single proper ```json
fenced block containing a compact JSON object with a "msg" key set to "Invalid
or missing cron secret". Update the section that currently shows multiple
fences/indented braces to use one opening ```json fence, the JSON object with
correct indentation and quotation, then a single closing ``` fence to restore
proper Markdown rendering.
In `@backend/index.js`:
- Around line 27-34: The testSupabaseConnection helper swallows readiness errors
(it catches and only logs) allowing startServer() to proceed and the service to
report healthy; modify testSupabaseConnection so that after logging the error it
re-throws the error (or simply remove the try/catch so the promise rejects) when
supabase.storage.getBucket("utilityhub") fails, ensuring startServer() receives
the rejection and the process fails fast instead of booting with bad Supabase
credentials.
In `@backend/routes/favicon.js`:
- Around line 11-40: validateUrl() performs a DNS check with dns.lookup but
later code (the page fetch and favicon download) re-resolves the hostname,
creating a TOCTOU risk; fix by performing a single pinned lookup in validateUrl
(using dns.lookup(..., { all: true }) to collect allowed IPs) and propagate that
pinned address (or list) to the fetch/path that downloads the page and favicon
instead of re-resolving the hostname, or alternatively connect directly to a
validated IP while preserving the original Host header and SNI; update the call
sites that perform the fetch/download (the favicon download and page fetch code
referenced in the same module) to accept and use the pinned lookup results and
ensure isPrivateIP is only checked against those pinned IPs to prevent
rebinding.
In `@backend/utils/ipValidation.js`:
- Around line 17-24: The IPv6-to-IPv4 mapping normalization only handles the
compressed "::ffff:" prefix so expanded mapped IPv6 forms (e.g.,
"0:0:0:0:0:ffff:7f00:1" or "0:0:0:0:0:ffff:127.0.0.1") bypass isPrivateIP();
update the normalization logic that uses isIP(ip) and normalized to detect
IPv4-mapped IPv6 in both hexadecimal-word and dotted-decimal suffix forms,
convert those mapped addresses to their IPv4 equivalent before returning (so
isPrivateIP() gets the IPv4), and ensure both branches that handle "::ffff:" and
expanded "ffff:" segments are covered (refer to isIP, normalized, and
isPrivateIP identifiers to locate the code).
In `@frontend/src/components/HashGenerator.jsx`:
- Line 1: The file HashGenerator.jsx (the HashGenerator component) contains a
UTF-8 BOM (U+FEFF) at the top of the file; remove the invisible BOM character
from the file header so the first character is the 'i' of the import statement,
save the file without BOM, and repeat this cleanup for the other touched JSX
files to avoid parser/tooling issues.
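One way to strip the BOM in bulk, assuming GNU sed (on macOS/BSD sed the in-place flag is `-i ''` and `\xHH` escapes may not be supported):

```shell
# Strip a UTF-8 BOM (EF BB BF) in place from each file passed as an argument.
strip_bom() {
  for f in "$@"; do
    sed -i '1s/^\xEF\xBB\xBF//' "$f"
  done
}

# Example (paths from this PR):
# strip_bom frontend/src/components/HashGenerator.jsx
```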
In `@frontend/src/components/ImageCropper.jsx`:
- Around line 112-136: The downloaded file extension should be derived from the
actual blob payload, not the requested mimeType: inside the canvas.toBlob
callback (the anonymous function using currentOpId/cropOpIdRef), read the
produced blob.type and use that to compute the extension passed to
handleDownload (fall back to original mimeType or "image/png" only if blob.type
is empty), so change the extension determination to use blob.type instead of
mimeType when calling handleDownload with objectUrl and the filename.
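The extension derivation described above can be isolated into a small helper. The function name and MIME map here are illustrative, not from the PR; the fallback chain (`blob.type`, then the requested `mimeType`, then `image/png`) mirrors the suggested behavior:

```javascript
// Derive the download extension from the blob the browser actually produced,
// falling back to the requested MIME type only when blob.type is empty.
function extensionFromBlobType(blobType, requestedMime = "image/png") {
  const mime = blobType || requestedMime || "image/png";
  const map = { "image/png": "png", "image/jpeg": "jpg", "image/webp": "webp" };
  return map[mime] ?? mime.split("/")[1] ?? "png";
}
```

Inside the `canvas.toBlob` callback this would be called as `extensionFromBlobType(blob.type, mimeType)` before building the filename passed to `handleDownload`.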
In `@frontend/src/components/PasswordGenerator.jsx`:
- Around line 39-40: The current loop in PasswordGenerator.jsx uses
randomValues[i] % charset.length which causes modulo bias; change it to
rejection sampling: compute const charsetLen = charset.length and const maxValid
= Math.floor(256 / charsetLen) * charsetLen (assuming you use Uint8 bytes from
crypto.getRandomValues), then for each character (validLength) draw a byte and
while (byte >= maxValid) redraw via crypto.getRandomValues (or refill
randomValues) to reject out-of-range values, then use index = byte % charsetLen
to append to newPassword; update code that fills/uses randomValues and
references newPassword, validLength and charset accordingly.
---
Nitpick comments:
In `@backend/middleware/apiActivityTracker.js`:
- Around line 14-24: TotalUsage.findOneAndUpdate and
ServiceUsage.findOneAndUpdate are independent and should be executed
concurrently to reduce latency; replace the sequential awaits with a single
await Promise.all([...]) that runs both findOneAndUpdate calls (preserving their
current query, update and options including upsert/new/$inc/$setOnInsert) so
both updates happen in parallel while still awaiting their completion before
continuing (keep using req.path for the ServiceUsage query).
In `@backend/middleware/auth.js`:
- Around line 12-13: Remove the redundant role reassignment: when setting
req.user from decoded.user (the assignment in the middleware that sets req.user
= decoded.user), delete the subsequent line that reassigns req.user.role =
decoded.user.role since decoded.user already contains role; keep only the single
assignment to req.user to avoid duplication.
In `@backend/routes/analytics.js`:
- Around line 23-115: APPROVED_TOOL_CATEGORY_PAIRS and
APPROVED_PUBLIC_TOOL_CATEGORY_PAIRS duplicate the same catalog, leading to
drift; replace them with a single source (keep APPROVED_TOOL_CATEGORY_PAIRS as
the canonical map) and derive the public subset used by /track, /stats and
/popular by filtering APPROVED_TOOL_CATEGORY_PAIRS (e.g. exclude internal keys
like "Login", "Register" and keys starting with "SeoTools:") so additions only
require updating one map and the public view is computed from it.
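The derive-don't-duplicate pattern looks like this; the catalog entries below are illustrative placeholders, not the project's actual tool list:

```javascript
// Canonical catalog (single source of truth). Entries are examples only.
const APPROVED_TOOL_CATEGORY_PAIRS = {
  ImageResizer: "image",
  PdfMerger: "pdf",
  Login: "auth",
  Register: "auth",
  "SeoTools:meta": "web",
};

// Public subset computed from the canonical map: exclude internal keys
// so adding a tool only requires touching one place.
const APPROVED_PUBLIC_TOOL_CATEGORY_PAIRS = Object.fromEntries(
  Object.entries(APPROVED_TOOL_CATEGORY_PAIRS).filter(
    ([name]) => !["Login", "Register"].includes(name) && !name.startsWith("SeoTools:"),
  ),
);
```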
In `@backend/routes/cleanSupabase.js`:
- Around line 38-65: Duplicate try/catch response logic in both router.post
handlers should be extracted into a single helper; create an async function
(e.g., runCleanup(req, res)) that calls cleanSupabaseStorage() and handles the
success/500 responses and logging, then replace the bodies of both
router.post("/", requireAuth, ...) and router.post("/trigger", requireSecret,
...) to simply call await runCleanup(req, res). Keep the same error logging
message and JSON shape so behavior is unchanged.
In `@backend/routes/seoTools.js`:
- Around line 99-107: The fallback from HTTPS to HTTP is relying on
string-matching of result.error for "redirect"/"301"/"302" even though
fetchContent sets maxRedirects: 0; update the fallback check around
result.exists/result.error so it detects redirects reliably by inspecting the
Axios-style error shape: check for error.response?.status being 301 or 302 (or
301/302/307/308 as needed) and/or error.code === 'ERR_FR_TOO_MANY_REDIRECTS' (or
the runtime-specific redirect error code) before calling fetchContent(url,
validatedAddresses) again; adjust logic in the block that references
result.exists and result.error to use these explicit checks instead of substring
matching.
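The explicit redirect check can be captured in one predicate. `ERR_FR_TOO_MANY_REDIRECTS` is Axios's follow-redirects error code; the helper name is an assumption:

```javascript
const REDIRECT_CODES = new Set([301, 302, 307, 308]);

// Detect a redirect from an Axios-style error instead of substring-matching messages.
function isRedirectError(err) {
  return (
    REDIRECT_CODES.has(err?.response?.status) ||
    err?.code === "ERR_FR_TOO_MANY_REDIRECTS"
  );
}
```

The fallback block would then read `if (!result.exists && isRedirectError(result.error)) { ... fetchContent(url, validatedAddresses) ... }`.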
In `@backend/utils/filenameSanitizer.js`:
- Around line 9-12: The sanitizeFilename function currently returns the provided
fallback unmodified in its early-return branches (and similarly at the other
early-return on lines 42-45); update sanitizeFilename so any fallback value is
passed through the same sanitation logic before being returned (e.g.,
normalize/strip invalid chars, trim length, enforce extension rules). Locate the
existing sanitation steps inside sanitizeFilename, extract or reuse them (or
call a helper like sanitizeCore) and apply them to fallback in both early
returns so callers’ custom fallback values can’t bypass validation.
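A sketch of the `sanitizeCore` extraction suggested above. The character class, length cap, and default fallback are assumptions for illustration, not the module's actual rules:

```javascript
// Shared sanitation pipeline applied to BOTH user input and caller fallbacks.
function sanitizeCore(name) {
  return name
    .normalize("NFKC")
    .replace(/[/\\?%*:|"<>\u0000-\u001f]/g, "_") // strip path separators & control chars
    .replace(/^\.+/, "") // no leading dots (hidden files / traversal)
    .slice(0, 255);
}

function sanitizeFilename(name, fallback = "download") {
  if (typeof name !== "string" || name.trim() === "") {
    return sanitizeCore(fallback) || "download"; // fallback can't bypass validation
  }
  return sanitizeCore(name.trim()) || sanitizeCore(fallback) || "download";
}
```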
In `@frontend/src/components/Base64TextConverter.jsx`:
- Around line 16-44: The tracking call is currently executed before conversion
and should only run on success; remove trackToolUsage from the top of both
encodeBase64 and decodeBase64, and call
trackToolUsage("Base64TextConverter","text") inside each try block after a
successful conversion (after setConvertedText(btoa(text)) in encodeBase64 and
after setConvertedText(atob(text)) in decodeBase64) so failed attempts are not
counted; keep the existing error handling (toast.error, setConvertedText(""))
and finally block that clears setLoading(false).
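The track-only-on-success ordering, sketched with injected dependencies so it stays testable (the real component uses `btoa`, React state setters, and the shared `trackToolUsage` hook directly):

```javascript
// Sketch: analytics fires only after the conversion succeeds.
function encodeBase64(text, { setConvertedText, trackToolUsage, toastError }) {
  try {
    const encoded = Buffer.from(text, "utf8").toString("base64"); // btoa() in the browser
    setConvertedText(encoded);
    trackToolUsage("Base64TextConverter", "text"); // counted only on success
  } catch {
    toastError("Encoding failed");
    setConvertedText(""); // failed attempts are not tracked
  }
}
```

`decodeBase64` mirrors this with `atob`/`Buffer.from(text, "base64")`, calling `trackToolUsage` only after the decode succeeds.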
In `@frontend/src/components/ImageGrayscaler.jsx`:
- Around line 44-55: The call to trackToolUsage("ImageGrayscaler", "image") is
happening before the network request so failures are counted; update the
ImageGrayscaler component to call trackToolUsage only after a successful
axios.post returns (inside the try block after receiving a successful response),
leaving setLoading(true) before the request and keeping error handling in the
catch; ensure you reference the existing trackToolUsage, setLoading, and the
axios.post to move the metric call into the success path (or alternatively add a
separate failure metric in the catch if you need both).
In `@frontend/src/components/ImageResizer.jsx`:
- Around line 78-79: Move the analytics call so it only fires after a successful
resize: currently trackToolUsage("ImageResizer", "image") is called immediately
after setLoading(true) which records usage even if FileReader, Image decode, or
canvas operations fail; instead, call trackToolUsage(...) inside the success
path after the image has been fully processed (e.g., after the canvas/export
step or where the existing success callback completes) and do not call it in
error handlers, referencing the setLoading, trackToolUsage, FileReader, Image
decode, and canvas/export code paths to locate where to relocate the call.
- Around line 70-71: In ImageResizer.jsx update the two Number.parseInt calls to
explicitly pass radix 10 so parsing is deterministic: change the calls that
compute width and height (currently using Number.parseInt(newWidth) and
Number.parseInt(newHeight)) to include the second argument 10 (e.g.,
Number.parseInt(newWidth, 10) and Number.parseInt(newHeight, 10)).
In `@frontend/src/components/JsonXmlConverter.jsx`:
- Around line 30-35: Both convertJsonToXml and convertXmlToJson call
trackToolUsage directly which can throw and leave loading stuck; update both
functions (convertJsonToXml and convertXmlToJson) to call analytics via the same
non-blocking "fire-and-forget" wrapper used in ExcelToPdfConverter.jsx (i.e.,
invoke trackToolUsage asynchronously without awaiting and swallow errors)
immediately after setLoading(true) so analytics cannot abort the conversion
flow. Ensure you mirror the wrapper behavior and error-swallowing logic from
ExcelToPdfConverter.jsx when replacing direct trackToolUsage calls.
In `@frontend/src/components/PdfCompressor.jsx`:
- Around line 16-24: The onFileChange handler may access state.isAuthenticated
before ensuring file/ files exist; update onFileChange to defensively read
authentication and files: first check e.target?.files and length and return
early if no files, then compute file and maxFileSize, or compute maxFileSize
using safe access like state?.isAuthenticated ?? false; ensure you reference the
same handler name onFileChange and DOM access e.target.files and keep
setSelectedFile/e.target.value clearing behavior intact.
In `@frontend/src/components/PdfMerger.jsx`:
- Around line 12-17: The code reads auth.limits and auth.uploadLimit directly
from the context object but AuthContext only exposes { state, dispatch }, so
auth.limits and auth.uploadLimit are always undefined; update the logic to read
limits from auth.state (e.g., use auth.state.limits?.maxUploadSizeBytes and
auth.state.uploadLimit) when computing maxUploadSize in PdfMerger.jsx and keep
the existing fallback (isAuthenticated ? 50MB : 10MB); ensure you reference the
existing symbols maxUploadSize, auth, useContext(AuthContext) and auth.state so
the component will honor any configurable caps exposed on state.
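The corrected read path can be factored into a pure helper; the `limits.maxUploadSizeBytes` / `uploadLimit` field names come from the comment above and should be verified against the actual `AuthContext` shape:

```javascript
// Resolve the upload cap from AuthContext's `state`, honoring configurable
// limits before falling back to the hard-coded 50MB/10MB defaults.
function resolveMaxUploadSize(state) {
  return (
    state?.limits?.maxUploadSizeBytes ??
    state?.uploadLimit ??
    (state?.isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024)
  );
}

// In the component: const { state } = useContext(AuthContext);
// const maxUploadSize = resolveMaxUploadSize(state);
```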
In `@frontend/src/components/PdfToTextConverter.jsx`:
- Around line 48-49: The analytics call trackToolUsage in PdfToTextConverter is
invoked before the axios.post completes, causing failed conversions to be
counted; move the trackToolUsage("PdfToTextConverter", "pdf") call from its
current position (near setLoading(true)) into the try block after the axios.post
successful response is received (the same place PdfToWordConverter records
usage) so it only runs on success; locate the conversion function in the
PdfToTextConverter component and place the tracking call immediately after the
await axios.post response handling.
🪄 Autofix (Beta)
❌ Autofix failed (check again to retry)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 7a7164d1-c733-46ab-b762-bb5769daa894
⛔ Files ignored due to path filters (1)
pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (138)
- .github/workflows/publish.yml, .github/workflows/trigger-supabase-cleanup.yml
- .markdownlint.json, .markdownlintignore, README.md, biome.json, package.json, pnpm-workspace.yaml, turbo.json
- backend: index.js, package.json, docs/README.md, docs/api-documentation.json, scripts/migrateTotalUsageKey.js
- backend/middleware: apiActivityTracker.js, auth.js, authorize.js, uploadLimiter.js
- backend/models: ApiActivity.js, ServiceUsage.js, ToolUsage.js, Url.js, User.js
- backend/routes: analytics.js, auth.js, cleanSupabase.js, favicon.js, imageConverter.js, jsonXmlConverter.js, keepAlive.js, officeConverter.js, passwordStrength.js, pdfConverter.js, redirectChecker.js, screenshot.js, seoTools.js, shortener.js, textConverter.js, textToPdf.js
- backend/utils: filenameSanitizer.js, ipValidation.js, pdfErrorHandler.js, supabaseCleaner.js, supabaseClient.js
- frontend: package.json, tailwind.config.js, vercel.json, vite.config.js, public/manifest.json
- frontend/src: App.jsx, index.css, index.jsx, lib/utils.js, context/AuthContext.jsx
- frontend/src/components: Base64TextConverter.jsx, CsvToJsonConverter.jsx, ExcelToPdfConverter.jsx, FaviconExtractor.jsx, Footer.jsx, HashGenerator.jsx, HtmlToMarkdownConverter.jsx, ImageBackgroundRemover.jsx, ImageCompressor.jsx, ImageCropper.jsx, ImageFlipper.jsx, ImageFormatConverter.jsx, ImageGrayscaler.jsx, ImageResizer.jsx, ImageToBase64Converter.jsx, ImageToPdfConverter.jsx, JsonFormatterValidator.jsx, JsonXmlConverter.jsx, LinkShortener.jsx, MarkdownToHtmlConverter.jsx, Navbar.jsx, PasswordGenerator.jsx, PasswordStrengthChecker.jsx, PdfCompressor.jsx, PdfMerger.jsx, PdfPageDeleter.jsx, PdfRotator.jsx, PdfSplitter.jsx, PdfToExcelConverter.jsx, PdfToTextConverter.jsx, PdfToWordConverter.jsx, PngToJpgConverter.jsx, QrCodeGenerator.jsx, QrCodeScanner.jsx, SeoTools.jsx, TextCaseConverter.jsx, TextDifferenceChecker.jsx, TextToPdfGenerator.jsx, ToolCard.jsx, UrlRedirectChecker.jsx, WebsiteScreenshotGenerator.jsx, auth/Login.jsx, auth/Register.jsx, mode-toggle.jsx, theme-provider.jsx, ui/badge.jsx, ui/button.jsx, ui/card.jsx, ui/input.jsx, ui/label.jsx, ui/textarea.jsx
- frontend/src/pages: HomePage.jsx, ImageToolsPage.jsx, PdfToolsPage.jsx, TextToolsPage.jsx, WebToolsPage.jsx
- frontend/src/utils: setAuthToken.js, useAnalytics.js, useSortedTools.js
- package: README.md, package.json, rslib.config.ts, tsconfig.json, scripts/postbuild.mjs, tests/dist.test.js
- package/src: branding.ts, cli.ts, index.ts, interactive.ts, constants/index.ts, image/index.ts, interfaces/common.ts, interfaces/index.ts, pdf/index.ts, types/common.ts, types/shims.d.ts, video/index.ts
- package/src/utils: branding.ts, config.ts, errors.ts, ffmpeg.ts, files.ts, index.ts, watermark.ts
💤 Files with no reviewable changes (2)
- .markdownlintignore
- .markdownlint.json
📜 Review details
🧰 Additional context used
🪛 markdownlint-cli2 (0.22.0)
backend/docs/README.md
[warning] 441-441: Code block style
Expected: fenced; Actual: indented
(MD046, code-block-style)
[warning] 447-447: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```diff
 - name: Bump versions
   run: |
-    # Fetch tags and get the last release tag
-    git fetch --tags
-    LAST_TAG=$(git describe --tags --abbrev=0 2>/dev/null || echo "")
-    if [ -n "$LAST_TAG" ]; then
-      BASE=$(git rev-list -n1 "$LAST_TAG")
-      echo "Using base commit $BASE from last tag $LAST_TAG"
-    else
-      BASE=$(git rev-parse --verify 4b825dc642cb6eb9a060e54bf8d69288fbee4904)
-      echo "No tags found, using empty-tree SHA $BASE"
-    fi
-    CHANGED_FILES=$(git diff --name-only "$BASE" HEAD)
-
-    # Determine which workspaces need version bumps
-    if echo "$CHANGED_FILES" | grep -q "^frontend/"; then
-      echo "Frontend changes detected, bumping frontend version"
-      cd frontend
-      npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
-      cd ..
-    fi
+    echo "Bumping NPM package version (always patch)"
+    cd package
+    npm version patch --no-git-tag-version --force
+    cd ..

-    if echo "$CHANGED_FILES" | grep -q "^backend/"; then
-      echo "Backend changes detected, bumping backend version"
-      cd backend
-      npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
-      cd ..
-    fi
+    echo "Bumping website versions (${{ github.event.inputs.release_type }})"
+    # Always bump root, frontend, and backend versions
+    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force

-    # Always bump root version
-    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version
+    cd frontend
+    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
+    cd ..

-- name: Get version
-  id: get_version
+    cd backend
+    npm version ${{ github.event.inputs.release_type }} --no-git-tag-version --force
+    cd ..

+- name: Get versions
+  id: get_versions
   run: |
-    VERSION=$(node -p "require('./package.json').version")
-    echo "version=$VERSION" >> "$GITHUB_OUTPUT"
+    WEBSITE_VERSION=$(node -p "require('./package.json').version")
+    PACKAGE_VERSION=$(node -p "require('./package/package.json').version")
+    echo "website_version=$WEBSITE_VERSION" >> "$GITHUB_OUTPUT"
+    echo "package_version=$PACKAGE_VERSION" >> "$GITHUB_OUTPUT"

-- name: Update docs version
+- name: Update docs and READMEs
   run: |
-    VERSION=${{ steps.get_version.outputs.version }}
-    export VERSION
+    W_VERSION=${{ steps.get_versions.outputs.website_version }}
+    P_VERSION=${{ steps.get_versions.outputs.package_version }}
+    export W_VERSION P_VERSION
     node - <<'EOF'
     const fs = require('fs');
-    const version = process.env.VERSION;
+    const wVersion = process.env.W_VERSION;
+    const pVersion = process.env.P_VERSION;

-    // update backend api JSON docs
+    // update backend api JSON docs with website version
     const jsonPath = 'backend/docs/api-documentation.json';
     if (fs.existsSync(jsonPath)) {
       const doc = JSON.parse(fs.readFileSync(jsonPath, 'utf8'));
       doc.info = doc.info || {};
-      doc.info.version = version;
+      doc.info.version = wVersion;
       fs.writeFileSync(jsonPath, JSON.stringify(doc, null, 2));
     }

     // update any README files with Version: lines
-    const updateReadme = (p) => {
+    const updateReadme = (p, v) => {
       if (fs.existsSync(p)) {
         let c = fs.readFileSync(p, 'utf8');
-        c = c.replace(/\*\*Version:\*\* .*/, `**Version:** ${version}`);
+        if (c.includes('**Version:**')) {
+          c = c.replace(/\*\*Version:\*\* .*/, `**Version:** ${v}`);
+        } else {
+          const h1Match = c.match(/^# .*/m);
+          if (h1Match) {
+            c = c.replace(/^# .*/m, `${h1Match[0]}\n\n**Version:** ${v}`);
+          } else {
+            c = `**Version:** ${v}\n\n${c}`;
+          }
+        }
         fs.writeFileSync(p, c);
       }
     };
-    updateReadme('backend/docs/README.md');
-    updateReadme('README.md');
-    updateReadme('frontend/README.md');
+
+    // Update READMEs with appropriate versions
+    updateReadme('package/README.md', pVersion);
+    updateReadme('README.md', wVersion);
+    updateReadme('frontend/README.md', wVersion);
+    updateReadme('backend/docs/README.md', wVersion);
```
There was a problem hiding this comment.
Enforce frontend/backend alignment before using a single website_version.
Lines 50-58 bump the root, frontend, and backend manifests independently, but Lines 63-108 only read the root version and then reuse it for the release metadata and README updates. If frontend/package.json or backend/package.json has already drifted, this job will stamp the wrong version into the release commit.
Suggested guard

```diff
 - name: Get versions
   id: get_versions
   run: |
     WEBSITE_VERSION=$(node -p "require('./package.json').version")
+    FRONTEND_VERSION=$(node -p "require('./frontend/package.json').version")
+    BACKEND_VERSION=$(node -p "require('./backend/package.json').version")
     PACKAGE_VERSION=$(node -p "require('./package/package.json').version")
+    if [ "$FRONTEND_VERSION" != "$WEBSITE_VERSION" ] || [ "$BACKEND_VERSION" != "$WEBSITE_VERSION" ]; then
+      echo "frontend/backend versions must match root website_version before releasing"
+      exit 1
+    fi
     echo "website_version=$WEBSITE_VERSION" >> "$GITHUB_OUTPUT"
     echo "package_version=$PACKAGE_VERSION" >> "$GITHUB_OUTPUT"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/publish.yml around lines 41 - 108, The job currently reads
only the root website version (WEBSITE_VERSION) but then applies it to
frontend/backend files; update the Get versions step (and subsequent env exports
used by the node script) to also read frontend and backend package.json versions
(e.g., FRONTEND_VERSION and BACKEND_VERSION) using the same node -p approach,
and in the node script pass/consume those separate env vars and call
updateReadme with the correct version for each path (use FRONTEND_VERSION for
frontend/README.md and BACKEND_VERSION for backend/docs/README.md); additionally
add a simple guard that compares WEBSITE_VERSION to FRONTEND_VERSION and
BACKEND_VERSION and fails the job if they diverge to prevent stamping
inconsistent releases.
| "/api/clean-supabase/trigger": { | ||
| "get": { | ||
| "summary": "GET /api/clean-supabase/trigger", | ||
| "responses": [ |
There was a problem hiding this comment.
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify the actual HTTP verb used in clean-supabase routes vs docs
rg -n -C3 'clean-supabase|trigger' backend/routes
rg -n -C2 '"/api/clean-supabase/trigger"|"/api/clean-supabase"' backend/docs/api-documentation.json
```

Repository: gaureshpai/dkutils
Length of output: 2671
Fix HTTP method documentation for /api/clean-supabase/trigger from GET to POST.
The route implementation at backend/routes/cleanSupabase.js:54 uses router.post("/trigger", ...), but the API documentation at backend/docs/api-documentation.json:961 incorrectly documents this endpoint as a GET method. Clients generated from this documentation will call the endpoint with the wrong HTTP verb.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/docs/api-documentation.json` around lines 960 - 963, The OpenAPI
entry for "/api/clean-supabase/trigger" is documented as "get" but the route
implementation uses router.post("/trigger") in backend/routes/cleanSupabase.js;
update the backend/docs/api-documentation.json entry by replacing the "get"
operation with "post" and change the "summary" from "GET
/api/clean-supabase/trigger" to "POST /api/clean-supabase/trigger" so the
documentation matches the router.post implementation.
```js
// Sanitize errors before logging to prevent password exposure
const sanitizedErrors = errors.array().map((error) => {
  const { value, ...sanitizedError } = error;
  if (error.path === "password") {
    return { ...sanitizedError, value: "[REDACTED]" };
  }
  return sanitizedError;
});

if (process.env.NODE_ENV !== "production") {
  console.log("Validation errors:", sanitizedErrors);
}
return res.status(400).json({ errors: errors.array() });
```
There was a problem hiding this comment.
Use the sanitized validation payload in the response.
At Line 39, errors.array() is returned even though sanitizedErrors is already prepared. That reintroduces sensitive input values in the response body.
🔧 Proposed fix

```diff
- return res.status(400).json({ errors: errors.array() });
+ return res.status(400).json({ errors: sanitizedErrors });
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
// Sanitize errors before logging to prevent password exposure
const sanitizedErrors = errors.array().map((error) => {
  const { value, ...sanitizedError } = error;
  if (error.path === "password") {
    return { ...sanitizedError, value: "[REDACTED]" };
  }
  return sanitizedError;
});

if (process.env.NODE_ENV !== "production") {
  console.log("Validation errors:", sanitizedErrors);
}
return res.status(400).json({ errors: sanitizedErrors });
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/auth.js` around lines 27 - 39, The response currently returns
raw validation details (errors.array()) which can expose sensitive fields;
update the handler to return the sanitizedErrors variable instead by replacing
the returned payload in the res.status(400).json call so it sends { errors:
sanitizedErrors } (keep existing logging of sanitizedErrors and the sanitization
logic that redacts password). Locate the block where sanitizedErrors is built
and the return statement that uses errors.array() and change only the response
to use sanitizedErrors.
```js
const response = await axios.get(fileUrl, {
  responseType: "arraybuffer",
  maxRedirects: 0, // Disable redirects to prevent SSRF chains
  timeout: 5000,
});
```
There was a problem hiding this comment.
Don't treat ordinary 3xx responses as fatal here.
Both the site fetch and the favicon downloads disable redirects, and the page fetch only accepts 2xx. That breaks common flows like http→https, apex→www, and CDN-hosted favicon redirects, so valid sites will now fail with a generic extraction error. Follow redirects manually with the same validation you already apply to the initial URL.
Also applies to: 107-111
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/favicon.js` around lines 48 - 52, The favicon fetch currently
disables redirects (maxRedirects: 0) and treats non-2xx as fatal; change this to
follow redirects manually: when axios.get(fileUrl, {responseType: "arraybuffer",
maxRedirects: 0, ...}) returns a 3xx, read the Location header, resolve relative
URLs against fileUrl, validate the redirected URL with the same validation used
for the initial URL, and repeat the fetch up to a small redirect limit before
failing; apply the same manual-redirect logic to the other favicon/download
axios block around lines 107-111 so http→https, apex→www and CDN redirects
succeed while preserving your SSRF/host validation.
```js
try {
  await Promise.all(files.map((file) => merger.add(file.buffer)));
} catch (err) {
```
There was a problem hiding this comment.
Preserve upload order when building the merged PDF.
These merger.add() calls run concurrently against the same merger instance. If they resolve out of order, the merged document can come back in a different order than the user uploaded.
💡 Suggested change

```diff
 try {
-  await Promise.all(files.map((file) => merger.add(file.buffer)));
+  for (const file of files) {
+    await merger.add(file.buffer);
+  }
 } catch (err) {
   throw new Error(
     "Failed to merge PDFs. One or more files may be corrupted or password-protected.",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/pdfConverter.js` around lines 45 - 47, The current code calls
merger.add concurrently via Promise.all(files.map(...)) which can merge pages
out of upload order; change this to add buffers to the same merger sequentially
in the original files order (e.g., iterate files in order and await
merger.add(file.buffer) inside a for/for-of loop or use files.reduce with await)
so each merger.add completes before the next begins, preserving upload order for
the merged PDF.
```js
if (file.size > maxSize) {
  toast.error(
    `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
```
There was a problem hiding this comment.
Keep the oversize toast auth-aware.
This branch still tells authenticated users to "Login for a higher limit (50MB)" even when maxSize is already the authenticated cap.
💬 Suggested fix

```diff
 if (file.size > maxSize) {
-  toast.error(
-    `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
-  );
+  const message = isAuthenticated
+    ? `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB.`
+    : `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`;
+  toast.error(message);
   hasInvalidFile = true;
   continue;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ImageFormatConverter.jsx` around lines 48 - 50, The
oversize toast currently always suggests "Login for a higher limit", which is
incorrect for authenticated users; update the toast in ImageFormatConverter.jsx
(the file/size check where file, maxSize and toast.error are used—likely inside
the file upload handler such as handleFileChange/handleFileUpload) to be
auth-aware: detect the authentication flag used in the component (e.g.,
isAuthenticated or user) and, when the user is authenticated (or maxSize equals
the authenticated cap constant), show a different message that does not prompt
to login (e.g., mention the current authenticated limit or suggest upgrading the
account) and only include the "Login for a higher limit (50MB)" suggestion when
the user is not authenticated.
```diff
 const onFileChange = (e) => {
   const file = e.target.files[0];
   const maxFileSize = isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024;

-  if (file && file.type.startsWith("image/")) {
+  if (file?.type.startsWith("image/")) {
     if (file.size > maxFileSize) {
       toast.error(
         `File too large: ${file.name}. Maximum size is ${maxFileSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
       );
       setSelectedFile(null);
       e.target.value = null;
     } else {
       setSelectedFile(file);
     }
   } else {
     setSelectedFile(null);
     toast.error("Please select an image file.");
     e.target.value = "";
   }
 };
```
There was a problem hiding this comment.
Avoid showing an error when file picker is canceled.
When no file is selected, this currently falls into the non-image branch and shows "Please select an image file." even though the user just canceled selection.
Suggested fix

```diff
 const onFileChange = (e) => {
   const file = e.target.files[0];
+  if (!file) {
+    setSelectedFile(null);
+    return;
+  }
   const maxFileSize = isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024;

-  if (file?.type.startsWith("image/")) {
+  if (file.type.startsWith("image/")) {
```

🤖 Prompt for AI Agents
+ if (file.type.startsWith("image/")) {🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ImageGrayscaler.jsx` around lines 16 - 34, The
onFileChange handler currently treats a canceled file picker as a non-image and
shows an error; update onFileChange to first check for no file selected (e.g.,
if (!file) { setSelectedFile(null); e.target.value = null; return; }) before
checking file.type so cancellation does not trigger toast.error. Keep the
existing logic for image type and size checks (references: onFileChange,
setSelectedFile, isAuthenticated, toast, e.target.files) unchanged apart from
this early-return guard.
```javascript
const copyToClipboard = (textToCopy) => {
  navigator.clipboard.writeText(textToCopy);
  toast.success("Copied to clipboard!");
};
```
🧩 Analysis chain
🌐 Web query:
Does navigator.clipboard.writeText require secure context/user gesture, and does it reject on permission errors? What is the recommended React error-handling pattern?
💡 Result:
Yes, navigator.clipboard.writeText requires a secure context (HTTPS). It typically requires a user gesture (transient user activation) in most browsers (Safari and Firefox strictly; Chromium allows either user activation or an explicit "clipboard-write" permission grant). The Promise rejects on permission errors (e.g., NotAllowedError, "Document is not focused"), insecure contexts, or other failures.

Recommended React error-handling pattern: use async/await in an event handler (e.g., onClick) with try/catch, provide user feedback (e.g., a toast or success state), and optionally fall back to document.execCommand('copy') for legacy support. Use a custom hook for reusability. Example hook:

```javascript
const useClipboard = () => {
  const copy = async (text) => {
    try {
      await navigator.clipboard.writeText(text);
      // Success feedback
      return true;
    } catch (err) {
      console.error(err);
      // Fallback or user prompt
      return false;
    }
  };
  return copy;
};
```

Usage: call in an onClick handler, show "Copied!" on success and "Failed to copy" on error.
Citations:
- 1: https://developer.mozilla.org/en-US/docs/Web/API/Clipboard/writeText
- 2: https://developer.mozilla.org/en-US/docs/Web/API/Clipboard
- 3: Interoperability issue: navigator.clipboard.write() and navigator.clipboard.writeText() user gesture requirement (w3c/clipboard-apis#182)
- 4: https://developer.mozilla.org/en-US/docs/Web/API/Clipboard_API
- 5: https://jsguides.dev/tutorials/browser-clipboard/
- 6: https://react.wiki/hooks/copy-to-clipboard/
- 7: https://www.js-craft.io/blog/react-copy-text-clipboard/
🏁 Script executed:

```shell
# Locate and examine the JsonFormatterValidator.jsx file
fd -t f JsonFormatterValidator.jsx
```

Repository: gaureshpai/dkutils
Length of output: 112

🏁 Script executed:

```shell
# Read the JsonFormatterValidator.jsx file to examine lines 45-48 and surrounding context
cat -n frontend/src/components/JsonFormatterValidator.jsx | head -70
```

Repository: gaureshpai/dkutils
Length of output: 2546

🏁 Script executed:

```shell
# Search for all calls to copyToClipboard in the file to understand usage context
cat -n frontend/src/components/JsonFormatterValidator.jsx | grep -i "clipboard"
```

Repository: gaureshpai/dkutils
Length of output: 380

🏁 Script executed:

```shell
# Check entire file for all copyToClipboard references and confirm this is the only definition
rg "copyToClipboard" frontend/src/components/JsonFormatterValidator.jsx
```

Repository: gaureshpai/dkutils
Length of output: 158
Handle clipboard write failures before showing success toast.
The writeText call on line 46 is asynchronous and can fail due to permission denial, an insecure context, or the document not being focused. The current code shows a success toast immediately without awaiting the Promise, giving users false-positive feedback when the clipboard write fails.
Proposed change

```diff
-  const copyToClipboard = (textToCopy) => {
-    navigator.clipboard.writeText(textToCopy);
-    toast.success("Copied to clipboard!");
-  };
+  const copyToClipboard = async (textToCopy) => {
+    try {
+      await navigator.clipboard.writeText(textToCopy);
+      toast.success("Copied to clipboard!");
+    } catch (error) {
+      console.error("Failed to copy to clipboard:", error);
+      toast.error("Failed to copy to clipboard. Please try again.");
+    }
+  };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/JsonFormatterValidator.jsx` around lines 45 - 48, The
copyToClipboard function currently calls
navigator.clipboard.writeText(textToCopy) without awaiting or handling failures,
then immediately shows toast.success; change copyToClipboard to await the
Promise returned by navigator.clipboard.writeText (or use .then/.catch), wrap
the call in a try/catch, call toast.success only on success, and call
toast.error (or a suitable error toast) in the catch block; reference the
copyToClipboard function, navigator.clipboard.writeText, toast.success and
toast.error when making the fix so failures (permission denied, insecure
context, etc.) are handled and users see correct feedback.
```diff
 const checkStrength = useCallback(async (pwd) => {
   if (pwd.length === 0) {
     setStrengthScore(0);
     setFeedback([]);
     return;
   }

   setLoading(true);
   try {
-    const res = await axios.post(
-      `${import.meta.env.VITE_API_BASE_URL}/api/password-strength`,
-      { password: pwd },
-    );
+    const res = await axios.post(`${import.meta.env.VITE_API_BASE_URL}/api/password-strength`, {
+      password: pwd,
+    });
     setStrengthScore(res.data.score);
     setFeedback(res.data.feedback);
   } catch (err) {
     console.error("Error checking password strength:", err);
     setStrengthScore(0);
     setFeedback(["Error checking strength."]);
     toast.error("Failed to check password strength.");
   } finally {
     setLoading(false);
   }
 }, []);
```
🧩 Analysis chain
🏁 Script executed:

```shell
fd -type f "PasswordStrengthChecker.jsx"
```

Repository: gaureshpai/dkutils
Length of output: 232

🏁 Script executed:

```shell
fd "PasswordStrengthChecker.jsx"
```

Repository: gaureshpai/dkutils
Length of output: 113

🏁 Script executed:

```shell
cat -n frontend/src/components/PasswordStrengthChecker.jsx
```

Repository: gaureshpai/dkutils
Length of output: 4357

🏁 Script executed:

```shell
rg "CancelToken\|cancelToken\|abortController\|signal:" frontend/src --type jsx --type js -i
```

Repository: gaureshpai/dkutils
Length of output: 89

🏁 Script executed:

```shell
rg "CancelToken|cancelToken|abortController|signal:" frontend/src -i
```

Repository: gaureshpai/dkutils
Length of output: 44
Prevent stale API responses from overwriting newer password checks.
Lines 25–26 update UI state without guarding request order. Even with the 500ms debounce, responses can arrive out-of-order—for example, if a user types quickly, an earlier request can complete after a newer one, causing stale password strength data to overwrite the latest result.
Proposed fix

```diff
   const [loading, setLoading] = useState(false);
   const { trackToolUsage } = useAnalytics();
+  const latestRequestIdRef = useRef(0);

   const checkStrength = useCallback(async (pwd) => {
     if (pwd.length === 0) {
       setStrengthScore(0);
       setFeedback([]);
       return;
     }

+    const requestId = ++latestRequestIdRef.current;
     setLoading(true);
     try {
       const res = await axios.post(`${import.meta.env.VITE_API_BASE_URL}/api/password-strength`, {
         password: pwd,
       });
+      if (requestId !== latestRequestIdRef.current) return;
       setStrengthScore(res.data.score);
       setFeedback(res.data.feedback);
     } catch (err) {
+      if (requestId !== latestRequestIdRef.current) return;
       console.error("Error checking password strength:", err);
       setStrengthScore(0);
       setFeedback(["Error checking strength."]);
       toast.error("Failed to check password strength.");
     } finally {
-      setLoading(false);
+      if (requestId === latestRequestIdRef.current) setLoading(false);
     }
   }, []);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```javascript
const [loading, setLoading] = useState(false);
const { trackToolUsage } = useAnalytics();
const latestRequestIdRef = useRef(0);

const checkStrength = useCallback(async (pwd) => {
  if (pwd.length === 0) {
    setStrengthScore(0);
    setFeedback([]);
    return;
  }

  const requestId = ++latestRequestIdRef.current;
  setLoading(true);
  try {
    const res = await axios.post(`${import.meta.env.VITE_API_BASE_URL}/api/password-strength`, {
      password: pwd,
    });
    if (requestId !== latestRequestIdRef.current) return;
    setStrengthScore(res.data.score);
    setFeedback(res.data.feedback);
  } catch (err) {
    if (requestId !== latestRequestIdRef.current) return;
    console.error("Error checking password strength:", err);
    setStrengthScore(0);
    setFeedback(["Error checking strength."]);
    toast.error("Failed to check password strength.");
  } finally {
    if (requestId === latestRequestIdRef.current) setLoading(false);
  }
}, []);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/PasswordStrengthChecker.jsx` around lines 13 - 35,
The checkStrength function can be overwritten by out-of-order responses; to fix
it, track and validate request identity before applying results (e.g., add a
request counter or use AbortController stored in a ref). On each invocation of
checkStrength, increment a requestId (or abort previous controller) and include
it with the axios call; when the response/error resolves, only call
setStrengthScore, setFeedback, toast, and setLoading if the response's requestId
matches the latestRequestId (or if the request wasn't aborted). Update the
checkStrength closure to read/write the ref so stale responses are ignored and
the UI state is only updated for the latest check.
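The request-id guard described above can also be expressed framework-free. The sketch below is illustrative (the `makeLatest` helper name is not from the PR): it wraps an async function so only the most recent call's result is handed to `apply`; in the component, `apply` would perform the `setStrengthScore`/`setFeedback` updates.

```javascript
// Minimal "latest request wins" wrapper, independent of React. Each call gets a
// monotonically increasing id; a result is applied only if no newer call started.
function makeLatest(fn, apply) {
  let latest = 0;
  return async (...args) => {
    const id = ++latest;
    const result = await fn(...args);
    if (id === latest) apply(result); // stale results are silently dropped
  };
}
```

An AbortController achieves the same goal by cancelling the older request outright; the counter approach merely ignores its result, which is simpler when the HTTP client does not expose cancellation.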
```javascript
const debouncedCheckStrengthRef = useRef(
  ((delay) => {
    let timeout;
    return (...args) => {
      clearTimeout(timeout);
      timeout = setTimeout(() => checkStrengthRef.current(...args), delay);
    };
  })(500),
);
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, let's locate and examine the PasswordStrengthChecker.jsx file
find . -name "PasswordStrengthChecker.jsx" -type f
```

Repository: gaureshpai/dkutils
Length of output: 115

🏁 Script executed:

```shell
# If found, let's read the relevant lines
cat -n frontend/src/components/PasswordStrengthChecker.jsx | head -80
```

Repository: gaureshpai/dkutils
Length of output: 2845
Add debounce cleanup on unmount to avoid delayed calls after teardown.
Line 48 schedules a timeout, but there is no cleanup effect to clear pending timers when the component unmounts. Since checkStrengthRef.current performs state updates, pending timeouts that execute after unmount will trigger React warnings and cause unnecessary operations.
Proposed fix

```diff
+  const debounceTimeoutRef = useRef(null);
   const debouncedCheckStrengthRef = useRef(
-    ((delay) => {
-      let timeout;
-      return (...args) => {
-        clearTimeout(timeout);
-        timeout = setTimeout(() => checkStrengthRef.current(...args), delay);
-      };
-    })(500),
+    (...args) => {
+      clearTimeout(debounceTimeoutRef.current);
+      debounceTimeoutRef.current = setTimeout(() => checkStrengthRef.current(...args), 500);
+    },
   );

   useEffect(() => {
     debouncedCheckStrengthRef.current(password);
   }, [password]);
+
+  useEffect(() => {
+    return () => clearTimeout(debounceTimeoutRef.current);
+  }, []);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```javascript
const debounceTimeoutRef = useRef(null);
const debouncedCheckStrengthRef = useRef(
  (...args) => {
    clearTimeout(debounceTimeoutRef.current);
    debounceTimeoutRef.current = setTimeout(() => checkStrengthRef.current(...args), 500);
  },
);

useEffect(() => {
  debouncedCheckStrengthRef.current(password);
}, [password]);

useEffect(() => {
  return () => clearTimeout(debounceTimeoutRef.current);
}, []);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/PasswordStrengthChecker.jsx` around lines 43 - 51,
The debounce returned in debouncedCheckStrengthRef can schedule a timeout that
survives unmount; modify the debounce so its ref exposes a cancel/clear method
(e.g., assign the timeout id to debouncedCheckStrengthRef.current.timeout or
provide debouncedCheckStrengthRef.current.cancel) and then add a useEffect with
a cleanup that calls that cancel/clear on unmount to clear any pending timer;
update the debounce factory around debouncedCheckStrengthRef and add a
useEffect(() => () => { /* clear via debouncedCheckStrengthRef.current.cancel or
clearTimeout(debouncedCheckStrengthRef.current.timeout) */ }) to prevent delayed
calls after teardown while keeping checkStrengthRef usage unchanged.
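The cleanup the prompt describes is easier if the debounce helper itself exposes a cancel method. A minimal sketch, with a hypothetical `debounceWithCancel` name (not from the PR):

```javascript
// Debounce that exposes cancel(), so a React cleanup effect
// (e.g. useEffect(() => debounced.cancel, [])) can clear the pending timer on unmount.
function debounceWithCancel(fn, delay) {
  let timeout = null;
  const debounced = (...args) => {
    clearTimeout(timeout);
    timeout = setTimeout(() => fn(...args), delay);
  };
  debounced.cancel = () => clearTimeout(timeout);
  return debounced;
}
```

Keeping the cancel handle on the debounced function avoids threading a separate timeout ref through the component.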
Code Review
This pull request introduces significant changes to the backend and frontend, including a migration to Jimp for image processing, improved SSRF protection in the redirect checker, and a new database migration script for usage keys. While the changes improve functionality and security, I have identified concerns regarding the efficiency of performing index management during server startup and the potential memory impact of parallel image processing.
```javascript
try {
  await mongoose.connect(process.env.MONGO_URI);
  console.log("MongoDB connected!");
  await migrateTotalUsageKey();
```
Calling migrateTotalUsageKey on every server startup is inefficient and potentially risky, especially since it performs index management (dropIndex, createIndex) directly on the database. These operations should be handled by a dedicated migration system or run as a one-time task. Furthermore, index operations during application startup can lead to increased boot times and potential locking issues in production environments.
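One way to address this is to gate the migration behind an explicit opt-in, so index operations run only when deliberately requested (e.g., as a one-off deploy task). A sketch, where the `RUN_MIGRATIONS` variable name is illustrative and not from the PR, and `env` is injectable for testing:

```javascript
// Run the migration only when explicitly enabled, instead of on every boot.
// `migrate` stands in for migrateTotalUsageKey.
async function maybeMigrate(migrate, env = process.env) {
  if (env.RUN_MIGRATIONS !== "true") {
    console.log("Skipping migrations (RUN_MIGRATIONS not set)");
    return false;
  }
  await migrate();
  return true;
}
```

Startup would then call `await maybeMigrate(migrateTotalUsageKey)`, and deploys that need the index change set the flag once.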
```javascript
  });

  Promise.all(conversionPromises)
```
Processing multiple images in parallel using Promise.all with Jimp.read can lead to high memory consumption or exhaustion, especially when handling large files. Jimp is a pure JavaScript implementation and is significantly more memory-intensive than native alternatives like sharp. Consider using a queue with limited concurrency or sequential processing to ensure the server remains stable under load.
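A dependency-free way to cap concurrency, as an alternative to the plain Promise.all: the sketch below assumes each conversion is an async function (the `mapWithConcurrency` helper name is illustrative, not from the PR).

```javascript
// Process items with at most `limit` workers running concurrently.
// Results keep the original order, like Promise.all.
async function mapWithConcurrency(items, limit, worker) {
  const results = new Array(items.length);
  let next = 0;
  async function run() {
    while (next < items.length) {
      const i = next++; // claiming the index is synchronous, so no two workers collide
      results[i] = await worker(items[i], i);
    }
  }
  const workers = Array.from({ length: Math.min(limit, items.length) }, run);
  await Promise.all(workers);
  return results;
}
```

Usage might look like `await mapWithConcurrency(files, 2, (f) => convertWithJimp(f))`, where `convertWithJimp` is a stand-in for the route's Jimp conversion; a limit of 2 to 4 keeps peak memory bounded while still overlapping I/O.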
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
♻️ Duplicate comments (22)
frontend/src/components/ImageCropper.jsx (1)
112-136: ⚠️ Potential issue | 🟡 Minor
Derive the filename from the actual blob MIME.

canvas.toBlob(..., mimeType) may return a PNG blob when the requested format isn't supported, but line 135 still derives the extension from the requested mimeType. That reintroduces the mislabeled-download bug for formats like GIF/BMP/SVG uploads.

💡 Proposed fix

```diff
-      const extension = mimeType.split("/")[1] || "png";
-      handleDownload(objectUrl, `dkutils-cropped-image-${Date.now()}.${extension}`);
+      const actualMimeType = blob.type || "image/png";
+      const extension = actualMimeType.split("/")[1] || "png";
+      handleDownload(objectUrl, `dkutils-cropped-image-${Date.now()}.${extension}`);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageCropper.jsx` around lines 112 - 136, the filename extension is currently derived from the requested mimeType, but canvas.toBlob may return a different actual blob type; update the canvas.toBlob callback (the anonymous function passed to canvas.toBlob in ImageCropper.jsx) to derive the extension from blob.type (falling back to originalMimeType or "image/png" if blob.type is empty) before calling handleDownload, and use that derived extension when building the download name (instead of using the mimeType variable).

frontend/src/components/PasswordStrengthChecker.jsx (1)
13-60: ⚠️ Potential issue | 🟠 Major
Guard debounced checks against stale responses and unmount-time updates.

Older requests can resolve after newer input and overwrite the latest strength state; also, the pending debounce timer is not cleared on unmount.

Proposed fix

```diff
   const [loading, setLoading] = useState(false);
   const { trackToolUsage } = useAnalytics();
+  const latestRequestIdRef = useRef(0);
+  const debounceTimeoutRef = useRef(null);

   const checkStrength = useCallback(async (pwd) => {
     if (pwd.length === 0) {
       setStrengthScore(0);
       setFeedback([]);
       return;
     }

+    const requestId = ++latestRequestIdRef.current;
     setLoading(true);
     try {
       const res = await axios.post(`${import.meta.env.VITE_API_BASE_URL}/api/password-strength`, {
         password: pwd,
       });
+      if (requestId !== latestRequestIdRef.current) return;
       setStrengthScore(res.data.score);
       setFeedback(res.data.feedback);
     } catch (err) {
+      if (requestId !== latestRequestIdRef.current) return;
       console.error("Error checking password strength:", err);
       setStrengthScore(0);
       setFeedback(["Error checking strength."]);
       toast.error("Failed to check password strength.");
     } finally {
-      setLoading(false);
+      if (requestId === latestRequestIdRef.current) setLoading(false);
     }
   }, []);

   const debouncedCheckStrengthRef = useRef(
-    ((delay) => {
-      let timeout;
-      return (...args) => {
-        clearTimeout(timeout);
-        timeout = setTimeout(() => checkStrengthRef.current(...args), delay);
-      };
-    })(500),
+    (...args) => {
+      clearTimeout(debounceTimeoutRef.current);
+      debounceTimeoutRef.current = setTimeout(() => checkStrengthRef.current(...args), 500);
+    },
   );
+
+  useEffect(() => {
+    return () => clearTimeout(debounceTimeoutRef.current);
+  }, []);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PasswordStrengthChecker.jsx` around lines 13 - 60, the debounced password checks can overwrite newer results and keep timers running after unmount; modify checkStrength/checkStrengthRef and debouncedCheckStrengthRef to (1) track and ignore stale responses by using a monotonically increasing requestId (increment before each call and include it with the response handling inside checkStrength) or by using an AbortController/axios cancel token to cancel prior requests, and (2) store the debounce timeout in a ref so you can clearTimeout(timeoutRef.current) in a cleanup function returned from the useEffect that uses debouncedCheckStrengthRef (and also clear any in-flight request via the AbortController or mark the latest requestId) to avoid state updates after unmount. Ensure you update references in checkStrength, checkStrengthRef, debouncedCheckStrengthRef, and the password-effect useEffect accordingly.

backend/utils/pdfErrorHandler.js (1)
125-130: ⚠️ Potential issue | 🟠 Major
Normalize hyphenated range tokens before validation.

Line 126 rejects inputs like "1 - 3" even though the frontend accepts that format, so users can pass client validation and still get a backend error.

Proposed fix

```diff
     if (range.includes("-")) {
-      if (!/^\d+-\d+$/.test(range)) {
+      const [rawStart, rawEnd] = range.split("-").map((n) => n.trim());
+      if (!/^\d+$/.test(rawStart) || !/^\d+$/.test(rawEnd)) {
         throw new Error(`Invalid page range: ${range}. Please use numbers only.`);
       }
-      const [start, end] = range.split("-").map((n) => Number.parseInt(n, 10));
+      const start = Number.parseInt(rawStart, 10);
+      const end = Number.parseInt(rawEnd, 10);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/utils/pdfErrorHandler.js` around lines 125 - 130, normalize hyphenated page-range tokens before validation by trimming spaces around the hyphen (e.g., replace(/\s*-\s*/, '-') on the range variable) so inputs like "1 - 3" become "1-3" prior to the /^\d+-\d+$/ test and subsequent split into [start, end]; update the logic around the range variable used in the range.includes("-") branch (the same range passed into the regex test, split and Number.parseInt calls) to operate on the normalized token.

frontend/src/components/ImageBackgroundRemover.jsx (1)
37-49: ⚠️ Potential issue | 🟡 Minor
Track successful removals, not attempts.

trackToolUsage on line 39 executes before removeBackground() completes on line 40. If the background removal fails, analytics will still record the attempt as a successful usage, inflating metrics.

🔧 Move tracking after successful processing

```diff
     setLoading(true);
     try {
-      trackToolUsage("ImageBackgroundRemover", "image");
       const blob = await removeBackground(selectedFile);
       const objectUrl = URL.createObjectURL(blob);
       const link = document.createElement("a");
       link.href = objectUrl;
       link.download = `${selectedFile.name.replace(/\.[^.]+$/, "")}-no-bg.png`;
       document.body.appendChild(link);
       link.click();
       document.body.removeChild(link);
       URL.revokeObjectURL(objectUrl);
+      trackToolUsage("ImageBackgroundRemover", "image");
       toast.success("Background removed successfully.");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageBackgroundRemover.jsx` around lines 37 - 49, trackToolUsage is called before removeBackground completes, so failures are recorded as successful uses; move the tracking call to after removeBackground resolves and after the download/link logic (i.e., only after the blob is created and the operation succeeds). Update the ImageBackgroundRemover flow to call trackToolUsage("ImageBackgroundRemover","image") only inside the try block after await removeBackground(selectedFile) completes (or immediately after the download is initiated), and do not call it in the error path—optionally add a separate failure/error tracking call in the catch block if needed.

backend/docs/api-documentation.json (1)
960-984: ⚠️ Potential issue | 🟠 Major
HTTP method mismatch: documentation shows GET but route uses POST.

The /api/clean-supabase/trigger endpoint is documented as a GET method (lines 961-962) but the route implementation uses router.post("/trigger", ...). API clients generated from this documentation will fail with method-not-allowed errors.

🔧 Proposed fix

```diff
   "/api/clean-supabase/trigger": {
-    "get": {
-      "summary": "GET /api/clean-supabase/trigger",
+    "post": {
+      "summary": "POST /api/clean-supabase/trigger",
       "responses": [
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/docs/api-documentation.json` around lines 960 - 984, the OpenAPI entry incorrectly documents "/api/clean-supabase/trigger" as a GET; update the API documentation to match the implementation by changing the operation from "get" to "post" and adjusting the "summary" to "POST /api/clean-supabase/trigger" so it aligns with the route defined as router.post("/trigger", ...); keep the existing response schemas but ensure any generated clients will use POST for the "/api/clean-supabase/trigger" path.

backend/routes/auth.js (1)
27-39: ⚠️ Potential issue | 🟠 Major
Return sanitizedErrors in the 400 response, not raw errors.array().

sanitizedErrors is constructed but not used in the response. Raw validator output may include sensitive submitted values.

Suggested fix

```diff
-      return res.status(400).json({ errors: errors.array() });
+      return res.status(400).json({ errors: sanitizedErrors });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/auth.js` around lines 27 - 39, the response currently returns the raw validator output (errors.array()), exposing submitted values; update the handler to return the already-built sanitizedErrors instead: use sanitizedErrors in the res.status(400).json(...) call (and keep using sanitizedErrors for logging), ensuring any reference to errors.array() for the response is replaced with sanitizedErrors so password values are redacted.

backend/docs/README.md (1)
441-450: ⚠️ Potential issue | 🟡 Minor
Fix malformed fenced JSON block in the 401 response example.

This block still mixes indentation/fencing and includes an extra fence, which can break markdown rendering/linting. The stray fences should be replaced with a single well-formed block:

```json
{
  "msg": "Invalid or missing cron secret"
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/docs/README.md` around lines 441 - 450, fix the malformed fenced JSON block in the 401 response example by replacing the stray/duplicated fences and inconsistent indentation with a single proper JSON fence: open with ```json, include a correctly indented object containing "msg": "Invalid or missing cron secret", and close with ```; locate the example around the 401 response that currently contains the extra fence and the broken indentation and update that block to a single well-formed fenced code block.

backend/routes/textConverter.js (1)
17-18: ⚠️ Potential issue | 🟠 Major
Validate Base64 strictly before decode mode.

Buffer.from(text, "base64") can decode malformed input permissively. Reject invalid payloads before decoding.

Suggested fix

```diff
 router.post("/base64-text", (req, res) => {
   const { text, type } = req.body;
+  const isValidBase64 = (input) => {
+    const normalized = input.trim();
+    if (!normalized) return false;
+    const padded = normalized.padEnd(Math.ceil(normalized.length / 4) * 4, "=");
+    try {
+      const reEncoded = Buffer.from(padded, "base64").toString("base64").replace(/=+$/, "");
+      return reEncoded === normalized.replace(/=+$/, "");
+    } catch {
+      return false;
+    }
+  };
   if (typeof text !== "string" || text.length === 0 || !type) {
     return res.status(400).json({ msg: "Text and type (encode/decode) are required." });
   }
   try {
     let result;
     if (type === "encode") {
       result = Buffer.from(text).toString("base64");
     } else if (type === "decode") {
+      if (!isValidBase64(text)) {
+        return res.status(400).json({ msg: "Invalid Base64 input." });
+      }
       result = Buffer.from(text, "base64").toString("utf8");
     } else {
       return res.status(400).json({ msg: "Invalid type. Must be 'encode' or 'decode'." });
     }
```

In current Node.js docs, does Buffer.from(input, "base64") reject malformed Base64 or decode permissively, and what strict validation approach is recommended before decoding in an API?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/textConverter.js` around lines 17 - 18, the decode branch (when type === "decode") currently calls Buffer.from(text, "base64"), which can permissively decode malformed input; add strict Base64 validation before decoding: check the input matches a Base64 pattern (only A-Z, a-z, 0-9, +, / with optional 0–2 '=' padding), ensure the length (after trimming whitespace) is a multiple of 4, and reject invalid payloads with a 400 error response instead of calling Buffer.from; update the logic around the result = Buffer.from(text, "base64").toString("utf8") line to only run after validation.

backend/routes/keepAlive.js (1)
17-43: ⚠️ Potential issue | 🔴 Critical
Protect this internal write endpoint with an explicit secret check.

The handler still performs a storage write without validating a cron/shared secret first. For an internal-only endpoint, reject unauthorized requests before calling Supabase.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/keepAlive.js` around lines 17 - 43, the keep-alive POST handler (router.post in keepAlive.js) currently performs supabase.storage.from(...).upload(...) without authenticating the caller; add an explicit secret check at the top of the async handler (before calling generateRandomString or supabase) that reads a shared secret from a configured source (e.g., process.env.KEEP_ALIVE_SECRET) and compares it to a secret provided by the request (header like x-keep-alive-secret or a query/body field), and immediately return res.status(401).json(...) when missing or mismatched; ensure the rest of the code (generateRandomString, fileName generation and supabase.storage.upload) only runs after the secret passes.

frontend/src/components/ImageFlipper.jsx (1)
17-21: ⚠️ Potential issue | 🟡 Minor
Guard the canceled file-picker path.

If e.target.files[0] is missing, this still falls through to the non-image branch and shows "Please select an image file." even when the user just dismissed the chooser.

Also applies to: 31-35

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageFlipper.jsx` around lines 17 - 21, onFileChange currently reads e.target.files[0] and treats a missing file as a non-image, causing the "Please select an image file." message when the user cancels the picker; update onFileChange to first guard for a canceled selection by checking if e?.target?.files is falsy or has length 0 (or if file is undefined) and simply return early (with no error shown) in that case, then continue with the existing image-type and size checks; apply the same early-return guard to the other handler block around lines 31-35 to avoid showing the error when the picker was dismissed.

frontend/src/components/ImageGrayscaler.jsx (1)
16-20: ⚠️ Potential issue | 🟡 Minor
Guard the canceled file-picker path.

If e.target.files[0] is missing, this still falls into the non-image branch and shows an error toast for a canceled selection.

Also applies to: 30-34

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageGrayscaler.jsx` around lines 16 - 20, the onFileChange handler currently reads e.target.files[0] and proceeds into the non-image error branch when the user cancels the picker; update onFileChange to first check if a file exists (guarding the result of e.target.files[0]) and return early without showing an error toast when no file is selected. Locate the onFileChange function and related checks using symbols like onFileChange, maxFileSize and isAuthenticated, add a null/undefined guard for file before checking file.type or size, and apply the same guard to the other similar block (lines ~30-34) to avoid false "not an image" errors on cancellation.

frontend/src/components/FaviconExtractor.jsx (1)
13-32: ⚠️ Potential issue | 🟡 Minor

Analytics tracked before operation success remains unresolved.
The past review noted that `trackToolUsage` is called on line 16 before the async `axios.post` completes. If the request fails, usage is still tracked, inflating success metrics. Move tracking after successful completion.

🔧 Proposed fix
```diff
 const handleSubmit = async (e) => {
   e.preventDefault();
   setLoading(true);
-  trackToolUsage("FaviconExtractor", "web");
   setError(null);
   try {
     const res = await axios.post(`${import.meta.env.VITE_API_BASE_URL}/api/favicon`, { url });
     const { path, originalname } = res.data;
+    trackToolUsage("FaviconExtractor", "web");
     handleDownload(path, originalname);
   } catch (err) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/FaviconExtractor.jsx` around lines 13 - 32, In handleSubmit move the call to trackToolUsage("FaviconExtractor", "web") so it runs only after the axios.post completes successfully (i.e., after receiving res.data and before or after calling handleDownload(path, originalname)); currently trackToolUsage is invoked before the async request which records failures as usage — update handleSubmit to call trackToolUsage inside the try block after extracting path/originalname from res.data and before setting success state or invoking handleDownload, leaving error handling in the catch/finally unchanged.

backend/index.js (1)
27-35: ⚠️ Potential issue | 🟠 Major

Supabase connection failure is still silently swallowed.

The past review flagged that `testSupabaseConnection()` catches errors but doesn't re-throw them, allowing the server to start even when Supabase is unavailable. This issue appears unresolved: the function still logs the error and returns normally (lines 33-34), so `startServer()` will succeed and `/health` will report healthy even if Supabase is unreachable.

🔧 Proposed fix
```diff
 const testSupabaseConnection = async () => {
   try {
     const { data: bucket, error: getBucketError } = await supabase.storage.getBucket("utilityhub");
     if (getBucketError) throw getBucketError;
     console.log(`Supabase Storage connected!\nBucket '${bucket.name}' found.`);
   } catch (error) {
     console.error("Supabase Storage connection failed:", error.message);
+    throw error;
   }
 };
```
Verify each finding against the current code and only fix it if needed. In `@backend/index.js` around lines 27 - 35, testSupabaseConnection currently swallows errors (logs and returns) so startServer and /health can report healthy even when Supabase is down; change testSupabaseConnection to propagate failures by throwing the caught error (or throw a new Error with context) after logging, and ensure startServer awaits testSupabaseConnection and aborts startup (or exits non‑zero) when it throws so the process never reports healthy while Supabase is unreachable; update any calling code that assumed a silent success (e.g., startServer and the /health handler) to handle the thrown error appropriately.

frontend/src/components/ImageToPdfConverter.jsx (2)
89-93: ⚠️ Potential issue | 🟡 Minor

Don't toast success before `handleDownload()` settles.

`handleDownload` is async but not awaited here, so a failed download still shows the success toast and then the error toast from `handleDownload`. Either await it before toasting success or remove the auto-download now that the component already renders a manual download button.
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageToPdfConverter.jsx` around lines 89 - 93, The success toast is shown before the async handleDownload completes, causing misleading success messages; in the block that checks currentConversionId === conversionIdRef.current (where setConvertedFile and handleDownload are called), await handleDownload(res.data.path, res.data.originalname) and only call toast.success("Images converted to PDF successfully!") after that await (or remove the automatic handleDownload call entirely and rely on the manual download button), and ensure any errors from handleDownload are caught and handled so a failed download doesn't display a success toast.
18-20: ⚠️ Potential issue | 🟠 Major

Changing files does not invalidate the in-flight conversion.

`conversionIdRef` only advances on submit. If the user selects new files while the previous request is still running, the older response still matches the current id, auto-downloads the old PDF, and overwrites state for the new selection. Bump/cancel the request from `onFileChange` and disable the picker while `loading`.

Also applies to: 65-68, 140-146
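The request-id guard described here (and in the ImageCompressor finding below) can be reduced to a dependency-free sketch. The helper name `createRequestGuard` is invented for illustration; in the components it would wrap `conversionIdRef`.

```javascript
// Sketch: only the latest request may commit results. begin() is called from
// both onSubmit and onFileChange, so a new selection stales the old request.
function createRequestGuard() {
  let currentId = 0;
  return {
    begin() {
      currentId += 1;
      return currentId;
    },
    isCurrent(id) {
      return id === currentId;
    },
  };
}

// Simulated race: request 1 starts, then the user picks new files (request 2
// begins), then request 1's response arrives late and must be discarded.
const guard = createRequestGuard();
const first = guard.begin();   // onSubmit fires the conversion
const second = guard.begin();  // onFileChange invalidates it
console.log(guard.isCurrent(first));  // false: stale, ignore the response
console.log(guard.isCurrent(second)); // true
```

Pairing this with an `AbortController` (abort the old request when `begin()` is called again) saves the wasted network round trip as well.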
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageToPdfConverter.jsx` around lines 18 - 20, The bug is that conversionIdRef only advances on submit so selecting new files while a conversion is in-flight can cause the old response to match and overwrite state; fix by bumping/cancelling the in-flight conversion in onFileChange: increment conversionIdRef (or set a new id) when files change and if there is an active request abort it via an AbortController stored in a ref (e.g., conversionAbortRef), then clear convertedFile and selected files as you already do; also ensure the file input/picker is disabled while loading (use the loading state to add disabled to the input) and make the submit handler create a new AbortController and attach it to conversionAbortRef so it can be aborted by onFileChange.

frontend/src/components/ImageCompressor.jsx (1)
84-116: ⚠️ Potential issue | 🟠 Major

Invalidate stale compression responses when the selection changes.

The picker stays active during the async request, but the success path always clears `selectedFiles` and the input. If the user picks a new batch before the older response returns, the stale response downloads the wrong ZIP and wipes the new selection. Add a request id / abort guard and disable the file input while `loading`.

Also applies to: 139-147
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageCompressor.jsx` around lines 84 - 116, The onSubmit flow (function onSubmit) can apply a stale response that clears selectedFiles and triggers handleDownload for a previous batch; fix by introducing a per-request guard (e.g., a requestId or AbortController stored in a ref) that is generated at the start of onSubmit and checked before running the success path (handleDownload, setSelectedFiles([]), fileInputRef.current.value = "") so only the latest request can mutate selectedFiles; also setLoading(true) should disable the file input UI (bind disabled={loading} to the input) and clear/abort previous pending requests when a new submission or new selection occurs to prevent stale downloads.

frontend/src/components/ImageFormatConverter.jsx (1)
48-50: ⚠️ Potential issue | 🟡 Minor

Keep the oversize toast auth-aware here too.

Authenticated users still get told to “Login for a higher limit (50MB)” even when `maxSize` is already the authenticated cap.

Suggested copy change

```diff
 if (file.size > maxSize) {
-  toast.error(
-    `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
-  );
+  const message = isAuthenticated
+    ? `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB.`
+    : `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`;
+  toast.error(message);
   hasInvalidFile = true;
   continue;
 }
```
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageFormatConverter.jsx` around lines 48 - 50, The oversize toast in ImageFormatConverter.jsx currently always suggests "Login for a higher limit (50MB)" even when maxSize already equals the authenticated cap; update the file size branch (the if (file.size > maxSize) block where toast.error is called) to make the message auth-aware: check the auth flag used in this component (e.g., isAuthenticated or currentUser) and if the user is authenticated omit the "Login..." hint (or show the correct authenticated limit), otherwise keep the prompt to log in and include the maxSize in MB dynamically; ensure you only change the toast.error message construction and keep the same maxSize calculation.

backend/routes/imageConverter.js (1)
133-145: ⚠️ Potential issue | 🟠 Major

Limit the `_dkutils_` shortcut to basename-only files.

`baseName.includes("_dkutils_")` is still accepted even when `filename` contains a path. A key like `private/foo_dkutils_bar.png` bypasses `allowedPrefixes` and becomes downloadable from this public route. Require `filename === baseName` before using that exception.

Suggested guard

```diff
 const baseName = path.basename(filename);
 const allowedPrefixes = [
   "dkutils_",
   "dkutils-",
   "screenshot-",
   "screenshots/screenshot-",
   "favicons/",
 ];
-if (
-  filename.includes("..") ||
-  (!allowedPrefixes.some((prefix) => filename.startsWith(prefix)) &&
-    !baseName.includes("_dkutils_"))
-) {
+const isWhitelistedPath = allowedPrefixes.some((prefix) => filename.startsWith(prefix));
+const isGeneratedArtifact = filename === baseName && baseName.includes("_dkutils_");
+
+if (filename.includes("..") || (!isWhitelistedPath && !isGeneratedArtifact)) {
   return res.status(403).json({ msg: "Invalid filename." });
 }
```
Verify each finding against the current code and only fix it if needed. In `@backend/routes/imageConverter.js` around lines 133 - 145, The current guard allows files with `_dkutils_` anywhere in the path because it checks baseName.includes("_dkutils_") even when filename contains directories; update the condition so the `_dkutils_` shortcut only applies to basename-only requests by requiring filename === baseName before permitting baseName.includes("_dkutils_"). Concretely, in the existing if that checks filename.includes("..") || (!allowedPrefixes.some(prefix => filename.startsWith(prefix)) && !baseName.includes("_dkutils_")), change the second part to use (!allowedPrefixes.some(prefix => filename.startsWith(prefix)) && !(filename === baseName && baseName.includes("_dkutils_"))), ensuring only top-level filenames with `_dkutils_` are exempt.

backend/routes/pdfConverter.js (1)
45-46: ⚠️ Potential issue | 🟠 Major

Add PDFs to `PDFMerger` sequentially.

pdf-merger-js's own examples await `merger.add(...)` one file at a time on a single merger instance. Running every `add()` through `Promise.all` makes the final page order depend on async completion timing instead of the original upload order. (github.com)

Suggested fix

```diff
 try {
-  await Promise.all(files.map((file) => merger.add(file.buffer)));
+  for (const file of files) {
+    await merger.add(file.buffer);
+  }
 } catch (err) {
   throw new Error(
     "Failed to merge PDFs. One or more files may be corrupted or password-protected.",
```
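The ordering point is easy to demonstrate with stand-in delays (this is a generic timing demo, not pdf-merger-js itself): a sequential `await` loop preserves input order even when later items would finish first under concurrent scheduling.

```javascript
// Demo: awaiting each task in turn keeps results in input order, regardless
// of how long each individual task takes.
async function addAllInOrder(items, add) {
  for (const item of items) {
    await add(item);
  }
}

const order = [];
const delays = { a: 30, b: 10, c: 20 }; // "a" is slowest on purpose
const record = (name) =>
  new Promise((resolve) =>
    setTimeout(() => {
      order.push(name);
      resolve();
    }, delays[name]),
  );

addAllInOrder(["a", "b", "c"], record).then(() => {
  console.log(order.join("")); // "abc": input order, not completion order
});
```

Under `Promise.all(items.map(record))` the same tasks would complete `b`, `c`, `a`, which is exactly the nondeterminism the fix above removes.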
Verify each finding against the current code and only fix it if needed. In `@backend/routes/pdfConverter.js` around lines 45 - 46, The code uses Promise.all(files.map(...)) to call merger.add(file.buffer) concurrently, which can reorder pages; change to adding PDFs sequentially by iterating over files and awaiting merger.add(file.buffer) for each (replace the Promise.all/files.map pattern with a for...of or for loop that awaits merger.add), referencing the merger.add call and the files.map usage to locate and update the logic so pages are added in upload order.

backend/routes/favicon.js (2)
48-52: ⚠️ Potential issue | 🟠 Major

Handle redirect responses explicitly on both fetch paths.

With redirects disabled and 2xx-only success criteria, ordinary http→https, apex→www, and CDN favicon redirects are treated as failures for both the page fetch and the icon downloads. Follow 3xx responses manually and re-run the same URL/IP validation on each `Location` hop.

Also applies to: 107-111
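A sketch of the manual-follow loop this comment asks for. `fetchOnce` and `validate` are injected stand-ins for the route's axios call and its URL/IP checks, so the loop itself stays testable without a network:

```javascript
// Sketch: follow 3xx hops by hand, re-validating every hop before fetching it.
// fetchOnce(url) is expected to return { status, headers, body }.
async function fetchWithSafeRedirects(url, fetchOnce, validate, maxHops = 5) {
  let current = url;
  for (let hop = 0; hop <= maxHops; hop++) {
    if (!(await validate(current))) {
      throw new Error(`Blocked URL: ${current}`);
    }
    const res = await fetchOnce(current);
    if (res.status >= 300 && res.status < 400 && res.headers.location) {
      // Resolve a possibly-relative Location against the current URL.
      current = new URL(res.headers.location, current).href;
      continue;
    }
    return res;
  }
  throw new Error("Too many redirects");
}

// Simulated hop chain: http -> https -> final icon bytes.
const responses = {
  "http://example.com/favicon.ico": {
    status: 301,
    headers: { location: "https://example.com/favicon.ico" },
    body: null,
  },
  "https://example.com/favicon.ico": { status: 200, headers: {}, body: "icon-bytes" },
};
const visited = [];
fetchWithSafeRedirects(
  "http://example.com/favicon.ico",
  async (u) => responses[u],
  async (u) => {
    visited.push(u); // validation runs once per hop
    return true;
  },
).then((res) => {
  console.log(res.status, visited.length); // 200 2
});
```

The key property is that `validate` runs before every hop, so a redirect into a private address range fails exactly like a direct request would.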
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/favicon.js` around lines 48 - 52, The code currently disables redirects for axios.get (the favicon and page fetch paths) and treats non-2xx as failures; change the fetch logic in the favicon route handler so that when axios returns a 3xx with a Location header you manually follow the redirect: resolve the absolute Location, perform the same URL/IP validation (SSRF/IP whitelist checks) on the redirected URL/IP, then re-run the fetch (and repeat for subsequent 3xx hops up to a safe maxRedirects). Apply this manual-follow behavior to both the page-fetch path and the icon-download path (the axios.get calls shown around the existing response block and the other fetch at lines ~107-111), ensure you stop on non-3xx success or when max hops reached, and propagate any validation failures as errors.
11-38: ⚠️ Potential issue | 🔴 Critical

Pin the validated address into the actual favicon requests.

`validateUrl()` only preflights DNS and returns `true`; the later axios calls do not use a vetted address set. A rebinding hostname can therefore pass validation and still steer the real request elsewhere. Return the resolved safe addresses from `validateUrl()` and bind the outbound request to them instead of reconnecting unpinned.

Also applies to: 45-52, 104-111
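One hedged way to consume the vetted addresses: Node's `http.request`/`net.connect` options accept a custom `lookup` function (same shape as `dns.lookup`), so the socket can be forced onto a pre-validated IP. The helper name `pinnedLookup` is invented for this sketch:

```javascript
// Sketch: a dns.lookup-compatible override that only ever returns addresses
// vetted by the earlier SSRF checks, so a second DNS answer cannot rebind the
// socket. Pass the result as the `lookup` option of http.request or an Agent.
function pinnedLookup(allowedByHost) {
  return (hostname, options, callback) => {
    // dns.lookup may be called as (hostname, cb) or (hostname, options, cb).
    const cb = typeof options === "function" ? options : callback;
    const addrs = allowedByHost[hostname];
    if (!addrs || addrs.length === 0) {
      cb(new Error(`No pinned address for ${hostname}`));
      return;
    }
    cb(null, addrs[0], 4); // IPv4 assumed for this sketch
  };
}

const lookup = pinnedLookup({ "example.com": ["93.184.216.34"] });
lookup("example.com", {}, (err, address, family) => {
  console.log(err, address, family); // null 93.184.216.34 4
});
lookup("evil.test", {}, (err) => {
  console.log(err instanceof Error); // true: unvetted host never connects
});
```

Because TLS still handshakes against the original hostname, certificate validation and SNI keep working; only the address selection is pinned.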
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/routes/favicon.js` around lines 11 - 38, validateUrl currently only preflights DNS and returns true, which allows a race where the real axios requests resolve the hostname again to a different (potentially unsafe) address; change validateUrl to return the vetted list of resolved addresses (the lookupResults/addresses) instead of a boolean and ensure callers (the favicon request handlers that call axios) use those pinned addresses for the outbound connection—e.g., pass the resolved IP(s) back from validateUrl and bind the axios request by specifying the vetted address (via an HTTP/HTTPS agent or localAddress option) or by connecting directly to the IP while preserving the original Host header; update validateUrl, the code paths that call it (the favicon fetch functions that currently only check truthiness), and any related helpers like isPrivateIP to support and consume the returned addresses.

.github/workflows/publish.yml (1)
48-66: ⚠️ Potential issue | 🟠 Major

Guard against frontend/backend version drift before stamping docs and tags.

These manifests are bumped independently, but the workflow later reads only the root `package.json` and reuses that value for website release metadata and README updates. If `frontend/package.json` or `backend/package.json` was already out of sync, this job will publish mismatched version numbers. Read both versions here and fail the run when they diverge from `WEBSITE_VERSION`.

Suggested fix

```diff
       - name: Get versions
         id: get_versions
         run: |
           WEBSITE_VERSION=$(node -p "require('./package.json').version")
+          FRONTEND_VERSION=$(node -p "require('./frontend/package.json').version")
+          BACKEND_VERSION=$(node -p "require('./backend/package.json').version")
           PACKAGE_VERSION=$(node -p "require('./package/package.json').version")
+          if [ "$FRONTEND_VERSION" != "$WEBSITE_VERSION" ] || [ "$BACKEND_VERSION" != "$WEBSITE_VERSION" ]; then
+            echo "frontend/backend versions must match root website_version before releasing"
+            exit 1
+          fi
           echo "website_version=$WEBSITE_VERSION" >> "$GITHUB_OUTPUT"
           echo "package_version=$PACKAGE_VERSION" >> "$GITHUB_OUTPUT"
```

Also applies to: 104-108
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/publish.yml around lines 48 - 66, The workflow currently only reads the root package.json into WEBSITE_VERSION in the get_versions step but ignores frontend/backend package.json, which can cause version drift; update the get_versions run to also read frontend/package.json and backend/package.json into FRONTEND_VERSION and BACKEND_VERSION (via node -p like the existing commands), compare both to WEBSITE_VERSION, and if either differs echo a clear error and exit non‑zero to fail the job so docs/tags aren’t stamped with mismatched versions; keep emitting the outputs (website_version, package_version) as before but add the validation check and failure path.
🟡 Minor comments (10)
frontend/src/components/ImageResizer.jsx-49-61 (1)
49-61: ⚠️ Potential issue | 🟡 Minor

Reset the selected image when the initial read/decode fails.

`setOriginalImage(file)` happens before the file has been decoded, but this path has no failure cleanup. A corrupt `image/*` upload can leave the new file selected while `newWidth`/`newHeight` still reflect the previous image until the later resize step fails.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageResizer.jsx` around lines 49 - 61, The component sets the selected file via setOriginalImage(file) before decoding, so a failed image decode leaves stale state; modify the flow to only call setOriginalImage after the Image onload succeeds (where setOriginalDimensions, setNewWidth, setNewHeight are set), and add an img.onerror (or reader.onerror) handler that clears the selected image and resets dimensions/inputs by calling setOriginalImage(null) and setOriginalDimensions({}), plus setNewWidth('') and setNewHeight('') to ensure no stale values remain; update the reader.onload/img.onload handlers to reflect this change.

backend/routes/passwordStrength.js-52-57 (1)
52-57: ⚠️ Potential issue | 🟡 Minor

Reject whitespace-only passwords at input validation.

Line 52 currently accepts values like `" "` as valid input and returns a score instead of `"Password is required."`.

🛠️ Suggested fix

```diff
 const { password } = req.body;
-if (typeof password !== "string" || password.length === 0) {
+if (typeof password !== "string" || password.trim().length === 0) {
   return res.status(400).json({ msg: "Password is required." });
 }
```
Verify each finding against the current code and only fix it if needed. In `@backend/routes/passwordStrength.js` around lines 52 - 57, The current input validation in the passwordStrength route accepts whitespace-only strings; update the validation inside the block that checks the password variable before calling checkPasswordStrength to reject strings that are only whitespace by testing trimmed length (e.g., use password.trim() and treat empty after trim as missing), and return the same 400 JSON response ({ msg: "Password is required." }) when password is null/undefined/non-string or password.trim().length === 0 so checkPasswordStrength is only called for non-empty, non-whitespace passwords.

frontend/src/components/PdfRotator.jsx-22-35 (1)
22-35: ⚠️ Potential issue | 🟡 Minor

Handle file-picker cancel as a no-op, not an error.

When the user cancels selection, `file` is undefined and currently triggers an error toast. Add an early return for empty selection.

Proposed fix

```diff
 const onFileChange = (e) => {
   const file = e.target.files[0];
   const maxFileSize = isAuthenticated ? 50 * 1024 * 1024 : 10 * 1024 * 1024;
+  if (!file) {
+    setSelectedFile(null);
+    return;
+  }
-  if (file && file.type === "application/pdf") {
+  if (file.type === "application/pdf") {
```
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PdfRotator.jsx` around lines 22 - 35, When the file input is cancelled the handler sees file as undefined and currently shows an error; update the file-selection logic in PdfRotator.jsx (the handler that checks `file`, `file.type`, and calls `setSelectedFile`) to treat an undefined/empty `file` as a no-op: if `file` is falsy, simply clear any input value if needed and return early without calling `toast.error` or `setSelectedFile(null)`; keep the existing size/type checks and error toasts for real selections (references: `file`, `setSelectedFile`, `toast`, and `e.target.value`).

backend/routes/textToPdf.js-41-44 (1)
41-44: ⚠️ Potential issue | 🟡 Minor

Inconsistent error response format.

Line 43 returns plain text `"Server Error"` while lines 12-14 return JSON. Consider using JSON consistently for API error responses.

Proposed fix

```diff
 } catch (err) {
   console.error(err.message);
-  res.status(500).send("Server Error");
+  res.status(500).json({ msg: "Server Error" });
 }
```
Verify each finding against the current code and only fix it if needed. In `@backend/routes/textToPdf.js` around lines 41 - 44, The catch block in the textToPdf route currently sends plain text with res.status(500).send("Server Error") which is inconsistent with earlier JSON error responses; update the error handling in the catch (err) block to send a JSON response (e.g., res.status(500).json({ error: err.message || "Server Error" })) so it matches the format used elsewhere in this route and includes the error detail for debugging; locate the catch block around the textToPdf handler and replace the send call accordingly.

frontend/src/components/JsonXmlConverter.jsx-82-85 (1)
82-85: ⚠️ Potential issue | 🟡 Minor

Replace `min-h-50` with a valid Tailwind utility on both textareas.

The Tailwind config defines no custom spacing token for `50`, so `min-h-50` is not a valid utility and will be silently ignored. Both textareas lose their minimum height. Use `min-h-[200px]` or another supported utility instead (lines 84 and 133).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/JsonXmlConverter.jsx` around lines 82 - 85, The two textarea elements (one with id="inputData" and the other textarea around line ~133) use the invalid Tailwind utility class "min-h-50"; replace "min-h-50" with a valid utility such as "min-h-[200px]" in both textarea className values so the minimum height is applied correctly (update the className strings for the textarea with id="inputData" and the corresponding output textarea).

frontend/src/components/PngToJpgConverter.jsx-29-32 (1)
29-32: ⚠️ Potential issue | 🟡 Minor

Hide the login upsell for authenticated users.

When `isAuthenticated` is true, this renders "Maximum size is 50MB. Login for a higher limit (50MB).", which is contradictory.

Suggested change

```diff
 if (file.size > maxSize) {
+  const limitMb = maxSize / (1024 * 1024);
+  const upgradeHint = isAuthenticated ? "" : " Login for a higher limit (50MB).";
   toast.error(
-    `File too large: ${file.name}. Maximum size is ${maxSize / (1024 * 1024)}MB. Login for a higher limit (50MB).`,
+    `File too large: ${file.name}. Maximum size is ${limitMb}MB.${upgradeHint}`,
   );
   setSelectedFiles([]);
   e.target.value = "";
```
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PngToJpgConverter.jsx` around lines 29 - 32, The toast message in PngToJpgConverter.jsx shown inside the if (file.size > maxSize) branch includes a "Login for a higher limit" upsell even when isAuthenticated is true; update that branch to check the isAuthenticated flag and show a different message for authenticated users (e.g., state the correct maximum like "Maximum size is 50MB." or a plain "File too large" message) and only include the "Login for a higher limit" text when isAuthenticated is false, referencing the file.size > maxSize branch and the isAuthenticated variable and the toast.error call to locate where to change the message.

frontend/src/components/FaviconExtractor.jsx-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Remove the BOM character at the start of the file.
The file begins with a UTF-8 BOM character that can cause parsing issues.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/FaviconExtractor.jsx` at line 1, Remove the leading UTF-8 BOM from the top of the file so the first character is the "i" of the existing import statement (import useAnalytics from "@frontend/utils/useAnalytics";); open frontend/src/components/FaviconExtractor.jsx, delete the invisible BOM at the beginning of the file (or re-save as UTF-8 without BOM) to prevent parsing issues.

frontend/src/components/LinkShortener.jsx-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Remove the BOM character at the start of the file.
The file begins with a UTF-8 BOM character that can cause parsing issues.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/LinkShortener.jsx` at line 1, Remove the leading UTF-8 BOM from the top of the file so the import statement (import useAnalytics from "@frontend/utils/useAnalytics";) starts at column 0; open frontend/src/components/LinkShortener.jsx in an editor that can show invisible characters and delete the BOM, then save the file so the module import parses correctly.

frontend/src/components/ImageToBase64Converter.jsx-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Remove the BOM character at the start of the file.
The file begins with a UTF-8 BOM character that can cause parsing issues.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/ImageToBase64Converter.jsx` at line 1, The file starts with a UTF-8 BOM that can break parsing; open the module containing the import for AuthContext (the top-level import line "import { AuthContext } from \"@frontend/context/AuthContext.jsx\";" in ImageToBase64Converter.jsx) and remove the invisible BOM character at the very beginning of the file so the file starts with the 'import' character; save the file without BOM (ensure your editor/IDE encoding is UTF-8 without BOM) and re-run the build to confirm the parsing error is resolved.

frontend/src/components/PdfCompressor.jsx-1-1 (1)
1-1: ⚠️ Potential issue | 🟡 Minor

Remove the BOM character at the start of the file.
The file begins with a UTF-8 Byte Order Mark (BOM) character before the `import` statement. This invisible character can cause JavaScript parsing errors and module resolution failures in various environments and bundlers.

🔧 Proposed fix

Save the file with UTF-8 encoding without BOM:

```diff
-import { AuthContext } from "@frontend/context/AuthContext.jsx";
+import { AuthContext } from "@frontend/context/AuthContext.jsx";
```
Verify each finding against the current code and only fix it if needed. In `@frontend/src/components/PdfCompressor.jsx` at line 1, The file starts with a UTF-8 BOM character before the import which can break parsing; open frontend/src/components/PdfCompressor.jsx, remove the invisible BOM at the very start (so the first character is the "i" of "import"), and re-save the file as UTF-8 without BOM (ensure imports like AuthContext from "@frontend/context/AuthContext.jsx" remain unchanged).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0a92e398-4ab8-4e74-92ed-0a7feb07ceac
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (138)
.github/workflows/publish.yml, .github/workflows/trigger-supabase-cleanup.yml, .markdownlint.json, .markdownlintignore, README.md, backend/docs/README.md, backend/docs/api-documentation.json, backend/index.js, backend/middleware/apiActivityTracker.js, backend/middleware/auth.js, backend/middleware/authorize.js, backend/middleware/uploadLimiter.js, backend/models/ApiActivity.js, backend/models/ServiceUsage.js, backend/models/ToolUsage.js, backend/models/Url.js, backend/models/User.js, backend/package.json, backend/routes/analytics.js, backend/routes/auth.js, backend/routes/cleanSupabase.js, backend/routes/favicon.js, backend/routes/imageConverter.js, backend/routes/jsonXmlConverter.js, backend/routes/keepAlive.js, backend/routes/officeConverter.js, backend/routes/passwordStrength.js, backend/routes/pdfConverter.js, backend/routes/redirectChecker.js, backend/routes/screenshot.js, backend/routes/seoTools.js, backend/routes/shortener.js, backend/routes/textConverter.js, backend/routes/textToPdf.js, backend/scripts/migrateTotalUsageKey.js, backend/utils/filenameSanitizer.js, backend/utils/ipValidation.js, backend/utils/pdfErrorHandler.js, backend/utils/supabaseCleaner.js, backend/utils/supabaseClient.js, biome.json, frontend/package.json, frontend/public/manifest.json, frontend/src/App.jsx, frontend/src/components/Base64TextConverter.jsx, frontend/src/components/CsvToJsonConverter.jsx, frontend/src/components/ExcelToPdfConverter.jsx, frontend/src/components/FaviconExtractor.jsx, frontend/src/components/Footer.jsx, frontend/src/components/HashGenerator.jsx, frontend/src/components/HtmlToMarkdownConverter.jsx, frontend/src/components/ImageBackgroundRemover.jsx, frontend/src/components/ImageCompressor.jsx, frontend/src/components/ImageCropper.jsx, frontend/src/components/ImageFlipper.jsx, frontend/src/components/ImageFormatConverter.jsx, frontend/src/components/ImageGrayscaler.jsx, frontend/src/components/ImageResizer.jsx, frontend/src/components/ImageToBase64Converter.jsx, frontend/src/components/ImageToPdfConverter.jsx, frontend/src/components/JsonFormatterValidator.jsx, frontend/src/components/JsonXmlConverter.jsx, frontend/src/components/LinkShortener.jsx, frontend/src/components/MarkdownToHtmlConverter.jsx, frontend/src/components/Navbar.jsx, frontend/src/components/PasswordGenerator.jsx, frontend/src/components/PasswordStrengthChecker.jsx, frontend/src/components/PdfCompressor.jsx, frontend/src/components/PdfMerger.jsx, frontend/src/components/PdfPageDeleter.jsx, frontend/src/components/PdfRotator.jsx, frontend/src/components/PdfSplitter.jsx, frontend/src/components/PdfToExcelConverter.jsx, frontend/src/components/PdfToTextConverter.jsx, frontend/src/components/PdfToWordConverter.jsx, frontend/src/components/PngToJpgConverter.jsx, frontend/src/components/QrCodeGenerator.jsx, frontend/src/components/QrCodeScanner.jsx, frontend/src/components/SeoTools.jsx, frontend/src/components/TextCaseConverter.jsx, frontend/src/components/TextDifferenceChecker.jsx, frontend/src/components/TextToPdfGenerator.jsx, frontend/src/components/ToolCard.jsx, frontend/src/components/UrlRedirectChecker.jsx, frontend/src/components/WebsiteScreenshotGenerator.jsx, frontend/src/components/auth/Login.jsx, frontend/src/components/auth/Register.jsx, frontend/src/components/mode-toggle.jsx, frontend/src/components/theme-provider.jsx, frontend/src/components/ui/badge.jsx, frontend/src/components/ui/button.jsx, frontend/src/components/ui/card.jsx, frontend/src/components/ui/input.jsx, frontend/src/components/ui/label.jsx, frontend/src/components/ui/textarea.jsx, frontend/src/context/AuthContext.jsx, frontend/src/index.css, frontend/src/index.jsx, frontend/src/lib/utils.js, frontend/src/pages/HomePage.jsx, frontend/src/pages/ImageToolsPage.jsx, frontend/src/pages/PdfToolsPage.jsx, frontend/src/pages/TextToolsPage.jsx, frontend/src/pages/WebToolsPage.jsx, frontend/src/utils/setAuthToken.js, frontend/src/utils/useAnalytics.js, frontend/src/utils/useSortedTools.js, frontend/tailwind.config.js, frontend/vercel.json, frontend/vite.config.js, package.json, package/README.md, package/package.json, package/rslib.config.ts, package/scripts/postbuild.mjs, package/src/branding.ts, package/src/cli.ts, package/src/constants/index.ts, package/src/image/index.ts, package/src/index.ts, package/src/interactive.ts, package/src/interfaces/common.ts, package/src/interfaces/index.ts, package/src/pdf/index.ts, package/src/types/common.ts, package/src/types/shims.d.ts, package/src/utils/branding.ts, package/src/utils/config.ts, package/src/utils/errors.ts, package/src/utils/ffmpeg.ts, package/src/utils/files.ts, package/src/utils/index.ts, package/src/utils/watermark.ts, package/src/video/index.ts, package/tests/dist.test.js, package/tsconfig.json, pnpm-workspace.yaml, turbo.json
💤 Files with no reviewable changes (2)
- .markdownlintignore
- .markdownlint.json
📜 Review details
🧰 Additional context used
🪛 markdownlint-cli2 (0.22.0)
backend/docs/README.md
[warning] 441-441: Code block style
Expected: fenced; Actual: indented
(MD046, code-block-style)
[warning] 447-447: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```js
$('link[rel~="icon"], link[rel~="shortcut icon"], link[rel~="apple-touch-icon"]').each(
  (i, el) => {
    let href = $(el).attr("href");
    if (href) {
      if (href.startsWith("//")) {
        href = `https:${href}`;
      } else if (href.startsWith("/")) {
        const urlObj = new URL(url);
        href = `${urlObj.protocol}//${urlObj.host}${href}`;
      } else if (!href.startsWith("http")) {
        const urlObj = new URL(url);
        href = `${urlObj.href.substring(0, urlObj.href.lastIndexOf("/") + 1)}${href}`;
      }
      faviconUrls.push(href);
    }
  },
);

const urlObj = new URL(url);
const defaultFavicon = `${urlObj.protocol}//${urlObj.host}/favicon.ico`;
if (!faviconUrls.includes(defaultFavicon)) {
  faviconUrls.push(defaultFavicon);
}

const archive = archiver("zip", {
  zlib: { level: 9 },
});

// Collect all file data first to avoid race conditions
const downloadPromises = faviconUrls.map(async (faviconUrl) => {
  const fileData = await downloadFile(faviconUrl);
  if (fileData?.buffer) {
    const fileName = `favicon-${path.basename(new URL(faviconUrl).pathname || "default.ico")}`;
    return {
      buffer: fileData.buffer,
      name: fileName,
      contentType: fileData.contentType,
    };
  }
  return null;
});

const fileDataArray = await Promise.all(downloadPromises);
const validFiles = fileDataArray.filter((file) => file !== null);
```
Cap and dedupe favicon downloads before Promise.all.
faviconUrls comes straight from arbitrary markup, then every entry is scheduled concurrently. A page with hundreds of <link rel="icon"> tags turns this public endpoint into an outbound fan-out and memory spike. Deduplicate the list and enforce a small upper bound before downloading.
Suggested fix
- const downloadPromises = faviconUrls.map(async (faviconUrl) => {
+ const limitedFaviconUrls = [...new Set(faviconUrls)].slice(0, 10);
+ const downloadPromises = limitedFaviconUrls.map(async (faviconUrl) => {
const fileData = await downloadFile(faviconUrl);
     if (fileData?.buffer) {

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/favicon.js` around lines 115 - 158, The faviconUrls array is
used directly to spawn concurrent downloads (downloadPromises/Promise.all),
which can be abused; before creating downloadPromises deduplicate faviconUrls
(e.g., via a Set) and enforce a safe upper bound (slice to a small constant like
MAX_FAVICONS) so only unique and limited URLs are downloaded; apply this change
right where faviconUrls is finalized (before const archive and before mapping to
downloadPromises) and ensure the rest of the logic (file naming, downloadFile
calls, fileDataArray) uses the capped/deduped list.
```js
if (type === "decode") {
  if (!base64String) {
    return res.status(400).json({ msg: "No base64 string provided for decoding." });
  }
  const buffer = Buffer.from(base64String, "base64");
  const outputFileName = `decoded_dkutils_${Date.now()}.png`;

  const { error: uploadError } = await supabase.storage
    .from("utilityhub")
    .upload(outputFileName, buffer, {
      contentType: "image/png",
    });
```
Validate the decoded payload before storing it as a PNG.
This branch uploads whatever Buffer.from(base64String, "base64") returns and labels it image/png. On a public endpoint, that lets callers store non-image blobs behind a .png name. Decode and verify the bytes first, then upload the validated image data.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/imageConverter.js` around lines 661 - 672, In the type ===
"decode" branch, don't upload the raw Buffer from Buffer.from(base64String,
"base64") without verifying it's an actual image; decode base64String into a
Buffer (base64String), validate the bytes using an image detection/decoding
utility (e.g., image-type, sharp, or similar) to confirm it's a supported image
and obtain the real MIME (not just assume "image/png"), and return a 400 if
validation fails; then set outputFileName and call
supabase.storage.from("utilityhub").upload(...) with the validated image buffer
and the detected contentType (and optionally sanitize/normalize the image via
sharp before upload).
```js
  return res.status(400).json({ msg: "No PDF file uploaded." });
}

console.log("PDF to Word (text extraction) conversion requested for:", file.originalname);
```
Stop logging raw uploaded filenames.
These log lines still write user-controlled filenames into server logs. Uploaded filenames often contain document titles or personal data, and they can also inject newlines into log streams. Log a generic conversion event or an opaque request/file ID instead.
Suggested change
- console.log("PDF to Word (text extraction) conversion requested for:", file.originalname);
+ console.log("PDF to Word (text extraction) conversion requested");
- console.log("PDF to Excel (text extraction) conversion requested for:", file.originalname);
+ console.log("PDF to Excel (text extraction) conversion requested");

Also applies to: 129-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/officeConverter.js` at line 22, Replace the raw filename
logging in officeConverter.js (the console.log call that prints
file.originalname) with a privacy-safe identifier: log a generic event message
and an opaque request/file ID (e.g., req.id, file.id, or a generated UUID)
instead of file.originalname; update all occurrences (including the console.log
that prints "PDF to Word (text extraction) conversion requested for:" and the
other similar log at the later occurrence) to use the opaque ID and avoid
writing user-controlled strings to logs.
```js
switch (compressionLevel) {
  case "low":
    compressionOptions.useObjectStreams = false;
    break;
  case "medium":
    compressionOptions.useObjectStreams = true;
    break;
  case "high":
    compressionOptions.useObjectStreams = true;
    compressionOptions.objectsPerTick = 50;
    break;
```
🧩 Analysis chain
🏁 Script executed:
# First, find and examine the pdfConverter.js file
find . -name "pdfConverter.js" -type f | head -5

Repository: gaureshpai/dkutils
Length of output: 94
🏁 Script executed:
# Read the specific lines mentioned in the review (295-305)
if [ -f "backend/routes/pdfConverter.js" ]; then
sed -n '290,310p' "backend/routes/pdfConverter.js" | cat -n
fi

Repository: gaureshpai/dkutils
Length of output: 816
🏁 Script executed:
# Search for compression-related code and pdf-lib usage
rg "compressionLevel|useObjectStreams|objectsPerTick" backend/routes/pdfConverter.js -B 2 -A 2

Repository: gaureshpai/dkutils
Length of output: 1156
🏁 Script executed:
# Check package.json to see pdf-lib version
cat package.json | grep -A 2 -B 2 "pdf-lib"

Repository: gaureshpai/dkutils
Length of output: 44
🌐 Web query:
pdf-lib save options useObjectStreams objectsPerTick API documentation
💡 Result:
In pdf-lib, the save method (and saveAsBase64) accepts an optional SaveOptions object with the following relevant properties:

- useObjectStreams?: boolean. Whether to pack PDF objects into compressed object streams (a PDF 1.5+ feature for smaller files). Defaults to true.
- objectsPerTick?: number. Controls the number of objects processed per tick during serialization, useful for managing performance in environments like browsers.
- addDefaultPage?: boolean. Whether to automatically add a default blank page if the document has none.
- updateFieldAppearances?: boolean. Whether to update appearances for form fields.

Usage example: const pdfBytes = await pdfDoc.save({ useObjectStreams: true, objectsPerTick: 4000, addDefaultPage: false, updateFieldAppearances: true });

These options are defined in the official API docs at pdf-lib.js.org and in the source code in PDFDocumentOptions.ts. useObjectStreams: false may help in some optimization scenarios or compatibility cases, as noted in GitHub issues.
Citations:
- 1: https://pdf-lib.js.org/docs/api/interfaces/saveoptions
- 2: https://github.com/Hopding/pdf-lib/blob/b8a44bd/src/api/PDFDocumentOptions.ts
- 3: https://pdf-lib.js.org/docs/api/interfaces/base64saveoptions
🏁 Script executed:
# Also search for any comments or documentation in the codebase about compression
rg "pdf-lib|compression|object.*stream" backend/routes/pdfConverter.js -i -B 1 -A 1

Repository: gaureshpai/dkutils
Length of output: 1982
high compression provides no additional file-size reduction compared to medium.
Both medium and high cases enable the same real compression setting (useObjectStreams = true), which is the only pdf-lib save option that affects file size. The objectsPerTick value set in the high case only controls serialization scheduling performance, not compression output. This means users selecting high compression will receive no file-size improvement over medium, making the compression levels misleading.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/pdfConverter.js` around lines 295 - 305, The switch handling
compressionLevel is misleading because only compressionOptions.useObjectStreams
affects PDF file size; objectsPerTick only affects serialization performance, so
change the mapping so "high" no longer pretends to give better file-size
compression than "medium": set both "medium" and "high" to
compressionOptions.useObjectStreams = true and remove or stop changing
compressionOptions.objectsPerTick in the "high" case (or add a clear comment
that objectsPerTick is performance-only), and ensure any other code expecting
different file-size behavior for compressionLevel treats "medium" and "high"
equivalently; update references to
compressionLevel/compressionOptions/objectsPerTick accordingly.
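Given pdf-lib's documented SaveOptions, an honest level mapping would make the one size-affecting flag explicit and drop the performance-only knob. This is a sketch of that mapping, not the PR's code:

```javascript
// Sketch: per pdf-lib's docs, only useObjectStreams changes output size, so
// "medium" and "high" are intentionally identical rather than pretending to
// differ. objectsPerTick is omitted because it is performance-only.
function saveOptionsFor(compressionLevel) {
  switch (compressionLevel) {
    case "low":
      return { useObjectStreams: false };
    case "medium":
    case "high":
    default:
      return { useObjectStreams: true };
  }
}
```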
```js
try {
  const addresses = await dns.promises.lookup(hostname, { all: true, verbatim: true });
  for (const { address: ip } of addresses) {
    if (isPrivateIP(ip) || isLinkLocal(ip) || isLoopback(ip) || isMulticast(ip)) {
      throw new Error(`Rejected unsafe IP address: ${ip} (resolved from ${hostname})`);
    }
  }
  // Return the full list of validated addresses
  return addresses.length > 0 ? addresses : null;
} catch (err) {
  if (err.message.includes("Rejected")) {
    throw err;
  }
  // Suppress certain DNS errors to allow the request to proceed and let axios handle resolution.
  // ENOTFOUND: domain not found; ENODATA: no data in DNS response; ENOTIMP: not implemented.
  // EAI_AGAIN: temporary DNS failure - suppressed to avoid blocking on transient network issues,
  // though this could potentially be exploited in race conditions where DNS changes between
  // this check and the actual request. The trade-off favors availability over strict validation.
  if (!["ENOTFOUND", "ENODATA", "EAI_AGAIN", "ENOTIMP"].includes(err.code)) {
    throw err;
  }
  return null;
```
Don't continue unpinned when DNS prevalidation returns null.
If checkIPSafety() suppresses a lookup error, validateUrl() still accepts the URL with safeAddresses = null, and createPinnedAgents() then falls back to raw dns.lookup(). That makes the SSRF guard fail open exactly when name resolution is unstable. Treat safeAddresses === null as a blocked request instead of issuing the probe.
Suggested fix
async function validateUrl(targetUrl) {
const parsed = new URL(targetUrl);
if (parsed.protocol !== "http:" && parsed.protocol !== "https:") {
throw new Error(`Invalid scheme: ${parsed.protocol}. Only http and https are allowed.`);
}
const safeAddresses = await checkIPSafety(parsed.hostname);
+ if (!safeAddresses || safeAddresses.length === 0) {
+ throw new Error(`Unable to validate DNS for ${parsed.hostname}.`);
+ }
return { hostname: parsed.hostname, safeAddresses };
}Also applies to: 157-158, 249-250
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@backend/routes/redirectChecker.js` around lines 119 - 140, checkIPSafety
currently returns null on suppressed DNS errors but validateUrl and
createPinnedAgents treat null as “no info” and continue, allowing raw
dns.lookup() to run and bypass the SSRF check; update validateUrl to treat
safeAddresses === null as a validation failure (reject/throw) and stop request
processing, and update createPinnedAgents to not fallback to raw dns.lookup()
when passed safeAddresses === null (instead abort/throw), referencing the
safeAddresses variable and the functions checkIPSafety, validateUrl, and
createPinnedAgents so the flow blocks when DNS prevalidation is suppressed.
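The fail-closed behaviour described above can be sketched as a standalone function; `validateUrlStrict` is a hypothetical name, and `checkIPSafety` is passed in as a parameter (assumed to resolve to an address list, or null when the lookup was suppressed) so the sketch stays self-contained:

```javascript
// Sketch: treat a null result from DNS prevalidation as a block,
// never as "proceed with an unpinned request".
async function validateUrlStrict(targetUrl, checkIPSafety) {
  const parsed = new URL(targetUrl);
  if (parsed.protocol !== "http:" && parsed.protocol !== "https:") {
    throw new Error(`Invalid scheme: ${parsed.protocol}. Only http and https are allowed.`);
  }
  const safeAddresses = await checkIPSafety(parsed.hostname);
  if (!safeAddresses || safeAddresses.length === 0) {
    throw new Error(`Unable to validate DNS for ${parsed.hostname}.`);
  }
  return { hostname: parsed.hostname, safeAddresses };
}
```

With this contract, the agent-pinning step never needs a raw `dns.lookup()` fallback, because a request only reaches it with a non-empty validated address list.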
```jsx
// Cleanup object URLs to prevent memory leaks
useEffect(() => {
  return () => {
    if (imageSrc?.startsWith("blob:")) {
      URL.revokeObjectURL(imageSrc);
    }
    if (croppedImageSrc?.startsWith("blob:")) {
      URL.revokeObjectURL(croppedImageSrc);
    }
  };
}, [imageSrc, croppedImageSrc]);
```
🧩 Analysis chain
🏁 Script executed:
cd frontend/src/components && wc -l ImageCropper.jsx

Repository: gaureshpai/dkutils
Length of output: 82
🏁 Script executed:
cd frontend/src/components && cat -n ImageCropper.jsx

Repository: gaureshpai/dkutils
Length of output: 7384
Split blob URL cleanup into separate effects to prevent revoking the preview when only the crop output changes.
The combined useEffect at line 21 depends on both imageSrc and croppedImageSrc. When croppedImageSrc is updated (line 133), React runs the previous effect's cleanup function, which revokes both URLs from its closure—including the still-rendered imageSrc preview. This can invalidate the blob URL while the image is still displayed. Each URL should be revoked only when its own state changes.
♻️ Proposed fix
- // Cleanup object URLs to prevent memory leaks
- useEffect(() => {
- return () => {
- if (imageSrc?.startsWith("blob:")) {
- URL.revokeObjectURL(imageSrc);
- }
- if (croppedImageSrc?.startsWith("blob:")) {
- URL.revokeObjectURL(croppedImageSrc);
- }
- };
- }, [imageSrc, croppedImageSrc]);
+ useEffect(() => {
+ return () => {
+ if (imageSrc?.startsWith("blob:")) {
+ URL.revokeObjectURL(imageSrc);
+ }
+ };
+ }, [imageSrc]);
+
+ useEffect(() => {
+ return () => {
+ if (croppedImageSrc?.startsWith("blob:")) {
+ URL.revokeObjectURL(croppedImageSrc);
+ }
+ };
+ }, [croppedImageSrc]);🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ImageCropper.jsx` around lines 20 - 30, Current
combined useEffect cleans up both imageSrc and croppedImageSrc together, causing
imageSrc to be revoked when croppedImageSrc changes; split this into two
separate useEffect hooks: one that watches only imageSrc and in its cleanup
revokes imageSrc if it startsWith("blob:"), and another that watches only
croppedImageSrc and in its cleanup revokes croppedImageSrc if it
startsWith("blob:"); keep using URL.revokeObjectURL and the same safety checks
to avoid revoking non-blob URLs.
```jsx
if (Number.isNaN(width) || Number.isNaN(height) || width <= 0 || height <= 0) {
  toast.error("Please enter valid positive numbers for width and height.");
  return;
}
```
Bound the requested dimensions before creating the canvas.
The current validation only rejects non-positive values. Inputs like 40000 x 40000 will try to allocate gigabytes of pixel data and can freeze the tab. Add a max dimension and/or total-pixel cap before assigning canvas.width and canvas.height.
Possible guard
+const MAX_DIMENSION = 8192;
+const MAX_PIXELS = 4096 * 4096;
+
- if (Number.isNaN(width) || Number.isNaN(height) || width <= 0 || height <= 0) {
+ if (
+ Number.isNaN(width) ||
+ Number.isNaN(height) ||
+ width <= 0 ||
+ height <= 0 ||
+ width > MAX_DIMENSION ||
+ height > MAX_DIMENSION ||
+ width * height > MAX_PIXELS
+ ) {
toast.error("Please enter valid positive numbers for width and height.");
return;
}Also applies to: 95-96
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ImageResizer.jsx` around lines 73 - 76, Add upper
bounds to the dimension validation in ImageResizer.jsx: define sensible
constants (e.g. MAX_DIMENSION and MAX_PIXELS) and, in the same validation that
currently checks Number.isNaN(width/height) and >0 (the block around the if at
lines shown), also reject requests where width or height > MAX_DIMENSION or
where width * height > MAX_PIXELS; show a toast.error with a clear message and
return early to avoid setting canvas.width/canvas.height. Do the same guard
before the second canvas assignment mentioned (the block around lines 95-96) so
neither canvas assignment can allocate huge pixel buffers.
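The bounds check reduces to a pure guard that both canvas assignments can share; the limit values here are assumptions, not numbers from the PR:

```javascript
// Assumed limits - tune to the product's real ceiling.
const MAX_DIMENSION = 8192;
const MAX_PIXELS = 4096 * 4096;

function isSafeCanvasSize(width, height) {
  return (
    Number.isFinite(width) &&
    Number.isFinite(height) &&
    width > 0 &&
    height > 0 &&
    width <= MAX_DIMENSION &&
    height <= MAX_DIMENSION &&
    width * height <= MAX_PIXELS // caps total allocation, not just each edge
  );
}
```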
```jsx
const dataUrl = canvas.toDataURL(originalImage.type);
setResizedImageSrc(dataUrl);

handleDownload(dataUrl, `resized-${originalImage ? originalImage.name : "image"}`);
```
🧩 Analysis chain
🌐 Web query:
Does HTMLCanvasElement.toDataURL(type) fall back to PNG when the requested image MIME is unsupported, and what is the recommended way to determine the actual MIME returned so the download extension matches?
💡 Result:
Yes, HTMLCanvasElement.toDataURL(type) falls back to PNG (image/png) when the requested MIME type is unsupported. Browsers are required to support PNG and must use it as the default or fallback for unsupported types. The recommended way to determine the actual MIME type returned is to inspect the data URL string prefix: if it starts with "data:image/png," or "data:image/png;", the output is PNG (indicating fallback); otherwise it matches the requested type. Example code:

const dataURL = canvas.toDataURL('image/jpeg');
const mimeType = dataURL.match(/^data:([^;]+)/)?.[1] || 'unknown';
const link = document.createElement('a');
link.download = `image.${mimeType.split('/')[1]}`;
link.href = dataURL;
link.click();

This ensures the download filename extension (e.g., .jpg, .png) matches the actual format.
Citations:
- 1: https://developer.mozilla.org/en-US/docs/Web/API/HTMLCanvasElement/toDataURL
- 2: https://stackoverflow.com/questions/28544336/what-are-the-possible-data-types-for-canvas-todataurl-function
- 3: https://udn.realityripple.com/docs/Web/API/HTMLCanvasElement/toDataURL
- 4: https://stackoverflow.com/questions/23340752/detect-browser-support-for-image-type-in-canvas-todataurl
Build the download filename from the actual canvas output format, not the source file extension.
canvas.toDataURL(originalImage.type) falls back to PNG when the browser does not support the requested MIME type. When this fallback occurs, the generated bytes are PNG-encoded but the download uses the original filename extension, creating a file format mismatch. Extract the actual MIME type from the data URL (by parsing the prefix like data:image/png;) and construct the filename with the correct extension.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ImageResizer.jsx` around lines 104 - 107, The
download filename uses the source file's extension even when
canvas.toDataURL(originalImage.type) falls back to a different MIME type; after
creating dataUrl from canvas.toDataURL(...) (where dataUrl is assigned and
setResizedImageSrc is called), parse the MIME from the data URL prefix (e.g.,
extract between "data:" and ";") to determine the actual image type (like
image/png), map that MIME to the appropriate extension (png, jpeg, webp, etc.),
and then call handleDownload(dataUrl, `resized-${originalImage ?
originalImage.nameWithoutExtensionOrBaseName : "image"}.${ext}`) so the
downloaded filename extension matches the real canvas output format.
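The prefix parsing the prompt describes reduces to a small pure function; `extensionFromDataUrl` is an illustrative name:

```javascript
// Parse the MIME the canvas actually produced and map it to an extension;
// browsers silently fall back to image/png for unsupported types, so the
// default here matches the spec-mandated fallback.
function extensionFromDataUrl(dataUrl) {
  const mime = dataUrl.match(/^data:([^;,]+)/)?.[1] || "image/png";
  const subtype = mime.split("/")[1] || "png";
  return subtype === "jpeg" ? "jpg" : subtype;
}
```

The download call then becomes `handleDownload(dataUrl, \`resized-image.${extensionFromDataUrl(dataUrl)}\`)` (or the source basename plus the detected extension).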
```jsx
const onSubmit = async (e) => {
  e.preventDefault();
  if (!selectedFile) {
    toast.error("Please select an image file first.");
    return;
  }

  setLoading(true);
  trackToolUsage("ImageToBase64Converter", "image");
  const formData = new FormData();
  formData.append("image", selectedFile);
  setLoading(true);
  const formData = new FormData();
  formData.append("image", selectedFile);

  try {
    const res = await axios.post(
      `${import.meta.env.VITE_API_BASE_URL}/api/convert/image-to-base64`,
      formData,
      {
        headers: {
          "Content-Type": "multipart/form-data",
        },
      },
    );
    const base64 = res.data.base64;
    setBase64String(base64);
  try {
    const res = await axios.post(
      `${import.meta.env.VITE_API_BASE_URL}/api/convert/image-to-base64`,
      formData,
      {
        headers: {
          "Content-Type": "multipart/form-data",
        },
      },
    );
    const base64 = res.data.base64;
    setBase64String(base64);

    handleDownload(base64, `image-base64-${Date.now()}.txt`);
  } catch (err) {
    console.error(err);
    toast.error(
      err.response?.data?.msg ||
        "Error converting image to Base64. Please try again.",
    );
  } finally {
    setLoading(false);
  }
};
    handleDownload(base64, `image-base64-${Date.now()}.txt`);
    trackToolUsage("ImageToBase64Converter", "image");
  } catch (err) {
    console.error(err);
    toast.error(err.response?.data?.msg || "Error converting image to Base64. Please try again.");
  } finally {
    setLoading(false);
  }
};
```
Race condition: stale conversion can still update state and trigger download.
The past review flagged that selectedFile can change while a POST is in flight, causing an older response to still set state and trigger a download. The suggested minimal fix (disable file input during loading + early return if loading) was not implemented.
🛡️ Minimal safeguard
const onSubmit = async (e) => {
e.preventDefault();
+ if (loading) return;
   if (!selectedFile) {

 <input
className="block w-full text-sm text-foreground border border-input rounded-lg cursor-pointer bg-muted/30 focus:outline-none file:mr-4 file:py-2 file:px-4 file:rounded-md file:border-0 file:text-sm file:font-semibold file:bg-primary/10 file:text-primary hover:file:bg-primary/10"
id="image_file"
type="file"
+ disabled={loading}
onChange={onFileChange}
accept="image/*"
 />

Also applies to: 96-102
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/ImageToBase64Converter.jsx` around lines 42 - 74,
Prevent stale responses by early-exiting and locking the form: in onSubmit
immediately return if loading is true, capture a local const (e.g., const file =
selectedFile) before awaiting the POST so response handling uses that snapshot,
and set the file input's disabled prop to loading so users cannot change
selectedFile while a request is in flight; only call setBase64String,
handleDownload(base64, ...), and trackToolUsage when file is still the same
snapshot and after confirming the request succeeded, and ensure setLoading(true)
is called before the request and setLoading(false) in finally as already
present.
```jsx
if (includeUppercase) charset += "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
if (includeLowercase) charset += "abcdefghijklmnopqrstuvwxyz";
if (includeNumbers) charset += "0123456789";
if (includeSymbols) charset += "!@#$%^&*()_+-=[]{};:,.<>?";

if (charset === "") {
  setPassword("Please select at least one option.");
  setLoading(false);
  return;
}
if (charset === "") {
  setPassword("");
  setError("Please select at least one option.");
  return;
}

// Validate length to prevent NaN or invalid values
const validLength =
  isNaN(length) || length < 4 ? 4 : Math.min(length, 32);
const validLength = Number.isNaN(length) || length < 4 ? 4 : Math.min(length, 32);
const randomValues = new Uint32Array(validLength);
crypto.getRandomValues(randomValues);

// Use Web Crypto API for cryptographically secure random generation
const randomValues = new Uint32Array(validLength);
crypto.getRandomValues(randomValues);
for (let i = 0; i < validLength; i++) {
  newPassword += charset.charAt(randomValues[i] % charset.length);
```
Guarantee each selected character class appears at least once.
This loop samples from one merged charset, so users can enable numbers or symbols and still get a password without them. For a password generator, that breaks common policy expectations. Pick one character from each enabled pool first, fill the remainder from the merged pool, then shuffle.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@frontend/src/components/PasswordGenerator.jsx` around lines 24 - 40, The
generator currently builds a merged charset and samples only from it, which can
omit enabled classes; modify the logic in the PasswordGenerator component to
first collect one random character from each enabled class (use the pools
referenced by includeUppercase, includeLowercase, includeNumbers,
includeSymbols), then fill the remaining characters up to validLength from the
merged charset variable, using crypto.getRandomValues(randomValues) for
randomness, push results into newPassword array (not string) and finally shuffle
the array before joining to guarantee every selected class appears at least
once; ensure you still respect the validLength bounds and existing random
generation flow (validLength, randomValues, crypto.getRandomValues).
Summary by CodeRabbit
New Features
Improvements
Documentation
Chores