📝 Walkthrough

Large feature release: adds lending, savings, launchpad/allocation, dividends, credit scoring, staking, swap/pool improvements, many new Mongoose models, robust middleware (security, rate limiting, request context, admin), centralized logging/error handling, expanded routes/controllers, deployment and operational scripts, and extensive docs and OpenAPI additions.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User as Frontend
    participant API as API Server
    participant Auth as Auth Middleware
    participant Lending as LendingService
    participant Credit as CreditScoreService
    participant DB as MongoDB
    participant Chain as Stellar/Horizon
    User->>API: POST /v1/lending/pools/:poolId/borrow (payload)
    API->>Auth: validate token
    Auth-->>API: user identity
    API->>Lending: borrow(poolId, userId, collateral, amount)
    Lending->>DB: fetch LendingPool & positions
    Lending->>Credit: getScore(userId)
    Credit->>DB: read CreditScore
    Credit-->>Lending: score
    Lending->>Chain: fetch prices / check liquidity
    Lending->>DB: create BorrowPosition & update totals
    Lending-->>API: response (position, rates, fees)
    API-->>User: 201 Created
```
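The borrow step above gates on collateral and a credit score before creating a BorrowPosition. As a rough illustration of that gate (the thresholds, field names, and `canBorrow` helper are hypothetical, not the actual LendingService logic):

```typescript
// Hypothetical pre-borrow gate mirroring the flow above: the request must
// clear a minimum credit score and an over-collateralization ratio.
interface BorrowRequest {
  collateralValue: number; // collateral priced in the pool's base asset
  borrowAmount: number;
  creditScore: number;     // 0-100, as stored by CreditScoreService
}

function canBorrow(
  req: BorrowRequest,
  minCollateralRatio = 1.5, // assumed 150% over-collateralization
  minScore = 40             // assumed score floor
): boolean {
  if (req.borrowAmount <= 0) return false;
  if (req.creditScore < minScore) return false;
  return req.collateralValue >= req.borrowAmount * minCollateralRatio;
}
```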
```mermaid
sequenceDiagram
    participant User as Frontend
    participant API as API Server
    participant LaunchSvc as LaunchService
    participant Engage as EngagementService
    participant Allocate as AllocationService
    participant DB as MongoDB
    User->>API: PATCH /v1/launchpad/launches/:id/status (allocation_running)
    API->>LaunchSvc: transitionStatus(launchId, allocation_running)
    LaunchSvc->>DB: fetch & update Launch
    API->>Engage: snapshotEngagement(launchId)
    Engage->>DB: compute scores & update Participations
    API->>Allocate: runDesign1(launchId)
    Allocate->>DB: read Participations
    Allocate->>DB: update allocations & Launch listingPrice
    Allocate-->>API: allocation result
    API-->>User: 200 OK
```
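runDesign1 reads Participations and writes allocations; one plausible shape for such a step is pro-rata distribution over engagement-weighted commitments. A sketch under that assumption (the actual Design 1 algorithm is not shown in this diff, and the field names are illustrative):

```typescript
// Hypothetical pro-rata allocation: each participant receives a share of
// totalTokens proportional to their weight.
interface Participation {
  userId: string;
  weight: number; // e.g. committed Pi scaled by an engagement score
}

function allocateProRata(parts: Participation[], totalTokens: number): Map<string, number> {
  const totalWeight = parts.reduce((sum, p) => sum + p.weight, 0);
  const allocations = new Map<string, number>();
  if (totalWeight <= 0) return allocations; // nothing committed, nothing allocated
  for (const p of parts) {
    allocations.set(p.userId, (p.weight / totalWeight) * totalTokens);
  }
  return allocations;
}
```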
```mermaid
sequenceDiagram
    participant User as Frontend
    participant API as API Server
    participant SwapSvc as SwapService
    participant PoolSvc as PoolService
    participant HorizonQ as HorizonQueue
    participant AccountSvc as AccountService
    User->>API: POST /v1/swap/execute (from,to,amount,secret)
    API->>SwapSvc: executeSwap(...)
    SwapSvc->>PoolSvc: getPoolsForPair(...)
    PoolSvc->>HorizonQ: fetch pool reserves
    SwapSvc->>AccountSvc: loadAccountWithFallback(publicKey)
    SwapSvc->>SwapSvc: build transaction (compute slippage)
    SwapSvc->>HorizonQ: submit transaction (HTTP)
    HorizonQ-->>SwapSvc: tx result
    SwapSvc->>SwapSvc: collect platform fee (optional)
    SwapSvc->>AccountSvc: clear caches
    SwapSvc-->>API: { txHash, ledger }
    API-->>User: 200 OK
```
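The "compute slippage" step in the swap flow corresponds, for AMM pools, to constant-product pricing; Stellar liquidity pools charge a 0.30% fee. A sketch of the standard formula and a minimum-received floor (this is the textbook math, not the actual SwapService implementation):

```typescript
// Constant-product swap output: given pool reserves and an input amount,
// compute the output after the pool fee, then apply a slippage tolerance
// to derive the minimum acceptable output for the transaction.
function swapOutput(
  amountIn: number,
  reserveIn: number,
  reserveOut: number,
  feeBps = 30 // Stellar AMM pools use 30 basis points
): number {
  const amountInAfterFee = amountIn * (1 - feeBps / 10_000);
  return (amountInAfterFee * reserveOut) / (reserveIn + amountInAfterFee);
}

function minReceived(expectedOut: number, slippageTolerance = 0.005): number {
  return expectedOut * (1 - slippageTolerance);
}
```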
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 2 failed

❌ Failed checks (2 warnings)

✅ Passed checks (1 passed)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing Touches
- 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 27
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
services/liquidity-pools.service.ts (1)
467-485: ⚠️ Potential issue | 🔴 Critical

Stale sequence number: account loaded before trustline transactions but reused for deposit.

The `account` object is loaded at Line 391, before the pool-share trustline transaction (Lines 416-465). After the trustline tx is submitted, the account's sequence number has been incremented on-chain, but the local `account` object still holds the old sequence. Building the deposit transaction at Line 471 with the same `account` object will use the old sequence number, causing the deposit transaction to fail with `tx_bad_seq`.

Proposed fix — reload account before building the deposit tx:

```diff
+ // Reload account to get fresh sequence number after trustline tx
+ const freshAccount = await server.loadAccount(publicKey);
+
- const tx = new StellarSdk.TransactionBuilder(account, {
+ const tx = new StellarSdk.TransactionBuilder(freshAccount, {
    fee: baseFee.toString(),
    networkPassphrase: env.NETWORK,
  })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/liquidity-pools.service.ts` around lines 467 - 485: the build of the deposit transaction uses a stale Account object (variable `account`) that was loaded before submitting the pool-share trustline transaction, causing tx_bad_seq; reload the account from the ledger after the trustline tx completes and before creating the StellarSdk.TransactionBuilder for the liquidityPoolDeposit (i.e., refresh the `account` variable or call server.loadAccount(accountId) right after the trustline submission and before calling StellarSdk.TransactionBuilder/Operation.liquidityPoolDeposit) so the new transaction uses the updated sequence number.

routes/toml.routes.ts (1)
28-74: ⚠️ Potential issue | 🟠 Major

Escape TOML string values to prevent invalid output.

The `formatString()` function only wraps values in quotes but doesn't escape special characters. Per the TOML specification, double quotes (`"`), backslashes (`\`), and control characters like newlines must be escaped in basic strings. If any token data (name, description, issuer, etc.) contains these characters, the generated TOML will be invalid and fail to parse.

🔧 Suggested fix

```diff
- const formatString = (value: any): string => {
-   return `"${value}"`;
- };
+ const formatString = (value: any): string => {
+   const escaped = String(value)
+     .replace(/\\/g, '\\\\')
+     .replace(/"/g, '\\"')
+     .replace(/\n/g, '\\n');
+   return `"${escaped}"`;
+ };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@routes/toml.routes.ts` around lines 28 - 74, The formatString function currently only wraps values in quotes and must escape TOML basic-string special characters to avoid invalid output; update formatString(value: any) to convert value to string, replace backslashes (\) with \\ , double quotes (") with \" , and control characters (newline, tab, carriage return, etc.) with their escape sequences (e.g., \n, \t, \r) before returning `"${escaped}"`; ensure this escaped formatting is used for all token fields referenced (formatString usages for token.assetCode, token.issuer, token.name, token.description, token.imgUrl, token.anchorAssetType, token.conditions, token.status, token.redemptionInstructions) so generated TOML is valid per the TOML spec.
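The escaping from the suggested fix can be pulled into a standalone pure helper. A sketch that additionally handles carriage returns and tabs (those extra control-character cases go beyond the suggested diff and are an addition):

```typescript
// Escape a value for a TOML basic string ("..."): backslash and double
// quote must be escaped, and control characters written as escape
// sequences, per the TOML spec.
function formatTomlString(value: unknown): string {
  const escaped = String(value)
    .replace(/\\/g, "\\\\")
    .replace(/"/g, '\\"')
    .replace(/\n/g, "\\n")
    .replace(/\r/g, "\\r")
    .replace(/\t/g, "\\t");
  return `"${escaped}"`;
}
```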
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (118)

.env.example, .gitignore, BRANCHES.md, PiRC, app.ts, config/db.ts, config/env.ts, config/fees.ts, config/lending.ts, config/platiform.config.ts, config/stellar.ts, controllers/account/index.ts, controllers/account/send.ts, controllers/admin/logs.controller.ts, controllers/dividend/index.ts, controllers/fees/index.ts, controllers/launchpad/index.ts, controllers/lending/index.ts, controllers/liquidity-pools/index.ts, controllers/orderbook/index.ts, controllers/orderbook/search-assets.ts, controllers/savings/index.ts, controllers/swap/index.ts, controllers/tokens/index.ts, controllers/trade-analytics/index.ts, controllers/trade/index.ts, controllers/transaction-history/index.ts, controllers/users/index.ts, docs/LENDING.md, docs/WALLET_CHANGE_FRONTEND.md, documentation/account.doc.json, documentation/index.ts, documentation/passkey.doc.json, documentation/trade.doc.json, middlewares/error-handler.ts, middlewares/isAdmin.ts, middlewares/isAuthenticated.ts, middlewares/launchpad-guard.ts, middlewares/login-rate-limiter.ts, middlewares/rateLimiter.ts, middlewares/request-context.ts, middlewares/security.ts, models/BalanceCache.ts, models/BorrowPosition.ts, models/CreditScore.ts, models/DividendHolderSnapshot.ts, models/DividendRound.ts, models/EngagementEvent.ts, models/Fee.ts, models/Launch.ts, models/LendingPool.ts, models/Log.ts, models/LoginAttempt.ts, models/Participation.ts, models/PoolCache.ts, models/RateLimit.ts, models/SavingsPosition.ts, models/SavingsProduct.ts, models/SupplyPosition.ts, models/TransactionCache.ts, models/User.ts, package.json, render.yaml, routes/account.routes.ts, routes/admin.routes.ts, routes/dividend.routes.ts, routes/fee.routes.ts, routes/index.ts, routes/launchpad.routes.ts, routes/lending.routes.ts, routes/liquidity-pools.routes.ts, routes/orderbook.routes.ts, routes/pairs.routes.ts, routes/savings.routes.ts, routes/swap.routes.ts, routes/token.routes.ts, routes/toml.routes.ts, routes/trade.routes.ts, routes/user.routes.ts, scripts/cleanup-old-wallet-data.ts, scripts/create-zyra-token-and-pool-direct.ts, scripts/create-zyra-token-and-pool.ts, scripts/generate-secret-from-mnemonic.ts, scripts/init-db.ts, scripts/test-all-endpoints.ts, scripts/test-endpoints.ts, scripts/test-mint-with-token.ts, services/account.service.ts, services/allocation.service.ts, services/credit-score.service.ts, services/dividend.service.ts, services/engagement.service.ts, services/escrow.service.ts, services/fee.service.ts, services/jwt.service.ts, services/launch.service.ts, services/launchpad.service.ts, services/lending.service.ts, services/liquidity-pools.service.ts, services/logging.service.ts, services/orderbook.service.ts, services/price.service.ts, services/savings.service.ts, services/staking-adapter.ts, services/staking.service.ts, services/swap.service.ts, services/token.service.ts, services/trade-analytics.service.ts, services/trade.service.ts, services/transaction-history.service.ts, services/users.service.ts, services/wallet.service.ts, tsconfig.json, types.ts, utils/asset.ts, utils/horizon-queue.ts, utils/keypair.ts, utils/logger.ts
💤 Files with no reviewable changes (1)
- .env.example
```ts
export const getMyPiPower = async (req: Request, res: Response) => {
  try {
    const launchId = getLaunchIdParam(req);
    const userId = (req as any).user?.uid ?? req.query.userId ?? req.body?.userId;
    if (!userId) return res.status(400).json({ message: 'userId required (or authenticated user)' });
    const data = await stakingService.getPiPowerForUser(launchId, String(userId));
    return res.status(200).json(data);
  } catch (error: any) {
    logger.error('getMyPiPower failed:', error);
    return res.status(500).json({ message: error?.message ?? 'Failed to get PiPower' });
  }
};
```
🧩 Analysis chain

🏁 Script executed:

```shell
find . -type f -name "*launchpad*" | head -20
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 166

🏁 Script executed:

```shell
git ls-files | grep -i launchpad
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 191

🏁 Script executed:

```shell
cat -n routes/launchpad.routes.ts
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 1491

🏁 Script executed:

```shell
cat -n middlewares/launchpad-guard.ts
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 894

🏁 Script executed:

```shell
cat -n controllers/launchpad/index.ts | head -150
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 7877
These endpoints allow arbitrary userId specification without authentication, creating an authorization bypass.
The routes for getMyPiPower (line 11), commitPi (line 12), and recordEngagement (line 13) in routes/launchpad.routes.ts are not protected by authentication middleware. All three endpoints in controllers/launchpad/index.ts fall back to req.query.userId or req.body.userId when (req as any).user?.uid is absent, allowing any unauthenticated caller to impersonate any user and perform actions like committing Pi or recording engagement.
Apply authentication middleware to these routes or remove the unauthenticated fallback paths.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controllers/launchpad/index.ts` around lines 95 - 106, The three controller
endpoints getMyPiPower, commitPi, and recordEngagement accept unauthenticated
userId from req.query or req.body — remove that fallback and require the
authenticated user ID only, and ensure the routes are protected by the
authentication middleware: update routes/launchpad.routes.ts to attach the auth
middleware to the routes that call getMyPiPower, commitPi, and recordEngagement,
and modify the controller functions (getMyPiPower, commitPi, recordEngagement in
controllers/launchpad/index.ts) to obtain userId only from (req as any).user.uid
and return a 401/400 if missing instead of using req.query.userId or
req.body.userId.
```ts
export const supply = async (req: Request, res: Response) => {
  try {
    const poolId = getPoolIdParam(req);
    const userId = (req as any).user?.id ?? req.body?.userId;
    if (!userId) return res.status(400).json({ message: 'userId required (auth or body)' });
    const { amount } = req.body || {};
    if (!amount) return res.status(400).json({ message: 'Missing required: amount' });
    const position = await lendingService.supply(poolId, String(userId), String(amount));
    return res.status(200).json(position);
  } catch (error: any) {
    logger.error('supply failed:', error);
    return res.status(400).json({ message: error?.message || 'Supply failed' });
  }
};
```
User impersonation via req.body.userId — any client can act as any user.
The pattern (req as any).user?.id ?? req.body?.userId (repeated in supply, withdraw, borrow, getPositions, liquidate, getCreditScore) falls back to a client-supplied userId when no authenticated user is present. Since the routes have no auth middleware, any client can supply, withdraw, borrow, or liquidate on behalf of any user.
For financial endpoints, userId must come exclusively from the authenticated session. The body fallback should be removed or restricted to admin/service-to-service calls only.
🔒️ Proposed fix

```diff
- const userId = (req as any).user?.id ?? req.body?.userId;
- if (!userId) return res.status(400).json({ message: 'userId required (auth or body)' });
+ const userId = (req as any).user?.id;
+ if (!userId) return res.status(401).json({ message: 'Authentication required' });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controllers/lending/index.ts` around lines 49 - 62, The handlers (e.g.,
supply) allow user impersonation by falling back to req.body.userId; change them
to require the authenticated user only: use (req as any).user?.id as the sole
source of userId, remove the req.body.userId fallback, and return an appropriate
401/403 error if no authenticated user is present; apply the same change to
withdraw, borrow, getPositions, liquidate, and getCreditScore and ensure calls
like lendingService.supply receive String(userId) only after verifying
authentication (or gate the body-based override behind a distinct
admin/service-auth check).
```ts
/** Set credit score (e.g. by backend/admin). Score 0-100. */
export const setCreditScore = async (req: Request, res: Response) => {
  try {
    const { userId, score } = req.body || {};
    if (!userId || score == null) return res.status(400).json({ message: 'Missing required: userId, score' });
    const result = await creditScoreService.setScore(String(userId), Number(score));
    return res.status(200).json(result);
  } catch (error: any) {
    logger.error('setCreditScore failed:', error);
    return res.status(400).json({ message: error?.message || 'Failed to set credit score' });
  }
};
```
setCreditScore lacks authorization — allows arbitrary score manipulation.
Any unauthenticated client can set any user's credit score to 100, bypassing all credit checks and potentially borrowing unlimited amounts. This endpoint must be restricted to admin or internal-service callers only.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controllers/lending/index.ts` around lines 174 - 185, The setCreditScore
controller currently accepts any request and must enforce authorization; update
the setCreditScore function to validate the caller is an admin or trusted
internal service before processing (e.g., check req.user.role or an
authenticated token/scoped claim and/or a configured internal-service header/API
key), return 403 when unauthorized, and only then call
creditScoreService.setScore; reference the setCreditScore handler and integrate
with the existing auth middleware/util (or add a short role check) so only
admin/internal callers can change scores.
```ts
// Clear login attempts on successful login
await clearLoginAttempts(req, { ip, username, uid });
```
If clearLoginAttempts throws, a successful sign-in is incorrectly reported as a failure.
After signInUser succeeds on Line 13 and a valid { user, token } is produced, an exception from clearLoginAttempts on Line 16 would fall into the catch block. This would:
- Record a failed login attempt (Line 24) despite successful authentication.
- Return an error response to the client, discarding the valid token.
Wrap this call in its own try-catch so a cleanup failure doesn't derail a successful sign-in.
🐛 Suggested fix

```diff
  const { user, token } = await usersService.signInUser(authResult);
- // Clear login attempts on successful login
- await clearLoginAttempts(req, { ip, username, uid });
+ // Clear login attempts on successful login (best-effort)
+ try {
+   await clearLoginAttempts(req, { ip, username, uid });
+ } catch (clearErr) {
+   logger.error('Failed to clear login attempts after successful sign-in:', clearErr);
+ }
  return res.status(200).json({
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```ts
// Clear login attempts on successful login (best-effort)
try {
  await clearLoginAttempts(req, { ip, username, uid });
} catch (clearErr) {
  logger.error('Failed to clear login attempts after successful sign-in:', clearErr);
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controllers/users/index.ts` around lines 15 - 17, After signInUser returns {
user, token }, wrap the call to clearLoginAttempts(req, { ip, username, uid })
in its own try-catch so any exception from clearLoginAttempts does not propagate
to the outer catch and cause recordLoginAttempt or an error response; on failure
of clearLoginAttempts, log the error (using the existing logger or
console.error) and continue to return the successful { user, token } response,
ensuring you do not call recordLoginAttempt or alter the HTTP response for a
cleanup failure.
docs/LENDING.md (Outdated)

> - **Who:** Any user (liquidator) who repays part or all of the debt on behalf of the borrower.
> - **How:**
>   1. Liquidator calls **`POST /lending/positions/:borrowPositionId/liquidate`** with body `{ repayAmount, userId }` (userId = liquidator).
🧩 Analysis chain

🏁 Script executed:

```shell
# Find lending-related controller files
find . -type f -name "*lending*" -o -name "*controller*" | grep -E "(lending|controller)" | head -20
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 202

🏁 Script executed:

```shell
# Search for liquidate endpoint implementation
rg -n -A 25 "liquidate" --type=ts --type=js | grep -E "(controller|route|handler|POST)" -A 20
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 4861

🏁 Script executed:

```shell
# More specific search in controllers directory
find . -path "*/controllers*" -name "*lending*" -type f
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 55

🏁 Script executed:

```shell
# Check the lending routes setup and middleware
cat -n routes/lending.routes.ts
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 1215

🏁 Script executed:

```shell
# Look for middleware setup at the top of the routes file
head -30 routes/lending.routes.ts
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 1082

🏁 Script executed:

```shell
# Search for authentication middleware usage
rg -n "middleware|auth|auth\.js|authMiddleware" routes/ -A 3 -B 3
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 4123

🏁 Script executed:

```shell
# Check how lending routes are mounted in the main router
rg -n "lendingRoutes" routes/index.ts -B 5 -A 5
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 808

🏁 Script executed:

```shell
# Check if there's a global middleware that applies authentication to lending
cat -n routes/index.ts | head -50
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 1592

🏁 Script executed:

```shell
# Check if there's global middleware in the app setup
find . -name "*.ts" -o -name "*.js" | xargs grep -l "app.use.*middleware" | head -5
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 55

🏁 Script executed:

```shell
# Check the main app/server file
find . -name "app.ts" -o -name "server.ts" -o -name "index.ts" | grep -v node_modules | head -5
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 205

🏁 Script executed:

```shell
# Look at main server setup to see if authentication is global
rg -n "isAuthenticated|middleware" routes/index.ts -B 3 -A 3
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 597

🏁 Script executed:

```shell
# Check main app.ts to see middleware setup
cat -n app.ts | head -100
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 2785

🏁 Script executed:

```shell
# Check if isAuthenticated middleware is applied globally or per route
rg -n "isAuthenticated" app.ts -B 2 -A 2
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 55
Critical: Liquidate endpoint lacks authentication middleware, allowing unauthorized liquidations.
The liquidate endpoint (POST /lending/positions/:borrowPositionId/liquidate) is mounted without authentication middleware. Line 31 in routes/index.ts mounts lending routes with appRoutes.use("/lending", lendingRoutes) — no isAuthenticated middleware is applied, unlike protected routes such as tokens.
The controller code attempts a fallback pattern (line 132 in controllers/lending/index.ts): const liquidatorUserId = (req as any).user?.id ?? req.body?.userId. Without auth middleware, req.user?.id will always be undefined, causing the code to trust the client-supplied userId from the request body. This allows any unauthenticated user to liquidate positions on behalf of any other user.
Add the isAuthenticated middleware to the lending routes:

```ts
appRoutes.use("/lending", isAuthenticated, lendingRoutes)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/LENDING.md` at line 24, The lending routes are mounted without
authentication so the controller fallback (const liquidatorUserId = (req as
any).user?.id ?? req.body?.userId in controllers/lending/index.ts) trusts
client-supplied userId; update the router setup to enforce auth by adding the
isAuthenticated middleware when mounting lending routes (replace
appRoutes.use("/lending", lendingRoutes) with appRoutes.use("/lending",
isAuthenticated, lendingRoutes)) so POST
/lending/positions/:borrowPositionId/liquidate can only be called by
authenticated users.
```ts
feeTxBuilder.addOperation(
  StellarSdk.Operation.payment({
    destination: env.PI_TEST_USER_PUBLIC_KEY,
    asset: StellarSdk.Asset.native(),
    amount: feeAmount,
  })
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check how PI_TEST_USER_PUBLIC_KEY is defined and if there's a separate PLATFORM_FEE_PUBLIC_KEY
rg -n "PI_TEST_USER_PUBLIC_KEY|PLATFORM_FEE_PUBLIC_KEY" --type=ts -g '!node_modules'
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 1419

🏁 Script executed:

```shell
#!/bin/bash
# Check the environment configuration and fee documentation
echo "=== config/env.ts ==="
head -50 config/env.ts
echo -e "\n=== config/fees.ts ==="
cat config/fees.ts
echo -e "\n=== config/lending.ts ==="
head -50 config/lending.ts
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 4474
Replace PI_TEST_USER_PUBLIC_KEY with PLATFORM_FEE_PUBLIC_KEY for platform fee recipient.
Platform fees are incorrectly sent to env.PI_TEST_USER_PUBLIC_KEY (a test/faucet account). The codebase explicitly documents that all fees should be sent to env.PLATFORM_FEE_PUBLIC_KEY (see config/env.ts and config/fees.ts). This issue affects multiple service files:
- services/liquidity-pools.service.ts:166
- services/token.service.ts:281, 346
- services/swap.service.ts:239
- controllers/tokens/index.ts:30
Use the existing getPlatformFeePublicKey() helper function from config/lending.ts or reference env.PLATFORM_FEE_PUBLIC_KEY directly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/liquidity-pools.service.ts` around lines 164 - 169, Replace the
test/faucet recipient env.PI_TEST_USER_PUBLIC_KEY used as the payment
destination for platform fees with the real platform fee recipient; update the
Stellar payment operations (e.g., the payment call inside
feeTxBuilder.addOperation where destination is env.PI_TEST_USER_PUBLIC_KEY) to
use either getPlatformFeePublicKey() from config/lending.ts or
env.PLATFORM_FEE_PUBLIC_KEY directly so all platform fees route to the intended
PLATFORM_FEE_PUBLIC_KEY; apply the same replacement in the other occurrences
noted (token.service, swap.service, controllers) to ensure consistency.
```ts
async getTradeAggregations(
  base: StellarSdk.Asset,
  counter: StellarSdk.Asset,
  resolution: number = 3600000, // 1 hour in milliseconds
  startTime?: Date,
  endTime?: Date,
  limit: number = 24
) {
  const builder = server
    .tradeAggregations()
    .forBaseAsset(base)
    .forCounterAsset(counter)
    .resolution(resolution)
    .limit(limit);

  if (startTime) {
    builder.startTime(startTime);
  }
  if (endTime) {
    builder.endTime(endTime);
  }

  const res = await builder.call();
  return res.records.map((agg: any) => ({
    timestamp: agg.timestamp,
    trade_count: Number(agg.trade_count),
    base_volume: agg.base_volume,
    counter_volume: agg.counter_volume,
    avg: agg.avg,
    high: agg.high,
    low: agg.low,
    open: agg.open,
    close: agg.close,
  }));
}
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Check package.json for `@stellar/stellar-sdk` version
if [ -f package.json ]; then
  echo "=== Package.json SDK dependency ==="
  cat package.json | jq '.dependencies["@stellar/stellar-sdk"], .devDependencies["@stellar/stellar-sdk"]' 2>/dev/null || cat package.json | grep -i stellar
fi
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 123

🏁 Script executed:

```shell
#!/bin/bash
# Search for tradeAggregation usage patterns
echo "=== tradeAggregation usage in codebase ==="
rg -n 'tradeAggregation' --type=ts --type=js -C 3
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 517
🌐 Web query:
Stellar SDK @stellar/stellar-sdk Horizon server tradeAggregations API documentation
💡 Result:
@stellar/stellar-sdk (JS) — Horizon.Server.tradeAggregation(...)
In the JS Stellar SDK, the Horizon call is:
server.tradeAggregation(base, counter, start_time, end_time, resolution, offset)
It returns a TradeAggregationCallBuilder. Parameter details (incl. supported resolution values and offset constraints) are documented in the generated SDK docs for Horizon.Server. [1]
Horizon REST API — GET /trade_aggregations
Horizon’s endpoint is:
GET /trade_aggregations
It takes query parameters for the asset pair plus time window/aggregation settings, including (commonly used): base_asset_type/base_asset_code/base_asset_issuer, counter_asset_type/counter_asset_code/counter_asset_issuer, start_time, end_time, resolution, and optional offset (plus paging/sort like limit/order). [2][3]
Response shape (“trade aggregation object”)
Fields include timestamp, trade_count, base_volume, counter_volume, avg, high/high_r, low/low_r, open/open_r, close/close_r. [4]
Sources: [1] [2] [3] [4]
🌐 Web query:
@stellar/stellar-sdk TradeAggregationCallBuilder forBaseAsset forCounterAsset builder pattern methods
💡 Result:
In @stellar/stellar-sdk, TradeAggregationCallBuilder does not expose builder methods like forBaseAsset() / forCounterAsset(). The base and counter assets are provided up-front when you create the builder via server.tradeAggregation(base, counter, start_time, end_time, resolution, offset). [1]
After creation, the chainable “builder pattern” methods you can use are the standard CallBuilder ones: cursor(), limit(), order(), plus call() (execute once) and stream() (SSE). [2]
```ts
import { Horizon, Asset } from "@stellar/stellar-sdk";

const server = new Horizon.Server("https://horizon.stellar.org");
const base = Asset.native(); // XLM
const counter = new Asset("USDC", "GA5ZSEJYB37JRC5AVCIA5MOP4RHTM335X2KGX3IHOJAPP5RE34K4KZVN");
const start_time = Date.now() - 24 * 60 * 60 * 1000; // ms since epoch
const end_time = Date.now();
const resolution = 60_000; // 1 minute (ms)
const offset = 0;

const page = await server
  .tradeAggregation(base, counter, start_time, end_time, resolution, offset) // sets base/counter here
  .order("asc")
  .limit(200)
  .cursor("now")
  .call();
```

Sources: [1] [2]
🌐 Web query:
@stellar/stellar-sdk tradeAggregation singular vs plural method Horizon server
💡 Result:
In @stellar/stellar-sdk (JS), the Horizon trade_aggregations endpoint is accessed via the singular method:

`server.tradeAggregation(base, counter, start_time, end_time, resolution, offset)` → returns a `TradeAggregationCallBuilder` you then `.call()` (or `.stream()`). [1][2]

So JS uses tradeAggregation (singular) even though:
- The Horizon REST path is plural-ish: `GET /trade_aggregations` (underscore + plural).
- Other SDKs often expose plural naming (e.g., Python `trade_aggregations()`, .NET `TradeAggregations`, etc.). [3][4]

Example (JS):

```ts
const server = new StellarSdk.Horizon.Server("https://horizon.stellar.org");
const res = await server
  .tradeAggregation(baseAsset, counterAsset, startTimeMs, endTimeMs, resolutionMs, offsetMs)
  .order("asc")
  .limit(200)
  .call();
```

Sources: JS TradeAggregationCallBuilder docs and Server method definition [1][2], plus examples of plural naming in other SDKs [3][4].
[1] https://stellar.github.io/js-stellar-sdk/TradeAggregationCallBuilder.html
[2] https://stellar.github.io/js-stellar-sdk/lib_horizon_server.js.html
[3] https://stellar-sdk.readthedocs.io/en/stable/api.html
[4] https://elucidsoft.github.io/dotnet-stellar-sdk/api/stellar_dotnet_sdk.Server.html
The Stellar SDK API is fundamentally misused; this code will fail at runtime.

The method is server.tradeAggregation() (singular), not tradeAggregations(), and it requires constructor parameters, not a builder chain. The correct signature per @stellar/stellar-sdk v14.2.0 is:

```ts
server.tradeAggregation(base, counter, start_time, end_time, resolution, offset)
  .limit(limit)
  .call()
```

There are no .forBaseAsset(), .forCounterAsset(), .resolution(), .startTime(), or .endTime() methods. Additionally, start_time and end_time must be epoch milliseconds (number), not Date objects—pass startTime?.getTime() and endTime?.getTime() (or epoch values directly) to the constructor.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/orderbook.service.ts` around lines 78 - 112, The
getTradeAggregations implementation is using a non-existent builder API; replace
the builder-chain with the correct server.tradeAggregation(...) call: call
server.tradeAggregation(base, counter, startTime ? startTime.getTime() :
undefined, endTime ? endTime.getTime() : undefined, resolution, /* offset */
undefined).then or await that, then chain .limit(limit).call(); remove all
.forBaseAsset/.forCounterAsset/.resolution/.startTime/.endTime usage and ensure
startTime and endTime are passed as epoch millisecond numbers (use
startTime.getTime()/endTime.getTime() when present), then map res.records as
before in getTradeAggregations.
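The epoch-millisecond requirement above can be isolated into a tiny helper. A minimal sketch (the name `toEpochMs` is illustrative, not from the codebase):

```typescript
// Hypothetical helper: normalize a Date or epoch-ms number into the
// epoch-millisecond value Horizon's /trade_aggregations endpoint expects.
function toEpochMs(value?: Date | number): number | undefined {
  if (value === undefined) return undefined;
  return value instanceof Date ? value.getTime() : value;
}

// Example: toEpochMs(new Date("2024-01-01T00:00:00Z")) yields 1704067200000.
```

Passing the result of such a helper into `server.tradeAggregation(...)` avoids sending `Date` objects to the SDK.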
```ts
  counterAssetIssuer,
  startTime,
  endTime,
  resolution = 3600, // Default: 1 hour
```
🧩 Analysis chain
🌐 Web query:
Stellar Horizon API trade_aggregations resolution parameter units
💡 Result:
In Stellar Horizon’s GET /trade_aggregations, the resolution parameter is a segment duration in milliseconds (ms) (i.e., you pass an integer count of ms per bucket). Supported values are: 60000 (1m), 300000 (5m), 900000 (15m), 3600000 (1h), 86400000 (1d), 604800000 (1w). start_time and end_time are also milliseconds since Unix epoch. [1][2]
Fix resolution parameter units — Horizon API expects milliseconds, not seconds.
The Horizon /trade_aggregations endpoint requires `resolution` in milliseconds. The code passes values in seconds:
- Line 83: `3600` should be `3600000` (1 hour in ms)
- Line 249: `3600` should be `3600000` (1 hour in ms) and `86400` should be `86400000` (1 day in ms)

Supported Horizon values are: 60000 (1m), 300000 (5m), 900000 (15m), 3600000 (1h), 86400000 (1d), 604800000 (1w).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/trade-analytics.service.ts` at line 83, The resolution parameter is
being passed in seconds but Horizon expects milliseconds; update the resolution
values used by the trade aggregation calls by converting seconds to ms: change
the default resolution variable named resolution from 3600 to 3600000 (1h in
ms), and update the other hardcoded resolutions used in the file (the 3600 ->
3600000 and 86400 -> 86400000) so the Horizon /trade_aggregations requests use
the supported ms values (e.g., 60000, 300000, 900000, 3600000, 86400000,
604800000); search for the resolution variable and any hardcoded numeric
resolution literals in the trade analytics service and replace them accordingly.
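The seconds-to-milliseconds conversion and the fixed set of supported buckets lend themselves to a small lookup plus converter. A hedged sketch (the names `HORIZON_RESOLUTIONS_MS` and `toHorizonResolutionMs` are illustrative):

```typescript
// Horizon-supported trade_aggregations bucket sizes, in milliseconds.
const HORIZON_RESOLUTIONS_MS: readonly number[] =
  [60_000, 300_000, 900_000, 3_600_000, 86_400_000, 604_800_000];

// Convert a seconds-based resolution to ms and verify Horizon supports it.
function toHorizonResolutionMs(seconds: number): number {
  const ms = seconds * 1000;
  if (!HORIZON_RESOLUTIONS_MS.includes(ms)) {
    throw new Error(`Unsupported Horizon resolution: ${ms} ms`);
  }
  return ms;
}
```

Failing fast on unsupported buckets surfaces unit mistakes (like passing raw seconds) at the call site instead of as an opaque Horizon 400.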
```ts
if (useCache && !forceRefresh) {
  try {
    cachedTransactions = await TransactionCache.findOne({
      publicKey,
      cursor: cacheKey,
      expiresAt: { $gt: new Date() }
    })
      .select('transactions expiresAt lastFetched')
      .lean();

    if (cachedTransactions) {
      return {
        data: cachedTransactions.transactions,
        pagination: {
          limit,
          nextCursor: this.extractNextCursor(cachedTransactions.transactions, limit),
          hasMore: cachedTransactions.transactions.length === limit,
          order,
        },
        cached: true,
      };
    }
  } catch (dbError) {
    logger.warn(`Error reading from transaction cache DB: ${dbError instanceof Error ? dbError.message : String(dbError)}`);
  }
}
```
Stale-cache fallback on fetch error is dead code.
The cache query on line 44 filters with `expiresAt: { $gt: new Date() }`, so only non-expired entries match. If a valid cache entry is found, line 50 returns immediately — the fetch code is never reached. If the cache entry is expired (or absent), `cachedTransactions` is null.
Therefore, at line 162, `cachedTransactions` is always null, making lines 162-173 unreachable with data. The intended "serve stale cache on fetch failure" pattern is broken.
To fix, read the cache without the expiry filter for fallback purposes, and only early-return when the cache is fresh.
Proposed fix sketch

```diff
 // Check cache first
 let cachedTransactions: any = null;
 if (useCache && !forceRefresh) {
   try {
-    cachedTransactions = await TransactionCache.findOne({
+    cachedTransactions = await TransactionCache.findOne({
       publicKey,
       cursor: cacheKey,
-      expiresAt: { $gt: new Date() }
     })
       .select('transactions expiresAt lastFetched')
       .lean();
-    if (cachedTransactions) {
+    if (cachedTransactions && cachedTransactions.expiresAt > new Date()) {
       return {
         data: cachedTransactions.transactions,
         // ...
       };
     }
   } catch (dbError) {
```

This way `cachedTransactions` retains the stale entry for the error-path fallback at line 162.
Also applies to: 158-173
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/transaction-history.service.ts` around lines 39 - 64, The cache read
currently filters by expiresAt > new Date(), so cachedTransactions is only set
for fresh entries and stale fallback is impossible; change the
TransactionCache.findOne query (used when useCache && !forceRefresh) to omit the
expiresAt condition so you always load any cache row into cachedTransactions,
then immediately check expiresAt on the returned cachedTransactions to decide
whether to early-return a fresh cache (use cachedTransactions.expiresAt > now)
while preserving the stale cachedTransactions value for the later fetch-error
fallback; update any logic around extractNextCursor, hasMore and the cached flag
to only mark cached:true on the fresh-path.
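The fresh-vs-stale decision described above can be expressed as a pure function, assuming a cache row exposing an `expiresAt` field (the `CacheRow` and `classifyCache` names are illustrative, not from the codebase):

```typescript
// Illustrative cache row shape; the real model stores more fields.
interface CacheRow { transactions: unknown[]; expiresAt: Date; }

type CacheState = "miss" | "fresh" | "stale";

// Load the row WITHOUT an expiry filter, then classify it: only "fresh"
// short-circuits the fetch; "stale" is retained as a fallback for fetch errors.
function classifyCache(row: CacheRow | null, now: Date = new Date()): CacheState {
  if (!row) return "miss";
  return row.expiresAt.getTime() > now.getTime() ? "fresh" : "stale";
}
```

Keeping the classification separate from the query makes the "early-return only when fresh, keep stale for fallback" invariant easy to unit-test.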
Actionable comments posted: 2
🧹 Nitpick comments (1)
types.ts (1)
118-118: Consider using a descriptive string union or enum for `AllocationDesign`. Numeric literal types (`1 | 2`) are opaque in logs, API responses, and debugging. A string union (e.g., `'pro_rata' | 'tiered'`) or an enum would be more self-documenting.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@types.ts` at line 118, Replace the opaque numeric literal type AllocationDesign = 1 | 2 with a descriptive string union or enum (e.g., type AllocationDesign = 'pro_rata' | 'tiered' or enum AllocationDesign { ProRata = 'pro_rata', Tiered = 'tiered' }); update all usages, function signatures and any serialization/deserialization code to use the new string values (or enum members) and adjust any switch/case or comparison logic that expects 1 or 2 to the new names; ensure tests and API contracts (responses/logs) are updated accordingly so consumers receive the human-readable values.
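A hedged sketch of the suggested change, under the assumption (taken from the example names in the comment, not confirmed by the codebase) that design 1 maps to pro-rata and design 2 to tiered allocation:

```typescript
// Hypothetical replacement for the opaque `type AllocationDesign = 1 | 2`.
enum AllocationDesign {
  ProRata = "pro_rata",
  Tiered = "tiered",
}

// One-time migration helper for values persisted under the old numeric type.
function fromLegacyDesign(design: 1 | 2): AllocationDesign {
  return design === 1 ? AllocationDesign.ProRata : AllocationDesign.Tiered;
}
```

String enums serialize to their readable values in JSON responses and logs, so consumers see `"pro_rata"` instead of a bare `1`.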
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@types.ts`:
- Around line 148-164: The IParticipationDoc interface declares launchId as
ObjectId but userId as string, causing inconsistent foreign-key typing; update
IParticipationDoc so userId is typed as ObjectId (to match launchId and the
stored MongoDB _id) and adjust any consumers that construct/find by { launchId,
userId } to pass an ObjectId, or if userId is intentionally a different external
identifier, rename the property to userUid (or similar) and keep it as
string—focus your change on the IParticipationDoc declaration and any references
to userId in queries/functions to ensure types and names stay consistent (look
for IParticipationDoc, launchId, userId in code).
- Around line 125-146: Remove the dead interfaces ILaunchDoc and
IParticipationDoc from types.ts and replace any references (if present) with the
existing ILaunch and IParticipation model interfaces; ensure no code imports
those removed symbols. Rename the fields T_available → t_available and
PiPowerBaseline → piPowerBaseline in types.ts and update the corresponding
schema field names in models/Launch.ts (and any other files that reference those
properties) so naming is camelCase-consistent across types and model schemas;
verify all imports/usages compile after the change.
---
Nitpick comments:
In `@types.ts`:
- Line 118: Replace the opaque numeric literal type AllocationDesign = 1 | 2
with a descriptive string union or enum (e.g., type AllocationDesign =
'pro_rata' | 'tiered' or enum AllocationDesign { ProRata = 'pro_rata', Tiered =
'tiered' }); update all usages, function signatures and any
serialization/deserialization code to use the new string values (or enum
members) and adjust any switch/case or comparison logic that expects 1 or 2 to
the new names; ensure tests and API contracts (responses/logs) are updated
accordingly so consumers receive the human-readable values.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`
📒 Files selected for processing (1)
types.ts
```ts
export interface ILaunchDoc {
  _id: ObjectId;
  projectId: string;
  projectAppUrl?: string;
  teamVestingSchedule?: string;
  tokenAsset: ITokenAsset;
  T_available: string;
  participationWindowStart?: Date;
  participationWindowEnd?: Date;
  stakeDurationDays: number;
  allocationDesign: AllocationDesign;
  status: LaunchStatus;
  escrowPublicKey?: string;
  escrowLocked?: boolean;
  poolId?: string;
  tgeAt?: Date;
  listingPrice?: string;
  PiPowerBaseline?: string;
  createdBeforeCutoff: boolean;
  createdAt: Date;
  updatedAt: Date;
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# First, check ILaunchDoc and IParticipationDoc usage
rg -n --type=ts 'ILaunchDoc|IParticipationDoc' -C 3
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 488

🏁 Script executed:

```bash
#!/bin/bash
# Check what Document interfaces extend in types.ts
rg -n --type=ts 'interface.*Doc.*extends' -C 2
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 55

🏁 Script executed:

```bash
#!/bin/bash
# Look for Launch model definition to see how it uses ILaunchDoc
fd -e ts -path '*odel*' | head -20
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 241

🏁 Script executed:

```bash
#!/bin/bash
# Read types.ts to see which interfaces extend Document
cat -n types.ts | head -200
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 4580

🏁 Script executed:

```bash
#!/bin/bash
# Search for Launch model definition
rg -n 'Launch.*Model\|export.*Launch' --type ts -C 3
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 55

🏁 Script executed:

```bash
#!/bin/bash
# Find all model files
fd -e ts | grep -i model | head -20
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 501

🏁 Script executed:

```bash
#!/bin/bash
# Check Launch.ts model definition
cat -n models/Launch.ts | head -100
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 3692

🏁 Script executed:

```bash
#!/bin/bash
# Check Participation.ts model definition
cat -n models/Participation.ts | head -100
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 1773

🏁 Script executed:

```bash
#!/bin/bash
# Search for actual usage of ILaunchDoc and IParticipationDoc
rg -n 'ILaunchDoc|IParticipationDoc' --type ts --type js
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 165

🏁 Script executed:

```bash
#!/bin/bash
# Also search for imports of these types
rg -n "from.*types" --type ts --type js | grep -i "launch\|participation" | head -30
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 55
Remove unused `ILaunchDoc` and `IParticipationDoc` interfaces from types.ts — they are dead code.
`ILaunchDoc` and `IParticipationDoc` are defined in types.ts but never used anywhere in the codebase. The actual Mongoose model interfaces are `ILaunch` and `IParticipation` (in models/Launch.ts and models/Participation.ts respectively), which correctly extend `Document` and are used with `mongoose.model<ILaunch>()` and `mongoose.model<IParticipation>()`.
Also rename the fields `T_available` → `t_available` and `PiPowerBaseline` → `piPowerBaseline` to follow the camelCase naming convention (this applies to both types.ts and the model schemas in models/Launch.ts).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@types.ts` around lines 125 - 146, Remove the dead interfaces ILaunchDoc and
IParticipationDoc from types.ts and replace any references (if present) with the
existing ILaunch and IParticipation model interfaces; ensure no code imports
those removed symbols. Rename the fields T_available → t_available and
PiPowerBaseline → piPowerBaseline in types.ts and update the corresponding
schema field names in models/Launch.ts (and any other files that reference those
properties) so naming is camelCase-consistent across types and model schemas;
verify all imports/usages compile after the change.
```ts
export interface IParticipationDoc {
  _id: ObjectId;
  launchId: ObjectId;
  userId: string;
  stakedPi: string;
  committedPi: string;
  piPower: string;
  engagementScore: number;
  engagementRank: number;
  allocatedTokens: string;
  effectivePrice: string;
  tier?: 'top' | 'mid' | 'bottom';
  swapOrder?: number;
  lockupEnd?: Date;
  createdAt: Date;
  updatedAt: Date;
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
rg -n --type=ts 'userId' -C 3 | head -150
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 8627

🏁 Script executed:

```bash
# Check the IUser interface definition
rg -n --type=ts 'interface IUser|interface I.*User' -A 10 | head -100
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 464

🏁 Script executed:

```bash
# Look for how userId is populated or used in queries
rg -n --type=ts 'userId:' -C 3 | head -100
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 5386

🏁 Script executed:

```bash
# Check if there are any schema/model definitions that clarify the type
fd -e ts | xargs rg -l 'participationSchema\|ParticipationModel' | head -5
```

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 55
Inconsistent foreign key typing: `launchId: ObjectId` but `userId: string`.
Both `launchId` and `userId` are used together in MongoDB queries (e.g., `findOne({ launchId, userId })`), yet they have inconsistent types. The test data shows `userId` as a MongoDB ObjectId string (`'6917551b11d9514682ccf741'`). If `userId` stores the user's MongoDB `_id`, it should be typed as `ObjectId` to match `launchId` for type safety. If it intentionally stores a different identifier (such as `IUser.id` or `IUser.uid`), clarify the intent by renaming the property accordingly (e.g., `userUid`).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@types.ts` around lines 148 - 164, The IParticipationDoc interface declares
launchId as ObjectId but userId as string, causing inconsistent foreign-key
typing; update IParticipationDoc so userId is typed as ObjectId (to match
launchId and the stored MongoDB _id) and adjust any consumers that
construct/find by { launchId, userId } to pass an ObjectId, or if userId is
intentionally a different external identifier, rename the property to userUid
(or similar) and keep it as string—focus your change on the IParticipationDoc
declaration and any references to userId in queries/functions to ensure types
and names stay consistent (look for IParticipationDoc, launchId, userId in
code).
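One way to audit whether stored `userId` values are really serialized ObjectIds is a simple shape check: an ObjectId renders as exactly 24 hex characters. A sketch (the helper name is ours, not from the codebase):

```typescript
// Illustrative check: a MongoDB ObjectId serializes to 24 hex characters.
// Useful for auditing whether persisted userId values are really ObjectIds
// or some other external identifier.
function isObjectIdHex(value: string): boolean {
  return /^[0-9a-fA-F]{24}$/.test(value);
}
```

Running such a check over a sample of `Participation` documents would settle whether retyping to `ObjectId` or renaming the field is the right fix.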
Actionable comments posted: 16
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
♻️ Duplicate comments (6)
controllers/launchpad/index.ts (1)
95-137: ⚠️ Potential issue | 🔴 Critical: Auth bypass via `userId` fallback (must require auth).
The `req.query.userId` / `req.body.userId` fallback allows impersonation when `req.user` is missing. Require authenticated user IDs only and ensure the routes are protected by auth middleware.
🔐 Suggested controller changes (auth-only)

```diff
- const userId = (req as any).user?.uid ?? req.query.userId ?? req.body?.userId;
- if (!userId) return res.status(400).json({ message: 'userId required (or authenticated user)' });
+ const userId = (req as any).user?.uid;
+ if (!userId) return res.status(401).json({ message: 'Authentication required' });
@@
- const userId = (req as any).user?.uid ?? req.query.userId ?? bodyUserId;
- if (!userId) return res.status(400).json({ message: 'userId required (or authenticated user)' });
+ const userId = (req as any).user?.uid;
+ if (!userId) return res.status(401).json({ message: 'Authentication required' });
@@
- const uid = (req as any).user?.uid ?? userId;
- if (!uid) return res.status(400).json({ message: 'userId required (or authenticated user)' });
+ const uid = (req as any).user?.uid;
+ if (!uid) return res.status(401).json({ message: 'Authentication required' });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/launchpad/index.ts` around lines 95 - 137, The handlers getMyPiPower, commitPi and postEngagement currently fall back to req.query.userId / req.body.userId which allows impersonation; change each to require an authenticated user only by using (req as any).user?.uid as the sole source of userId, remove fallbacks to query/body, return 401 Unauthorized (or 403) when no authenticated uid, and update error messages accordingly; also ensure the corresponding routes are protected by the authentication middleware in your router so unauthenticated requests are rejected before reaching getMyPiPower, commitPi, or postEngagement.

services/lending.service.ts (1)
417-418: ⚠️ Potential issue | 🔴 Critical: Liquidation reward is divided by `collPrice` twice.
`collateralRewardValue` is already in collateral units; dividing by `collPrice` again makes liquidation rewards far too small.
🐛 Fix the double-division

```diff
- const grossCollateralReward = Math.min(collateralRewardValue / parseFloat(collPrice), parseFloat(position.collateralAmount));
+ const grossCollateralReward = Math.min(collateralRewardValue, parseFloat(position.collateralAmount));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/lending.service.ts` around lines 417 - 418, The liquidation reward is being divided by collPrice twice: collateralRewardValue (computed as repay * parseFloat(borrowPrice) / parseFloat(collPrice) * (1 + LIQUIDATION_BONUS)) is already in collateral units, so remove the extra division when computing grossCollateralReward; replace the current Math.min(collateralRewardValue / parseFloat(collPrice), parseFloat(position.collateralAmount)) with Math.min(collateralRewardValue, parseFloat(position.collateralAmount)) so the reward stays in collateral units and still caps at position.collateralAmount.

services/liquidity-pools.service.ts (1)
164-169: ⚠️ Potential issue | 🔴 Critical: Platform fees are still routed to the test key.
Line 166 uses `env.PI_TEST_USER_PUBLIC_KEY`; this should be the real platform fee recipient.
✅ Suggested fix

```diff
- destination: env.PI_TEST_USER_PUBLIC_KEY,
+ destination: env.PLATFORM_FEE_PUBLIC_KEY,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/liquidity-pools.service.ts` around lines 164 - 169, The payment operation in feeTxBuilder.addOperation is sending fees to the test key env.PI_TEST_USER_PUBLIC_KEY; update this to use the real platform fee recipient (e.g., env.PLATFORM_FEE_PUBLIC_KEY or the canonical env variable your app uses) in the StellarSdk.Operation.payment call so platform fees route to the production account, and add a short validation check (non-empty/valid pubkey) before building the transaction; references: feeTxBuilder.addOperation, StellarSdk.Operation.payment, feeAmount.

services/swap.service.ts (1)
165-243: ⚠️ Potential issue | 🔴 Critical: Swap fees are still routed to the test key.
Line 239 uses `env.PI_TEST_USER_PUBLIC_KEY`; this should be the real platform fee recipient.
✅ Suggested fix

```diff
- destination: env.PI_TEST_USER_PUBLIC_KEY,
+ destination: env.PLATFORM_FEE_PUBLIC_KEY,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/swap.service.ts` around lines 165 - 243, The collectSwapFee function currently sends platform fees to env.PI_TEST_USER_PUBLIC_KEY (test key); change all uses and validation of PI_TEST_USER_PUBLIC_KEY in collectSwapFee to the real recipient env (e.g., env.PLATFORM_FEE_PUBLIC_KEY or PLATFORM_SWAP_FEE_RECIPIENT), add a check at the top (similar to the existing env checks) to throw/log if that real recipient env var is missing, and update the Stellar payment operation destination in TransactionBuilder from env.PI_TEST_USER_PUBLIC_KEY to the new env variable so fees go to the correct production platform account.

controllers/lending/index.ts (2)
49-63: ⚠️ Potential issue | 🔴 Critical: User impersonation via `req.body?.userId` fallback still present across all financial handlers.
The pattern `(req as any).user?.id ?? req.body?.userId` persists in supply (line 52), withdraw (line 68), borrow (line 83), getPositions (line 109), liquidate (line 136), and getCreditScore (line 170). Any unauthenticated client can act as any user for financial operations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/lending/index.ts` around lines 49 - 63, Remove the insecure fallback to req.body?.userId and require authentication for all financial handlers that currently use the pattern (req as any).user?.id ?? req.body?.userId (e.g., supply, withdraw, borrow, getPositions, liquidate, getCreditScore): change each handler to read only the authenticated user id from (req as any).user?.id, return a 401/403 (or 400 with clear message) when that value is missing, and remove any logic that accepts userId from the request body; ensure calls to lendingService methods (e.g., lendingService.supply) pass String((req as any).user.id) and update error messages accordingly so no body-supplied userId is used for authorization.
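The auth-only extraction the prompt describes can be factored into one reusable guard. A minimal sketch, assuming the authenticated principal carries an `id` field (the helper name `requireUserId` is hypothetical):

```typescript
// Hypothetical auth-only extraction: read the user id solely from the
// authenticated principal and fail with a 401-style error otherwise.
// No request-body fallback, so impersonation via body payloads is impossible.
function requireUserId(user?: { id?: string }): string {
  const id = user?.id;
  if (!id) {
    const err = new Error("Authentication required") as Error & { status?: number };
    err.status = 401;
    throw err;
  }
  return String(id);
}
```

Each handler would then call `requireUserId((req as any).user)` once, and an error-handling middleware can translate the `status` property into the HTTP response.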
186-197: ⚠️ Potential issue | 🔴 Critical: setCreditScore remains unprotected — arbitrary score manipulation still possible.
No auth or role check before `creditScoreService.setScore`. Any caller can set any user's credit score to 100 and borrow without limit.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/lending/index.ts` around lines 186 - 197, The setCreditScore handler currently calls creditScoreService.setScore without authorization; update the setCreditScore function to enforce auth/role checks before calling creditScoreService.setScore — e.g., verify req.user exists and has an admin (or permitted) role (req.user.role === 'admin' or use an existing hasRole/isAdmin helper) and return res.status(403) if not authorized; also validate score is a number between 0 and 100 and userId is present before invoking creditScoreService.setScore so only authorized callers can modify scores.
🟡 Minor comments (17)
services/staking.service.ts-64-71 (1)
64-71: ⚠️ Potential issue | 🟡 Minor: Return remaining commitment headroom.
`maxCommitmentAllowed` currently equals total PiPower even when the user already committed, which can mislead clients. Return remaining headroom instead.
✏️ Suggested tweak

```diff
- return {
-   piPower,
-   stakedPi,
-   sumStakedPi,
-   committedPi,
-   maxCommitmentAllowed: piPower,
- };
+ const remaining = Math.max(parseFloat(piPower) - parseFloat(committedPi), 0).toFixed(7);
+ return {
+   piPower,
+   stakedPi,
+   sumStakedPi,
+   committedPi,
+   maxCommitmentAllowed: remaining,
+ };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/staking.service.ts` around lines 64 - 71, The returned maxCommitmentAllowed currently equals piPower even if the user already has a committedPi; update the return so maxCommitmentAllowed represents remaining headroom = piPower minus committedPi (use the value from Participation.findOne(...) -> committedPi), converting committedPi from its stored string to the same numeric type as piPower and ensuring the result is not negative before returning; adjust the return object in the function that builds { piPower, stakedPi, sumStakedPi, committedPi, maxCommitmentAllowed } to compute the difference rather than returning piPower directly.

controllers/launchpad/index.ts-66-70 (1)
66-70: ⚠️ Potential issue | 🟡 Minor: Validate the `limit` parameter to ensure it's a finite positive integer.
`Number(req.query.limit)` can return `NaN` (e.g., `Number("invalid")`), which then flows into `Math.min(NaN, 100)` returning `NaN`. This invalid value is then passed to `launchService.list`, causing unpredictable MongoDB query behavior. Add validation to check that the parsed value is finite and positive before clamping.
🔧 Safer limit parsing

```diff
- const limit = req.query.limit ? Math.min(Number(req.query.limit), 100) : 50;
+ const rawLimit = Number(req.query.limit);
+ const limit = Number.isFinite(rawLimit) && rawLimit > 0 ? Math.min(rawLimit, 100) : 50;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/launchpad/index.ts` around lines 66 - 70, The limit parsing currently uses Number(req.query.limit) which can yield NaN and propagate into launchService.list; fix by explicitly parsing and validating a finite positive integer before clamping: parse req.query.limit (e.g., with parseInt), check Number.isFinite and >0, then set limit = Math.min(parsed, 100) or fallback to default 50 if invalid; update the variable around the existing symbols req.query.limit, limit, and the call to launchService.list so launchService.list always receives a valid integer.

services/staking.service.ts-95-96 (1)
95-96: ⚠️ Potential issue | 🟡 Minor: Validate `committedPi` format and normalize before payment.
Stellar amounts require exactly 7 decimal places maximum. `parseFloat` accepts invalid formats (e.g., "1.2.3", scientific notation) that will fail at submission. Validate format, normalize once with `toFixed(7)`, then reuse for both cap checks and the payment amount.
At line 136, the raw `committedPi` string is sent to Stellar without normalization. This can cause the payment operation to fail if the input has more than 7 decimals or invalid formatting, even if validation at line 95 passed for the parsed amount.
✅ Format validation and normalized reuse

```diff
+ if (!/^\d+(\.\d{1,7})?$/.test(String(committedPi))) {
+   throw new Error('Invalid committed Pi amount (max 7 decimals)');
+ }
+ const normalizedCommitted = Number(committedPi).toFixed(7);
  const amount = parseFloat(committedPi);
  if (isNaN(amount) || amount <= 0) throw new Error('Invalid committed Pi amount');
```

Then reuse `normalizedCommitted` at line 124 and line 136:

```diff
  const newCommitted = addStrings(participation.committedPi, committedPi);
+ // or: const newCommitted = addStrings(participation.committedPi, normalizedCommitted);
```

```diff
  amount: committedPi,
+ // Use: amount: normalizedCommitted,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@services/staking.service.ts` around lines 95 - 96, The committedPi string is parsed with parseFloat and later the raw string is sent to Stellar which can fail for invalid formats or >7 decimals; fix by validating committedPi against a numeric regex allowing an optional integer part and up to 7 decimal places (reject things like "1.2.3" or scientific notation), parse it once to a Number (e.g., amount), compute a normalized string using amount.toFixed(7) (call it normalizedCommitted), use normalizedCommitted for the cap checks that currently use amount/committedPi and for the Stellar payment call, and throw a clear error when format validation fails; locate and update the code around the committedPi variable, the amount calculation, cap-check logic, and the payment call in the staking service (symbols: committedPi, amount, and the payment operation) to reuse the single normalized value.

docs/LENDING.md-19-30 (1)
19-30: ⚠️ Potential issue | 🟡 Minor: Contradictory statements about on-chain transaction submission.
Line 19 states the "backend submits" repay and collateral transfers and returns `transactionHashRepay`/`transactionHashCollateral`. Line 30 then states "The API only updates balances and records the liquidation; it does not hold keys or submit Stellar transactions." These directly contradict each other and will confuse integrators. Please reconcile which is the actual behavior and update the doc accordingly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/LENDING.md` around lines 19 - 30, The doc contains conflicting statements about who submits on-chain transfers: reconcile and clarify the single actual behavior for the liquidate flow described for POST /lending/positions/:borrowPositionId/liquidate (body { repayAmount, userId, userSecret }) and the presence/meaning of transactionHashRepay and transactionHashCollateral; update the text to state whether the backend (custody) will submit the repay and collateral Stellar transactions and return transactionHashRepay/transactionHashCollateral, or whether the API only records balances and the liquidator/client must submit on-chain transactions (and then explain how userSecret is used in that case), and remove the contradictory sentence so integrators clearly know if the system holds keys and returns transaction hashes or if clients must perform on-chain transfers themselves.

controllers/swap/index.ts-96-102 (1)
96-102: ⚠️ Potential issue | 🟡 Minor: Trailing colon when object `from`/`to` has no issuer.
If `from` is `{ code: "USDC" }` (no `issuer` property), Line 99 produces `"USDC:"` — a trailing colon that downstream parsing (e.g., `getAssetFromCodeIssuer`) would interpret as an empty issuer string, then throw `"issuer required for non-native asset"`.
Proposed fix

```diff
  const fromStr = typeof from === 'string' ? from
-   : (from.code === 'native' ? 'native' : `${from.code}:${from.issuer || ''}`);
+   : (from.code === 'native' ? 'native' : from.issuer ? `${from.code}:${from.issuer}` : from.code);
  const toStr = typeof to === 'string' ? to
-   : (to.code === 'native' ? 'native' : `${to.code}:${to.issuer || ''}`);
+   : (to.code === 'native' ? 'native' : to.issuer ? `${to.code}:${to.issuer}` : to.code);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/swap/index.ts` around lines 96 - 102, The fromStr/toStr construction can produce a trailing colon when an asset object lacks issuer (e.g., from = {code: "USDC"} -> "USDC:"), which breaks downstream parsing like getAssetFromCodeIssuer; update the logic in the fromStr and toStr assignments to only append ":issuer" when issuer is present (for non-native assets), e.g., use a conditional that returns 'native' for native assets and `${from.code}${from.issuer ? `:${from.issuer}` : ''}` (same for to) so no trailing colon is emitted.

controllers/swap/index.ts-66-73 (1)
66-73: ⚠️ Potential issue | 🟡 Minor: `NaN` propagation when `slippagePercent` is a non-numeric string.
`Number(slippagePercent)` on a garbage query value (e.g., `?slippagePercent=abc`) produces `NaN`, which will propagate through the service and return `NaN` for `minOut`. Add a guard or default.
Proposed fix

```diff
+ const slip = Number(slippagePercent);
+ if (isNaN(slip) || slip < 0 || slip > 100)
+   return res.status(400).json(errorBody('Slippage percent must be a number between 0 and 100.'));
+
  const result = await swapService.quoteSwap(
    poolId as string,
    fromAsset,
    toAsset,
    amount as string,
-   Number(slippagePercent),
+   slip,
    publicKey as string | undefined
  );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@controllers/swap/index.ts` around lines 66 - 73, The handler calls swapService.quoteSwap passing Number(slippagePercent) which becomes NaN for non-numeric inputs and will propagate (affecting minOut); validate and sanitize slippagePercent before calling quoteSwap by parsing it (e.g., parseFloat or Number) and if the result is NaN or out of expected range, replace it with a safe default (e.g., 0) or clamp to allowed bounds, then pass that validated number into swapService.quoteSwap (refer to the slippagePercent variable and the quoteSwap call to locate the change).

models/LendingPool.ts-38-53 (1)
38-53: ⚠️ Potential issue | 🟡 Minor: Add a unique compound index on `asset.code` + `asset.issuer` to prevent duplicate lending pools.
The seed script includes idempotency logic (`findOne` check before creation), but the `createPool` method in services/lending.service.ts (line 71) creates pools directly without checking for existing entries. This leaves the application vulnerable to duplicate pools if called through the service API or in concurrent scenarios. A database-level unique constraint would enforce this invariant across all code paths.
Proposed addition

```diff
  const lendingPoolSchema = new Schema<ILendingPool>(
    { ... },
    { timestamps: true }
  );
+
+ lendingPoolSchema.index({ 'asset.code': 1, 'asset.issuer': 1 }, { unique: true });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@models/LendingPool.ts` around lines 38 - 53, Add a DB-level unique compound index on asset.code + asset.issuer in the existing lendingPoolSchema to prevent duplicate pools; update the schema by calling lendingPoolSchema.index({ 'asset.code': 1, 'asset.issuer': 1 }, { unique: true }) (keeping ILendingPool and collateralConfigSchema unchanged) so all code paths (including createPool in services/lending.service.ts) will be rejected by MongoDB on duplicates; run/add a migration or ensure existing duplicates are resolved before deploying so the index creation won't fail.

docs/products.md-358-375 (1)
⚠️ Potential issue | 🟡 Minor

**Proofread and formalize the trailing notes section.**

This block has multiple typos and run-on sentences (e.g., “intrest”, “liqidating”, “actualcost”). Consider moving it into a dedicated “Open questions / TODO” section and cleaning up spelling and punctuation.

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `docs/products.md` around lines 358-375, the trailing notes block contains numerous typos, run-on sentences, and unstructured ideas. Move this text into a new dedicated "Open questions / TODO" subsection, split the content into concise bullet points or short paragraphs, correct misspellings (e.g., "intrest" → interest, "liqidating" → liquidating, "actualcost" → actual cost, "nativetoken" → native token, "wehave" → we have), fix punctuation and capitalization, and rephrase for clarity (e.g., clarify deposit-time withdrawal rules, segmentation of funds, buffer/emergency funds, the liquidation mechanism, fee recipients, and the future wallet strategy). Make each TODO a single, actionable sentence while preserving the original intent and keeping the language formal and unambiguous.

**docs/onchain-vs-db.md (lines 8-16)**
⚠️ Potential issue | 🟡 Minor

**Reconcile the savings-withdraw on-chain status with the other docs.**

This file says savings withdrawals are on-chain with a `transactionHash`, while `docs/products.md` describes them as DB-only instructions. Please align both with the current behavior to avoid confusion.

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `docs/onchain-vs-db.md` around lines 8-16, the "Savings withdraw" entry conflicts between this doc (which lists it as on-chain and returning `transactionHash`) and the products doc (which describes it as DB-only). Confirm the actual implementation — whether savings withdrawal is on-chain (custody sends principal + interest and the response includes `transactionHash`) or DB-only — then update the "Savings withdraw" entry and any mention of `transactionHash` to match, and remove or adjust the contradictory description in the other document so both consistently describe the same flow.

**config/savings.ts (lines 19-22)**
⚠️ Potential issue | 🟡 Minor

**Comment doesn’t match the actual term-premium behavior.**

The implementation interpolates within bounds and clamps outside them, so it does not “return 0 for unknown terms.”

✍️ Suggested comment update:

```diff
- * Interpolates between known points; returns 0 for unknown terms.
+ * Interpolates between known points; clamps to nearest boundary when out of range.
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `config/savings.ts` around lines 19-22, the doc comment for the term-premium function is incorrect: update it to state that the function linearly interpolates between known term points and clamps to the nearest endpoint for terms outside the known range (instead of saying it "returns 0 for unknown terms"), so the comment ("Get term premium in percentage points for a given term (days)") accurately describes interpolation within bounds and endpoint clamping outside them.

**docs/products.md (lines 120-143)**
⚠️ Potential issue | 🟡 Minor

**Add a language for the diagram code fence.**

markdownlint MD040 flags the fence; `text` keeps the ASCII diagram intact and quiets the warning.

✍️ Suggested tweak:

````diff
-```
+```text
 ┌──────────────────────────────────────────────────────────────────┐
 │ PRODUCT LAYER (SavingsProduct, LendingPool, etc.)                │
 │ - References: asset, termBuckets, algorithmId, param overrides   │
 │ - No raw “apy” or “rate”; only “rateSource = algorithm + indices”│
 └──────────────────────────────────────────────────────────────────┘
          │
          ▼
 ┌──────────────────────────────────────────────────────────────────┐
 │ ALGORITHM LAYER (pure functions)                                 │
 │ - getSavingsApy(termDays, indices)                               │
 │ - getBorrowRate(utilization, score, indices)                     │
 │ - getMaxLTV(asset, indices)                                      │
 │ - getCreditScore(userId, behaviour) or getScoreTier(score)       │
 └──────────────────────────────────────────────────────────────────┘
          │
          ▼
 ┌──────────────────────────────────────────────────────────────────┐
 │ INDICES LAYER                                                    │
 │ - getIndex(id): baseRate, fundingCost, utilization(poolId),      │
 │   termPremium(termDays), volatility(asset), defaultRate, …       │
 │ - Values from: DB, env, internal calc (e.g. utilization), API    │
 └──────────────────────────────────────────────────────────────────┘
 ```
````

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `docs/products.md` around lines 120-143, the fenced ASCII diagram block lacks a language tag, so markdownlint MD040 flags it; update the opening fence for the diagram (the triple-backtick that precedes the ASCII box containing "PRODUCT LAYER", "ALGORITHM LAYER", and "INDICES LAYER") to use ```text so the diagram remains intact and the lint warning is suppressed.

**services/liquidity-pools.service.ts (lines 969-990)**
⚠️ Potential issue | 🟡 Minor

**Validate amount inputs before ratio math.**

An `amountB` of `0` or a non-numeric value will produce `Infinity`/`NaN` ratios and confusing errors.

🔧 Suggested guard:

```diff
-    const poolRatio = parseFloat(resA.amount) / parseFloat(resB.amount);
-    const inputRatio = parseFloat(amountA) / parseFloat(amountB);
+    const amountANum = parseFloat(amountA);
+    const amountBNum = parseFloat(amountB);
+    if (!Number.isFinite(amountANum) || !Number.isFinite(amountBNum) || amountANum <= 0 || amountBNum <= 0) {
+      throw new Error('amountA and amountB must be positive numbers.');
+    }
+    const poolRatio = parseFloat(resA.amount) / parseFloat(resB.amount);
+    const inputRatio = amountANum / amountBNum;
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `services/liquidity-pools.service.ts` around lines 969-990, the ratio calculation can produce `Infinity`/`NaN` when `amountA`, `amountB`, `resA.amount`, or `resB.amount` are zero or non-numeric. Before computing `poolRatio`/`inputRatio`, validate that the parsed values are finite numbers and that any denominator (`resB.amount` for `poolRatio`, `amountB` for `inputRatio`) is non-zero; if validation fails, throw a clear error (e.g., "Invalid amounts: amountA/amountB/resA.amount/resB.amount must be numeric and non-zero") rather than proceeding, so downstream `ratioDiff` and `correctAmountB` calculations cannot produce `Infinity`/`NaN`.

**services/liquidity-pools.service.ts (lines 640-714)**
⚠️ Potential issue | 🟡 Minor

**Pagination cursor check should use `validLimit`.**

If callers pass `limit > 200`, `validLimit` is capped to 200, but the next-cursor condition still uses the original `limit`, which can prematurely stop pagination.

✅ Suggested fix:

```diff
-    if (pools.records.length === limit && pools.paging_token) {
+    if (pools.records.length === validLimit && pools.paging_token) {
       // Use paging_token as cursor for next page
       nextCursor = String(pools.paging_token);
     }
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `services/liquidity-pools.service.ts` around lines 640-714, the pagination-next check compares `pools.records.length` to the original `limit` instead of the sanitized `validLimit`, so pages can stop early when callers pass `limit > 200`. Update the condition that sets `nextCursor` to use `validLimit` so it matches the actual request size used by `server.liquidityPools().limit(validLimit)`, and update any other comparisons or comments in the same function that rely on the requested page size.

**services/liquidity-pools.service.ts (lines 205-255)**
⚠️ Potential issue | 🟡 Minor

**Normalize token codes during matching to avoid false negatives.**

Cache keys are uppercased, but the match compares against raw input; lowercase inputs can miss an existing pool.

🔧 Suggested normalization:

```diff
   const tokenACode = tokenA.code === 'native' ? 'native' : tokenA.code.toUpperCase();
   const tokenBCode = tokenB.code === 'native' ? 'native' : tokenB.code.toUpperCase();
+  const tokenAIssuer = tokenA.issuer || '';
+  const tokenBIssuer = tokenB.issuer || '';
@@
-  const matchesTokenA = assets.some((a: any) =>
-    a.code === tokenA.code &&
-    (tokenA.code === 'native' || a.issuer === tokenA.issuer)
-  );
-  const matchesTokenB = assets.some((a: any) =>
-    a.code === tokenB.code &&
-    (tokenB.code === 'native' || a.issuer === tokenB.issuer)
-  );
+  const matchesTokenA = assets.some((a: any) =>
+    (a.code === 'native' ? 'native' : a.code.toUpperCase()) === tokenACode &&
+    (tokenACode === 'native' || a.issuer === tokenAIssuer)
+  );
+  const matchesTokenB = assets.some((a: any) =>
+    (a.code === 'native' ? 'native' : a.code.toUpperCase()) === tokenBCode &&
+    (tokenBCode === 'native' || a.issuer === tokenBIssuer)
+  );
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `services/liquidity-pools.service.ts` around lines 205-255, the cache key uses uppercased token codes but pool matching compares against raw input, causing misses. Update the matching logic in the cache-handling block (where `tokenACode`, `tokenBCode`, `cacheKey1`, `cacheKey2`, `PoolCache`, `isPoolEmpty`, `assets`, `matchesTokenA`, and `matchesTokenB` are used) to normalize asset codes and input tokens the same way (e.g., compare `asset.code.toUpperCase()` against `tokenACode`/`tokenBCode`), preserve the existing native handling, and keep issuer checks as raw string comparisons so cached pools are detected regardless of input case.

**services/swap.service.ts (lines 303-316)**
⚠️ Potential issue | 🟡 Minor

**Validate that the pool contains both requested assets.**

If neither reserve matches `from` or `to`, `isAtoB` defaults and the quote becomes incorrect.

🔧 Suggested guard:

```diff
   const resAAssetCode = resA.asset === "native" ? "native" : resA.asset.split(':')[0].toUpperCase();
   const resBAssetCode = resB.asset === "native" ? "native" : resB.asset.split(':')[0].toUpperCase();
   const fromCodeUpper = from.code === "native" ? "native" : from.code.toUpperCase();
+  const toCodeUpper = to.code === "native" ? "native" : to.code.toUpperCase();
+  if (
+    ![resAAssetCode, resBAssetCode].includes(fromCodeUpper) ||
+    ![resAAssetCode, resBAssetCode].includes(toCodeUpper)
+  ) {
+    throw new Error(`Pool ${poolId} does not contain the requested assets.`);
+  }
   const isAtoB = resAAssetCode === fromCodeUpper;
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `services/swap.service.ts` around lines 303-316, the code assumes one of the pool reserves matches the requested `from` asset but never verifies the pool contains both requested assets. Compute `toCodeUpper` (like `fromCodeUpper`) and validate that the pair (`resAAssetCode`, `resBAssetCode`) matches (`fromCodeUpper`, `toCodeUpper`) in either order before using `isAtoB`/`inputReserve`/`outputReserve`; if not, throw a clear error indicating the pool does not contain the requested asset pair, and derive `isAtoB` (via `resAAssetCode === fromCodeUpper`) only after that validation.

**services/savings.service.ts (lines 129-132)**
⚠️ Potential issue | 🟡 Minor

**Misleading error message and mixing of fee and custody accounts.**

Two issues:

- The error message on line 131 instructs operators to set `PLATFORM_ISSUER_SECRET`, but the actual missing variables are `PLATFORM_CUSTODY_PUBLIC_KEY`/`PLATFORM_FEE_PUBLIC_KEY`. This will confuse ops during an incident.
- Line 129 silently falls back to the fee public key as the savings custody address. Deposits would go to the fee account, commingling savings principal with fee funds and making reconciliation impossible.

🔒 Proposed fix:

```diff
-  const custodyPublicKey = env.PLATFORM_CUSTODY_PUBLIC_KEY || env.PLATFORM_FEE_PUBLIC_KEY;
-  if (!custodyPublicKey || custodyPublicKey.trim() === '') {
-    throw new Error('Savings custody address not configured (set PLATFORM_ISSUER_SECRET)');
-  }
+  const custodyPublicKey = env.PLATFORM_CUSTODY_PUBLIC_KEY;
+  if (!custodyPublicKey?.trim()) {
+    throw new Error('Savings custody address not configured (set PLATFORM_CUSTODY_PUBLIC_KEY)');
+  }
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `services/savings.service.ts` around lines 129-132, remove the silent fallback to the fee key and correct the error message: stop assigning `custodyPublicKey` from `env.PLATFORM_FEE_PUBLIC_KEY` (read only `env.PLATFORM_CUSTODY_PUBLIC_KEY`), validate it (non-empty after trim), and if missing throw a clear error telling operators to set `PLATFORM_CUSTODY_PUBLIC_KEY` (with no mention of `PLATFORM_ISSUER_SECRET`), so deposits cannot be routed to the fee account.

**services/credit-score.service.ts (lines 97-105)**
⚠️ Potential issue | 🟡 Minor

**`setScore`: `doc` is typed as `T | null` — accessing `doc.userId`/`doc.score` without a null check.**

Mongoose's `findOneAndUpdate` return type includes `null` even with `{ new: true, upsert: true }`. While the upsert path should always return a document in practice, TypeScript strict mode will flag this, and a transient error (e.g., a write conflict) could surface a runtime NPE.

🛡️ Proposed fix:

```diff
   const doc = await CreditScore.findOneAndUpdate(
     { userId },
     { score, source: 'manual' },
     { new: true, upsert: true }
   ).exec();
+  if (!doc) throw new Error('Failed to persist credit score');
   return { userId: doc.userId, score: doc.score };
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `services/credit-score.service.ts` around lines 97-105, the `setScore` method uses `CreditScore.findOneAndUpdate(...).exec()`, which can return `null`. After awaiting `findOneAndUpdate`, check whether `doc` is null and throw a descriptive error (e.g., "Failed to set credit score for userId ...") or otherwise handle the failure, then safely return `doc.userId` and `doc.score`.
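The guard the review proposes can also be factored into a small reusable narrowing helper. A minimal sketch, assuming only a `T | null` input — `assertFound` is a hypothetical name, not an existing utility in this codebase:

```typescript
// Hypothetical helper (not in the codebase): narrows a `T | null` result from a
// findOneAndUpdate-style call to `T`, throwing a descriptive error instead of
// letting a null reach `doc.userId` / `doc.score` at runtime.
function assertFound<T>(doc: T | null, context: string): T {
  if (doc === null) {
    throw new Error(`Failed to persist ${context}: no document returned`);
  }
  return doc;
}

// Usage sketch against the review's setScore shape:
//   const doc = assertFound(await CreditScore.findOneAndUpdate(...).exec(), 'credit score');
//   return { userId: doc.userId, score: doc.score };
const doc = assertFound({ userId: 'u1', score: 42 }, 'credit score');
console.log(doc.score); // 42
```

TypeScript narrows the return type to `T`, so strict mode stops flagging the property accesses without sprinkling non-null assertions.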
🧹 Nitpick comments (12)
**utils/asset.ts (2)**
**Lines 28-31: Minor: redundant exact-match check before case-insensitive check.**

`s === "native"` is already covered by `s.toLowerCase() === "native"`. The explicit check is harmless (a micro-optimization for the common case) but slightly noisy.

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `utils/asset.ts` around lines 28-31, the helper redundantly checks `s === "native"` before `s.toLowerCase() === "native"`. Simplify by normalizing once (e.g., `const s = input.trim().toLowerCase()`) and then checking only `s === "native"`, returning `StellarSdk.Asset.native()` inside that branch (updating any subsequent uses to the normalized variable). This removes the duplicate exact-match check while preserving behavior.
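The normalize-once shape the prompt suggests looks like this in isolation (a sketch; `isNativeCode` stands in for the branch that returns `StellarSdk.Asset.native()` in the real helper):

```typescript
// Normalize once, then a single comparison covers "native", "Native", " NATIVE ", etc.
function isNativeCode(input: string): boolean {
  const s = input.trim().toLowerCase();
  return s === 'native';
}

console.log(isNativeCode('native'));   // true
console.log(isNativeCode(' Native ')); // true
console.log(isNativeCode('USDC'));     // false
```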
**Lines 15-21: Swallowed errors make network failures indistinguishable from non-existent assets.**

If the Horizon server is unreachable, `assetExistsOnChain` silently returns `false`. Callers may then reject a perfectly valid asset. Consider logging the error or distinguishing network failures from a genuine "no trustlines" result.

Proposed fix:

```diff
   } catch (err) {
+    // Log so operators can distinguish "asset doesn't exist" from "Horizon is down"
+    const msg = err instanceof Error ? err.message : String(err);
+    console.warn(`assetExistsOnChain check failed for ${asset.code}:${asset.issuer} — ${msg}`);
     return false;
   }
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `utils/asset.ts` around lines 15-21, the try/catch in `assetExistsOnChain` swallows all errors and returns `false`, conflating network/Horizon failures with a genuine "no trustlines" result. Catch the error as a variable and either log it with context (using the project logger or `console.error`) and rethrow, or return a distinct failure indication (e.g., throw a specific `NetworkError` or return `{ exists: false, error: err }`) so callers can differentiate an unreachable Horizon from a non-existent asset; ensure the log includes `asset.code` and `asset.issuer` plus the original error details.

**package.json (1)**
**Line 42: `nodemon` belongs in `devDependencies`.**

`nodemon` is a development-only tool and shouldn't ship in production `dependencies`. Moving it to `devDependencies` reduces the production install footprint.

Proposed fix — move `nodemon` from `dependencies` to `devDependencies`:

```diff
 "devDependencies": {
   ...
+  "nodemon": "^3.1.10",
   "ts-node": "^10.9.2",
   "typescript": "^5.9.3"
 },
 "dependencies": {
   ...
-  "nodemon": "^3.1.10",
   ...
 }
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `package.json` at line 42, `nodemon` is listed in `dependencies`; move it into `devDependencies` instead: remove the `nodemon` entry from the top-level `dependencies` object, add the same version string under `devDependencies`, then run your package manager (npm/yarn/pnpm install) to update lockfiles so production installs no longer include nodemon.

**models/SavingsPosition.ts (1)**
**Lines 21-40: Standalone `userId` index is redundant with the compound index.**

The compound index on `{ userId: 1, status: 1 }` (line 40) already covers queries filtering on `userId` alone (MongoDB left-prefix rule). The `index: true` on line 23 creates a second single-field index that adds write overhead without query benefit.

Proposed fix:

```diff
-  userId: { type: String, required: true, index: true },
+  userId: { type: String, required: true },
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `models/SavingsPosition.ts` around lines 21-40, the schema defines a standalone index on `userId` alongside a compound index on `{ userId: 1, status: 1 }`, which is redundant. Remove the single-field index by deleting the `index: true` option from the `userId` field in `savingsPositionSchema` so only the compound index `savingsPositionSchema.index({ userId: 1, status: 1 })` remains (leaving other field indexes like `productId` and `status` unchanged).

**models/LendingPool.ts (1)**
**Lines 13-25: Clarify the dual `collateralFactor` fields.**

`collateralFactor` appears both at the pool level (lines 20 and 48) and within each `collateralAssets` entry (lines 10 and 33). The pool-level field likely represents the LTV ratio for the pool's primary asset when used as collateral elsewhere, while each `collateralAssets[].collateralFactor` represents the LTV for that specific collateral within this pool. If that's the intent, a brief doc-comment on each would prevent misuse. If they mean the same thing, one is redundant.

Also applies to: lines 38-53.

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `models/LendingPool.ts` around lines 13-25, the two similarly named fields are ambiguous: `ILendingPool.collateralFactor` and each `collateralAssets[].collateralFactor`. Either (A) add concise doc-comments to `ILendingPool.collateralFactor` and `ICollateralConfig.collateralFactor` explaining their distinct semantics (pool-level LTV when the pool's primary asset is used as collateral elsewhere vs. per-collateral LTV inside this pool), or (B) if they are the same concept, remove the redundant field and update references accordingly, so the meaning is unambiguous.

**controllers/swap/index.ts (1)**
**Lines 40-64: Duplicated from/to parsing logic — extract a helper.**

The `from` and `to` parsing blocks (lines 40-51 and 53-64) are identical in structure. Extracting a small helper keeps this DRY and makes it easier to add future formats (e.g., object inputs from the query).

Proposed refactor:

```diff
+function parseAssetParam(
+  value: unknown,
+  label: string
+): { code: string; issuer?: string } | { error: string } {
+  if (typeof value !== 'string') return { error: `Please choose a valid token to ${label}.` };
+  if (value === 'native') return { code: 'native' };
+  if (value.includes(':')) {
+    const [code, issuer] = value.split(':');
+    return { code, issuer };
+  }
+  return { error: `Please choose a valid token to ${label}.` };
+}
```

Then in `quoteSwap`:

```diff
-  let fromAsset: { code: string; issuer?: string };
-  ...all the from/to parsing blocks...
+  const fromParsed = parseAssetParam(from, 'swap from');
+  if ('error' in fromParsed) return res.status(400).json(errorBody(fromParsed.error));
+  const toParsed = parseAssetParam(to, 'receive');
+  if ('error' in toParsed) return res.status(400).json(errorBody(toParsed.error));
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `controllers/swap/index.ts` around lines 40-64, extract the duplicated parsing logic for `from` and `to` into a single helper (e.g., `parseAsset`) and use it from the `quoteSwap` request handler to produce `fromAsset` and `toAsset`. The helper should accept the raw input (string or other) and return a normalized asset object like `{ code: 'native' }` or `{ code, issuer }`, or null/throw on invalid input; then replace the two identical `typeof from === 'string'` / `typeof to === 'string'` branches with calls to that helper, returning the same 400 error when the helper indicates invalid input.

**controllers/account/index.ts (1)**
**Lines 88-90: Use the standardized error shape for a missing `publicKey`.**

This early 400 returns `{ message }`, which breaks the `errorBody` schema used elsewhere in this controller.

💡 Suggested change:

```diff
-  if (!publicKey) {
-    return res.status(400).json({ message: 'publicKey is required' });
-  }
+  if (!publicKey) {
+    return res.status(400).json(errorBody('publicKey is required'));
+  }
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `controllers/account/index.ts` around lines 88-90, the early return for a missing `publicKey` responds with a raw `{ message }` shape; change it to emit the controller's standardized `errorBody` shape instead, matching the other error responses in this file, so all 400 errors share the same schema.

**scripts/seed-savings-lending.ts (2)**
**Lines 32-88: Ensure `mongoose.disconnect()` always runs, even on errors.**

Wrapping the body in `try/finally` avoids leaving open connections on failures.

🔧 Suggested structure:

```diff
   await mongoose.connect(mongoURI);
   logger.info('Connected to MongoDB');
-
-  const platformIssuer = StellarSdk.Keypair.fromSecret(env.PLATFORM_ISSUER_SECRET).publicKey();
-  const lendingService = new LendingService();
-  const savingsService = new SavingsService();
-
-  const zyradex02Asset = { code: 'zyradex02', issuer: platformIssuer };
-  let collateralAssets: { asset: { code: string; issuer: string }; collateralFactor: string }[] = [];
-  const exists = await assetExistsOnChain(server, zyradex02Asset);
-  if (exists) {
-    collateralAssets = [
-      { asset: zyradex02Asset, collateralFactor: '0.8' },
-    ];
-    logger.info('zyradex02 found on-chain; using as collateral for lending pool.');
-  } else {
-    logger.info('zyradex02 not found on this network (e.g. token is on production only). Creating pool without zyradex02 collateral.');
-  }
+  try {
+    const platformIssuer = StellarSdk.Keypair.fromSecret(env.PLATFORM_ISSUER_SECRET).publicKey();
+    const lendingService = new LendingService();
+    const savingsService = new SavingsService();
+
+    const zyradex02Asset = { code: 'zyradex02', issuer: platformIssuer };
+    let collateralAssets: { asset: { code: string; issuer: string }; collateralFactor: string }[] = [];
+    const exists = await assetExistsOnChain(server, zyradex02Asset);
+    if (exists) {
+      collateralAssets = [
+        { asset: zyradex02Asset, collateralFactor: '0.8' },
+      ];
+      logger.info('zyradex02 found on-chain; using as collateral for lending pool.');
+    } else {
+      logger.info('zyradex02 not found on this network (e.g. token is on production only). Creating pool without zyradex02 collateral.');
+    }
@@
-  await mongoose.disconnect();
-  logger.info('Disconnected from MongoDB');
+  } finally {
+    await mongoose.disconnect();
+    logger.info('Disconnected from MongoDB');
+  }
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `scripts/seed-savings-lending.ts` around lines 32-88, the script calls `mongoose.disconnect()` at the end but can skip it if an error is thrown. Wrap the main seeding logic (including the calls to `mongoose.connect`, `assetExistsOnChain`, `lendingService.createPool`, `SavingsService.createProduct`, `LendingPool.findOne`, `SavingsProduct.findOne`) in a try/finally so `mongoose.disconnect()` is always invoked in the finally block, and log any caught error before rethrowing or exiting, preserving idempotent behavior and clean DB teardown.
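Stripped of the seeding details, the suggested structure is a connect / try / finally / disconnect skeleton. A runnable sketch with a stubbed connection object — `db` and `seed` here are illustrative stand-ins for `mongoose` and the real seeding body:

```typescript
// Stand-in for mongoose.connect/disconnect so the pattern runs in isolation.
const db = {
  connected: false,
  async connect() { this.connected = true; },
  async disconnect() { this.connected = false; },
};

async function seed(shouldFail: boolean): Promise<void> {
  await db.connect();
  try {
    if (shouldFail) throw new Error('seed step failed');
    // ... create lending pools / savings products here ...
  } finally {
    await db.disconnect(); // always runs, even when a seed step throws
  }
}

// Usage: await seed(false);  // db.connected is false again afterwards,
// and a failing run (seed(true)) still ends with db.connected === false.
```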
**Lines 6-20: Remove the redundant `dotenv` import and use `dotenv/config` for cleaner environment loading.**

The `config/env.ts` module already calls `dotenv.config()` internally before reading environment variables, so the redundant call at line 14 is unnecessary. Using `import 'dotenv/config'` as the first import ensures environment variables are loaded before any other module initialization, making the intent clearer and eliminating the manual configuration call.

Suggested fix:

```diff
-import dotenv from 'dotenv';
+import 'dotenv/config';
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `scripts/seed-savings-lending.ts` around lines 6-20, remove the `dotenv` import and the explicit `dotenv.config()` call and instead use the side-effect import form as the first import: replace `import dotenv from 'dotenv';` and the subsequent `dotenv.config();` with `import 'dotenv/config';` so environment variables are loaded before importing `config/env` and other modules (ensuring it appears before imports like `env`, `server`, `LendingPool`, `SavingsProduct`, `LendingService`, `SavingsService`, `assetExistsOnChain`, `logger`).

**services/swap.service.ts (1)**
**Lines 462-489: Consider including the platform fee in native-balance checks for non-native swaps.**

If the platform fee is mandatory, the native balance check should include it to avoid predictable fee-collection failures after a successful swap.

🔧 Suggested tweak:

```diff
-  const nativeRequired = baseFeePi + minReserve;
+  const nativeRequired = baseFeePi + minReserve + platformFeePi;
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `services/swap.service.ts` around lines 462-489, the native-balance check for non-native swaps currently sets `nativeRequired = baseFeePi + minReserve` but omits `platformFeePi`. Update the check in the block where `actualFromCode !== 'native'` (inside the swap validation using `initialAccount` and `nativeAvailable`) to include `platformFeePi`, i.e., `nativeRequired = baseFeePi + platformFeePi + minReserve`, and update the error message to reflect the added platform fee amount.

**services/savings.service.ts (2)**
**Lines 83-93: Dynamic `import()` for a static constant — use a top-level import instead.**

`await import('../config/savings')` on every `getTermOptions()` call is unnecessary; `TERM_DAYS` is a compile-time constant with no circular-dependency risk. Replace it with a static import at the top of the file.

♻️ Proposed refactor:

```diff
+import { TERM_DAYS } from '../config/savings';
 ...
   async getTermOptions(): Promise<{ days: number; apy: string; unlockDate: string }[]> {
-    const { TERM_DAYS } = await import('../config/savings');
     const now = new Date();
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `services/savings.service.ts` around lines 83-93, replace the dynamic import of `TERM_DAYS` inside `getTermOptions` with a top-level static import: remove `const { TERM_DAYS } = await import('../config/savings');` from `getTermOptions` and import `TERM_DAYS` at the top of the module, keeping the rest of `getTermOptions` (the loop calling `this.getSavingsApy` and computing `unlockDate`) unchanged so `TERM_DAYS` is read directly as a compile-time constant.
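As a side note, the term-premium behavior flagged earlier in `config/savings.ts` (interpolate within the known points, clamp outside) can be sketched as a small pure function. The sample points below are placeholders, not the real `TERM_DAYS` curve:

```typescript
// Assumed sample curve: [termDays, premium in percentage points].
const points: Array<[number, number]> = [
  [30, 0.5],
  [90, 1.0],
  [180, 2.0],
];

function termPremium(termDays: number): number {
  if (termDays <= points[0][0]) return points[0][1];         // clamp below range
  const last = points[points.length - 1];
  if (termDays >= last[0]) return last[1];                   // clamp above range
  for (let i = 1; i < points.length; i++) {
    const [x1, y1] = points[i - 1];
    const [x2, y2] = points[i];
    if (termDays <= x2) {
      return y1 + ((termDays - x1) / (x2 - x1)) * (y2 - y1); // linear interpolation
    }
  }
  return last[1]; // unreachable; satisfies the compiler
}

console.log(termPremium(60));  // 0.75 — midway between the 30d and 90d points
console.log(termPremium(10));  // 0.5  — clamped to the 30d point
console.log(termPremium(365)); // 2    — clamped to the 180d point
```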
**Line 14: CommonJS `require` for the `User` model loses type safety.**

`const User = require('../models/User').default` bypasses TypeScript's static type checking. If there's no circular dependency forcing this, replace it with a proper ES import:

♻️ Proposed refactor:

```diff
-const User = require('../models/User').default;
+import User from '../models/User';
```

🤖 Prompt for AI Agents: verify this finding against the current code and only fix it if needed. In `services/savings.service.ts` at line 14, the code uses a CommonJS `require` for the `User` model, which loses TypeScript type safety. Replace it with an ES module import (`import User from '../models/User'` or `import { User } from '../models/User'`, depending on how `User` is exported) so the compiler can check types, remove the `.default` usage, and update any references to the typed `User` symbol. If the module export shape differs, adjust the export in the model or the import form, and ensure tsconfig (`esModuleInterop`/`moduleResolution`) supports the chosen import style.
ℹ️ Review info
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (33)
- config/env.ts
- config/savings.ts
- controllers/account/index.ts
- controllers/launchpad/index.ts
- controllers/lending/index.ts
- controllers/liquidity-pools/index.ts
- controllers/pairs/index.ts
- controllers/savings/index.ts
- controllers/swap/index.ts
- demo-app-fronted
- docs/LENDING.md
- docs/fund-segmentation.md
- docs/onchain-vs-db.md
- docs/products.md
- docs/treasury-amm.md
- middlewares/error-handler.ts
- models/BorrowPosition.ts
- models/CreditScore.ts
- models/LendingPool.ts
- models/SavingsPosition.ts
- models/SavingsProduct.ts
- package.json
- routes/savings.routes.ts
- scripts/seed-savings-lending.ts
- services/credit-score.service.ts
- services/indices.service.ts
- services/lending.service.ts
- services/liquidity-pools.service.ts
- services/savings.service.ts
- services/staking.service.ts
- services/swap.service.ts
- utils/asset.ts
- utils/zyradex-error.ts
✅ Files skipped from review due to trivial changes (1)
- demo-app-fronted
🚧 Files skipped from review as they are similar to previous changes (2)
- routes/savings.routes.ts
- models/BorrowPosition.ts
try {
  const BalanceCache = require('../models/BalanceCache').default;
  const cached = await BalanceCache.findOne({ publicKey: publicKeyString });
  if (cached && cached.balances && cached.balances.length > 0) {
BalanceCache import path looks wrong; cache fallback likely never runs.
require('../models/BalanceCache') from controllers/account/index.ts resolves to controllers/models, so it will throw and skip the cache fallback.
🐛 Fix the relative path
- const BalanceCache = require('../models/BalanceCache').default;
+ const BalanceCache = require('../../models/BalanceCache').default;

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
try {
- const BalanceCache = require('../models/BalanceCache').default;
+ const BalanceCache = require('../../models/BalanceCache').default;
  const cached = await BalanceCache.findOne({ publicKey: publicKeyString });
  if (cached && cached.balances && cached.balances.length > 0) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controllers/account/index.ts` around lines 111 - 114, The BalanceCache
require path is incorrect so the cache lookup never runs; update the import in
the account handler to point to the correct relative module (replace
require('../models/BalanceCache') with the correct relative path to the shared
models module, e.g., require('../../models/BalanceCache').default or convert to
a proper ES import) so BalanceCache (and the cached lookup using cached and
cached.balances) resolves correctly and the cache fallback executes.
export const createPool = async (req: Request, res: Response) => {
  try {
    const { asset, supplyRate, borrowRate, collateralFactor, collateralAssets } = req.body || {};
    if (!asset?.code || !asset?.issuer || !supplyRate || !borrowRate || !collateralFactor) {
      return res.status(400).json({
        message: 'Missing required: asset { code, issuer }, supplyRate, borrowRate, collateralFactor',
      });
    }
    const pool = await lendingService.createPool({
      asset: { code: asset.code, issuer: asset.issuer },
      supplyRate: String(supplyRate),
      borrowRate: String(borrowRate),
      collateralFactor: String(collateralFactor),
      collateralAssets,
    });
    return res.status(201).json(pool);
  } catch (error: any) {
    logger.error('createPool failed:', error);
    return res.status(500).json({ message: error?.message || 'Failed to create pool' });
  }
};
createPool is unguarded — anyone can create lending pools with arbitrary rates.
No authentication or role check before lendingService.createPool. An attacker can create pools with 0% borrow rates, 100% collateral factors, or malicious collateral asset configurations. This endpoint must be restricted to admin/operator callers.
🔒 Proposed fix
export const createPool = async (req: Request, res: Response) => {
try {
+ const caller = (req as any).user;
+ if (!caller || caller.role !== 'admin') {
+ return res.status(403).json({ message: 'Admin access required' });
+ }
    const { asset, supplyRate, borrowRate, collateralFactor, collateralAssets } = req.body || {};

📝 Committable suggestion
export const createPool = async (req: Request, res: Response) => {
  try {
+   const caller = (req as any).user;
+   if (!caller || caller.role !== 'admin') {
+     return res.status(403).json({ message: 'Admin access required' });
+   }
    const { asset, supplyRate, borrowRate, collateralFactor, collateralAssets } = req.body || {};
    if (!asset?.code || !asset?.issuer || !supplyRate || !borrowRate || !collateralFactor) {
      return res.status(400).json({
        message: 'Missing required: asset { code, issuer }, supplyRate, borrowRate, collateralFactor',
      });
    }
    const pool = await lendingService.createPool({
      asset: { code: asset.code, issuer: asset.issuer },
      supplyRate: String(supplyRate),
      borrowRate: String(borrowRate),
      collateralFactor: String(collateralFactor),
      collateralAssets,
    });
    return res.status(201).json(pool);
  } catch (error: any) {
    logger.error('createPool failed:', error);
    return res.status(500).json({ message: error?.message || 'Failed to create pool' });
  }
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controllers/lending/index.ts` around lines 27 - 47, The createPool handler is
unguarded and calls lendingService.createPool directly; add an authorization
check so only admins/operators can call it: validate req.user (or equivalent
auth principal) and check a role flag like isAdmin or role === 'operator' before
proceeding, returning 403 if unauthorized, and only then call
lendingService.createPool with the validated payload; ensure the check is placed
at the top of the createPool function and use existing auth middleware or helper
functions if available to avoid duplicating logic.
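As the prompt above suggests, a shared helper avoids duplicating this check across controllers. The sketch below is one hypothetical shape for such a role-guard middleware; the `req.user` principal and its `role` field mirror the proposed fix and are assumptions about the project's actual auth shape, not confirmed API.

```typescript
// Hypothetical reusable role guard. Assumes an earlier auth middleware
// has populated req.user with { id, role } — adapt to the real shape.
function requireRole(...roles: string[]) {
  return (req: any, res: any, next: () => void) => {
    const caller = req.user;
    // No principal at all: authentication failed or was skipped.
    if (!caller) return res.status(401).json({ message: 'Authentication required' });
    // Authenticated but lacking any of the allowed roles.
    if (!roles.includes(caller.role)) {
      return res.status(403).json({ message: 'Insufficient role' });
    }
    return next();
  };
}

// e.g. router.post('/v1/lending/pools', requireRole('admin'), createPool);
```

Mounting the guard on the route keeps the handler body free of authorization logic and makes the policy visible in the route table.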
export const createProduct = async (req: Request, res: Response) => {
  try {
    const { asset, termDays, apy, minAmount, active } = req.body || {};
    if (!asset?.code || !asset?.issuer || termDays == null || !apy) {
      return res.status(400).json(errorBody('Please provide asset, term, and rate to create a savings product.'));
    }
createProduct is completely unguarded — anyone can create savings products with arbitrary APY.
No authentication or authorization check before calling savingsService.createProduct. An attacker can mint products with a 0% APY or fraudulent terms and surface them to users.
🔒 Proposed fix
export const createProduct = async (req: Request, res: Response) => {
try {
+ const caller = (req as any).user;
+ if (!caller || caller.role !== 'admin') {
+ return res.status(403).json(errorBody('Admin access required.'));
+ }
    const { asset, termDays, apy, minAmount, active } = req.body || {};

📝 Committable suggestion
export const createProduct = async (req: Request, res: Response) => {
  try {
+   const caller = (req as any).user;
+   if (!caller || caller.role !== 'admin') {
+     return res.status(403).json(errorBody('Admin access required.'));
+   }
    const { asset, termDays, apy, minAmount, active } = req.body || {};
    if (!asset?.code || !asset?.issuer || termDays == null || !apy) {
      return res.status(400).json(errorBody('Please provide asset, term, and rate to create a savings product.'));
    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controllers/savings/index.ts` around lines 34 - 39, The createProduct
controller is missing authentication/authorization checks before calling
savingsService.createProduct, allowing anyone to create products; add a guard at
the start of export const createProduct to verify the requester is authenticated
and authorized (e.g., has an admin or product-manager role) and return 401/403
if not authorized, then proceed to validate payload and call
savingsService.createProduct only after the check; reference the createProduct
function and the call to savingsService.createProduct to locate where to insert
the auth/role check or call a shared auth helper/middleware that enforces roles.
export const deposit = async (req: Request, res: Response) => {
  try {
    const userId = (req as any).user?.id ?? req.body?.userId;
    if (!userId) return res.status(400).json(errorBody('Please sign in or provide your user id.'));
    const { productId, amount, userSecret, depositAddress, termDays } = req.body || {};
    if (!productId || !amount) return res.status(400).json(errorBody('Please select a product and enter an amount.'));
    if (!userSecret) return res.status(400).json(errorBody('Please sign in with your wallet to deposit.'));
    const result = await savingsService.deposit({
      userId: String(userId),
      productId,
      amount: String(amount),
      userSecret,
      depositAddress,
      termDays: termDays != null ? Number(termDays) : undefined,
    });
    return res.status(201).json(result);
  } catch (error: any) {
    logger.error('deposit failed:', error);
    const status = error?.status ?? 400;
    return res.status(status).json(errorBodyFrom(error));
  }
};
deposit: user impersonation via req.body?.userId fallback, and Stellar private key exposed in request body.
Two distinct issues:
- Line 56 — The `req.body?.userId` fallback lets any caller claim any user's identity for a real on-chain financial transaction. The `userId` must come exclusively from the authenticated session.
- Lines 60 / 65 — `userSecret` is a raw Stellar private key traversing the request body. Any logging middleware, reverse proxy access log, or APM tool that records request bodies will capture it. Consider a client-side signing model, or at minimum ensure the secret is explicitly scrubbed from all logging/tracing pipelines.
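On the second issue, one way to keep the secret out of logging and tracing pipelines is to pass request bodies through a recursive redaction helper before anything is logged. This is a sketch under assumptions: the sensitive field names below are illustrative and should be extended to match the project's actual payloads.

```typescript
// Hypothetical redaction helper for plain JSON-like payloads.
// Field names are examples — align them with the real request shapes.
const SENSITIVE_KEYS = new Set(['userSecret', 'secret', 'privateKey', 'seed']);

function redact<T>(value: T): T {
  if (Array.isArray(value)) return value.map(redact) as unknown as T;
  if (value && typeof value === 'object') {
    const out: Record<string, unknown> = {};
    for (const [k, v] of Object.entries(value as Record<string, unknown>)) {
      // Replace sensitive leaves, recurse into everything else.
      out[k] = SENSITIVE_KEYS.has(k) ? '[REDACTED]' : redact(v);
    }
    return out as T;
  }
  return value;
}

// e.g. logger.info('deposit request', redact(req.body));
```

Routing every body log through such a helper (ideally inside the logger itself) means a forgotten `logger.error('deposit failed', req.body)` cannot leak the key.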
🔒 Proposed fix for issue 1
- const userId = (req as any).user?.id ?? req.body?.userId;
- if (!userId) return res.status(400).json(errorBody('Please sign in or provide your user id.'));
+ const userId = (req as any).user?.id;
+ if (!userId) return res.status(401).json(errorBody('Authentication required.'));

📝 Committable suggestion
export const deposit = async (req: Request, res: Response) => {
  try {
-   const userId = (req as any).user?.id ?? req.body?.userId;
-   if (!userId) return res.status(400).json(errorBody('Please sign in or provide your user id.'));
+   const userId = (req as any).user?.id;
+   if (!userId) return res.status(401).json(errorBody('Authentication required.'));
    const { productId, amount, userSecret, depositAddress, termDays } = req.body || {};
    if (!productId || !amount) return res.status(400).json(errorBody('Please select a product and enter an amount.'));
    if (!userSecret) return res.status(400).json(errorBody('Please sign in with your wallet to deposit.'));
    const result = await savingsService.deposit({
      userId: String(userId),
      productId,
      amount: String(amount),
      userSecret,
      depositAddress,
      termDays: termDays != null ? Number(termDays) : undefined,
    });
    return res.status(201).json(result);
  } catch (error: any) {
    logger.error('deposit failed:', error);
    const status = error?.status ?? 400;
    return res.status(status).json(errorBodyFrom(error));
  }
};
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controllers/savings/index.ts` around lines 54 - 75, The deposit handler
currently allows impersonation by falling back to req.body.userId and exposes a
raw Stellar private key via req.body.userSecret; stop accepting userId from the
request body and require the authenticated id only (use (req as any).user.id
exclusively and return 401/400 if absent) in deposit, remove the
req.body?.userId fallback, and ensure userSecret is never written to logs or
returned in responses by scrubbing it before any logging/tracing or before
calling errorBodyFrom/returning result; also audit usages of
savingsService.deposit and logger.error in this function to avoid printing
userSecret (pass it as needed but strip it from any objects that might be
stringified/returned).
export const listPositions = async (req: Request, res: Response) => {
  try {
    const userId = (req as any).user?.id ?? req.query.userId;
    if (!userId) return res.status(400).json(errorBody('Please sign in or provide your user id.'));
    const status = req.query.status as 'locked' | 'withdrawn' | undefined;
    const positions = await savingsService.listPositions(String(userId), status);
    return res.status(200).json({ data: positions });
  } catch (error: any) {
    logger.error('listPositions failed:', error);
    return res.status(500).json(errorBodyFrom(error));
  }
};
listPositions IDOR — any caller can view any user's savings positions via ?userId=<target>.
The req.query.userId fallback bypasses authentication and exposes another user's position history, amounts, and lock dates.
🔒 Proposed fix
- const userId = (req as any).user?.id ?? req.query.userId;
- if (!userId) return res.status(400).json(errorBody('Please sign in or provide your user id.'));
+ const userId = (req as any).user?.id;
+ if (!userId) return res.status(401).json(errorBody('Authentication required.'));

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@controllers/savings/index.ts` around lines 77 - 88, The listPositions handler
currently allows req.query.userId fallback which enables IDOR; change it to
require and use the authenticated user id from (req as any).user.id and remove
or restrict the req.query.userId fallback in function listPositions so callers
cannot supply an arbitrary userId. If you need to support admins, explicitly
check the authenticated user's role (e.g., req.user.role === 'admin') before
allowing a different query userId; otherwise always use the authenticated id and
return 403 when missing or unauthorized. Ensure error messages and logging still
occur via logger.error and errorBodyFrom on exceptions.
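The admin-override option described in the prompt above can be isolated in a small pure helper, which keeps the handler readable and directly testable. This is a sketch: the function name, the `role` field, and the `'admin'` role string are assumptions about the project's auth model.

```typescript
// Hypothetical helper deciding which userId a positions query may target:
// regular users only see their own positions; admins may pass ?userId.
function resolveTargetUserId(
  authUser: { id: string; role?: string } | undefined,
  queryUserId: string | undefined,
): { ok: true; userId: string } | { ok: false; status: 401 | 403 } {
  if (!authUser) return { ok: false, status: 401 }; // not signed in
  if (queryUserId && queryUserId !== authUser.id) {
    // Cross-user lookup: allowed only for admins.
    return authUser.role === 'admin'
      ? { ok: true, userId: queryUserId }
      : { ok: false, status: 403 };
  }
  return { ok: true, userId: authUser.id };
}
```

The handler then maps the `ok: false` branch straight to the HTTP status, eliminating the IDOR-prone fallback while still supporting support/ops tooling.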
const response = await axios.post(submitUrl, `tx=${encodeURIComponent(txXdr)}`, {
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  timeout: 30000,
});

logger.success(`Savings deposit tx submitted: ${response.data.hash}`);

const depositedAt = new Date();
const unlockedAt = new Date(depositedAt);
unlockedAt.setDate(unlockedAt.getDate() + termDays);

const apyAtDeposit = (await this.getSavingsApy(termDays)) || product.apy;

const position = await SavingsPosition.create({
  userId: params.userId,
  productId: params.productId,
  amount: params.amount,
  depositedAt,
  unlockedAt,
  status: 'locked',
  apyAtDeposit,
});
Funds can be permanently lost: on-chain payment is submitted before the SavingsPosition record is created.
If SavingsPosition.create() (line 190) throws for any reason after the Stellar transaction succeeds (line 177–180), the user's funds are locked in the custody address with no corresponding position record. There is no API-accessible way to recover them. A client retry would also submit a fresh transaction (new sequence number) and transfer again.
The safe pattern is to create the position record in a 'pending' state before submitting the transaction, then update it to 'locked' on success:
🔒 Proposed fix
- const response = await axios.post(submitUrl, `tx=${encodeURIComponent(txXdr)}`, {
- headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
- timeout: 30000,
- });
- logger.success(`Savings deposit tx submitted: ${response.data.hash}`);
-
const depositedAt = new Date();
const unlockedAt = new Date(depositedAt);
unlockedAt.setDate(unlockedAt.getDate() + termDays);
-
const apyAtDeposit = (await this.getSavingsApy(termDays)) || product.apy;
+ // Create position BEFORE submitting on-chain to prevent fund loss on DB failure
const position = await SavingsPosition.create({
userId: params.userId,
productId: params.productId,
amount: params.amount,
depositedAt,
unlockedAt,
- status: 'locked',
+ status: 'pending',
apyAtDeposit,
});
+ let response: any;
+ try {
+ response = await axios.post(submitUrl, `tx=${encodeURIComponent(txXdr)}`, {
+ headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
+ timeout: 30000,
+ });
+ } catch (txError) {
+ await SavingsPosition.deleteOne({ _id: position._id });
+ throw txError;
+ }
+ position.status = 'locked';
+ await position.save();
+ logger.success(`Savings deposit tx submitted: ${response.data.hash}`);

📝 Committable suggestion
-const response = await axios.post(submitUrl, `tx=${encodeURIComponent(txXdr)}`, {
-  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
-  timeout: 30000,
-});
-logger.success(`Savings deposit tx submitted: ${response.data.hash}`);
const depositedAt = new Date();
const unlockedAt = new Date(depositedAt);
unlockedAt.setDate(unlockedAt.getDate() + termDays);
const apyAtDeposit = (await this.getSavingsApy(termDays)) || product.apy;
+// Create position BEFORE submitting on-chain to prevent fund loss on DB failure
const position = await SavingsPosition.create({
  userId: params.userId,
  productId: params.productId,
  amount: params.amount,
  depositedAt,
  unlockedAt,
- status: 'locked',
+ status: 'pending',
  apyAtDeposit,
});
+let response: any;
+try {
+  response = await axios.post(submitUrl, `tx=${encodeURIComponent(txXdr)}`, {
+    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
+    timeout: 30000,
+  });
+} catch (txError) {
+  await SavingsPosition.deleteOne({ _id: position._id });
+  throw txError;
+}
+position.status = 'locked';
+await position.save();
+logger.success(`Savings deposit tx submitted: ${response.data.hash}`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/savings.service.ts` around lines 177 - 198, Create the
SavingsPosition record in a 'pending' state before calling axios.post(submitUrl,
...) so that a position exists even if the on-chain submit succeeds but DB write
later fails: call SavingsPosition.create(...) with status: 'pending' (and
include userId, productId, amount, termDays or a placeholder for
depositedAt/unlockedAt), then perform the axios.post and on success update that
same record to status: 'locked' and set depositedAt, unlockedAt, apyAtDeposit
and tx hash; on submit failure update the pending record to a failure state or
delete/compensate as appropriate. Ensure you reference the created instance
(from SavingsPosition.create) when updating and handle exceptions from both the
HTTP submit (axios.post) and the DB update so funds and record remain
consistent.
const response = await axios.post(`${env.HORIZON_URL}/transactions`, `tx=${encodeURIComponent(tx.toXDR())}`, {
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  timeout: 30000,
});
logger.success(`Savings withdraw tx submitted: ${response.data.hash}`);

position.status = 'withdrawn';
position.interestAccrued = interestAfterFee;
await position.save();
Double-spend: on-chain payout is submitted before position.status is updated in DB — a save() failure enables a second withdrawal.
If position.save() (line 293) fails (DB timeout, transient error) after the Stellar payout transaction is already submitted (line 285), the position remains 'locked'. Any subsequent call to withdraw(positionId) passes all guards and submits a second payment to the user's wallet.
Set the position to an intermediate state ('withdrawing') before building the transaction to make it idempotent:
🔒 Proposed fix
+ // Mark as withdrawing BEFORE on-chain submission to prevent double-spend
+ position.status = 'withdrawing' as any;
+ await position.save();
+
const baseFee = await server.fetchBaseFee();
const tx = new StellarSdk.TransactionBuilder(custodyAccount, { fee: baseFee.toString(), networkPassphrase: env.NETWORK })
.addOperation(StellarSdk.Operation.payment({ destination: userPublicKey, asset: paymentAsset, amount: totalPayout }))
.setTimeout(60)
.build();
tx.sign(custodyKeypair);
- const response = await axios.post(`${env.HORIZON_URL}/transactions`, `tx=${encodeURIComponent(tx.toXDR())}`, {
- headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
- timeout: 30000,
- });
+ let response: any;
+ try {
+ response = await axios.post(`${env.HORIZON_URL}/transactions`, `tx=${encodeURIComponent(tx.toXDR())}`, {
+ headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
+ timeout: 30000,
+ });
+ } catch (txError) {
+ position.status = 'locked'; // revert on TX failure
+ await position.save();
+ throw txError;
+ }
logger.success(`Savings withdraw tx submitted: ${response.data.hash}`);
position.status = 'withdrawn';

Also, add 'withdrawing' as a valid status to the SavingsPosition model schema to keep validation consistent.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/savings.service.ts` around lines 285 - 293, The code submits the
Stellar payout via axios before persisting the updated position.status, allowing
a DB save failure to enable double-spend; change the flow in withdraw (the
function that builds/submits tx and calls position.save()) so you set
position.status = 'withdrawing' and persist that state (await position.save())
before building or posting the Stellar transaction, then update position.status
= 'withdrawn' and other fields (interestAccrued) and save again after successful
submission; also add 'withdrawing' as an allowed enum/status in the
SavingsPosition model/schema validation and ensure errors during post-submit
save are handled (e.g., retry/alert or compensating logic) to avoid leaving
positions incorrectly locked.
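Taken together with the deposit-side suggestion, the two findings imply a small position lifecycle. One way to keep it consistent is to encode the allowed transitions in an explicit table (a sketch only — `pending` and `withdrawing` are the review's proposed statuses, not values that exist in the current schema):

```typescript
// Proposed SavingsPosition lifecycle:
//   pending -> locked                (deposit tx confirmed)
//   locked -> withdrawing            (withdrawal initiated, persisted first)
//   withdrawing -> withdrawn         (payout tx confirmed)
//   withdrawing -> locked            (payout tx failed; revert)
type PositionStatus = 'pending' | 'locked' | 'withdrawing' | 'withdrawn';

const TRANSITIONS: Record<PositionStatus, PositionStatus[]> = {
  pending: ['locked'],
  locked: ['withdrawing'],
  withdrawing: ['withdrawn', 'locked'], // revert allowed on tx failure
  withdrawn: [], // terminal
};

function canTransition(from: PositionStatus, to: PositionStatus): boolean {
  return TRANSITIONS[from].includes(to);
}
```

Guarding every status write with `canTransition` makes the double-spend path (`locked` jumping straight to a second payout) structurally impossible rather than merely unlikely.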
function addStrings(a: string, b: string): string {
  return (parseFloat(a) + parseFloat(b)).toFixed(7);
}

function mulStrings(a: string, b: string): string {
  return (parseFloat(a) * parseFloat(b)).toFixed(7);
}
🧩 Analysis chain
🏁 Script executed:
cd /tmp && find . -name "package*.json" -o -name "pnpm-lock.yaml" -o -name "yarn.lock" 2>/dev/null | head -5

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 55
🏁 Script executed:
git ls-files | grep -E 'package\.json|pnpm-lock\.yaml|yarn\.lock' | head -10

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 100
🏁 Script executed:
cat package.json 2>/dev/null | head -50

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 1280
🏁 Script executed:
rg -n "decimal\.js|big\.js|bignumber\.js|decimal|Decimal" package.json pnpm-lock.yaml yarn.lock 2>/dev/null

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 250
🏁 Script executed:
cat -n services/staking.service.ts 2>/dev/null | head -100

Repository: Pi-Defi-world/Tokenization-bn
Length of output: 4541
Use BigNumber.js for staking amount arithmetic instead of floating-point math.
parseFloat + toFixed(7) introduces rounding drift in staking calculations (Pi ratio computation, cap checks, commitment validation). BigNumber.js is already available in your dependencies via @stellar/stellar-sdk, so use it to maintain exact decimal precision and prevent desync with on-chain amounts.
Suggested fix using BigNumber.js
+import BigNumber from 'bignumber.js';

function addStrings(a: string, b: string): string {
- return (parseFloat(a) + parseFloat(b)).toFixed(7);
+ return new BigNumber(a).plus(b).toFixed(7);
}
function mulStrings(a: string, b: string): string {
- return (parseFloat(a) * parseFloat(b)).toFixed(7);
+ return new BigNumber(a).times(b).toFixed(7);
}

📝 Committable suggestion
+import BigNumber from 'bignumber.js';

function addStrings(a: string, b: string): string {
- return (parseFloat(a) + parseFloat(b)).toFixed(7);
+ return new BigNumber(a).plus(b).toFixed(7);
}
function mulStrings(a: string, b: string): string {
- return (parseFloat(a) * parseFloat(b)).toFixed(7);
+ return new BigNumber(a).times(b).toFixed(7);
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/staking.service.ts` around lines 14 - 20, Replace floating-point
arithmetic in addStrings and mulStrings with BigNumber operations to avoid
rounding drift: import the BigNumber class (available via `@stellar/stellar-sdk`
or bignumber.js) and in addStrings use new BigNumber(a).plus(new BigNumber(b))
then format to 7 decimals, and in mulStrings use new BigNumber(a).times(new
BigNumber(b)) then format to 7 decimals; ensure you preserve string
inputs/outputs and use BigNumber's toFixed or decimalPlaces to produce exactly 7
decimal places consistently (handle NaN/invalid input the same way existing code
did or throw/return a clear error).
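If pulling in bignumber.js directly is undesirable, the same exactness is available from native `BigInt` fixed-point math over Stellar's 7-decimal "stroop" resolution. A minimal sketch under assumptions (helper names are illustrative; multiplication truncates toward zero, which may or may not match the desired rounding policy):

```typescript
// Exact 7-decimal fixed-point arithmetic using BigInt instead of doubles.
const SCALE = 10_000_000n; // 10^7 stroops per whole unit

function toStroops(s: string): bigint {
  const neg = s.startsWith('-');
  const [whole, frac = ''] = (neg ? s.slice(1) : s).split('.');
  if (frac.length > 7) throw new Error(`more than 7 decimals: ${s}`);
  const v = BigInt(whole || '0') * SCALE + BigInt(frac.padEnd(7, '0') || '0');
  return neg ? -v : v;
}

function fromStroops(v: bigint): string {
  const neg = v < 0n;
  const abs = neg ? -v : v;
  // Always emit exactly 7 fractional digits, matching toFixed(7).
  return `${neg ? '-' : ''}${abs / SCALE}.${(abs % SCALE).toString().padStart(7, '0')}`;
}

function addStrings(a: string, b: string): string {
  return fromStroops(toStroops(a) + toStroops(b));
}

function mulStrings(a: string, b: string): string {
  // Multiply in stroops, then scale back down (BigInt division truncates toward zero).
  return fromStroops((toStroops(a) * toStroops(b)) / SCALE);
}
```

Unlike `parseFloat`, this stays exact even for balances large enough that a double can no longer represent individual stroops.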
const { stakedPi, sumStakedPi } = await this.adapter.getStakingData(launchId, userId);
const sum = parseFloat(sumStakedPi) || 1;
const ratio = parseFloat(stakedPi) / sum;
const baseline = qualifiesForBaseline && PiPowerBaseline ? parseFloat(PiPowerBaseline) : 0;
const piPower = mulStrings(T_available, String(ratio + baseline));
Don’t default sumStakedPi to 1.
|| 1 masks zero/invalid totals and can inflate the ratio, producing incorrect PiPower. Handle sum <= 0 explicitly (baseline‑only or fail).
🔧 Safer guard for invalid totals
- const sum = parseFloat(sumStakedPi) || 1;
- const ratio = parseFloat(stakedPi) / sum;
+ const sum = Number(sumStakedPi);
+ if (!Number.isFinite(sum) || sum <= 0) {
+ const baseline = qualifiesForBaseline && PiPowerBaseline ? parseFloat(PiPowerBaseline) : 0;
+ const piPower = mulStrings(T_available, String(baseline));
+ return { piPower, stakedPi, sumStakedPi };
+ }
+ const ratio = parseFloat(stakedPi) / sum;

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@services/staking.service.ts` around lines 36 - 40, The code currently masks
zero/invalid totals by defaulting sumStakedPi to 1 which inflates the ratio;
update the logic after calling this.adapter.getStakingData (getStakingData,
stakedPi, sumStakedPi) to parse sumStakedPi into a numeric sum and explicitly
handle sum <= 0 (e.g., set ratio = 0 or short‑circuit to baseline‑only
computation) instead of using "|| 1"; then compute ratio = parseFloat(stakedPi)
/ sum when sum > 0, keep baseline logic (qualifiesForBaseline and
PiPowerBaseline) and finally call mulStrings(T_available, String(ratio +
baseline)) for piPower (or return/throw if you prefer failing on invalid
totals).
server: HorizonServer,
asset: { code: string; issuer: string }
): Promise<boolean> {
  if (asset.code === "native" || !asset.issuer) return true;
Bug: Non-native assets with an empty issuer are incorrectly treated as existing.
!asset.issuer is truthy for "", so assetExistsOnChain(server, { code: "USDC", issuer: "" }) returns true without any on-chain check. The guard should only short-circuit for genuinely native assets.
Proposed fix
- if (asset.code === "native" || !asset.issuer) return true;
+ if (asset.code.toLowerCase() === "native") return true;
+ if (!asset.issuer) return false; // non-native without issuer cannot exist📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
-if (asset.code === "native" || !asset.issuer) return true;
+if (asset.code.toLowerCase() === "native") return true;
+if (!asset.issuer) return false; // non-native without issuer cannot exist
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@utils/asset.ts` at line 14, The guard in assetExistsOnChain incorrectly
treats an empty string issuer as native; update the short-circuit in
assetExistsOnChain to only bypass the on-chain check for genuine native assets
or when issuer is null/undefined (not empty string). Locate the condition that
currently reads asset.code === "native" || !asset.issuer and change it to check
asset.code === "native" || asset.issuer == null (or explicit === null/===
undefined) so that assets with issuer === "" will still trigger the on-chain
existence check using server.
Summary by CodeRabbit
New Features
Security & Performance
Documentation