
fix(FhenixHRE): handle hex-encoded ciphertexts correctly in hardhatMockDecrypt #24

Open

amathxbt wants to merge 1 commit into FhenixProtocol:master from amathxbt:fix/hardhat-mock-decrypt-hex-handling

Conversation


amathxbt commented May 3, 2026

Summary

hardhatMockDecrypt in FhenixHardhatRuntimeEnvironment.ts converts a ciphertext string to a bigint by splitting it into characters and calling charCodeAt(0) on each, treating the result as a byte value. This is incorrect for hex-encoded ciphertexts (the format returned by FHE sealoutput on real networks) and for any byte value > 127.

Bug

// Before
function hardhatMockDecrypt(value: string): bigint {
  const byteArray = new Uint8Array(
    value.split("").map((c) => c.charCodeAt(0))
    //                            ↑ returns a Unicode code point, not a byte
    //                            For a hex string "0x1a2b", this would encode
    //                            ['0','x','1','a','2','b'] = [48,120,49,97,50,98]
    //                            instead of [0x1a, 0x2b]
  );
  ...
}

Consequences:

  1. Any hex-prefixed ciphertext (e.g. one returned by a forked network or by a mock sealoutput that emits hex) is decoded completely wrong, producing an arbitrary bigint that bears no relation to the original plaintext; see the snippet after this list.
  2. Bytes > 127 are silently handled as their Unicode code unit. For ASCII this coincides with the byte value, but the original code lacked an & 0xff mask, which is a hazard whenever charCodeAt returns a code unit above 0xff.
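For concreteness, here is a minimal standalone snippet (illustrative only, not code from this repo) showing what the char-code path produces for a hex input:

// What the charCodeAt-based path does to a hex ciphertext (illustrative).
const value = "0x1a2b";
const byteArray = new Uint8Array(value.split("").map((c) => c.charCodeAt(0)));
console.log(Array.from(byteArray)); // [48, 120, 49, 97, 50, 98]
// These are the char codes of '0','x','1','a','2','b', not the bytes
// [0x1a, 0x2b], so any bigint folded from this array is unrelated to the
// original plaintext.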

Fix

Detect hex-encoded inputs (a 0x prefix or a valid even-length hex string) and parse them directly with BigInt("0x..."). Fall back to the byte-array path for plain mock strings, now with an explicit & 0xff mask for safety. A sketch follows.
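A minimal sketch of the fixed function, reconstructed from the description above (the exact diff may differ, e.g. in how raw hex is detected):

// After (sketch reconstructed from the PR description, not the exact diff)
function hardhatMockDecrypt(value: string): bigint {
  // Path 1: 0x-prefixed hex -- BigInt() accepts "0x..." strings directly.
  if (/^0x[0-9a-fA-F]+$/.test(value)) {
    return BigInt(value);
  }
  // Path 2: raw even-length hex string.
  if (value.length > 0 && value.length % 2 === 0 && /^[0-9a-fA-F]+$/.test(value)) {
    return BigInt("0x" + value);
  }
  // Fallback: plain mock string, folded byte by byte with an explicit mask.
  // (The big-endian fold is an assumption about the elided part of the
  // original function.)
  let result = 0n;
  for (const c of value) {
    result = (result << 8n) | BigInt(c.charCodeAt(0) & 0xff);
  }
  return result;
}

Checking the 0x-prefixed form first means a prefixed value never falls through to the stricter even-length test that guards raw hex.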

The previous implementation split the ciphertext string into characters
and called charCodeAt(0) on each, treating the result as a byte. This
works only for ASCII strings but fails for:
1. Hex-encoded values (e.g. "0x1a2b3c...") returned by real sealoutput
   calls - each hex digit pair was interpreted as two char codes instead
   of a single byte, producing a wildly wrong bigint.
2. Bytes with values > 127 - charCodeAt returns UTF-16 code units,
   not byte values, so the result can exceed the 0-255 range that a
   byte array expects.

The fix detects 0x-prefixed or raw hex strings and parses them directly
with BigInt("0x..."), falling back to the byte-array path for plaintext
mock values.
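Quick spot checks against the sketch above (expected values follow directly from BigInt hex parsing and ASCII codes):

// Spot checks for the three paths (assumes the sketch in the Fix section).
console.log(hardhatMockDecrypt("0x1a2b") === 0x1a2bn); // true: 0x-prefixed hex path
console.log(hardhatMockDecrypt("ff") === 0xffn);       // true: raw even-length hex path
console.log(hardhatMockDecrypt("A") === 65n);          // true: byte fallback ('A' = 0x41)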
