Feature/mintlayer app #1 (Open)

OBorce wants to merge 309 commits into master from feature/mintlayer-app
Conversation

OBorce (Collaborator) commented on Sep 30, 2025

No description provided.

Ericson2314 and others added 30 commits January 26, 2023 10:07
 - All Pull requests are run in GitHub Actions

 - All pushes to `main` are run in GitHub Actions
use dedicated sections for each device
OBorce force-pushed the feature/mintlayer-app branch 8 times, most recently from ab1fa35 to 877082e (October 26, 2025, 23:47)
OBorce force-pushed the feature/mintlayer-app branch from 877082e to 2fa1c33 (October 28, 2025, 09:02)
src/main.rs (outdated)
Comment on lines 173 to 197

```rust
(Ins::GET_VERSION, P1_GET_VERSION, P2_DONE) => Ok(Instruction::GetVersion),
(Ins::APP_NAME, P1_APP_NAME, P2_DONE) => Ok(Instruction::GetAppName),
(Ins::PUB_KEY, p1, P2_DONE) => {
    let p1: PubKeyP1 = p1.try_into()?;
    Ok(Instruction::GetPubkey {
        display: p1.display(),
    })
}
(Ins::SIGN_TX, p1, P2_SIGN_MORE | P2_DONE) => Ok(Instruction::SignTx {
    p1: p1.try_into()?,
    more: value.p2 == P2_SIGN_MORE,
}),
(Ins::SIGN_MSG, P1_SIGN_START, P2_DONE)
| (Ins::SIGN_MSG, P1_SIGN_NEXT..=P1_SIGN_MAX_CHUNKS, P2_DONE | P2_SIGN_MORE) => {
    Ok(Instruction::SignMessage {
        chunk: value.p1,
        more: value.p2 == P2_SIGN_MORE,
    })
}
(
    Ins::GET_VERSION | Ins::APP_NAME | Ins::PUB_KEY | Ins::SIGN_TX | Ins::SIGN_MSG,
    _,
    _,
) => Err(StatusWord::WrongP1P2),
(_, _, _) => Err(StatusWord::InsNotSupported),
```
Collaborator:

This is too messy. I think we should go with a more layered approach (a rough sketch follows the list below).

  1. Since p2 values are universal, we can have a lower layer that handles APDUs and collects one or more of them into a single "low-level instruction", which would consist of ins, p1, and the data (possibly collected from multiple chunks).
    This layer doesn't need to know what those ins/p1/data mean; it should just look at p2: if p2 says "more chunks", then it should expect another APDU with the same ins/p1, collecting the received data in a "context" (this "context" should be separate from any other "context" that a higher-level layer may need).
    Btw, on each APDU that says "more chunks" IMO it's better to return a special status word saying "expecting more chunks", so that the host can do sanity checks ensuring that it and the app are in sync.
  2. Once a full "low-level instruction" (ins, p1 and full data) has been collected, it can be passed to the higher-level layer (also, perhaps p1 should just be named "parameter" at this point). This layer will interpret ins/p1, decode the data accordingly and produce a "high-level instruction" (need to come up with decent terminology, e.g. one of them may be called "command" and the other "instruction"), which should be a Rust enum containing all the data in the decoded form.
  3. The "high-level instruction" is then passed to the handler, which returns Result<Response, Error>, where both Response and Error are Rust enums. This Result can then be converted to a status word and response data, which can be sent back to host.
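
To make the suggestion concrete, here is a minimal editorial sketch of layers 1 and 2 (layer 3 is indicated only by a comment). Every name in it (RawApdu, Assembler, LowLevelInstruction, Command, the ins values and status words) is an illustrative placeholder, not the app's actual API:

```rust
// Editorial sketch only. All types, constants, and ins values below are
// placeholders chosen for illustration, not the app's real definitions.

/// Placeholder status words; the real app has its own StatusWord enum.
#[derive(Debug)]
enum StatusWord {
    WrongP1P2,
    InsNotSupported,
}

/// Hypothetical "more chunks follow" marker in p2.
const P2_MORE: u8 = 0x80;

/// One raw APDU as received from the host.
struct RawApdu {
    ins: u8,
    p1: u8,
    p2: u8,
    data: Vec<u8>,
}

/// Layer 1 output: ins, p1 ("parameter") and the data reassembled from one or more chunks.
struct LowLevelInstruction {
    ins: u8,
    parameter: u8,
    data: Vec<u8>,
}

enum AssembleOutcome {
    /// p2 said "more chunks": answer with a dedicated "expecting more chunks"
    /// status word so the host can check that it stays in sync with the app.
    NeedMore,
    /// A full low-level instruction was collected; pass it to layer 2.
    Complete(LowLevelInstruction),
}

/// Layer 1: collects chunked APDUs by looking only at p2; it never interprets ins/p1/data.
#[derive(Default)]
struct Assembler {
    pending: Option<LowLevelInstruction>,
}

impl Assembler {
    fn push(&mut self, apdu: RawApdu) -> Result<AssembleOutcome, StatusWord> {
        match self.pending.take() {
            None => {
                self.pending = Some(LowLevelInstruction {
                    ins: apdu.ins,
                    parameter: apdu.p1,
                    data: apdu.data,
                });
            }
            Some(mut pending) => {
                // A continuation chunk must carry the same ins/p1 as the first one.
                if pending.ins != apdu.ins || pending.parameter != apdu.p1 {
                    return Err(StatusWord::WrongP1P2);
                }
                pending.data.extend_from_slice(&apdu.data);
                self.pending = Some(pending);
            }
        }
        if apdu.p2 == P2_MORE {
            Ok(AssembleOutcome::NeedMore)
        } else {
            Ok(AssembleOutcome::Complete(self.pending.take().expect("just set")))
        }
    }
}

/// Layer 2 output: a fully decoded, typed command ("high-level instruction").
enum Command {
    GetVersion,
    GetAppName,
    GetPubKey { display: bool },
    SignMessage { message: Vec<u8> },
}

/// Layer 2: interprets ins/parameter and decodes the reassembled data accordingly.
fn decode(low: LowLevelInstruction) -> Result<Command, StatusWord> {
    match (low.ins, low.parameter) {
        (0x03, 0x00) => Ok(Command::GetVersion),
        (0x04, 0x00) => Ok(Command::GetAppName),
        (0x05, p1) => Ok(Command::GetPubKey { display: p1 == 0x01 }),
        (0x07, _) => Ok(Command::SignMessage { message: low.data }),
        _ => Err(StatusWord::InsNotSupported),
    }
}

// Layer 3 (not shown): fn handle(cmd: Command) -> Result<Response, Error>, with the
// Result converted to a status word plus response bytes right before replying.
```

Keeping layer 1 ignorant of instruction semantics is what lets the "expecting more chunks" status word and the chunk bookkeeping live in exactly one place.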

OBorce force-pushed the feature/mintlayer-app branch from 372ff0d to efb1812 (December 15, 2025, 09:06)
OBorce force-pushed the feature/mintlayer-app branch from efb1812 to f0a534a (December 15, 2025, 09:34)
OBorce force-pushed the feature/mintlayer-app branch from 98949a4 to 5faeaaf (December 19, 2025, 02:36)
ImplOfAnImpl (Collaborator) left a comment:

I didn't have enough time to look through all the code; I'll continue after returning from my vacation.

Comment on lines 11 to 15

```toml
# By default, heap size is enforced to 8192 bytes.
# Authorized values are [2048, 4096, 8192, 16384, 24576]
# Uncomment the following lines to set the heap size to 4096 bytes for instance
[env]
HEAP_SIZE = "16384"
```
Collaborator:

Plz remove the original comment and write our own, explaining why we need this particular heap size

Cargo.toml (outdated)

```toml
bech32 = { version = "0.11", default-features = false, features = ["alloc"] }
chrono = { version = "0.4", default-features = false, features = ["alloc"] }

mintlayer-core-primitives = { git = "https://github.com/mintlayer/mintlayer-core-primitives", branch = "feature/ledger-coin-type" }
```
Collaborator:

Plz don't forget to update this.
Also, IMO it's better to pin a specific revision and mention its commit message in a comment.
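
For illustration only, pinning by revision could look like the line below; the hash and the quoted commit message are placeholders, not real values from that repository:

```toml
# Pinned to: "<commit message of the pinned revision>"
mintlayer-core-primitives = { git = "https://github.com/mintlayer/mintlayer-core-primitives", rev = "0123456789abcdef0123456789abcdef01234567" }
```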

@@ -0,0 +1,44 @@
name: Build and run functional tests using ragger through reusable workflow
Collaborator:

Rename the file to build_and_run_functional_tests.yml?

Collaborator Author (OBorce):

I didn't rename it, as the name comes from the boilerplate app's conventions.

Cargo.toml (outdated)
Comment on lines 27 to 29

```toml
[profile.release]
opt-level = 'z'
lto = true
```
Collaborator:

Another thing worth adding here is codegen-units = 1
E.g. I tried building for apex_p. Before:

```
# llvm-size /app/target/apex_p/release/mintlayer-app
   text    data     bss     dec     hex  filename
 145080    1024   19576  165680   28730  /app/target/apex_p/release/mintlayer-app
```

and with codegen-units = 1:

```
# llvm-size /app/target/apex_p/release/mintlayer-app
   text    data     bss     dec     hex  filename
 136224    1024   19576  156824   26498  /app/target/apex_p/release/mintlayer-app
```

I.e. the app will occupy 136224+1024 bytes of flash instead of 145080+1024.

P.S. the BSS section is the amount of RAM that the app's static data will occupy (it will be filled with zeroes at startup). I wonder whether this value is high and, if so, whether we should look for ways to reduce it. IMO it'd make sense to at least look at other apps' binaries (e.g. Ethereum and Bitcoin) and check their BSS sections (we'd need to build those apps from source for this).
One possible way of trying to reduce it would be to go through all dependencies and turn off all unneeded default features.
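
For reference, combined with the settings quoted above, the suggested profile section would read:

```toml
[profile.release]
opt-level = 'z'
lto = true
# One codegen unit lets LLVM optimize the whole crate at once; in the apex_p
# measurement above this saved roughly 9 KB of flash.
codegen-units = 1
```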

OBorce force-pushed the feature/mintlayer-app branch from bf565b1 to 9ca7570 (January 15, 2026, 07:30)
ImplOfAnImpl (Collaborator) left a comment:

TBF I'm tired of your approach to PRs. From now on I want:

  1. A clear indication of when a PR is ready for review. Normally, after changes have been requested, the PR should be changed to draft, and when it comes back to "ready for review", that's the indication that it is ready. If the PR never went to the draft state, then there should be some other indication, e.g. a comment saying "ready for review" or something. I will no longer look at PRs without a clear indication that they're ready.
  2. The code in a non-draft PR should be of production quality. No forgotten pieces of the original boilerplate app, no unused entities, no outdated comments. Also, the usual general best practices should have already been applied - code duplication reduced to a minimum, named constants instead of magic numbers, etc.
    (Btw, I'm not saying that the code should have no issues at all. But there is a huge difference between missing an issue and not bothering to even check.)
  3. To achieve #2 you have to review your own PRs. At least scroll through the diff looking for obvious issues/dirty code. E.g. I do this with my own PRs.
    If you don't bother reviewing your own code, why should I? Doing code review is not pleasant, and having to proof-read someone's dirty code over and over again sucks all the fun out of a developer's life.
  4. All review comments should be addressed, which means: applying the comment, initiating a discussion about it (either on GitHub or Slack), dropping it with a counter-comment, adding a TODO to investigate it later, etc. Basically anything will do, except for silently ignoring it.
    Again, accidents can happen, especially in a large PR, but if they happen too often, then maybe the author of the PR doesn't care enough. And if he doesn't, then why should the reviewer care to write them, right?

Comment on lines +205 to +206

```rust
// Implement constructor for TxInfo with default values
impl TxContext {
```
Collaborator:

What?

P.S. check all comments and make sure they make sense

Comment on lines +222 to +225

```rust
// version
tx_hasher
    .update(&[version])
    .map_err(|_| StatusWord::TxHashFail)?;
```
Collaborator:

This is conceptually wrong - if we ever have Transaction::V2, it may have a completely different structure. So, instead of blindly encoding the version value, we should check if it's 1 and fail if it's not.
Also, the tx version should probably be an enum inside messages, e.g. TransactionVersion.
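
A small illustrative sketch of that check; TransactionVersion and the error variant are hypothetical names, not existing types in the app:

```rust
// Placeholder error type; the real code would use the app's existing StatusWord.
#[derive(Debug)]
enum StatusWord {
    UnsupportedTxVersion,
}

/// Hypothetical typed version enum for the messages module.
#[derive(Clone, Copy, PartialEq, Eq)]
enum TransactionVersion {
    V1,
}

impl TransactionVersion {
    /// Only V1 is understood; anything else is rejected instead of being
    /// blindly fed into the tx hasher.
    fn from_byte(byte: u8) -> Result<Self, StatusWord> {
        match byte {
            1 => Ok(TransactionVersion::V1),
            _ => Err(StatusWord::UnsupportedTxVersion),
        }
    }
}
```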

```rust
}

/// Creates a Vec of APDUs by chunking the data to MAX_ADPU_DATA_LEN
pub fn new_chunks(instruction_byte: u8, param1_byte: u8, data: &'a [u8]) -> Vec<Self> {
```
Collaborator:

What was the reason for not returning ExactSizeIterator here?
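
For reference, a shape that returns an ExactSizeIterator instead of allocating a Vec could look like the following; the Apdu struct and the MAX_APDU_DATA_LEN value here are simplified placeholders:

```rust
const MAX_APDU_DATA_LEN: usize = 255; // placeholder value

/// Simplified placeholder for the host-side APDU type.
pub struct Apdu<'a> {
    pub ins: u8,
    pub p1: u8,
    pub data: &'a [u8],
}

impl<'a> Apdu<'a> {
    /// `slice::chunks` already yields an `ExactSizeIterator`, and `map` preserves
    /// that property, so callers keep the chunk count without a Vec allocation.
    pub fn new_chunks(
        instruction_byte: u8,
        param1_byte: u8,
        data: &'a [u8],
    ) -> impl ExactSizeIterator<Item = Apdu<'a>> {
        data.chunks(MAX_APDU_DATA_LEN).map(move |chunk| Apdu {
            ins: instruction_byte,
            p1: param1_byte,
            data: chunk,
        })
    }
}
```

(One difference from the Vec version: an empty data slice yields zero chunks here, whereas the protocol may still need to send a single empty APDU.)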

```rust
}

#[derive(Encode, Decode)]
pub struct GetVersionRespones {
```
Collaborator:

typo - Respones

```rust
}

#[derive(Encode, Decode)]
pub struct GetPublicKeyRespones {
```
Collaborator:

typo - Respones
(same in the python tests)

```rust
DelegationStake,
DelegationWithdrawl,
CreateStakePool,
DecommissionStakePool,
```
Collaborator:

Looks like this is never constructed.

Plz check other variants too.

Comment on lines +688 to +690

```rust
TxOutput::CreateOrder(_) => {
    ctx.tx_type = merge_tx_type(ctx.tx_type, TxType::CreateOrder);
}
```
Collaborator:

Shouldn't increase_output_totals be called here as well?

If so, plz also make sure there is a unit test for this.

```rust
}

struct Context {
    pub data: DataContext,
```
Collaborator:

data_context

Though I'm not sure why the word "data" is here; probably the struct should just be Context and the field context.

Comment on lines +64 to +81

```rust
    more: bool,
    ctx: &mut DataContext,
) -> Result<(), StatusWord> {
    let ctx = match ctx {
        DataContext::SignMessageContext(ctx) => ctx,
        _ => return Err(StatusWord::WrongContext),
    };
    let chunk = comm.get_data().map_err(|_| StatusWord::WrongApduLength)?;

    if ctx.message.len() + chunk.len() > MAX_MESSAGE_LEN {
        return Err(StatusWord::TxWrongLength);
    }

    ctx.message.extend(chunk);

    if more {
        ctx.review_finished = false;
        Ok(())
```
Collaborator:

This is a mess.

  1. more is always false here, so only 2 chunks can be received.
  2. Why is chunking handled at this level? Why not handle it the same way as in the tx case, so that the entire data is passed into this function? (A sketch follows this list.)
  3. The data part of the first chunk is already in the data field of Command::SignMessage. But you ignore it and call comm.get_data() again.
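
As an editorial illustration of point 2: if the chunk-reassembly layer hands over the complete message, the handler shrinks to roughly the following. The surrounding types are placeholders mirroring the names in the excerpt above, not the app's real definitions:

```rust
const MAX_MESSAGE_LEN: usize = 1024; // placeholder limit

enum StatusWord {
    WrongContext,
    TxWrongLength,
}

struct SignMessageContext {
    message: Vec<u8>,
}

enum DataContext {
    SignMessageContext(SignMessageContext),
    Empty, // stand-in for the other context variants
}

/// With the full message supplied up front there is no `more` flag, no extra
/// `comm.get_data()` call, and no per-chunk bookkeeping left in this handler.
fn handle_sign_message(message: &[u8], ctx: &mut DataContext) -> Result<(), StatusWord> {
    let ctx = match ctx {
        DataContext::SignMessageContext(ctx) => ctx,
        _ => return Err(StatusWord::WrongContext),
    };
    if message.len() > MAX_MESSAGE_LEN {
        return Err(StatusWord::TxWrongLength);
    }
    ctx.message.extend_from_slice(message);
    Ok(())
}
```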

```python
self.backend.exchange(
    cla=CLA, ins=InsType.SIGN_MESSAGE, p1=P1.P1_START, p2=P2.P2_LAST, data=data
)
messages = split_message(message, MAX_APDU_LEN)
```
Collaborator:

these are chunks, not messages
