L1 Stages Classification Framework #249
Replies: 5 comments 3 replies
I like the idea of having such a standardized evaluation process for trust, but I see a few improvements needed:
Again, I like the idea, but we should also clarify where such a "standard" and its scores would live, and who would vote on or decide the scores.
Thanks @iJaack, for starting this very important conversation. Sorry in advance for the lengthy comment, but, as an L1 tooling builder for some time, this is something I have been thinking about for quite a while! While I do get the idea in the message that "No stage is 'better'—they serve different use cases and risk preferences," we cannot ignore that most Avalanche L1s start with a very small, concentrated validator set, making them vulnerable to a single entity's failure. This is why I think that we SHOULD have proper stages for Avalanche L1s, very similar to what we can find on L2BEAT for Ethereum L2s, to (i) clearly inform users of the risks they are taking and (ii) guide L1s on the road to improving security through decentralization. That being said, we can of course also envision a classification like the one you proposed in parallel.
What is the biggest risk? And to what extent?
In short: bridged TVL via ICM can be entirely lost.
In detail: Since we are rating L1s in the context of the Avalanche ecosystem, the assumption is that they are communicating with other chains (especially the C-Chain, for access to liquidity and integrations) using ICM (InterChain Messaging), and paying their rent to the P-Chain for that. While very efficient, ICM has known weaknesses that specifically impact chains with a low number of validators and, more importantly, a low number of operators (= independent entities running validators). ICM relies on BLS signatures to verify messages on destination chains, requiring a signature from validators representing at least 67% of the L1's weight to be considered valid. A typical message would be the bridging of assets to the Avalanche C-Chain. If an attacker gets access to BLS keys corresponding to 67% or more of the total weight, they can essentially bridge out all assets without any corresponding transactions on the L1 itself.
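To make the attack surface concrete, here is a minimal sketch of the weight-quorum check described above. The validator names and weights are hypothetical; only the 67%-of-weight threshold comes from the comment.

```python
# Sketch of the ICM signing-weight quorum described above (hypothetical
# validator weights; the 67% threshold is the one quoted in the comment).
QUORUM_NUM, QUORUM_DEN = 67, 100

def quorum_reached(weights: dict[str, int], signers: set[str]) -> bool:
    """Return True if the signers' combined stake weight meets the
    67% threshold required for an ICM message to be considered valid."""
    total = sum(weights.values())
    signed = sum(w for v, w in weights.items() if v in signers)
    # Integer cross-multiplication avoids floating-point rounding at the edge.
    return signed * QUORUM_DEN >= total * QUORUM_NUM

# Example: five validators, but one operator controls three of them.
weights = {"v1": 100, "v2": 100, "v3": 100, "v4": 50, "v5": 50}
one_operator = {"v1", "v2", "v3"}             # 300 / 400 = 75% of weight
print(quorum_reached(weights, one_operator))  # True: that single operator's
                                              # keys suffice to sign messages
```

This is why operator count matters more than validator count: here five validators look decentralized, but one entity's key compromise crosses the quorum alone.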
Other properties to optimize for
On top of bridged TVL security, which is IMO the most important one, critical properties of a chain like liveness and censorship resistance could also be taken into account for L1 stages.
Proposed requirements per stage
Stage 1
Stage 2
Stage 3
Other rating criteria
I do agree that other criteria for rating L1s, like the degree of permissionlessness re: validators' onboarding, are more subjective, and I think that a framework like the one proposed initially would work better for those.
This addresses an important attribute of validator sets, and is in line with a draft I'm producing centered around the broader need for more granular attributes (or the validation of asserted ones) being supported at the protocol level. The thoughts below borrow from some that are still in draft, but to roughly summarize: I generally like the framework above. I think it may have room to be much more granular across a lot of dimensions of validator-set attributes. (More to come.) But as long as that extensibility of the attribute set becomes part of this proposal set, I think these particular attributes you're proposing are a good start. The question arises as to who classifies L1s into the proposed categories (for easy info discovery), and, more importantly for decentralization, how the classification is verified/trusted. What I am proposing would be an extension of this: certain properties are asserted and validated, either via polling amongst the validators, as is done for some current "proof of..." metrics, or by incorporating (and incentivizing reserved compute for, amongst the validator set) some ZK-proof mechanisms within the protocol, allowing these and other "properties" to be asserted and proved in a decentralized manner. For some properties, polling is sufficient to establish trust. Other properties may require a ZK-based mechanism, which the validator set can handle "in-house" in most cases. Again, this is more of a comment on extension, not a contradiction of or alternative to the proposal above. L1s looking to "recruit" niche validation from the main chain validator set (or to incentivize "private"/exclusive or partially/"softly" exclusive L1 validators to meet their niche needs) can use these attributes as a market discovery/information communication mechanism. I've termed it an "RFV" (Request For Validation). One such parameter (or set of them) could be the desired Stage, as proposed here.
The end goal is a "market" with granular, verifiable information about both the L1s and their requests (complete with incentivization offers/participant requirements) for niche validators, while also allowing validators themselves to "advertise", in this more efficient validation market, which attributes they offer. One of those is whether a validator is willing to restrict itself to a particular Stage, as proposed above. I'm proposing combining that concept of parameterized request/offer/advertise market dynamics with a concept of advertising/requiring additional reserved AVAX stake, the handling of which is subject to negotiation between the L1 and a candidate validator, and which can effectively ensure SLA-like (smart-contract-enforced) characteristics. One dimension I'm proposing be added to those you specify is along the lines of what I'm calling "soft vs. hard exclusivity". This would reflect, for example, whether the L1's validator set allows/has ANY overlap with validation of the main chains. Hard exclusivity means it cannot: the set is "private" to that L1 (or disjoint from the main chains). Soft exclusivity is better thought of as a matter of degree. The strictest form is "yes, you can validate the main chain, but we want ours to be the only other L1 you validate" (note this is niche and may require incentivization). Continuing along that spectrum is partial exclusivity: yes, validate other L1s, but not our competitors, or only if they're in this jurisdiction. It's granular, so it can loosen along several parameters, but at the other end is completely unrestricted, decentralized validation. This is a long way of saying "exclusivity" could be another dimension to add to your stages/ratings.
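As one way to picture the RFV idea above, here is a hypothetical sketch of what such a request record could carry. Every field name and the four-level exclusivity scale are illustrative assumptions, not a defined protocol format.

```python
# Hypothetical sketch of an "RFV" (Request For Validation) record as
# described above. Field names and the Exclusivity scale are assumptions.
from dataclasses import dataclass, field
from enum import Enum

class Exclusivity(Enum):
    HARD = "hard"              # validator set disjoint from the main chains
    SOFT_SINGLE_L1 = "soft-1"  # main chain plus this L1 only
    SOFT_PARTIAL = "soft-n"    # other L1s allowed, subject to restrictions
    UNRESTRICTED = "open"      # no overlap restrictions at all

@dataclass
class RequestForValidation:
    l1_id: str
    desired_stage: int                 # the Stage from this framework
    exclusivity: Exclusivity
    reserved_avax_stake: int           # extra stake backing SLA-like terms
    restrictions: list[str] = field(default_factory=list)

rfv = RequestForValidation(
    l1_id="example-l1",
    desired_stage=2,
    exclusivity=Exclusivity.SOFT_PARTIAL,
    reserved_avax_stake=5_000,
    restrictions=["no-competitor-l1s"],
)
print(rfv.exclusivity.value)  # "soft-n"
```

A validator's "advertisement" would be the mirror image of this record: the stages and exclusivity levels it is willing to accept, against which RFVs could be matched.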
The only reason L2Beat exists is that Ethereum L2s are a horrendous pile of complexity and risk that no one understands. They try valiantly, bless them, but IMO it has not helped because, let's face it, ain't nobody got time for that. The beauty of Avalanche L1s is their simplicity. Everyone knows how an L1 works -- you must trust the validator set. We are doing ourselves a disservice if we try to complexify things too much. I see there being only 2 types of L1s (from a user's perspective):
For AppChains or dApps, the industry as a whole should adopt some kind of SOC-type audit that considers many aspects of their security posture -- opsec, contract audits, pen testing, validators, economic security, multisigs, key management, insurance, physical security, and so on. An A/B/C/D/F rating. Easy for users to understand. I guess my main point is that the Avalanche C-Chain is an honest-to-God permissionless L1, with a validator set that is now large enough, Lindy enough, with enough stake, to be beyond reproach. All of the "L1" chains built on top of Avalanche will never match this, nor should they try. And their validator set is but one of many trust points -- by focusing on just validator security you are giving a false sense of security due to all the things that are left out.
I wanted to add that this Framework is just a piece of a bigger project that I think is due for delivery on Avalanche:
I know that this is likely going to be very hard to do, but it's definitely a way to increase investors' confidence and present a stronger community effort overall.
🎯 Proposal Overview
I'm proposing the L1 Stages Classification Framework (SCF) — a standardized way to evaluate and communicate the trust characteristics of Avalanche L1 blockchains.
As the L1 ecosystem grows, participants need clear information about:
This framework answers these questions in a neutral way, recognizing that different trust models serve different purposes.
🔍 How It Works
Four Simple Dimensions
Each L1 is scored on four dimensions using a five-level scale: Minimal, Low, Moderate, High, or Maximum trust required.
Four Stages
The combination of dimension scores produces one of four stages:
No stage is "better"—they serve different use cases and risk preferences.
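To illustrate how dimension scores could combine into a stage, here is a minimal sketch. The dimension names and the "worst dimension wins" aggregation rule are assumptions made for this example; the proposal itself only fixes four dimensions, a five-level scale, and four stages.

```python
# Illustrative sketch: score an L1 on four dimensions using the five-level
# trust scale, then map the combination to one of four stages. Dimension
# names and the "worst dimension wins" rule are assumptions for the sketch.
TRUST_LEVELS = ["Minimal", "Low", "Moderate", "High", "Maximum"]

def classify(scores: dict[str, str]) -> int:
    """Map four per-dimension trust scores to a stage (1-4)."""
    if len(scores) != 4:
        raise ValueError("expected exactly four dimension scores")
    # Index of the dimension requiring the most trust: 0..4.
    worst = max(TRUST_LEVELS.index(s) for s in scores.values())
    # Collapse five levels onto four stages; Minimal and Low share Stage 1.
    return max(1, worst)

example = {
    "validators": "Low",       # hypothetical dimension names
    "bridge": "Moderate",
    "governance": "Low",
    "upgrades": "Minimal",
}
print(classify(example))  # 2 -- driven by the "bridge" dimension
```

A "worst dimension wins" rule is one conservative choice; the open questions below (e.g. how to weight bridge trust, and how to handle mixed L1s) are exactly about whether a different aggregation would serve users better.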
📊 Example
✅ Why This Matters
For Validators:
For Builders:
For Users:
For the Ecosystem:
🤔 Open Questions
We'd love community input on:
Are the four dimensions right? Should we add/remove dimensions or change their focus?
Is the five-level scale appropriate? Too granular? Not granular enough?
Bridge trust priority: What's the ideal weight we should give it?
Mixed L1s: How should we handle L1s with very different scores across dimensions? Should we create intermediate stages?
Fee tracking: Should L1s disclose monthly validator fees to the Primary Network? How detailed should this be?
Governance emphasis: Are we treating different governance models (DAO, multi-sig, permissionless) fairly? Does the framework over/under-emphasize DAOs?
Implementation: Which analytics platforms might implement this? What support would they need?
🎤 Next Steps
This discussion will help refine the L1 Stages Classification Framework before formal submission. Your feedback should address:
Please share:
Looking forward to building a better way to understand L1 trust together.