Atomic composability

Introduction #

In the rapidly evolving domain of distributed ledger technology, scalability and interoperability have become paramount challenges for both academia and industry. Ethereum, a pioneering smart contract platform, has spurred a range of advances, with rollups emerging as a significant answer to the blockchain trilemma of balancing scalability, security, and decentralization (Buterin, V. 2020). Yet while rollups are promising for scalability, they can inadvertently fragment composability. Given how tightly decentralized applications intertwine, ensuring atomicity for transactions that span systems is vital.

Atomic composability is predicated on the principle that a transaction (A) can only be finalized if another transaction (B) is likewise finalized (Micali, S. 2016). For decentralized applications that operate over multiple rollups, this assurance is critical. Yet, actualizing this atomicity with disconnected rollups on Ethereum presents major obstacles.

This paper offers a formal model for atomic composability across multiple rollups on Ethereum. Drawing on established distributed-systems techniques and contemporary cryptography, the proposed model encompasses buffering, dependency management, concurrency control, and zero-knowledge proofs (Ben-Sasson, E. et al. 2013). Beyond proposing the model, we evaluate its practical implications, strengths, and weaknesses, including its resilience against malicious or erroneous behavior.

Our intent extends beyond presenting a solution; we seek to stimulate a wider discourse on the future trajectory of interconnected blockchains. With a surge in applications shifting to a multi-rollup framework on Ethereum and elsewhere, a robust system guaranteeing atomic composability becomes indispensable. Through our model and ensuing discussions, we aim to make substantial contributions to this burgeoning field of blockchain study.

To grasp the intricacies of composability between rollups on Ethereum, it is imperative to first delineate the nature of rollups and composability, before exploring the challenges in achieving cross-rollup atomic composability.

What are Rollups? #

Rollups, in the Ethereum context, are scaling mechanisms that bolster network throughput. They operate by conducting transactions off-chain and subsequently submitting a transaction summary to the primary Ethereum chain, thus enhancing transaction capacity without overloading the main Ethereum network (Buterin, V. 2020).

What is Composability? #

Within blockchain and Ethereum paradigms, composability pertains to the capability of decentralized applications (dApps) and smart contracts to effortlessly integrate and leverage one another’s features (Schär, F. 2020). This can be analogized to “money Legos”, where each protocol or dApp represents an individual Lego piece, capable of diverse combinations.

Limits of Composability Between Rollups #

With the deployment of multiple rollups on Ethereum, challenges arise in ensuring seamless interaction of dApps and contracts across these rollups. This dilemma is intensified if rollups operate in isolation or lack an effective bridging mechanism.

Atomic Transactions Across Rollups: Ensuring atomic composability between rollups necessitates that a transaction in one rollup is only finalized if its counterpart in another rollup is as well. This is intricate because each rollup might possess unique consensus algorithms, validation methodologies, and operational latency.

Data Availability: For a contract in one rollup to interface with data or another contract on a distinct rollup, the requisite data from the latter may not be readily accessible or may be costly to retrieve.

Differing Rules and Standards: Distinct rollups with divergent standards or rules regarding transaction processing can further impede cross-rollup interactions.

Formal Model for Rollups with Decentralized Common Pool (DCP) #

Addressing the multifaceted nature of blockchain ecosystems, especially those spanning several rollups, demands a structured, rigorous approach to uphold transactional integrity and reliability. To this end, our formal model seeks to methodically dissect and elucidate atomic composability across Ethereum’s multiple rollups.

Our model is an amalgamation of classic distributed system theories and innovative cryptographic practices. We recognize that merely adapting traditional system theories to the blockchain milieu is not sufficient, necessitating a model tailored for the peculiarities of decentralized ledgers, especially within Ethereum’s ecosystem (Narayanan, A. et al. 2016).

Subsequent sections will delve into the intricacies of our formal model, commencing with fundamental definitions. We will then probe into operational dynamics, examining transactional workflows, dependency resolutions, and concurrency nuances. Cryptographic methodologies, particularly zero-knowledge proofs, will be highlighted, underscoring their pivotal role in efficient, confidential transaction validations.

With this formal model, our aspiration is to furnish readers with an encompassing, lucid, and rigorous comprehension of how to establish, sustain, and, if necessary, re-establish atomic composability in environments with multiple rollups.

Definitions #

  • ( R ): Set of rollups on Ethereum.
  • ( T ): Set of transactions, where ( T_{i,j} ) is the j-th transaction on rollup ( R_i ).
  • ( P_d ): Decentralized common pool.
  • ( K ): Set of cryptographic keys associated with transactions.
  • ( \tau ): Timestamp attached to each transaction when it’s accepted by the majority of ( P_d ) nodes.
  • ( B ): Buffer zone where transactions with pending dependencies are stored.
  • ( \tau_{max} ): Maximum time a transaction can reside in buffer ( B ).
  • ( B_{max} ): Maximum number of transactions that can reside in buffer ( B ).
  • ( D_{max} ): Maximum number of attempts to resolve a transaction’s dependencies.
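These definitions can be encoded as simple data structures. The following Python sketch is purely illustrative; the class and field names are our own choices, not part of the model:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Transaction:
    """Illustrative encoding of T_{i,j}."""
    rollup_id: int          # i: index of the rollup R_i that produced it
    seq: int                # j: position of the transaction on that rollup
    key: bytes              # K_{i,j}: cryptographic key used by Verify
    timestamp: float = 0.0  # tau: set when accepted by a majority of P_d

@dataclass
class DecentralizedCommonPool:
    """Illustrative encoding of P_d with its system limits."""
    tau_max: float  # maximum time a transaction may reside in buffer B
    b_max: int      # maximum number of transactions in buffer B
    d_max: int      # maximum dependency-resolution attempts
    buffer: list = field(default_factory=list)  # B
```

Concrete values for ( \tau_{max} ), ( B_{max} ), and ( D_{max} ) are deployment parameters; the model only requires that they be finite.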

Operations #

  1. Publish:

    • Description: This operation ensures that every transaction ( T_{i,j} ) from rollup ( R_i ) is published to the Decentralized Common Pool ( P_d ).
    • Steps:
      1. ( R_i ) generates a transaction ( T_{i,j} ).
      2. ( R_i ) sends ( T_{i,j} ) to ( P_d ).
      3. On receiving ( T_{i,j} ), the majority of nodes in ( P_d ) timestamp it with ( \tau ) and store it.
    • Output: ( \text{publish}(T_{i,j}) \rightarrow P_d(\tau) )
  2. Buffer:

    • Description: Transactions with unmet dependencies are sent to the buffer ( B ) until their dependencies are resolved or they hit one of the defined limits.
    • Steps:
      1. If ( T_{i,j} ) has unresolved dependencies, it’s directed to ( B ).
      2. ( T_{i,j} ) resides in ( B ) until either its dependencies are resolved or it breaches one of the constraints (( \tau_{max} ), ( B_{max} ), or ( D_{max} )).
    • Output: ( \text{buffer}(T_{i,j}) \rightarrow B )
  3. Resolve:

    • Description: The operation checks for and resolves dependencies between two transactions, possibly from different rollups.
    • Steps:
      1. Given two transactions ( T_{i,j} ) and ( T_{k,l} ), ( P_d ) checks for their mutual dependencies.
      2. If dependencies exist, ( P_d ) attempts resolution based on available data and timestamps.
      3. If resolution is successful within the constraints, the transactions proceed. Otherwise, they remain in ( B ) or are rejected.
    • Output: ( \text{resolve}(T_{i,j}, T_{k,l}) \rightarrow \text{bool} )
  4. Verify:

    • Description: Ensures the validity and authenticity of the transaction using cryptographic keys.
    • Steps:
      1. For each transaction ( T_{i,j} ), ( P_d ) uses the associated cryptographic key ( K_{i,j} ) to verify its authenticity and integrity.
      2. If the verification succeeds, the transaction proceeds. Otherwise, it’s deemed invalid.
    • Output: ( \text{verify}(T_{i,j}, K_{i,j}) \rightarrow \text{bool} )
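The four operations above can be sketched as methods on a pool object. This is a minimal, single-process sketch under stated assumptions: the dependency check and key verification are stand-ins (a real ( P_d ) would be a network of nodes checking signatures), and the `Pool` class and its parameter defaults are hypothetical:

```python
import time

class Pool:
    """Toy stand-in for P_d; not a distributed implementation."""

    def __init__(self, tau_max=60.0, b_max=100, d_max=5):
        self.store = {}    # published transactions: txid -> timestamp tau
        self.buffer = {}   # B: txid -> number of resolve attempts so far
        self.tau_max, self.b_max, self.d_max = tau_max, b_max, d_max

    def publish(self, txid):
        """publish(T_{i,j}) -> P_d(tau): timestamp and store the transaction."""
        self.store[txid] = time.time()
        return self.store[txid]

    def buffer_tx(self, txid):
        """buffer(T_{i,j}) -> B: hold a transaction with unmet dependencies."""
        if len(self.buffer) >= self.b_max:
            raise RuntimeError("B_max reached: rejection policy P_R applies")
        self.buffer[txid] = 0

    def resolve(self, a, b, deps_met):
        """resolve(T_{i,j}, T_{k,l}) -> bool. `deps_met` stands in for the
        actual dependency check between the two transactions."""
        for txid in (a, b):
            if txid in self.buffer:
                self.buffer[txid] += 1
                if self.buffer[txid] > self.d_max:
                    del self.buffer[txid]  # D_max exceeded: reject
                    return False
        return deps_met

    def verify(self, txid, key, expected_key):
        """verify(T_{i,j}, K_{i,j}) -> bool. A real pool would verify a
        signature; a key comparison is used here only as a placeholder."""
        return key == expected_key
```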

Dependency Handling #

For two transactions ( T_{i,j} ) from ( R_i ) and ( T_{k,l} ) from ( R_k ):

  1. Timestamp-Based Handling:

    • If both transactions are timestamped in ( P_d ) and their timestamps fall within an acceptable time difference ( \delta ): [ |\tau(T_{i,j}) - \tau(T_{k,l})| \leq \delta ] then the transactions are deemed compatible and can be processed without buffering.
  2. Buffering:

    • If the timestamp difference exceeds the acceptable delta, or if there’s a dependency which isn’t yet satisfied, transactions are sent to ( B ).
    • While in ( B ):
      • Periodic checks are done to see if dependencies can now be resolved.
      • If ( (\tau_{current} - \tau(T_{i,j})) > \tau_{max} ), ( T_{i,j} ) is rejected.
      • If the buffer size exceeds ( B_{max} ), a rejection policy ( P_R ) is triggered to create space.
      • If attempts to resolve dependencies for ( T_{i,j} ) exceed ( D_{max} ), ( T_{i,j} ) is rejected.
  3. Rejection and Notification:

    • Once a transaction is rejected, a notification mechanism informs the originating rollup ( R_i ) or the respective party about the rejection. This allows for potential re-submission or other actions from the user’s end.
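The timestamp test and the buffer rejection rules above can be sketched as follows. The data layout (txid mapped to a timestamp and an attempt counter) is an assumption for illustration only:

```python
def compatible(tau_a, tau_b, delta):
    """Timestamp-based handling: transactions are compatible when
    |tau(T_{i,j}) - tau(T_{k,l})| <= delta."""
    return abs(tau_a - tau_b) <= delta

def sweep_buffer(buffer, tau_current, tau_max, d_max):
    """Apply the rejection rules to transactions sitting in B.
    `buffer` maps txid -> (tau, attempts); returns (kept, rejected).
    Rejected entries would trigger a notification to the originating
    rollup R_i, which is out of scope for this sketch."""
    kept, rejected = {}, []
    for txid, (tau, attempts) in buffer.items():
        if tau_current - tau > tau_max or attempts > d_max:
            rejected.append(txid)
        else:
            kept[txid] = (tau, attempts)
    return kept, rejected
```

A separate check against ( B_{max} ) (triggering the rejection policy ( P_R )) would run before admitting new transactions into the buffer.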

Punitive Measures #

1. Staking Mechanism

Every rollup must stake a certain number of tokens to participate in the transaction composability system. This stake acts as collateral that can be forfeited or slashed if the rollup misbehaves.

2. Monitoring and Reporting

Third-party nodes or participants (often called “watchers” or “validators”) can monitor transactions and execution behaviors across rollups. If they detect a rollup executing a transaction without proper validation or not adhering to the model’s rules, they can submit a proof of this misbehavior.

3. Misbehavior Proof

A system can be put in place to accept proofs of misbehavior. Such a proof is verifiable evidence that a rollup executed a transaction without adhering to the atomic composability rules. Examples might include:

  • A transaction executed even when its dependencies were not met.
  • A transaction was not delayed or buffered according to the model’s rules.
  • Dependency resolutions were not handled as per the model.

4. Slashing

Upon successful verification of a misbehavior proof, the staked tokens of the misbehaving rollup can be slashed. This means a portion or all of the staked tokens are taken away as a penalty.

5. Distribution of Slashed Tokens

To incentivize monitoring and honest behavior, a portion of the slashed tokens can be distributed to the party that provided the proof of misbehavior. The remainder can be burned or redistributed to other participating nodes.

6. Re-entry Mechanism

Once penalized, a rollup might be barred from participating in the system for a specific duration or until it stakes more tokens as a renewed sign of commitment to adhere to the rules.

By introducing these punitive measures, the formal model provides a disincentive for rollups to act dishonestly or recklessly. This enhances the security and trustworthiness of the entire multi-rollup system.
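The staking, slashing, and distribution steps above can be sketched in a few lines. The slash fraction and the reporter's share are illustrative parameters, not values prescribed by the model, and the proof check is assumed to have happened upstream:

```python
def slash(stakes, rollup, reporter, proof_valid,
          slash_fraction=0.5, reporter_share=0.2):
    """On a verified misbehavior proof, slash the rollup's stake, pay part
    of it to the reporting watcher, and burn the remainder. Returns the
    burned amount. `stakes` maps participant -> staked token balance."""
    if not proof_valid:
        return 0.0
    penalty = stakes[rollup] * slash_fraction
    stakes[rollup] -= penalty
    stakes[reporter] = stakes.get(reporter, 0.0) + penalty * reporter_share
    burned = penalty * (1 - reporter_share)
    return burned
```

A re-entry mechanism would then bar the slashed rollup until it tops its stake back up, which this sketch does not model.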

Impossibility Result #

Given the model:

  1. Atomic Composability Challenges: Ensuring atomic composability between transactions across rollups, especially with inter-rollup dependencies, becomes challenging due to system-imposed limits like ( \tau_{max} ), ( B_{max} ), and ( D_{max} ).

  2. Finite Buffering vs. Transaction Rejection: While ( \tau_{max} ), ( B_{max} ), and ( D_{max} ) guarantee finite buffering, these limits inevitably lead to transaction rejection. Rejected transactions are not necessarily invalid; they may simply be victims of system constraints.

  3. Network Partitions in ( P_d ): If nodes in the ( P_d ) network become partitioned, it can lead to inconsistencies in the state of ( P_d ). This disrupts the dependency resolution process, making atomic composability between transactions impossible.

  4. External Dependencies Limitation: If a transaction in a rollup depends on external data or off-chain events, achieving atomic composability becomes unpredictable due to possible variable and extended delays, which the current model doesn’t account for.

While the formal model provides a structured approach to handling transaction delays, buffers, and dependencies, achieving atomic composability across rollups becomes impossible under specific scenarios, such as system constraints, network partitions, and unpredictable external dependencies. The model underscores the inherent challenges and trade-offs of scaling blockchains while trying to maintain robust guarantees.

Types of dependencies and coverage #

The formal model, with its comprehensive set of operations and dependency handling mechanisms, is equipped to handle the complexities of the various types of multi-rollup dependencies.

Sequential Dependencies #

Mapping: Transaction ( T_{A,i} ) in Rollup A must finalize before Transaction ( T_{B,j} ) in Rollup B can commence.

Coverage in Model:

  • The model’s Buffering operation ensures that ( T_{B,j} ) waits in the buffer ( B ) until ( T_{A,i} ) is confirmed.
  • The Resolve operation checks and ensures that the dependency of ( T_{B,j} ) on ( T_{A,i} ) is met before proceeding.
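The sequential case reduces to a simple gate: ( T_{B,j} ) leaves the buffer only once ( T_{A,i} ) is finalized. In this sketch, `finalized` is an assumed oracle (in practice, a confirmation from Rollup A observed by ( P_d )):

```python
def release_sequential(finalized, dep, tx, buffer):
    """Release `tx` from the buffer (a set) if its dependency `dep` has
    finalized; otherwise keep it buffered. Returns True on release."""
    if finalized(dep):
        buffer.discard(tx)
        return True
    return False
```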

Concurrent Dependencies #

Mapping: Transaction ( T_{C,k} ) in Rollup C and Transaction ( T_{D,l} ) in Rollup D must be executed concurrently for the smart contract to proceed.

Coverage in Model:

  • Timestamp-Based Handling in the Dependency Handling section ensures that if the timestamps ( \tau(T_{C,k}) ) and ( \tau(T_{D,l}) ) are within an acceptable delta, the transactions are deemed to have occurred concurrently.

Conditional Dependencies #

Mapping: If ( T_{E,m} ) in Rollup E is successful, then execute ( T_{F,n} ) in Rollup F. Otherwise, alter or reject ( T_{F,n} ).

Coverage in Model:

  • The Buffering operation holds ( T_{F,n} ) until ( T_{E,m} )’s outcome is determined.
  • The Resolve operation checks the outcome of ( T_{E,m} ) and determines the next steps for ( T_{F,n} ) based on it.

Mutual Dependencies #

Mapping: ( T_{G,o} ) in Rollup G and ( T_{H,p} ) in Rollup H are mutually dependent. A failure in one should trigger a compensatory action in the other.

Coverage in Model:

  • Buffering operation holds both transactions until their mutual dependencies are resolved.
  • Resolve ensures that if one transaction fails, the other is alerted and appropriate actions are taken.

Aggregate Dependencies #

Mapping: If the sum of transactions ( \Sigma T_{I,q} ) in Rollup I exceeds a threshold, execute bonus distribution ( T_{J,r} ) in Rollup J.

Coverage in Model:

  • Buffering ensures that ( T_{J,r} ) is on hold until aggregate conditions from Rollup I are met.
  • Resolve continually checks the cumulative conditions from Rollup I to determine the fate of ( T_{J,r} ).

Cyclic Dependencies #

Mapping: ( T_{K,s} ) depends on ( T_{M,t} ), which depends on ( T_{L,u} ) and ( T_{K,v} ).

Coverage in Model:

  • The model’s inbuilt limits, such as ( \tau_{max} ) and ( D_{max} ), ensure that cyclic dependencies don’t lead to infinite loops or deadlocks. If a resolution isn’t found within these limits, transactions are rejected.
  • Resolve ensures dependencies are handled in the right order, breaking the cycle if needed based on timestamps or other constraints.
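A cycle such as ( T_{K,s} \rightarrow T_{M,t} \rightarrow T_{K,v} ) can be detected with a depth-first search over the dependency graph before Resolve attempts resolution. The model itself only guarantees termination via its limits; running an explicit cycle check first, as sketched here, is our assumption about one practical way to do it:

```python
def has_cycle(deps):
    """`deps` maps a transaction to the transactions it depends on.
    Returns True if the dependency graph contains a cycle (detected as
    a back edge during depth-first search)."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / done
    color = {}

    def visit(tx):
        color[tx] = GRAY
        for d in deps.get(tx, ()):
            c = color.get(d, WHITE)
            if c == GRAY:          # back edge: cycle found
                return True
            if c == WHITE and visit(d):
                return True
        color[tx] = BLACK
        return False

    return any(color.get(tx, WHITE) == WHITE and visit(tx) for tx in deps)
```

Transactions on a detected cycle could then be handled by timestamp order or rejected outright, consistent with the limits above.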

Atomic composability & ZK-proofs #

Zero-Knowledge Proofs (zk-proofs), particularly zk-SNARKs (Zero-Knowledge Succinct Non-Interactive Argument of Knowledge) and zk-STARKs (Zero-Knowledge Scalable Transparent Argument of Knowledge), are cryptographic methods that allow one party to prove to another party that a statement is true, without revealing any specific information beyond the validity of the statement itself.

For atomic composability across rollups, zk-proofs can be exceptionally beneficial. Let’s explore how:

Transaction Validation #

zk-proofs can be utilized to validate that a transaction on one rollup adheres to specific conditions, without actually revealing the contents of the transaction. This is especially useful for maintaining privacy across rollups while still ensuring that conditions are met.

Dependency Verification #

If one transaction depends on another from a different rollup, zk-proofs can be utilized to validate the successful execution and correctness of the dependent transaction, again, without revealing the actual transaction details.

Concurrency and Aggregate Dependencies #

zk-proofs can be crafted to provide proofs of concurrent transaction executions or aggregate transaction conditions (like total transaction value across multiple rollups) being met, all without revealing specific transaction details.

Compactness and Efficiency #

zk-proofs, especially zk-SNARKs, have the advantage of being succinct. That means, irrespective of the amount of data or the number of transactions they’re validating, the proof size remains relatively small and verification is swift. This feature can be immensely beneficial in a system with multiple rollups, where swift validations are essential.

Incorporating Zk-proofs #

  1. Verification Functions: Introduce verification functions within the model that use zk-proofs. These functions can quickly validate the correctness and completion of dependent transactions without needing full transparency into the transactions.

  2. Reduced Buffering Requirement: With zk-proofs validating transaction dependencies almost immediately, the need for extensive buffering can be reduced. Transactions can be executed swiftly after their zk-proof verifications succeed.

  3. Privacy Maintenance: As zk-proofs can validate statements without revealing the underlying data, transactions across rollups can maintain higher degrees of privacy, even in interdependent scenarios.

  4. Slashing with zk-proofs: The monitoring and reporting mechanism can also utilize zk-proofs. Watchers can provide a zk-proof of misbehavior, which if validated, can lead to punitive measures.

  5. Proofs of Dependency Resolution: In the case of cyclic or complex dependencies, zk-proofs can be crafted to ensure that all necessary conditions across rollups have been met without revealing the specifics of the transactions involved.

To integrate zk-proofs into the model effectively, it would require the rollups participating in this system to be zk-proof compatible. They should be able to generate and verify these proofs efficiently. Furthermore, standardization of proofs related to atomic composability would be necessary to ensure smooth inter-rollup operations.

The integration of zk-proofs does introduce added cryptographic complexity, but with the advantages of swift validation, reduced need for buffering, and enhanced privacy, they can significantly bolster the robustness and efficiency of the atomic composability model across rollups.
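The "reduced buffering" point can be made concrete with a small sketch: if a zk-proof of a dependency's correct execution verifies, the dependent transaction skips the buffer entirely; otherwise it falls back to the buffering path. Here `verify_proof` is an assumed callable backed by a real SNARK/STARK verifier supplied by a zk-compatible rollup; no actual proof system is invoked:

```python
def admit(tx, dep_proof, verify_proof, buffer):
    """Fast path for dependency verification via zk-proofs.
    Returns 'execute' if the proof verifies, else buffers `tx`."""
    if dep_proof is not None and verify_proof(dep_proof):
        return "execute"   # dependency proven correct: no buffering needed
    buffer.append(tx)      # fall back to the standard buffering path
    return "buffered"
```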

Application of the Formal Model #

As the Ethereum ecosystem evolves, diverse scaling solutions have emerged to address the challenges of throughput and latency. Among them, the concept of shared sequencers has gained significant traction. Shared sequencers act as centralized transaction ordering mechanisms, increasing throughput by temporarily assuming the role of transaction orderer before these transactions are batched and finalized on the main chain. While they introduce efficiencies, shared sequencers inherently operate differently from typical rollups, warranting an exploration of how our formal model for atomic composability can be applied to them.

The application of our formal model to shared sequencers and other existing solutions accentuates its versatility and universality. Whether it’s the centralized nature of sequencers or the diverse architectures of other rollups, the principles of atomic composability, as proposed in our model, remain consistent. This consistency is instrumental in creating a cohesive, interoperable, and scalable Ethereum ecosystem that can cater to the ever-growing demands of decentralized applications and services.

In this section, we extend our formal model to understand its implications on shared sequencers and juxtapose it with other prevailing solutions in the space. The goal is to provide a comparative analysis that not only elucidates the strengths and potential drawbacks of each approach but also offers a cohesive understanding of how atomic composability can be universally achieved irrespective of the underlying scaling solution.

Shared Sequencers: A Brief Overview #

Shared sequencers, by design, centralize the transaction ordering mechanism without compromising the security guarantees of the main chain. Transactions are quickly processed off-chain by the sequencer and then aggregated into larger batches to be submitted on-chain. This architecture provides the dual benefit of swift transaction times and reduced on-chain congestion.

Applying the Formal Model: #

  1. Buffering & Dependency Handling: Given that shared sequencers operate in an off-chain environment before finalization, the buffering mechanism we proposed becomes even more critical. It allows for the temporary storage of transactions, especially when there are cross-rollup or cross-sequencer dependencies.

  2. Concurrency Control: Shared sequencers inherently deal with a high volume of simultaneous transactions. Implementing our concurrency control mechanism ensures that interdependent transactions, even from different sources or rollups, can be processed in an atomic fashion.

  3. Zero-Knowledge Proofs & Validation: The validation mechanism using zk-proofs becomes an asset here. It ensures that even in a semi-centralized environment, transaction validations remain private, swift, and secure. The zk-proofs also provide an added layer of trust to users who might be skeptical of the centralized nature of sequencers.

Comparison with Existing Solutions: #

While shared sequencers present a compelling case for scalability, other solutions like zk-rollups, optimistic rollups, and sidechains each have their unique architectures and merits.

  1. zk-Rollups: These rely heavily on zk-proofs for batched transaction validation. Applying our formal model, zk-rollups can benefit from enhanced dependency handling and buffering mechanisms, ensuring transactions across rollups are consistently processed.

  2. Optimistic Rollups: Here, transactions are assumed to be correct until proven otherwise. Our formal model introduces a systematic approach for handling disputes and reordering, ensuring atomic composability without extensive delays.

  3. Sidechains: Being independent blockchains, sidechains can pose more significant challenges for atomic composability. Our model can act as a bridge, providing mechanisms like dependency resolution and buffering to ensure smooth inter-chain operations.

Conclusion #

The formal model presented serves as a superset of all possible approaches to atomic composability across rollups because of its comprehensive nature.

This model is a superset because it is designed to encompass every step, every entity, and every possible scenario in the lifecycle of a transaction across rollups. By being exhaustive in its approach, any atomic composability solution across rollups that exists or might be conceived in the future can be mapped onto some subset of this model. It acts as a general framework or blueprint from which specific implementations can be derived, tailored to particular requirements or constraints.

Comprehensiveness of Definitions #

It identifies every rollup, transaction, timestamp, cryptographic key, buffer mechanism, and even the decentralized pool where transactions are posted. This covers all essential tools and entities required for atomic transactions between rollups.

Detailed Operations #

The detailed operation steps ensure that every conceivable action associated with transactions – from their creation, publication, verification, buffering, to dependency resolution – is incorporated:

  • Publish: Every transaction is committed to a decentralized common pool, capturing the universal broadcast mechanism.
  • Buffer: Handles the uncertainties and delays.
  • Resolve: Manages interdependencies, making sure that if one transaction in a set can’t be executed, none in that set are.
  • Verify: Ensures that only legitimate and valid transactions are processed.

Robust Dependency Handling #

This is perhaps the crux of atomic composability. The model provides:

  • A method to check the timing (timestamps) of transactions, which is crucial for ensuring order and dependencies.
  • Buffering mechanisms to account for delays in dependency resolution.
  • Defined limits for buffering to prevent infinite waits and to give an outcome (acceptance or rejection) within a finite time.

Flexibility & Scalability #

The model doesn’t restrict the number or type of dependencies. It simply provides mechanisms to handle them. This ensures that as blockchain technology evolves and new dependency types emerge or transactions become more intricate, the model remains applicable.

Incorporation of Limits #

By integrating system limits like ( \tau_{max} ), ( B_{max} ), and ( D_{max} ), the model not only accounts for ideal scenarios where all dependencies are quickly resolved but also for edge cases where system constraints come into play. This adds to its universality.

Security through Verification #

The model’s inclusion of cryptographic verification ensures that security concerns are front and center. It recognizes that composability isn’t just about making sure transactions work together, but that they’re also genuine and untampered.

References: #

  1. Buterin, V. (2020). Ethereum’s Scalability and Decentralization Challenge. Ethereum Foundation.
  2. Micali, S. (2016). Atomic Transactions in Blockchain. MIT.
  3. Ben-Sasson, E., Chiesa, A., Genkin, D., Tromer, E., & Virza, M. (2013). SNARKs for C: Verifying program executions succinctly and in zero knowledge. Cryptology ePrint Archive.
  4. Schär, F. (2020). Decentralized Finance: On Blockchain- and Smart Contract-Based Financial Markets. Federal Reserve Bank of St. Louis Review.
  5. Narayanan, A., Bonneau, J., Felten, E., Miller, A., & Goldfeder, S. (2016). Bitcoin and Cryptocurrency Technologies. Princeton University Press.