Public Blockchain Gravity Releases Litepaper, Bringing a New User Experience Based on High-Performance EVM
Source: Gravity
Introduction
Since the launch of the Gravity Alpha Mainnet in August 2024, the network has processed over 275 million transactions, with daily volume peaking at 2.6 million transactions, and has collected 30 million G in transaction fees. With the release of the Litepaper, we are excited about the future roadmap of this high-performance EVM chain. This article delves into the Litepaper, revealing how Gravity is constructing a superior high-performance Layer-1 blockchain architecture for real-world applications.
Gravity is a high-performance, EVM-compatible Layer-1 blockchain built by Galxe. The development of Gravity stemmed from Galxe's evolving needs. As the world's largest on-chain distribution platform, the Galxe ecosystem has connected a vast multi-chain network, attracting over 25 million users. In its continuous evolution, Galxe has transformed into a decentralized super app, integrating innovative tools such as Quest, Passport, Score, Compass, and Alva, while offering rich services such as loyalty points, event NFTs, token rewards, zk-identity verification, and cross-chain smart savings. Throughout this growth, Galxe's high transaction volume became a key driver for building Gravity: its loyalty points system averages 51.2 transactions per second, and token reward events average 32.1 transactions per second. This led us to build an EVM blockchain so that the Galxe backend could be decentralized while maintaining an optimal user experience.
As more of Galxe moves on-chain, transaction volume is expected to increase. Anticipated throughput demand will reach 50 million gas per second, while meeting broader ecosystem needs, such as cross-chain settlements, loyalty point transactions, and NFT markets, may require processing power of 500 million gas per second. Gravity's goal is therefore to support a throughput of 1 gigagas per second, meeting the scaling requirements of resource-intensive applications.
Among existing solutions, many platforms achieve scalability through Ethereum, but the unavoidable challenge period of L2 Rollups introduces transaction delays, which is problematic for applications like Galxe that require instant transaction confirmation. Although some DApps attempt to work around the delay through trust models or external monitoring, these methods introduce additional complexity and risk, which is clearly not ideal for core use cases.
In this context, Gravity has emerged. Gravity, with parallel EVM at its core, has narrowed the performance gap between L2 and L1 by developing Grevm, currently the fastest open-source parallel EVM execution system. In addition, Gravity modularizes Aptos' architecture, integrating validated components such as Quorum Store and AptosBFT consensus engine. Leveraging AptosBFT's mature architecture, Gravity avoids the complexity and potential risks of developing from scratch. Ultimately, Gravity has not only built a high-performance L1 blockchain that provides a full-chain experience but has also launched the first pipeline blockchain SDK, greatly simplifying the interaction process for developers and users.
Gravity delivers unprecedented scalability, decentralization, and near-instant transaction speeds. Its technology combines the advantages of L1 and L2, achieving 10,000 transactions per second and sub-second finality. Meanwhile, by integrating protocols such as EigenLayer and Babylon, Gravity not only ensures robust security in the launch phase but also reduces systemic risks associated with long-term reliance on a single asset.
In the future, Gravity will progress according to the following roadmap:
· Launch Devnet Phase 1 to test real-time performance;
· Initiate Longevity Testnet to validate network long-term stability;
· Transition from Gravity Alpha Mainnet to fully operational Gravity Mainnet, laying the foundation for the widespread adoption of decentralized technology.
Below is the full translation of the Gravity Litepaper.
Abstract
Gravity is a high-performance, EVM-compatible Layer-1 blockchain built by Galxe, designed for large-scale applications and a multi-chain ecosystem. Its features include a throughput of 1 gigagas per second, sub-second transaction finality, and a PoS consensus mechanism based on restaking protocols. Gravity's design relies on two core open-source components: (1) the Gravity SDK, a pipelined AptosBFT PoS consensus engine based on restaking; and (2) Gravity reth, an execution layer powered by the parallel EVM Grevm (Gravity EVM). These tools make it possible to build alternative Layer-1s and efficient Layer-2s for Web3 applications, excelling particularly on EVM chains. This paper delves into Gravity's engineering design and technical innovations, demonstrating how high-performance requirements are met through a pipeline architecture, advanced consensus algorithms, parallel execution, and optimized storage mechanisms (by enhancing reth and improving the Aptos consensus engine).
Introduction
The development of Gravity stemmed from the challenges encountered by Galxe in its operation. Galxe is a leading Web3 application that provides users with loyalty points, event NFTs, token rewards, zero-knowledge identity verification, and multi-chain smart savings services. As Galxe grew rapidly, its loyalty points system processed an average of 51.2 transactions per second, while the token reward activities processed an average of 32.1 transactions per second. In the gradual decentralization of Galxe, migrating these use cases to an EVM blockchain while ensuring a seamless user experience became a significant challenge. Therefore, developing a high-performance EVM blockchain that can meet both (1) high transaction throughput and (2) nearly instant transaction confirmation became crucial.
In this context, the decision between adopting an existing Layer-2 solution and developing a new Layer-1 is a critical juncture. Layer-1 achieves finality through a consensus algorithm, while Layer-2 addresses it through a Rollup protocol. There are trade-offs between the two: a Layer-1 typically sacrifices some throughput due to consensus algorithm limitations but achieves faster transaction finality. For example, a consensus algorithm based on AptosBFT can confirm transactions in under a second, while an Optimistic Rollup may have a challenge period lasting up to seven days. Although zero-knowledge proofs expedite this process, final confirmation still takes several hours. Given Gravity's need for sub-second finality (especially for its cross-chain intent protocol), we chose to build a Layer-1 solution.
Although Layer-2 has a native advantage in communicating with Ethereum, a Layer-1 like Gravity can achieve deep interoperability with Ethereum and other blockchains through the Gravity Intent Protocol and cross-chain bridges. This design not only seamlessly collaborates with Ethereum but also enhances the overall connectivity of the entire Web3 ecosystem.
Furthermore, the restaking protocol significantly reduces the difficulty of building a PoS Layer-1 blockchain. Gravity leverages protocols such as EigenLayer and Babylon to integrate staking assets from Ethereum and Bitcoin and their extensive validator networks. This provides economic security to PoS consensus, bringing Gravity's decentralization and security to a level comparable to Ethereum.
In summary, Gravity is built as a high-performance, EVM-compatible Layer-1 blockchain to meet the scalability and performance needs of modern Web3 applications. While its initial development was to serve Galxe, the Gravity SDK and Grevm (Gravity EVM) provide a flexible framework suitable for building any Layer-1 and efficient Layer-2, akin to the Tendermint/Cosmos SDK.
We Need a Throughput of 1 Gigagas/s
For a blockchain, throughput is the most critical performance metric, usually measured in transactions per second (TPS) or gas used per second (gas/s). Taking Galxe's loyalty points system as an example, it requires a minimum of 4 million gas/s to operate stably. This figure follows from each loyalty-point transaction consuming an average of 80,000 gas while the system processes approximately 51.2 transactions per second.
This estimate is supported by practical data from the Gravity Alpha Mainnet. As our Layer-2 network currently in testing, the Gravity Alpha Mainnet has shown that loyalty-point throughput can stably reach 4 million gas/s, validating the accuracy of the estimate above.

Although the high cost of on-chain operations may slightly dampen demand, Galxe's scaling trend indicates that demand may rise to two to three times the current level during peak hours. Additionally, with other use cases such as NFTs, token rewards, and future whole-chain quests supported by zero-knowledge proofs, a fully on-chain Galxe is expected to require 50 million gas/s. Assuming that application gas usage on the Gravity chain follows a Pareto distribution (much as Uniswap consistently consumes about 10% of Ethereum's gas), serving broader ecosystem needs such as cross-chain settlement, loyalty point transactions, and the NFT market would ideally require a throughput of 500 million gas/s. Therefore, to meet these potential demands, the blockchain must have a processing capacity of 1 gigagas per second to accommodate the scaling of resource-intensive applications.
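The arithmetic behind these throughput targets can be reproduced in a few lines. This is a sketch using the figures from the text; the constant and function names are ours:

```rust
/// Average gas consumed by one loyalty-point transaction (from the text).
const GAS_PER_LOYALTY_TXN: f64 = 80_000.0;
/// Observed loyalty-point throughput in transactions per second (from the text).
const LOYALTY_TPS: f64 = 51.2;

/// Baseline demand of the loyalty points system alone: ~4 Mgas/s.
fn loyalty_gas_per_sec() -> f64 {
    GAS_PER_LOYALTY_TXN * LOYALTY_TPS
}

/// If one app needs `app_demand` gas/s and, Pareto-style, consumes only
/// `app_share` of total chain gas, the whole ecosystem needs this much.
fn ecosystem_gas_per_sec(app_demand: f64, app_share: f64) -> f64 {
    app_demand / app_share
}

fn main() {
    // 80,000 gas x 51.2 TPS = 4,096,000 gas/s, i.e. ~4 Mgas/s today.
    println!("baseline: {:.2} Mgas/s", loyalty_gas_per_sec() / 1e6);
    // A fully on-chain Galxe at 50 Mgas/s with a ~10% share of chain gas
    // implies ~500 Mgas/s for the ecosystem; 1 gigagas/s adds headroom.
    println!("ecosystem: {:.0} Mgas/s", ecosystem_gas_per_sec(50e6, 0.10) / 1e6);
}
```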
To achieve such high throughput, the key is to introduce a parallel EVM. We have developed Grevm, which is the fastest open-source parallel EVM execution system currently available. The specific performance can be seen in the following sections.
Sub-second Confirmation Time
Besides throughput, transaction confirmation speed is also crucial for user experience. Modern users are accustomed to the near-instant responses of Web2, which remains a challenge for blockchains. Galxe, much like fully on-chain games, has strict low-latency requirements. Currently, the transaction confirmation time of most EVM blockchains ranges from seconds to days, far from meeting this requirement. We chose the AptosBFT consensus algorithm to achieve sub-second confirmation times.
Although L2 Rollups can theoretically increase throughput, their challenge period causes transaction delays, which is highly disadvantageous for applications requiring instant transaction confirmation (such as Galxe). While some DApps try to optimize this through trust models or external monitoring, these approaches introduce additional complexity and risk, which is not ideal for critical applications. The Gravity SDK, through its five-stage pipeline design, parallelizes the consensus and execution processes, closing the performance gap between L2 and L1 (see the design details in the following sections).
PoS Security Based on Restaking
The Gravity SDK offers a secure way to scale Ethereum that is not limited to L2 Rollups: it opts instead for an L1 architecture protected through restaking, balancing performance, interoperability, and security. Its core module integrates restaking protocols such as EigenLayer and Babylon, providing economic trust to underpin a robust proof-of-stake consensus.
With Ethereum's $450 billion staked assets and 850,000 validators, along with access to Bitcoin's $6 trillion assets through Babylon, Gravity can establish a strong security foundation from the outset, avoiding common launch issues and security vulnerabilities in new blockchains, while also reducing long-term systemic risks associated with a single asset.
Gravity Chain Architecture

The Gravity Chain consists of two main components: the Gravity SDK and Gravity reth. The Gravity SDK is an enhanced blockchain framework derived from the Aptos chain, the most advanced PoS blockchain in the HotStuff consensus family, whose pipeline architecture significantly improves throughput and resource efficiency. Gravity reth, based on reth, operates as the execution layer in the form of a Block Stream Reactor (BSR), consuming proposed blocks from the consensus layer. By optimizing reth, Gravity reth achieves parallel execution, batched asynchronous state commitment computation, and improved storage efficiency. The two components are tightly integrated through the Gravity Consensus Engine Interface (GCEI) and a reth adapter, with a pipeline controller dynamically managing progress through each stage.
This design separates block execution from block consensus, with the execution layer consuming proposed blocks. Our optimizations of reth make it well suited for the pipelined block proposal process managed by the Block Stream Reactor (BSR).
The transaction process of Gravity Chain is as follows:
1. The transaction is submitted through the Gravity Reth JSON RPC interface, which is fully compatible with Ethereum.
2. Subsequently, the transaction enters the memory pool of Gravity SDK and propagates in the network. Validators batch process the transactions and generate a Quorum Store (QS) certificate.
3. The leader of each round proposes a block, which includes block metadata and ordered transactions selected from the memory pool and QS.
4. Once the proposal is marked as ordered, it enters the execution layer.
5. The Grevm in the execution layer processes transactions in parallel, generates execution results, and passes the new state to the state management module.
6. The state module calculates the state root and passes it to the consensus engine to achieve state root consensus.
7. After the state root is finally confirmed, the storage module persists the state root and block data.
The following sections will detail each component.
Gravity SDK: Innovative Practice of Open-Source Pipeline Blockchain
Gravity SDK is a modular, open-source blockchain framework built on the production-ready Aptos blockchain. Its goal is to modularize the Aptos architecture, leveraging proven components such as the Quorum Store and the AptosBFT consensus engine to build the first pipeline blockchain SDK.
The reasons Gravity SDK chooses Aptos as the foundation include:
· Top-notch technical architecture: Aptos is an advanced PoS blockchain based on the HotStuff family of consensus.
· Exceptional performance: Aptos delivers a throughput of 160,000 transactions per second and finality in under one second.
· Battle-tested reliability: Aptos has been validated in production environments, demonstrating outstanding stability and efficiency.
· Avoiding reinvention: Leveraging Aptos' mature architecture avoids the complexity and potential risks of developing from scratch, while most attempts to surpass Aptos remain theoretical and impractical.
· Synergistic benefits: As Aptos continues to evolve, Gravity SDK can seamlessly integrate its new features, such as random number API, while also feeding back to Aptos through a modular architecture and innovative security mechanisms.
Blockchains built on the Gravity SDK interface with the Pipeline Consensus Engine through the Gravity Consensus Engine Interface (GCEI). Although GCEI is compatible with multiple execution layers, the Gravity SDK currently primarily supports Gravity Reth. More details about GCEI are discussed in subsequent sections.
Gravity Consensus Engine Interface (GCEI)
The GCEI (Gravity Consensus Engine Interface) protocol is the communication bridge between the consensus layer and the execution layer. It standardizes interaction between the two layers, and the Pipeline Controller keeps the consensus and execution processes synchronized.

The main difference between a traditional blockchain SDK and the Gravity SDK lies in its pipelined consensus engine. The execution layer must be implemented as a Block Stream Reactor, meaning it needs to be able to continuously consume the proposed block stream, and the state commitment must be asynchronously computed from transaction execution. Additionally, the execution layer needs to provide backpressure signals to the consensus layer to dynamically adjust the pace of block proposals.
Furthermore, because of the Gravity SDK's pipelined nature, the execution layer must be able to handle non-executable transactions in proposed blocks: the mempool cannot strictly validate any transaction's validity because it lacks access to the latest world state, whose execution may not yet have completed. Likewise, execution results must not block the generation of subsequent blocks: since the Gravity SDK parallelizes block consensus and state consensus, the execution layer acts as a reactor to the proposed block stream and is free to return execution results in later stages.
The GCEI protocol defines two sets of APIs:
· Consensus Layer API: These APIs are implemented by the Gravity SDK and called by the execution layer, to submit transactions to the mempool and to report state commitments.
· Execution Layer API: These APIs must be implemented by the execution layer. The consensus engine uses them to best-effort-validate transactions before including them in blocks, to stream proposed blocks to the execution layer, and to inform it of the final state commitment.
From a transaction lifecycle perspective, the GCEI protocol defines the following:
1. check_txn (Execution Layer API)
· Input: Receives a transaction (GTxn) as input.
· Output: Returns the sender's address, nonce, and gas limit of the transaction.
· Purpose: This method is used by the consensus engine to perform best-effort validation before proposing the transaction in a block. It may be called multiple times for the same transaction, for example when the transaction enters the mempool, before it is proposed in a block, and when the state commitment is finalized.
2. submit_txn (Consensus Layer API)
· Input: Receives a transaction (GTxn) from the execution layer.
· Output: Returns Result<()>, indicating whether the transaction was successfully added to the mempool.
· Purpose: The execution layer uses this method to submit a transaction to the mempool. The consensus engine then propagates the transaction over the network and forms a Quorum Store upon receiving a batch of transactions.
3. recv_ordered_block (Execution Layer API)
· Input: Receives an ordered_block (of type BlockBatch) containing ordered transactions and block metadata.
· Output: Returns Result<()>, indicating whether the execution layer successfully received and accepted the block.
· Purpose: Once the consensus engine proposes a block, it is sent to the execution layer for transaction execution. This method allows the execution layer to receive and process the proposed block.
4. update_state_commitment (Consensus Layer API)
· Input: Receives the state commitment (StateCommitment) for a specific block number.
· Output: Returns Result<()>, indicating whether the state commitment was successfully accepted by the local consensus engine.
· Purpose: After the execution layer computes a state commitment, it sends it to the consensus layer for finalization, reaching a lightweight 2f+1 consensus with other validators. If state commitment consensus falls significantly behind block proposals, the pipeline controller adjusts the pace of block proposal.
5. commit_block_hash (Execution Layer API)
· Input: Receives a vector of block_ids representing the blocks to be committed.
· Output: Returns Result<()>, indicating the success or failure of the operation.
· Purpose: When the state commitment is finalized, the consensus layer notifies the execution layer to commit the block hashes to blockchain storage.
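Taken together, the lifecycle above suggests a call surface roughly like the following Rust sketch. The method names follow the text, but the concrete types and signatures are illustrative guesses, not the actual Gravity SDK API:

```rust
// Illustrative sketch of the GCEI call surface. Method names follow the
// text; types and signatures are our assumptions, not the real SDK API.

type Result<T> = std::result::Result<T, String>;

struct GTxn { sender: [u8; 20], nonce: u64, gas_limit: u64, payload: Vec<u8> }
struct BlockBatch { block_number: u64, txns: Vec<GTxn> }
struct StateCommitment { block_number: u64, root: [u8; 32] }

/// Implemented by the execution layer; invoked by the consensus engine.
trait ExecutionLayer {
    /// Best-effort validation before the txn is proposed into a block.
    fn check_txn(&self, txn: &GTxn) -> Result<([u8; 20], u64, u64)>;
    /// Consume an ordered block from the proposed block stream.
    fn recv_ordered_block(&mut self, block: BlockBatch) -> Result<()>;
    /// Persist block hashes once their state commitment is final.
    fn commit_block_hash(&mut self, block_ids: Vec<u64>) -> Result<()>;
}

/// Implemented by the Gravity SDK; invoked by the execution layer.
trait ConsensusLayer {
    /// Hand a txn to the mempool for propagation / Quorum Store batching.
    fn submit_txn(&mut self, txn: GTxn) -> Result<()>;
    /// Report a computed state commitment for 2f+1 lightweight consensus.
    fn update_state_commitment(&mut self, c: StateCommitment) -> Result<()>;
}

/// Toy execution layer that just records what it receives.
struct RecordingExec { received: Vec<u64>, committed: Vec<u64> }

impl ExecutionLayer for RecordingExec {
    fn check_txn(&self, txn: &GTxn) -> Result<([u8; 20], u64, u64)> {
        Ok((txn.sender, txn.nonce, txn.gas_limit))
    }
    fn recv_ordered_block(&mut self, block: BlockBatch) -> Result<()> {
        self.received.push(block.block_number);
        Ok(())
    }
    fn commit_block_hash(&mut self, block_ids: Vec<u64>) -> Result<()> {
        self.committed.extend(block_ids);
        Ok(())
    }
}

fn main() {
    let mut exec = RecordingExec { received: vec![], committed: vec![] };
    let txn = GTxn { sender: [0; 20], nonce: 1, gas_limit: 80_000, payload: vec![] };
    assert!(exec.check_txn(&txn).is_ok());
    exec.recv_ordered_block(BlockBatch { block_number: 7, txns: vec![txn] }).unwrap();
    exec.commit_block_hash(vec![7]).unwrap();
    println!("received={:?} committed={:?}", exec.received, exec.committed);
}
```

The toy `RecordingExec` only records what it receives; a real Block Stream Reactor would execute blocks asynchronously and report state commitments back through `update_state_commitment`.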
Blockchain Pipeline
The Gravity SDK uses a five-stage pipeline architecture to maximize hardware resource utilization, achieving higher throughput and lower latency. The pipeline interleaves tasks between different blocks, and the pipeline manager uses feedback mechanisms to ensure the steady progress of the blockchain. The first three stages belong to the consensus layer, while the last two stages belong to the execution layer.
Each stage is explained as follows:
· Stage 1: Transaction Propagation: In this stage, transactions are efficiently propagated among validators to ensure their timely and reliable inclusion during block construction. The design decouples transaction propagation from the consensus mechanism, following the ideas of Narwhal & Tusk and Aptos: validators continuously share batches of transactions, using all network resources concurrently. When a batch of transactions gathers signatures representing 2f+1 stake weight (forming a Proof of Availability, PoAv), the batch is guaranteed to be stored by at least f+1 honest validators, so all honest validators can retrieve these transactions for execution.
· Stage 2: Block Metadata Ordering: In this stage, the network reaches consensus on a universally recognized order of transactions and block metadata. The consensus mechanism (AptosBFT) follows the two-chain commit rule to produce Byzantine fault-tolerant blocks. Blocks then flow into the execution stage, ready for parallel processing.
· Stage 3: Transaction Execution (BSR): This stage belongs to the execution layer, where transactions are executed in parallel. The execution results are delivered to the state commitment stage.
· Stage 4: State Commitment: In this stage, the state changes caused by transaction execution are finalized, and block finality is prepared. State commitment and transaction execution are asynchronous computations, ensuring that the execution of the next block is not hindered by the current block's state commitment.
· Stage 5: State Persistence: This stage persists the committed state changes to blockchain storage. The final state root and related data are stored in the Gravity Store, a highly optimized storage engine designed for fast and reliable access. This stage also notifies the mempool and Quorum Store to clear transactions that can no longer be included in future blocks.
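The feedback mechanism that keeps the five stages in step can be illustrated with a toy pacing rule. This is our simplification, not the SDK's actual pipeline manager: consensus keeps proposing until state commitment falls too far behind, then waits.

```rust
/// Toy backpressure rule in the spirit of the pipeline manager described
/// above (our simplification, not the actual Gravity SDK controller).
struct PipelineController {
    /// Highest block number proposed by consensus (stage 2).
    proposed: u64,
    /// Highest block number whose state commitment is done (stage 4).
    committed: u64,
    /// Maximum allowed gap before proposals are throttled.
    max_lag: u64,
}

impl PipelineController {
    /// Consensus asks this before proposing the next block.
    fn may_propose(&self) -> bool {
        self.proposed - self.committed < self.max_lag
    }
    fn on_proposed(&mut self) { self.proposed += 1; }
    fn on_state_committed(&mut self, block: u64) {
        self.committed = self.committed.max(block);
    }
}

fn main() {
    let mut pc = PipelineController { proposed: 0, committed: 0, max_lag: 4 };
    // Proposals run ahead of execution until the lag limit is hit.
    while pc.may_propose() { pc.on_proposed(); }
    assert_eq!(pc.proposed, 4);
    // Once a state commitment lands, the pipeline can advance again.
    pc.on_state_committed(1);
    assert!(pc.may_propose());
    println!("proposed up to block {}", pc.proposed);
}
```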
Staking and Restaking Module
Building a secure Proof of Stake (PoS) Layer 1 blockchain is a complex task, especially when relying solely on on-chain specific token staking. This approach may face issues of insufficient economic security in the early stages, such as token price volatility or limited validator participation. To address this challenge, the Gravity SDK provides a flexible Staking and Restaking module aimed at enhancing network security through on-chain and off-chain staking mechanisms.
One of the key strategies of the Gravity SDK is the introduction of protocols like EigenLayer and Babylon for Restaking. These protocols allow validators to restake assets from other mature networks (such as Ethereum and Bitcoin), leveraging their existing security. By enabling validators to stake assets from these chains, the Gravity SDK enhances the network's economic security without relying solely on the native token. This approach not only strengthens the network's resilience but also promotes diversity in the validator ecosystem. The design of the Staking module is centered around modularity, with the Restaking component being highly flexible to easily adapt to new Restaking protocols as the blockchain ecosystem evolves.
This module not only supports restaking assets but also enables staking custom ERC20 tokens on supporting chains, such as the G Token on Ethereum. Validators can participate in consensus by staking these permitted tokens, contributing to the network's security. Validator voting power is calculated based on their total staked value, including custom tokens and assets from Restaking protocols. This calculation is specific to each chain's configuration, ensuring that each chain can flexibly set staking and restaking rules based on its unique requirements.
The Epoch Manager in the Consensus Engine collaborates directly with the Staking module to calculate the weight of the next round's validator set. It ensures that the consensus process accurately reflects the latest staking dynamics by fetching stake values from the execution layer. In this architecture, cross-chain assets (such as staked assets from Ethereum) must first be bridged to the execution layer before they can be used to calculate the total validator stake. The implementation of the bridging mechanism is the responsibility of the execution layer, allowing for more flexibility in handling cross-chain communication. Possible solutions include a PoS bridge, zero-knowledge proofs of chain state, and embedded interchain message passing for bootstrapping.
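As a rough illustration of the weight calculation described above (field names and units are ours; the real module's staking rules are configured per chain):

```rust
/// Illustrative voting-power calculation: total stake is the sum of
/// permitted custom-token stake and restaked assets bridged to the
/// execution layer. Field names are our assumptions, not the SDK's.
struct ValidatorStake {
    /// Stake in permitted custom tokens (e.g. G Token), in a common unit.
    custom_token_stake: u128,
    /// Restaked assets (e.g. via EigenLayer / Babylon), same unit.
    restaked_value: u128,
}

/// A validator's voting power is its total staked value.
fn voting_power(v: &ValidatorStake) -> u128 {
    v.custom_token_stake + v.restaked_value
}

/// The Epoch Manager would derive the next validator set's weights from
/// stake values like these, fetched from the execution layer.
fn total_power(set: &[ValidatorStake]) -> u128 {
    set.iter().map(voting_power).sum()
}

fn main() {
    let set = vec![
        ValidatorStake { custom_token_stake: 1_000, restaked_value: 4_000 },
        ValidatorStake { custom_token_stake: 2_500, restaked_value: 0 },
    ];
    assert_eq!(voting_power(&set[0]), 5_000);
    assert_eq!(total_power(&set), 7_500);
    println!("total voting power: {}", total_power(&set));
}
```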
More technical details, API designs, and a full explanation of the Staking and Restaking mechanisms will be elaborated on in upcoming documents.
Gravity Reth: Block Stream Reactor EVM Execution Layer
Integrating an Ethereum Virtual Machine (EVM) execution layer into the Gravity SDK architecture presents unique challenges, especially in fully leveraging its pipelined consensus engine. To integrate seamlessly and unlock the potential of this architecture, we made several key optimizations to the open-source Ethereum client reth. These optimizations transform reth into Gravity reth, a pipeline-optimized EVM execution layer tailored for the pipelined consensus engine.
Traditional blockchain architectures process blocks sequentially, ensuring that each block is fully validated and executed before proposing the next block. However, the Gravity SDK adopts a pipelined consensus mechanism that separates the block processing into various stages to enhance performance. This paradigm shift introduces complexities:
· Out-of-order Transactions: In a pipelined chain, the latest world state may not be accessible to the mempool because executions from previous blocks are still pending. Transactions included in a proposed block may therefore not be executable at proposal time, since their validity cannot be strictly verified without the latest state.
· Non-Blocking Execution Results: To prevent pipeline stalls, execution results must not block subsequent block generation. The execution layer must handle proposed blocks asynchronously and return execution results in a later stage without hindering the consensus process. For the EVM, this entails redefining the blockhash and eliminating reliance on the stateRoot field in the block header.
To address these issues, we have introduced four key optimizations:
· Block Stream Reactor (BSR): The BSR aims to adapt reth for the pipeline block proposal flow of the Gravity SDK. It enables the execution layer to continuously consume the proposed block stream as a reactor for asynchronous block processing. The BSR establishes a dynamic feedback loop with the consensus engine, incorporating appropriate backpressure signals. These signals adjust real-time block proposal and state submission speeds based on the execution layer's throughput and latency. If the execution layer lags due to complex transactions or resource constraints, the backpressure mechanism reduces the block proposal rate to ensure system stability.
· State Submission and Transaction Execution Decoupling: The second optimization involves decoupling state submission computation from transaction execution. By decoupling these processes, we have achieved the asynchrony of state submission computation, allowing subsequent block execution to proceed without waiting for the completion of state submission in the current block. We have redefined the blockhash, eliminating reliance on the stateRoot field in the block header, ensuring that state root computation does not stall the generation of subsequent blocks.
· Storage Layer Optimization: In a pipeline architecture, efficient caching and persisting of multi-version state values and state submission are crucial. The third optimization focuses on enhancing the storage layer to meet these requirements without introducing bottlenecks. By optimizing the storage mechanism, we ensure that state data can be rapidly written and concurrently retrieved. This includes building a multi-version storage engine and supporting asynchronous I/O from the database to the storage API.
· Parallel EVM: The final optimization involves parallelizing transaction execution within the EVM. We have developed Grevm, a parallel EVM runtime that significantly accelerates transaction processing through concurrent transaction execution. Grevm leverages data dependency hints obtained from transaction simulation to optimize parallel execution, reducing transaction re-executions and increasing throughput.

Grevm (Gravity EVM) - Parallel EVM Execution
Grevm is an open-source project hosted on GitHub (or will be open-sourced shortly); refer to its README for more information.
Grevm (Gravity EVM) is an open-source parallel EVM runtime based on revm. Grevm's algorithm is inspired by BlockSTM and is enhanced by introducing a transaction data dependency graph obtained from simulation results. This mechanism makes parallel execution scheduling more efficient, minimizing transaction re-executions.
In our benchmarks, Grevm is currently the fastest open-source parallel EVM implementation. For conflict-free transactions, Grevm runs 4.13 times faster than sequential execution, reaching 26.50 gigagas/s. When simulating a real-world 100 μs I/O latency, it runs 50.84 times faster than sequential execution, with a throughput of 6.80 gigagas/s. This leap comes from combining parallel execution with asynchronous I/O: parallelization lets I/O operations overlap effectively, further accelerating execution.
The core idea of Grevm is to leverage data dependencies between transactions, using speculative transaction read/write sets to optimize parallel execution. Although not every hint is entirely accurate, these simulation-based hints are reliable in practice. For example, on the Ethereum mainnet, based on historical gas usage, approximately 30% of transactions are simple Ether transfers and another 25%-30% are ERC20 token transfers, which typically read and write only a limited number of accounts and storage slots. For such transactions, simulation results are consistently accurate.
Building on these insights, we developed Grevm's three-stage parallel execution framework as a successor to the Block-STM model, integrating data dependency hints obtained from transaction simulation:
· Stage 1: Hint Generation and State Preloading—Simulate transactions to collect dependency hints and pre-warm the memory cache. This stage can be executed at different times depending on the blockchain's design. For example, when new transactions arrive in the mempool, simulation can be run immediately to prepare dependency hints in advance.
· Stage 2: Dependency Analysis—Convert the dependency hints collected during the simulation stage into a directed acyclic graph (DAG) representing the dependency relationships between transactions. This DAG is used to plan transaction scheduling for subsequent parallel execution.
· Stage 3: Parallel Execution Under Conflict Resolution—Utilize a modified version of the BlockSTM algorithm based on the dependency DAG to execute transactions in parallel. The scheduling order no longer strictly follows the sequence number of transactions in the block (e.g., 1, 2, 3, ..., n) but prioritizes transaction ordering based on the DAG to minimize conflicts and reduce the need for re-execution.
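Stage 2 can be sketched as follows, assuming each transaction's speculative read/write set is a simple list of state keys (a deliberate simplification of Grevm's actual data structures): a transaction depends on the most recent earlier transaction that wrote a key it touches.

```rust
use std::collections::HashMap;

/// Build a dependency DAG from speculative read/write sets, in the spirit
/// of Grevm's stage 2. Keys are plain strings and txns are indices here;
/// this is an illustrative sketch, not Grevm's implementation.
fn dependency_dag(reads: &[Vec<&str>], writes: &[Vec<&str>]) -> Vec<Vec<usize>> {
    let mut last_writer: HashMap<&str, usize> = HashMap::new();
    let mut deps: Vec<Vec<usize>> = vec![Vec::new(); reads.len()];
    for i in 0..reads.len() {
        // Txn i depends on the latest earlier txn that wrote a key it touches.
        for key in reads[i].iter().chain(writes[i].iter()) {
            if let Some(&w) = last_writer.get(key) {
                if w != i && !deps[i].contains(&w) {
                    deps[i].push(w); // edge w -> i: i must observe w's writes
                }
            }
        }
        for key in &writes[i] {
            last_writer.insert(*key, i);
        }
    }
    deps
}

fn main() {
    // txn0 mints A; txn1 transfers A -> B; txn2 touches only C.
    let reads: Vec<Vec<&str>>  = vec![vec![],     vec!["A"],      vec!["C"]];
    let writes: Vec<Vec<&str>> = vec![vec!["A"],  vec!["A", "B"], vec!["C"]];
    let dag = dependency_dag(&reads, &writes);
    assert_eq!(dag[1], vec![0]); // txn1 depends on txn0's write to A
    assert!(dag[2].is_empty()); // txn2 is conflict-free: runs in parallel
    println!("{:?}", dag);
}
```

A scheduler in stage 3 would then prioritize transactions whose dependencies are already satisfied, rather than strictly following block order.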

Asynchronous Batch State Commitment
State commitment generation remains a key bottleneck in the blockchain pipeline, stemming from the inherently sequential nature of Merkleization. The final state commitment cannot be generated until each subtree computation is completed, leading to significant delays. Although existing solutions (such as reth's account-level parallelism) have introduced some degree of parallelism, there is still ample optimization space. In the context of Gravity reth's Block Stream Reactor (BSR) execution layer, decoupling state commitment consensus from transaction execution allows for asynchronous deferred and batched state commitment computations without blocking execution.
To address these issues, the proposed framework introduces the following key innovations:
Asynchronous Batch Hashing: By decoupling state commitment from transaction execution, the framework computes state commitments asynchronously. State root updates are batched (e.g., computed once every 10 blocks), reducing how often the state root must be recomputed. Batching also allows shared dirty nodes to be aggregated for efficient hash calculation, minimizing the overhead of frequent updates. For small blocks, batching significantly increases parallelism; for large blocks, it reduces overall computation cost.
Full Parallelization: This framework extends parallelization to the entire state tree, not just individual account trees. For nodes flagged as "dirty," the framework employs a parallel state computation algorithm that partitions the tree into independent subtrees and processes these subtrees concurrently. The results are aggregated at the top level for efficient computation of the final root. This method ensures that large blocks with a high volume of transactions and state changes can fully utilize multi-threading, maximizing throughput.
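A rough sketch of the full-parallelization idea (the partitioning scheme and names here are illustrative, not Gravity's actual algorithm): partition the dirty leaves into independent subtrees, hash each subtree on its own thread, then aggregate the subtree digests into the final root.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def hash_subtree(items):
    """Fold one subtree's sorted (key, value) pairs into a digest."""
    h = hashlib.sha256()
    for k, v in items:
        h.update(k.encode())
        h.update(v)
    return h.digest()

def parallel_root(dirty_leaves, partitions=4):
    """Split the dirty leaves into independent subtrees, hash each
    subtree concurrently, then combine the subtree digests into a
    single top-level root."""
    keys = sorted(dirty_leaves)                  # canonical ordering
    chunk = max(1, -(-len(keys) // partitions))  # ceiling division
    buckets = [[(k, dirty_leaves[k]) for k in keys[i:i + chunk]]
               for i in range(0, len(keys), chunk)]
    with ThreadPoolExecutor(max_workers=partitions) as ex:
        digests = list(ex.map(hash_subtree, buckets))
    top = hashlib.sha256()
    for d in digests:
        top.update(d)
    return top.digest()
```

Because the subtrees share no keys, the per-subtree hashes are independent, which is what lets a large block with many state changes saturate all available threads.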
Alternative Fast State Roots: To accommodate Ethereum's block header and the BLOCKHASH opcode (which requires access to the state roots of the most recent 256 blocks), we redefine the state root. Rather than relying on the final state commitment, which is unavailable during transaction execution, we compute the state root as a combination of the block's change set and the previous state root's hash. This approach yields a state root quickly, without waiting for the complete state commitment.
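A minimal sketch of such an alternative root, under the assumption that the change set is encoded as sorted key/value bytes (the litepaper does not specify the actual encoding or hash):

```python
import hashlib

def fast_state_root(prev_root: bytes, change_set: dict) -> bytes:
    """Derive a block's 'fast' state root from the previous block's
    root and this block's change set, without waiting for the full
    Merkle commitment. The byte encoding here is illustrative."""
    h = hashlib.sha256()
    h.update(prev_root)
    for key in sorted(change_set):   # canonical ordering of the change set
        h.update(key.encode())
        h.update(change_set[key])
    return h.digest()
```

Because each root folds in its predecessor, a chain of these roots is available immediately as blocks execute, so the most recent 256 of them can be served to BLOCKHASH-style lookups long before the batched Merkle commitment lands.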

Gravity Store
To meet the large-scale data-management needs of a high-performance blockchain, Gravity Store provides an optimized multi-version storage layer. It builds on reth's design, which separates state-commitment storage from state-data storage to reduce state bloat and lower read/write overhead, and extends it to support parallel processing and asynchronous state commitment, which introduces additional technical requirements.

To address these challenges, Gravity Store introduces an efficient multi-version tree structure tailored to the BSR architecture, capable of managing multi-version state updates. Rather than updating a node's hash immediately after each modification, Gravity Store marks modified nodes as "dirty," deferring their hash calculations so they can be executed in batches. This design enables rapid creation of new versions, efficient querying of specific versions, and cleanup of old versions below a specified height, significantly improving state-management performance.
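As a toy model only (the real Gravity Store is a Merkle tree; this flat map merely illustrates the versioning, dirty-marking, batched commitment, and pruning described above), the interface might look like:

```python
import hashlib

class MultiVersionStore:
    """Toy sketch of a multi-version store with deferred ('dirty')
    hashing, loosely modeled on the Gravity Store description."""

    def __init__(self):
        self.versions = {0: {}}   # version -> full state snapshot
        self.dirty = set()        # versions whose roots are not yet hashed
        self.roots = {}           # version -> committed root hash

    def new_version(self, base: int, updates: dict) -> int:
        """Create a new version on top of `base` without hashing it."""
        v = max(self.versions) + 1
        state = dict(self.versions[base])
        state.update(updates)
        self.versions[v] = state
        self.dirty.add(v)         # defer hashing; just mark dirty
        return v

    def commit_batch(self):
        """Batch-hash all dirty versions at once (e.g. every N blocks)."""
        for v in sorted(self.dirty):
            h = hashlib.sha256()
            for k in sorted(self.versions[v]):
                h.update(k.encode())
                h.update(self.versions[v][k])
            self.roots[v] = h.hexdigest()
        self.dirty.clear()

    def prune_below(self, height: int):
        """Clean up versions below a given height."""
        for v in [v for v in self.versions if v < height]:
            del self.versions[v]
            self.roots.pop(v, None)
```

New versions are cheap to create because no hashing happens on the write path; the expensive commitment work is amortized across a batch, matching the asynchronous batch-hashing scheme described earlier.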
We are also researching an independently developed storage engine, Gravity DB, designed to provide an optimized storage layer for blockchain applications with fully asynchronous I/O. Its design is inspired by LETUS, a high-performance log-structured general-purpose database engine for blockchains. Our chief developer, Richard, one of the main authors of LETUS, will detail the design in an upcoming blog post.
Conclusion
Gravity Chain is a high-performance, EVM-compatible Layer-1 blockchain designed to meet the scalability and performance needs of modern web3 applications. By combining the Gravity SDK, the pipelined AptosBFT PoS consensus engine, and the Grevm-powered Block Stream Reactor execution layer, Gravity targets a throughput of 1 gigagas per second, sub-second transaction confirmation, and PoS security based on a re-delegation mechanism. Together, these components give web3 applications a solid foundation for building custom alternative L1 blockchains or more efficient L2 solutions, and are especially well suited to use cases that demand a high-performance EVM chain.
This article is contributed content and does not represent the views of BlockBeats
Debunking the AI Doomsday Myth: Why Establishment Inertia and the Software Wasteland Will Save Us
Editor's Note: Citrini7's cyberpunk-themed AI doomsday prophecy has sparked widespread discussion across the internet. This article offers a more pragmatic counter-perspective. Where Citrini envisions a digital tsunami instantly engulfing civilization, this author sees the resilient resistance of human bureaucratic systems, a deeply flawed existing software ecosystem, and the long-overlooked cornerstone of heavy industry. It is a head-on clash between Silicon Valley fantasy and the iron law of reality, a reminder that the singularity may come, but it will not arrive overnight.
The following is the original content:
Renowned market commentator Citrini7 recently published a captivating and widely circulated piece of AI doomsday fiction. While he acknowledges that some of its scenarios are extremely unlikely, as someone who has witnessed multiple prophecies of economic collapse, I want to challenge his views and present a more certain, more optimistic vision of the future.
In 2007, people thought that against the backdrop of "peak oil," the United States' geopolitical status had come to an end; in 2008, they believed the dollar system was on the brink of collapse; in 2014, everyone thought AMD and NVIDIA were done for. Then ChatGPT emerged, and people thought Google was toast... Yet every time, existing institutions with deep-rooted inertia have proven to be far more resilient than onlookers imagined.
When Citrini talks about the fear of institutional turnover and rapid workforce displacement, he writes, "Even in fields we think rely on interpersonal relationships, cracks are showing. Take the real estate industry, where buyers have tolerated 5%-6% commissions for decades due to the information asymmetry between brokers and consumers..."
Seeing this, I couldn't help but chuckle. People have been proclaiming the "death of the real estate agent" for 20 years now! It hardly requires superintelligence; Zillow, Redfin, or Opendoor would suffice. But this example proves the opposite of Citrini's point: although most people have long considered this workforce obsolete, market inertia and regulatory capture have made real estate agents far more resilient than anyone expected a decade ago.
I bought a house a few months ago. The transaction process required, with lofty justifications, that we hire a real estate agent. My buyer's agent made about $50,000 on the deal, while his actual work (filling out forms and coordinating between parties) amounted to no more than 10 hours, something I could easily have handled myself. The market will eventually move toward efficiency and fair pricing for labor, but it will be a long process.
I deeply understand the ways of inertia and change management: I once founded and sold a company whose core business was driving insurance brokerages from "manual service" to "software-driven." The iron rule I learned is: human societies in the real world are extremely complex, and things always take longer than you imagine — even when you account for this rule. This doesn't mean that the world won't undergo drastic changes, but rather that change will be more gradual, allowing us time to respond and adapt.
Recently, the software sector has seen a downturn as investors worry about the lack of moats in the backend systems of companies like Monday, Salesforce, Asana, making them easily replicable. Citrini and others believe that AI programming heralds the end of SaaS companies: one, products become homogenized, with zero profits, and two, jobs disappear.
But everyone overlooks one thing: the current state of these software products is simply terrible.
I'm qualified to say this because I've spent hundreds of thousands of dollars on Salesforce and Monday. Indeed, AI can enable competitors to replicate these products, but more importantly, AI can enable competitors to build better products. Stock price declines are not surprising: an industry relying on long-term lock-ins, lacking competitiveness, and filled with low-quality legacy incumbents is finally facing competition again.
From a broader perspective, almost all existing software is garbage, which is an undeniable fact. Every tool I've paid for is riddled with bugs; some software is so bad that I can't even pay for it (I've been unable to use Citibank's online transfer for the past three years); most web apps can't even get mobile and desktop responsiveness right; not a single product can fully deliver what you want. Silicon Valley darlings like Stripe and Linear only garner massive followings because they are not as disgustingly unusable as their competitors. If you ask a seasoned engineer, "Show me a truly perfect piece of software," all you'll get is prolonged silence and blank stares.
Here lies a profound truth: even as we approach a "software singularity," the human demand for software labor is nearly infinite. It's well known that the final few percentage points of perfection often require the most work. By this standard, almost every software product has at least a 100x improvement in complexity and features before reaching demand saturation.
I believe that most commentators who claim that the software industry is on the brink of extinction lack an intuitive understanding of software development. The software industry has been around for 50 years, and despite tremendous progress, it is always in a state of "not enough." As a programmer in 2020, my productivity matches that of hundreds of people in 1970, which is incredibly impressive leverage. However, there is still significant room for improvement. People underestimate the "Jevons Paradox": Efficiency improvements often lead to explosive growth in overall demand.
This does not mean that software engineering is an invincible job, but the industry's ability to absorb labor and its inertia far exceed imagination. The saturation process will be very slow, giving us enough time to adapt.
Of course, labor reallocation is inevitable, such as in the driving sector. As Citrini pointed out, many white-collar jobs will experience disruptions. For positions like real estate brokers that have long lost tangible value and rely solely on momentum for income, AI may be the final straw.
But our lifesaver is that the United States has almost unlimited potential and demand for reindustrialization. You may have heard of "reshoring," but it goes far beyond that. We have essentially lost the ability to manufacture the core building blocks of modern life: batteries, motors, small-scale semiconductors; the entire electricity supply chain depends almost entirely on overseas sources. What happens if there is a military conflict? Worse still, did you know that China produces 90% of the world's synthetic ammonia? If that supply were cut off, we could not even produce fertilizer and would face famine.
As long as you look to the physical world, you will find endless job opportunities that will benefit the country, create employment, and build essential infrastructure, all of which can receive bipartisan political support.
We have seen the economic and political winds shifting in this direction—discussions on reshoring, deep tech, and "American vitality." My prediction is that when AI impacts the white-collar sector, the path of least political resistance will be to fund large-scale reindustrialization, absorbing labor through a "giant employment project." Fortunately, the physical world does not have a "singularity"; it is constrained by friction.
We will rebuild bridges and roads. People will find that seeing tangible labor results is more fulfilling than spinning in the digital abstract world. The Salesforce senior product manager who lost a $180,000 salary may find a new job at the "California Seawater Desalination Plant" to end the 25-year drought. These facilities not only need to be built but also pursued with excellence and require long-term maintenance. As long as we are willing, the "Jevons Paradox" also applies to the physical world.
The goal of large-scale industrial engineering is abundance. The United States will once again achieve self-sufficiency, enabling large-scale, low-cost production. Moving beyond material scarcity is crucial: in the long run, if we do indeed lose a significant portion of white-collar jobs to AI, we must be able to maintain a high quality of life for the public. And as AI drives profit margins to zero, consumer goods will become extremely affordable, automatically fulfilling this objective.
My view is that different sectors of the economy will "take off" at different speeds, and the transformation in almost all areas will be slower than Citrini anticipates. To be clear, I am extremely bullish on AI and foresee a day when my own labor will be obsolete. But this will take time, and time gives us the opportunity to devise sound strategies.
At this point, preventing the kind of market collapse Citrini imagines is actually not difficult. The U.S. government's performance during the pandemic has demonstrated its proactive and decisive crisis response. If necessary, massive stimulus policies will quickly intervene. Although I am somewhat displeased by its inefficiency, that is not the focus. The focus is on safeguarding material prosperity in people's lives—a universal well-being that gives legitimacy to a nation and upholds the social contract, rather than stubbornly adhering to past accounting metrics or economic dogma.
If we can maintain sharpness and responsiveness in this slow but sure technological transformation, we will eventually emerge unscathed.
Source: Original Post Link
