Off-Chain Compute Is All You Need

Towards a Proof-Based Future - Written with 0xkrane.

Introduction

Blockchains are globally distributed ledgers that come to a consensus over a global state. Some blockchains come equipped with a Turing-complete execution environment that enables programmability on top of this global state. Programs that target blockchains’ execution environments are called smart contracts, and the underlying blockchains are called smart contract platforms. Ethereum, Solana, and Avalanche are some of the most widely known smart contract platforms. We can think of smart contract platforms as distributed computers, with the execution environment (or virtual machine) acting like the CPU and the state performing the role of storage.

This framing of blockchains as computers will be important for motivating why coprocessors/off-chain compute are inevitable. In traditional computing, coprocessors originated in microarchitecture as a way to enhance performance. Similarly, coprocessors on Ethereum promise access to historical data and high-performance off-chain compute to augment the features and design space of the base-layer protocol. Take a look at this introductory article on coprocessors for more.

This article explores coprocessors from first principles, aiming to clarify their importance and meta-properties. We then compare them to rollups, demonstrating how these two concepts, while different, are closely related. We also provide examples of when rollups and coprocessors can be used in conjunction with each other. For example, even an all-powerful rollup or L1 might need a coprocessor for heavy lifting tasks.

We conclude this article by observing that blockchains are moving towards a future where computation is centralized, but verification remains decentralized. Rollups, coprocessors, and any other form of verifiable off-chain compute are just different instantiations of this future.

How We Got Here

In “The Limits to Blockchain Scalability,” Vitalik mentioned that for blockchain decentralization, it is important that regular users can run a node.

As previously mentioned, Ethereum can be conceptualized as a decentralized global computer in many respects. It is a network of nodes running software that provides computational resources for executing smart contracts. The Ethereum blockchain stores state information and code, similar to a computer's storage and memory. The Ethereum Virtual Machine (EVM) runs on every node, processing transactions and executing code like a CPU. However, Ethereum is permissionless and decentralized, using consensus between untrusted nodes. If some nodes go offline, the network continues operating. To ensure the correctness of EVM operations, validators on Proof-of-Stake (PoS) networks like Ethereum must perform all state transitions to verify them. This ties the speed of a PoS network to its slowest nodes, capping the amount of computation app developers have available to them.

Unlike a regular computer, Ethereum limits computation and storage to prevent network abuse. Fees are charged for each operation, making endless loops financially impractical. This approach keeps barriers to entry low, allowing everyday hardware like a Raspberry Pi to run network nodes. The constraints enable an inclusive system where anyone can help operate the decentralized Ethereum network.

Due to these computational restrictions of Ethereum nodes, complex applications like Machine Learning models, games, or scientific computing applications cannot feasibly run directly on Ethereum today.

This is a trade-off that keeps Ethereum widely accessible, secure, and sustainable as a foundation for basic apps. But it inevitably leaves Ethereum limited relative to a computationally unrestricted computer, and even relative to an ancient processor like a Pentium 4:

  • No floating point math - The EVM only supports basic integer math and logical operations, so advanced numerical computations like neural networks are not feasible. (An interesting tidbit: the inability to handle floating point has also made swapping rebasing assets like Ampleforth harder in recent history, and sometimes even incompatible with some DEXs.)

  • Limited computation per block - Gas fees meter computations, so complex software like games would be prohibitively expensive. The gas limit per block is 30M gas.

  • Restricted memory - Smart contracts have small permanent storage limits, making large programs difficult.

  • No persistent file storage - There is no way to store files like graphics, audio, or video on the blockchain.

  • Slow speed - Ethereum currently processes ~15 transactions per second (TPS), many orders of magnitude less than what a single CPU can handle (see the rough calculation after this list).
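To make that last point concrete, here is a rough back-of-the-envelope sketch. It assumes the ~30M block gas limit and ~12-second block time mentioned above, 21,000 gas for the cheapest possible transfer, and a hypothetical 150,000-gas average for real, contract-heavy transactions:

```python
# Back-of-the-envelope Ethereum throughput estimate (all numbers approximate).
BLOCK_GAS_LIMIT = 30_000_000    # gas available per block (mainnet at time of writing)
BLOCK_TIME_S = 12               # seconds between blocks
SIMPLE_TRANSFER_GAS = 21_000    # cheapest possible transaction
AVG_TX_GAS = 150_000            # hypothetical average for contract-heavy transactions

# Theoretical ceiling if every transaction were a plain ETH transfer:
transfers_per_block = BLOCK_GAS_LIMIT // SIMPLE_TRANSFER_GAS       # ~1,428
transfer_tps = transfers_per_block / BLOCK_TIME_S                  # ~119 TPS

# With heavier, contract-interacting transactions the ceiling drops quickly:
avg_tps = (BLOCK_GAS_LIMIT // AVG_TX_GAS) / BLOCK_TIME_S           # ~17 TPS

print(f"transfer-only ceiling: ~{transfer_tps:.0f} TPS")
print(f"at ~{AVG_TX_GAS:,} gas per tx: ~{avg_tps:.0f} TPS")
```

The contract-heavy estimate lands right around the ~15 TPS figure cited above.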

Ultimately, limited storage and compute restrict the degrees of freedom available to apps (these limits differ from blockchain to blockchain, but they always exist). People have compared blockchains to the compute-constrained environments of the 1970s-1980s, but we think there are some large differences between the two:

  • The growth of computing in the 1970s-1980s was rapid (with transistor count in microprocessors going from ~1,000 to ~1,000,000 during that period). But this growth didn’t mean people often bought or updated their computers. Since smart contract platforms are limited by their slowest nodes, a speed-up at the frontier of computers will not necessarily lead to blockchains seeing a proportional increase in computational speeds. A speed-up can only happen if the baseline requirements for nodes on the blockchain are updated.

  • There is also a clear trade-off between constantly updating the minimum hardware requirements for nodes and decentralization. Solo stakers might not want to upgrade hardware every couple of years (and they certainly don’t want to monitor performance daily), leading to only professionals wanting to run blockchain infrastructure.

All of this is to say that, over the years, CPUs improved, and we got more CPU cores in every device, allowing us to do progressively more complicated tasks. If we think blockchain computers won’t speed up as fast as traditional computing (due to baseline node requirements), then it makes sense to look for alternative sources of compute. An interesting analogy here is that CPUs in traditional computing never got good at graphics processing, leading to the rise of GPUs in almost every computer. Similarly, since blockchains are focusing on being secure stores of state with simple, built-in compute, there is a clear opportunity for off-chain compute to expand the application design space. Today, blockchains only make sense for low-compute applications that want properties like open access, self-sovereignty, censorship resistance, and composability. To put a greater variety of applications on-chain, we need to lift the constraints we place on app developers. We say this with the understanding that these constraints have also been a boon for experimentation. For example, CLOBs couldn’t effectively run on Ethereum due to the compute constraints, so AMMs were adopted instead, and they have since clocked over a trillion dollars in volume.

There are two common approaches to making more compute available to blockchain applications:

  • Increase baseline node requirements relatively often. This is roughly the path integrated high-performance blockchains like Solana and Sui take. A high baseline for nodes makes it possible to build a very fast blockchain and also lifts some constraints on application design. Phoenix, a Limit Order Book DEX on Solana, could not be built on Ethereum (or any L2) today. The flip side to increasing baseline requirements is that if they grow constantly, then running nodes might only be viable for professional infrastructure providers. Historical RAM requirements do a pretty good job of showcasing how hardware requirements have grown consistently on Solana:
Web Archive (Note: we use median RAM requirements from 2020)
  • Moving compute off-chain to third parties. This has been the strategy the Ethereum ecosystem has adopted. These third parties could themselves be blockchains (in the case of rollups), off-chain verifiable compute devices (i.e., coprocessors), or trusted third parties (as is the case with application-specific off-chain compute like dYdX’s orderbook).

Towards Unification of Off-Chain Compute

Recently, there has been a rise in talk of coprocessors, which provide off-chain verifiable compute. Coprocessors can be implemented in various ways, including but not limited to zero-knowledge proofs and Trusted Execution Environments (TEEs).

Simultaneously, Ethereum’s rollup-centric roadmap offloads compute to various rollups that settle on Ethereum. Over the last couple of years, a steady stream of developers and users has been migrating to rollups due to a combination of cheaper, faster transactions and incentives provided by the rollups. In an ideal world, rollups allow Ethereum to scale its overall computational capacity via off-chain execution without adding trust assumptions. More compute doesn’t just mean executing more transactions; it also means doing more expressive computation per transaction. New transaction types expand the design space available to applications, and higher throughput reduces the cost of performing these expressive transactions, ensuring affordable access to a higher class of applications.

Before we go further, let’s briefly define both rollups and coprocessors to prevent confusion:

Rollups: Rollups maintain a persistent, partitioned state separate from their base/host chain but still inherit the security properties of that base by posting data/proofs to it. By moving state off the host chain, rollups can use additional compute to perform state transitions before posting proofs of the integrity of these state transitions to the host. Rollups are most useful to users who don’t want to pay Ethereum’s high fees but still want access to Ethereum’s security properties.
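To make this definition concrete, here is a minimal, heavily simplified sketch of a validity-rollup loop. All names are hypothetical and the “proof” is a mock hash standing in for a real SNARK/STARK; the point is only the shape of the protocol: heavy execution against the rollup’s own state happens off-chain, and the host chain only checks a succinct claim about the state transition:

```python
import hashlib
import json

def state_root(state: dict) -> bytes:
    """Toy stand-in for a state commitment (real rollups use Merkle/Verkle tries)."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).digest()

def apply_tx(state: dict, tx: dict) -> None:
    """Toy state transition: move `amount` from one balance to another."""
    state[tx["frm"]] = state.get(tx["frm"], 0) - tx["amount"]
    state[tx["to"]] = state.get(tx["to"], 0) + tx["amount"]

def prove(prev_root: bytes, new_root: bytes, txs: list) -> bytes:
    """Mock 'validity proof'; a real rollup runs a SNARK/STARK prover here."""
    return hashlib.sha256(prev_root + new_root + json.dumps(txs).encode()).digest()

def sequencer_build_batch(state: dict, txs: list) -> dict:
    """Off-chain: heavy execution against the rollup's own, partitioned state."""
    prev_root = state_root(state)
    for tx in txs:
        apply_tx(state, tx)
    new_root = state_root(state)
    return {"prev": prev_root, "new": new_root, "txs": txs,
            "proof": prove(prev_root, new_root, txs)}

def host_chain_accept(batch: dict, canonical_root: bytes) -> bytes:
    """On-chain: the host checks a succinct claim instead of re-executing."""
    assert batch["prev"] == canonical_root, "batch builds on stale state"
    assert batch["proof"] == prove(batch["prev"], batch["new"], batch["txs"])
    return batch["new"]   # new canonical rollup state root recorded on the host

# Usage: two transfers executed off-chain settle as one proof on the host chain.
rollup_state = {"alice": 100, "bob": 0}
canonical = state_root(rollup_state)
batch = sequencer_build_batch(rollup_state,
                              [{"frm": "alice", "to": "bob", "amount": 30},
                               {"frm": "bob", "to": "alice", "amount": 5}])
canonical = host_chain_accept(batch, canonical)
```

Note that the host never re-executes the batch; it accepts the new state root purely on the strength of the proof.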

Before diving into coprocessors, let’s give some more background on how constrained smart contract development on Ethereum is today. Ethereum offers persistent storage in its global state - account balances, contract code and data, etc. - and this data persists on the blockchain indefinitely. However, there are limitations:

  • The maximum size of a contract’s code is limited (currently 24KB per contract, set in EIP-170). Storing large files on-chain would far exceed this. (*Not solved by coprocessors either)

  • Reading and writing contract storage is far more expensive than using a filesystem or database. Accessing 1KB of data can cost millions of gas (see the rough cost sketch after this list).

  • While the global state persists, individual nodes only retain the recent state locally in "pruning" mode. The full state history requires an archive node.

  • There are no native filesystem primitives for handling files like images, audio, and documents. Smart contracts can only read/write basic data types to storage.
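On the second point, here is a rough sketch of the arithmetic, assuming approximate post-Berlin gas costs (a cold SLOAD is ~2,100 gas and an SSTORE to a fresh non-zero slot is ~22,100 gas including the cold-access surcharge; exact costs vary with access patterns and refunds):

```python
# Rough cost of touching 1KB of contract storage (32-byte slots).
# Gas numbers are approximate post-Berlin values; real costs vary with
# warm/cold access, refunds, and whether slots are overwritten or fresh.
SLOT_BYTES = 32
COLD_SLOAD_GAS = 2_100             # reading a previously untouched slot
FRESH_SSTORE_GAS = 20_000 + 2_100  # writing a new non-zero slot (incl. cold access)

slots = 1_024 // SLOT_BYTES        # 1KB -> 32 storage slots

read_gas = slots * COLD_SLOAD_GAS      # ~67,200 gas just to read 1KB
write_gas = slots * FRESH_SSTORE_GAS   # ~707,200 gas to write 1KB

print(f"reading 1KB:  ~{read_gas:,} gas")
print(f"writing 1KB:  ~{write_gas:,} gas")
```

Writing even a few kilobytes of new data quickly runs into millions of gas, which is why storing files directly in contract storage is a non-starter.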

Common workarounds for these limits are:

  • Large files can be split into smaller pieces to fit within contract storage limits (see the sketch after this list).

  • File references can be stored on-chain, with the files stored off-chain in systems like IPFS.
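A minimal sketch of both workarounds (the helper names are hypothetical; in practice teams reach for patterns like SSTORE2 for on-chain chunks and IPFS or Arweave for off-chain blobs): either split a blob into pieces that fit under the ~24KB limit, or keep the bytes off-chain and store only a content hash on-chain:

```python
import hashlib

# EIP-170 caps deployed contract code at 24,576 bytes; leave a little headroom
# if chunks are stored as contract bytecode (the SSTORE2-style pattern).
MAX_CHUNK = 24_000

def chunk_blob(blob: bytes, max_chunk: int = MAX_CHUNK) -> list[bytes]:
    """Workaround 1: split a large file into pieces small enough to store on-chain."""
    return [blob[i:i + max_chunk] for i in range(0, len(blob), max_chunk)]

def content_reference(blob: bytes) -> bytes:
    """Workaround 2: keep the file off-chain (e.g., on IPFS) and store only a
    content hash on-chain; anyone can later check the bytes against the hash."""
    return hashlib.sha256(blob).digest()

# Usage: a 100KB file becomes 5 on-chain chunks, or a single 32-byte reference.
blob = bytes(100_000)
print(len(chunk_blob(blob)), "chunks")
print(content_reference(blob).hex()[:16], "... (reference stored on-chain)")
```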

Coprocessors: Coprocessors don’t maintain any state themselves; they behave like Lambda functions on AWS: applications send them a compute task, and they send back the result along with a proof of computation. Coprocessors fundamentally increase how much compute is available to any given transaction, but since proving on coprocessors also happens on a per-transaction basis, using them is going to be more expensive than rollups. Given the cost, coprocessors are likely to be useful to protocols or users who want to perform complex one-off tasks in a verifiable way. Another benefit of coprocessors is that they allow applications using off-chain compute to access the full historical state of Ethereum without adding any trust assumptions to the application itself; this is not possible with a vanilla smart contract today.
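As a counterpart to the rollup sketch above, here is a minimal sketch of this Lambda-style flow, again with hypothetical names and a mock hash standing in for a real proof: the contract hands off a one-off, compute-heavy task (here, a made-up price calculation over historical trades), and only has to verify the returned proof before using the result:

```python
import hashlib
import json

def prove_execution(task: str, inputs: dict, result) -> bytes:
    """Mock 'proof of computation'; a real coprocessor returns a ZK proof
    (or a TEE attestation) that the result was computed correctly."""
    return hashlib.sha256(json.dumps([task, inputs, result], sort_keys=True).encode()).digest()

def coprocessor(task: str, inputs: dict):
    """Off-chain, stateless, Lambda-style: heavy computation over (possibly
    historical) data; nothing persists between calls."""
    if task != "volume_weighted_price":
        raise ValueError("unknown task")
    total = sum(t["price"] * t["size"] for t in inputs["trades"])
    size = sum(t["size"] for t in inputs["trades"])
    result = total // size
    return result, prove_execution(task, inputs, result)

def contract_callback(task: str, inputs: dict, result, proof: bytes):
    """On-chain: verify before use; the chain never re-runs the heavy computation."""
    assert proof == prove_execution(task, inputs, result), "invalid proof"
    return result

# Usage: an expensive one-off query answered off-chain, verified on-chain.
history = {"trades": [{"price": 100, "size": 3}, {"price": 110, "size": 1}]}
result, proof = coprocessor("volume_weighted_price", history)
print(contract_callback("volume_weighted_price", history, result, proof))   # 102
```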

To drive home the difference between rollups and coprocessors, let's refer to the ZK flavors of both these primitives. ZK rollups access both the verifiability and the compression aspects of zero-knowledge proofs, allowing them to fundamentally increase throughput for their ecosystem. Coprocessors, on the other hand, only access the verifiability property of ZK proofs, meaning the overall throughput of the system remains the same. Additionally, ZK rollups require circuits that can prove any program that targets the virtual machine for that rollup (for example, rollups on Ethereum have built zkEVMs for contracts that target the EVM). In contrast, ZK coprocessors only need to build circuits for the tasks they are enlisted to perform.

So, it looks like the two biggest differences between rollups and coprocessors are:

  • Rollups maintain a partitioned persistent state, and coprocessors do not (they use the state of the host chain).

  • Rollups (as the name suggests) batch several transactions together, and coprocessors are generally used for complicated tasks as part of a single transaction (at least in the current paradigm).

Recently, Booster Rollups have been proposed, which execute transactions as if they were running directly on the host chain, with access to the host’s full state. However, Booster Rollups also have their own storage, allowing them to scale computation and storage across both the host and the rollup. The Booster Rollup proposal shows that off-chain compute design is a spectrum, with traditional rollups and coprocessors sitting at either end. Rollups, Booster Rollups, and coprocessors all provide access to more compute and only differ in how much state they keep partitioned from their base L1.

In a talk at Modular Summit 2023 called “Shielded Transactions Are Rollups,” Henry de Valence talked about this exact concept and presented a very simple way to define a rollup.

The talk posits that any execution offloaded by the base chain to a third party is a rollup. Under this definition, coprocessors would also be rollups. This differs slightly from our view of unifying rollups and coprocessors under the banner of off-chain verifiable compute, but the overall sentiment remains the same!

In his Endgame vision, Vitalik discusses a future where block production is centralized but block validation is trustless and highly decentralized. We believe this is roughly the correct model for thinking about what is happening now. In a ZK rollup, block production and state-transition computation are centralized. However, proofs enable verification to be cheap and decentralized. Similarly, a ZK coprocessor has no block production; it only accesses historical data and performs computation over it. Computation on a ZK coprocessor will likely always be performed on a centralized machine; still, the validity proof returned along with a result allows anyone to verify the result before using it. Maybe it is correct to restate Vitalik’s vision as: “a future where computation is centralized, but verification of centralized computation is trustless and highly decentralized.”

Same Same But Different

Despite their overall similarities, rollups and coprocessors serve very different markets today. One might ask, “If we can just use a coprocessor on ETH L1 and access its liquidity, why do we need rollups?” While this is a fair question, we think there are a few reasons why rollups still make sense (and present a much larger market opportunity than coprocessors today):

  • As previously mentioned, coprocessors allow you to access more compute in the same transaction than the L1. But they can’t help move the needle on how many transactions can be performed by the blockchain that is calling the coprocessor (if you’re thinking about batching, voilà, you’ve arrived at a rollup). By maintaining a partitioned persistent state, rollups can increase the number of transactions available to people who want to access blockspace with Ethereum’s security properties. This is possible because rollups only post to Ethereum every n blocks and don’t require all Ethereum validators to verify a state transition happened. Interested parties can just rely on the proof.

  • Even if you use coprocessors, you still have to pay the same order of magnitude of fees as any other transaction on the L1. On the other hand, rollups, via batching, can reduce costs by orders of magnitude (see the toy amortization model after this list).
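A toy model of that second point, where every constant is an illustrative assumption rather than a measurement: the fixed on-chain cost of verifying a proof is paid once per coprocessor call, but a rollup amortizes it (plus per-transaction data costs) across an entire batch:

```python
# Illustrative L1 fee amortization; every constant here is an assumption.
PROOF_VERIFICATION_GAS = 400_000  # assumed on-chain cost to verify one validity proof
DATA_GAS_PER_TX = 2_000           # assumed per-tx data cost a rollup posts to L1
GAS_PRICE_GWEI = 20

def l1_cost_eth(gas: float) -> float:
    return gas * GAS_PRICE_GWEI * 1e-9

# Coprocessor: one proof verified on L1 for a single transaction.
coprocessor_per_tx = l1_cost_eth(PROOF_VERIFICATION_GAS)

# Rollup: one proof verified on L1 for a whole batch, amortized across it.
BATCH_SIZE = 5_000
rollup_per_tx = l1_cost_eth(PROOF_VERIFICATION_GAS / BATCH_SIZE + DATA_GAS_PER_TX)

print(f"coprocessor L1 overhead per tx: ~{coprocessor_per_tx:.4f} ETH")   # ~0.008 ETH
print(f"rollup L1 overhead per tx:      ~{rollup_per_tx:.6f} ETH")        # ~0.00004 ETH
```

Even with generous assumptions for the coprocessor, batching wins by a couple of orders of magnitude on L1 overhead, which is exactly the gap described above.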

Additionally, since rollups provide the ability to run transactions on this separate state, they still behave like blockchains (faster, less decentralized blockchains, but blockchains nonetheless), so they, too, have clear limits on how much compute can be accessed from the rollup itself. In this scenario, a coprocessor can be useful for rollups if a user wants to do arbitrarily complex transactions (and now you’re doing verifiable transactions on a rollup, so you only have to obey the laws of physics of the rollup).

Another important point to note here is that most liquidity today resides on ETH L1, so for many protocols that rely on liquidity to improve their products, it might be astute to still launch on Ethereum mainnet. An application on Ethereum mainnet can get access to more compute by intermittently doing transactions on a coprocessor. For example, a DEX like Ambient or Uniswap v4 can use hooks in conjunction with coprocessors to do complicated logic on how to change fees or even modify the shape of the liquidity curve based on market data.

One interesting analogy compares the interplay between rollups and coprocessors to imperative and functional programming. Imperative programming focuses on mutable state and side effects, specifying step-by-step how to execute tasks. Functional programming emphasizes immutable data and pure functions, avoiding state changes and side effects. In the same way, rollups are like imperative programs that modify the state they hold, while coprocessors are like functional programs that don't mutate state but produce a result along with a proof of computation. Moreover, just like imperative and functional programming, rollups and coprocessors each have their place and should be used accordingly.
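The analogy in miniature, purely as an illustration: the rollup-like object owns and mutates its state, while the coprocessor-like function leaves state untouched and returns a result together with a (mock) proof for the caller to check:

```python
import hashlib

class RollupLike:
    """'Imperative': owns mutable state and changes it in place."""
    def __init__(self):
        self.balances = {"alice": 100}

    def execute(self, user: str, delta: int) -> None:
        # side effect: the rollup's own persistent state is mutated
        self.balances[user] = self.balances.get(user, 0) + delta

def coprocessor_like(balances: dict, user: str, delta: int):
    """'Functional': no mutation; returns a fresh result plus a mock proof."""
    new_balance = balances.get(user, 0) + delta
    proof = hashlib.sha256(f"{user}:{delta}:{new_balance}".encode()).digest()
    return new_balance, proof   # the caller verifies and decides what to store
```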

A Proof-Based Future

If we end up in a world where computation is centralized, but verification of centralized compute is trustless and highly decentralized, where does that leave Ethereum? Will the world computer be reduced to a mere database? Is this a bad thing?

Ultimately, Ethereum’s goal is to give its users access to trustless compute and storage. In the past, the only way to access trustless compute on Ethereum was for computation to be performed and verified by all nodes. With the progression of proving techniques (especially zero-knowledge proofs), we can move much of the computation that happens on validator nodes to off-chain compute and have validators only verify the results on-chain. This essentially turns Ethereum into the world's immutable bulletin board. The proofs of computation allow us to verify that a transaction was done correctly, and by posting them to Ethereum, we get a timestamp and an immutable historical store for these proofs. As zero-knowledge proofs become more efficient over arbitrary computation, it is likely that at some point the cost of doing computation in ZK will be significantly less than the cost of doing it on a blockchain (maybe even a 100-validator CometBFT chain). In such a world, it is hard to imagine that ZK proofs will not become the dominant mode of accessing trustless compute. Similar thoughts have recently been echoed by David Wong as well.

A future in which any computation can be proven also allows us to build infrastructure for the kinds of trustless applications that have user demand instead of trying to retrofit the Ethereum base layer to become the home for those applications. In the ideal case, tailored infrastructure will create more seamless user experiences and will also scale with the applications built on top of it. This will hopefully allow web3 applications to compete with their web2 counterparts and usher in the trustless, proof-based future cypherpunks always dreamt of.

All in all, we believe we are moving towards the following paradigm:

---------------------------Don't Trust, Verify----------------------------
