Understanding "An Analysis of Intent-Based Markets"

Notes on the nature of Intent-Based Markets.

In this blog, I discuss the work “An Analysis of Intent-Based Markets” by Tarun Chitra, Kshitij Kulkarni, Mallesh Pai, and Theo Diamandis.

Intents have been touted as a way of expressing generalized transactions on blockchains. Intents allow users to express transactions with a set of conditions or covenants, and only if these conditions are met can the transaction be executed.

Today, the most common sort of transaction on blockchains is a "swap" between two assets. So, let's consider an example:

"I want the outcome of trading my 1 Ethereum for 3000 USDT."

This is an example of a simple swap between two assets; you can do it on Uniswap or Ambient today. But you have to make choices: which exchange, which chain, what slippage tolerance, and even whether the trade is better executed off-chain or onchain once fees are considered. Intent design abstracts these decisions away; the whole process (picking the pool, creating accounts and signing transactions, handling transfers, converting dust in your wallet, and so on) is managed for you. Now imagine you instead say, "I want to trade 10 ETH for the best price possible, net of fees." An additional consideration, comparing different venues, now gets added: you must search over all possible routes across chains, which is difficult to do by hand. While a transaction like this cannot be expressed today, intents aim to eventually provide a framework to accomplish it.

A Solver is an entity that receives your intent and determines how to “solve” it. The solver handles the messy details of trying to optimize for the best possible outcome for you. They try to meet your demands while keeping all the conditions and covenants in mind.

The key is that intents focus on "what you want" rather than "how to get it." You define the desired result, while someone else determines the "how." Intents greatly simplify the typical crypto transaction flow by letting you specify outcomes without worrying about steps.
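To make "what, not how" concrete, here is a minimal sketch of how a max-output intent might be represented as data. This is purely illustrative Python; the field names are hypothetical and not taken from any particular protocol's schema.

```python
from dataclasses import dataclass

@dataclass
class SwapIntent:
    """Hypothetical shape of a swap intent (illustrative, not a real schema)."""
    sell_token: str        # what the user gives up, e.g. "ETH"
    buy_token: str         # what the user wants, e.g. "USDT"
    sell_amount: float     # amount of sell_token offered
    min_buy_amount: float  # covenant: a fill is valid only at or above this
    deadline: int          # covenant: latest block at which settlement counts

def satisfies(intent: SwapIntent, quoted_output: float, block: int) -> bool:
    """A solver's proposed fill is acceptable only if every covenant holds."""
    return quoted_output >= intent.min_buy_amount and block <= intent.deadline

# "I want the outcome of trading my 1 ETH for 3000 USDT."
intent = SwapIntent("ETH", "USDT", 1.0, 3000.0, deadline=20_000_000)
print(satisfies(intent, quoted_output=3011.5, block=19_999_990))  # True
```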

For a deeper dive into understanding intents, refer to

This work is about the nature of solvers: it models intent-based markets (as they become popular) with different kinds of solvers and explores the dynamics that arise in these solver-driven markets.


Decentralized exchanges are essentially passive pools of liquidity between two assets. In an actively traded market, as the global price of an asset moves, DEXes, because of their design, rely on continuous arbitrage to synchronize their prices. In the work "Automated Market Making and Loss-Versus-Rebalancing," Milionis et al. study the market microstructure of AMMs and identify adverse selection as the main cost, leading to significant losses for LPs. This also holds in Uniswap v3, as shown by the authors of "Decentralized Finance and Automated Market Making: Predictable Loss and Optimal Liquidity Provision."

Intents are described as a way of credibly executing asynchronous transactions across multiple blockchains. Essentially, they generalize “Request for quote” systems. Request for Quote (RFQ) systems are trading protocols where:

  1. A potential buyer or seller (the requester) asks for price quotes from one or more market makers or dealers.

  2. The requester specifies the asset, quantity, and whether they want to buy or sell.

  3. Market makers respond with their best price quotes.

  4. The requester can then accept one of the quotes or decline all offers.

RFQ systems are often used in over-the-counter markets, bond trading, and some cryptocurrency exchanges, which should clarify the similarity between intents and RFQs. Intents are generalized RFQs: they allow users to be more granular and specific about their conditions, which are then guaranteed to hold throughout the transaction's execution.


Intents in Practice

In intent markets, intents are fulfilled by solvers trying to meet the conditions set by a user. A natural assumption is that solvers will only satisfy a user order if there is something "in it" for them, i.e., it benefits them. The nature of their setup allows them to have information that ordinary users or unsophisticated market participants do not have. So, incentivizing the solver to execute the user's orders at optimal prices becomes an open problem and resembles classic principal-agent problems.

Several intent-based marketplaces exist today, though their expressivity is more constrained than the generality we described above. UniswapX is a popular intent system and has done about $11B in volume at the time of this writing.

UniswapX is a non-custodial, Dutch auction-based trading protocol designed for the Ethereum Virtual Machine. The protocol aims to aggregate both onchain and off-chain liquidity, internalize Miner Extractable Value (MEV) as price improvement, offer gas-free swaps, and support cross-chain trading.

The system's design revolves around signed offchain orders that are executed and settled onchain. Instead of creating and submitting transactions themselves, swappers sign orders specifying details like input/output tokens, amounts, decay functions, and deadlines. These orders are then picked up by "fillers" (MEV searchers, market makers, or other onchain agents) who submit them to a reactor contract.

The protocol uses a Dutch order type, which starts at a price better than the current market price and decays over time. This creates competition among fillers to give swappers the best possible price while maintaining a small profit margin. The system can be extended to support cross-chain trading, allowing users to trade assets between different blockchain networks. UniswapX also incorporates features like optional filler exclusivity periods, governance-controlled fees, and the ability for interfaces and wallets to charge additional fees. UniswapX is essentially a Dutch auction with a reserve price, where the reserve is guaranteed to be sourced from Uniswap v2 or v3 liquidity pools.
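To illustrate the Dutch order mechanic, here is a toy decay function. It assumes a simple linear decay between a start output and a reserve output; real UniswapX orders configure their own decay parameters, so treat this as a sketch of the idea rather than the protocol's actual curve.

```python
def dutch_quote(start_out: float, reserve_out: float,
                start_block: int, end_block: int, block: int) -> float:
    """Output offered to the swapper at `block` under a toy linear decay.

    The quote starts above market (start_out) and decays toward reserve_out,
    the backstop price sourced from an on-chain pool.
    """
    if block <= start_block:
        return start_out
    if block >= end_block:
        return reserve_out
    frac = (block - start_block) / (end_block - start_block)
    return start_out - frac * (start_out - reserve_out)

# A filler executes as soon as the decayed quote drops below their own price
# minus gas and desired margin, which is what creates the race among fillers.
for b in range(100, 111, 5):
    print(b, dutch_quote(3050.0, 2990.0, 100, 110, b))
```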

CoW Swap is another intent-based protocol; it implements the "maximum output" intent and has processed billions in volume since inception. In CoW Swap, solvers participate in an auction to execute user orders at a uniform clearing price, known as a batch auction.

In the work "Illuminating Ethereum's Order Flow Landscape," the authors show a Sankey diagram of where solvers sourced liquidity for transactions sent to the CoW Swap, 1inch Fusion, and UniswapX solver auctions during November 2023. The figure shows that the three solver auctions together had 22 solvers accessing over 33 liquidity sources, and that solvers source liquidity from both automated market makers (AMMs), most notably Uniswap v3, and private market makers (PMMs), most notably Wintermute and SCP.

Through the diagram, the authors also conclude that millions of dollars of CoW Swap user orders were matched against 1inch user limit orders in landed transactions in November 2023. To win the solver auction, those 1inch limit orders must have provided a better price than private market makers, making a case that expanding market access to fill retail order flow can drive better user outcomes. (A similar rule was recently proposed by the SEC, but more on that later.)

The authors also discuss "market maker auctions." Earlier, the only way to participate in these was through permissioned RFQs built by aggregators like 0x and 1inch. Recently, Hashflow developed an RFQ that allowed any project or solver to tap into its market maker liquidity, which changed the game: through Hashflow, market makers filled more volume than through the auctions run by aggregators. Why does this matter? Market makers that integrated with solvers through Hashflow had two benefits: they could reach order flow without themselves staking USDC and COW tokens as CoW Swap requires of solvers (a barrier to entry), and Hashflow gave them a bigger surface area of orders (literally more orders) to fill.

OEV (oracle extractable value) markets also run auctions, selling the right to execute a trade immediately after a price update lands on-chain via an oracle. What kind of trades might one want to make with this right? For example, if a price update pushes a loan position across its liquidation threshold on a lending protocol, a guaranteed first trade after the update lets the buyer capture the liquidation fee.

Going back to UniswapX: one might assume it is a free lunch at the start and that increased competition will lead to better user fulfillment, but only sophisticated participants can win in the longer term. Does competition always lead to better outcomes for user welfare? How do solver markets change when entry barriers are introduced in the form of required entry fees? How do they change when filling an order requires skill, in the form of investment in infrastructure, so that some solvers outcompete others? UniswapX had over 2,000 addresses participating as fillers in its early days, which came down to only 12 addresses by January 2024. This can lead to unintended outcomes: a market that lacks competition may produce worse outcomes for users.

These are some questions explored and answered in the work “An Analysis of Intent-Based Markets” by Tarun Chitra et al.


An Auction Theoretic Model

Consider a user who wants to swap 1 unit of token $T_1$ for as many units of token $T_2$ as possible. This is referred to as the "maximum output intent." If you have dollars to convert to ETH or SOL, you want as much quantity as possible (maximum output).

You have two choices:

  • Trade this on a CFMM like Uniswap or Ambient, which gives you a specific price.

    OR

  • Settle it with one of the $n$ solvers, where each solver $i$ has an inventory that holds token $T_2$ at a price $p_i$ in token $T_1$.

Naturally, you only want to use a solver if they can give you a better price than the CFMM. To model this, we assume that the prices $p_i$ are drawn independently from a CDF $F(p)$ with density $f(p)$. Explaining these assumptions in short:

  • Independence: The price that each solver has access to is independent of the prices of other solvers. In other words, knowing one solver's price doesn't give you any information about another solver's price.

  • Cumulative Distribution Function, $F(p)$: The CDF $F(p)$ describes the probability that a random variable $P$ takes on a value less than or equal to $p$. Mathematically,

$$F(p) = \Pr(P \leq p)$$
  • Probability Density Function (PDF), $f(p)$: The PDF $f(p)$ is the derivative of the CDF $F(p)$ with respect to $p$. It describes the likelihood of the random variable $P$ taking on a specific value $p$. Mathematically,

$$f(p) = \frac{d}{dp}F(p)$$

Imagine we are drawing random numbers to represent prices:

CDF, $F(p)$: If $F(100) = 0.8$, there is an 80% chance that a randomly drawn price will be 100 or less.

PDF, $f(p)$: If $f(50) = 0.05$, the likelihood of the price being exactly 50 is low compared to other values.
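These statements are easy to check empirically. The sketch below draws prices from an exponential distribution (an arbitrary choice for illustration; the 0.8 and 0.05 figures above are just made-up numbers) and estimates the CDF and PDF from samples:

```python
import numpy as np

rng = np.random.default_rng(0)
prices = rng.exponential(scale=50.0, size=200_000)  # toy price draws

# Empirical CDF at p = 100: fraction of draws at or below 100.
F_100 = np.mean(prices <= 100.0)   # analytically 1 - e^{-2}, about 0.865

# Crude density estimate at p = 50: probability mass in a width-1 window.
f_50 = np.mean(np.abs(prices - 50.0) < 0.5)  # analytically about 0.0074

print(F_100, f_50)
```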

So now we have a setup for our auction model, where the user runs an auction to get a better price than they could attain from public markets.

Why would solvers decide to participate in such an auction? The solver with the highest bid fills the order, provided their bid improves upon the public market price $p^*$. The winning solver's profit is the difference between their true price $p_i$ and their bid, so the incentive for the solver is the spread they can capture. Solvers face a variety of choices they need to make:

  • Do I enter the auction at all?

    • Entering an auction might involve entry costs. These represent infrastructure and setup costs.
  • If I choose to enter, how should I bid?

    • After entering an auction, solvers exert effort to find a good price. This effort is costly and may be congestive, i.e., the cost depends on the total number of participating solvers, representing increased competition for scarce liquidity. We can call these congestion/effort costs.

Now, we will look at these cases in detail.

No Entry Costs

A Dutch auction is a mechanism where the auctioneer starts with a high asking price and gradually lowers it until a participant accepts. In the context of the auction we are running for our order:

  • User's Role: The user intends to swap a certain amount of token $T_1$ for as much of token $T_2$ as possible.

  • Solvers' Role: Solvers (participants in the auction) bid to offer the best price for this swap.

As we said earlier, each solver has a price $p_i$ at which they can fulfill the user's intent. These prices are drawn from a known distribution $F(p)$ with density $f(p)$. The Dutch auction is strategically equivalent to a sealed-bid first-price auction, so we model solver strategies as a first-price auction in which the highest-bidding solver wins and the user receives that bid.

The highest bid will get filled only if it is better than $p^*$. Solvers must decide on a bid $\tilde{p}_i$ that balances the likelihood of winning the auction against the profit margin if they win. The net profit of a solver from being filled is $p_i - \tilde{p}_i$, where $p_i$ is their true price and $\tilde{p}_i$ their bid.

A short note on revenue equivalence: revenue equivalence is a fundamental concept in auction theory. It refers to the principle that, under certain conditions, different types of auctions (first-price sealed-bid, second-price sealed-bid, Dutch, or English) generate the same expected revenue for the seller. The conditions are that bidders are risk-neutral, have independent private values, and draw their valuations from the same distribution, i.e., they have the same information and bidding strategy. This is a powerful result because it means we can often analyze simpler auction formats and apply the results to more complex ones, as long as the conditions of the theorem hold.

If $k$ of the universe of $n$ possible solvers are present in the auction, revenue equivalence yields

$$\tilde{p}_i = p_i - \frac{\int_{p^*}^{p_i} F^{k-1}(x)\, dx}{F^{k-1}(p_i)}$$

The interpretation is that each solver shades their bid, even without entry or congestion costs. In simple terms, "bid shading" means that a solver bids less than their true price $p_i$. They do this to increase their profit margin if they win while still bidding high enough to win the auction.
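The shading formula is easy to evaluate numerically. A minimal sketch, assuming uniform prices on $[0, 1]$ and $p^* = 0$ (so any price beats the public market), where the formula collapses to the textbook first-price result $\tilde{p}_i = p_i (k-1)/k$:

```python
from scipy.integrate import quad

def shaded_bid(p_i: float, k: int, F, p_star: float = 0.0) -> float:
    """Revenue-equivalence bid: p_i minus the integral of F^(k-1) over
    [p*, p_i], normalized by F^(k-1)(p_i)."""
    integral, _ = quad(lambda x: F(x) ** (k - 1), p_star, p_i)
    return p_i - integral / F(p_i) ** (k - 1)

F_uniform = lambda x: min(max(x, 0.0), 1.0)  # CDF of Uniform[0, 1]

for k in (2, 5, 20):
    numeric = shaded_bid(0.9, k, F_uniform)
    closed_form = 0.9 * (k - 1) / k  # textbook first-price auction bid
    print(k, round(numeric, 6), round(closed_form, 6))
```

As $k$ grows, the bid approaches the true price: more competition means less shading.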

Now that we have a rough idea of what each solver might do, we can model interim profits. Imagine you're a solver considering whether to participate in an auction to fill a user's order. You know your own price $p_i$, but not the prices of the other solvers who might compete against you.

Before deciding whether to enter the auction, you'll want to estimate your expected profit if you participate. This estimated profit is your interim profit. It's called "interim" because it's your expected profit after you've learned your price but before you know the auction's outcome.

The interim expected profit of a solver with price $p_i$ facing $k$ other solvers can be written as:

$$S(p_i, k) = \int_{p^*}^{p_i} F^{k}(x)\, dx$$

The ex-ante expected profit, defined as the overall expected profit for a solver considering the distribution of prices and the number of competitors, can be written as:

$$S(k) = \int_0^\infty S(p, k) f(p)\, dp$$

where,

  • $S(p,k)$ is the interim expected profit; it accounts for the probability of winning the auction with price $p$ and the resulting profit from winning.

  • $f(p)$ is the probability density function we saw earlier. Since prices are drawn from a distribution, $f(p)$ represents the likelihood of each price $p$ occurring.

Simplifying, we get

$$S(k) = \int_{p^*}^{\overline{p}} F^k(p)\,(1 - F(p))\, dp$$

where

  • $p^*$, the lower bound of the integral, is the public market price.

  • $\overline{p}$ is the upper bound of the prices considered in the market.

What did the simplification yield? We transformed the original integral of the expected profit into a form that explicitly accounts for the probability of a price being higher than the public market price $p^*$. It reveals how the distribution of prices and the cumulative effect of multiple bidders impact the solver's expected profit.
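For readers who want the intermediate step: starting the outer integral at $p^*$ (a solver with $p < p^*$ earns nothing) and swapping the order of integration gives, as a sketch (assuming $F(\overline{p}) = 1$):

```latex
S(k) = \int_{p^*}^{\overline{p}} \left( \int_{p^*}^{p} F^{k}(x)\, dx \right) f(p)\, dp
     = \int_{p^*}^{\overline{p}} F^{k}(x) \left( \int_{x}^{\overline{p}} f(p)\, dp \right) dx
     = \int_{p^*}^{\overline{p}} F^{k}(x)\, \bigl(1 - F(x)\bigr)\, dx
```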

Why are we calculating interim profits $S(k)$?

  • Entry decisions: A potential solver compares their expected interim profit $S(p_i, k)$ to their entry cost $c_i$ to decide whether to enter the auction. If $S(p_i, k) > c_i$, they expect to make a profit by entering, so they participate. This is the key condition that determines the equilibrium number of entrants $k^*$.

  • Bid shading: The interim profit function $S(p_i, k)$ is used to derive the optimal bid-shading strategy in the Dutch auction. Solvers shade their bids to balance the trade-off between a higher profit margin if they win (incentive to shade more) and a higher probability of winning (incentive to shade less).

  • The user's expected welfare from the auction is directly related to the solvers' interim profits. In particular, the user's expected payment equals the second-highest solver's expected interim profit. So understanding how interim profits vary with the number of solvers $k$ and the price distribution $F(p)$ is crucial for assessing the user's welfare.

  • The paper examines the impact of entry costs and congestion costs on interim profits. This analysis helps in understanding how these costs influence the solvers' decisions and the overall market dynamics.

The interim profit function is a key building block of the model that connects the solvers' entry and bidding decisions to the overall competitiveness and efficiency of the intent-based markets.

The paper considers three distributions from which the prices can be drawn: exponential, uniform, and Pareto. The equation for $S(k)$ above is simplified using the PDFs of these distributions, giving a closed form in terms of $k$. These closed forms let us model specific cases, like entry costs and high effort costs, and give insight into how many solvers might eventually be left and how their number changes as costs change. These costs influence the equilibrium number of solvers ($k^*$) participating in the auction. (A numeric sanity check of the closed forms follows the list below.)

  • Exponential Distribution:

    • The exponential distribution is often used to model the time between events in a Poisson process.

    • The probability density function (PDF) is $f(p) = \lambda e^{-\lambda p}$, where $\lambda$ is the rate parameter.

    • An exponential distribution means many solvers have relatively low prices, but occasionally, a solver might offer a much higher price.

After simplifying with the PDF, we get

$$S(k) = \frac{1}{(k + 1)\lambda}$$
  • Uniform Distribution:

    • The uniform distribution assumes all outcomes are equally likely within a specified range.

    • If prices are uniformly distributed between 0 and 1, the PDF is $f(p) = 1$ for $0 \leq p \leq 1$.

    • A uniform distribution means prices are evenly spread out within some range.

After simplifying, we get

$$S(k) = \frac{1}{(k + 1)(k + 2)}$$
  • Pareto Distribution

    • The Pareto distribution is often used to model distributions with heavy tails, such as income distributions.

    • The PDF is $f(p) = \frac{\alpha x_m^\alpha}{p^{\alpha + 1}}$ for $p \geq x_m$.

    • If the prices solvers offer follow a Pareto distribution, a few will offer very high prices, but most will offer lower prices.

    • After simplifying, we get $S(0) = \infty$. With no other solvers, the expected interim profit is infinite, a characteristic of heavy-tailed distributions.
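As promised above, here is a numeric sanity check of the exponential and uniform closed forms, assuming $p^* = 0$ so the integral in $S(k)$ starts at zero (the Pareto case is omitted since $S(0)$ diverges):

```python
import numpy as np
from scipy.integrate import quad

def S_numeric(k: int, F, upper: float) -> float:
    """S(k) = integral of F(p)^k * (1 - F(p)) dp over [p* = 0, upper]."""
    val, _ = quad(lambda p: F(p) ** k * (1.0 - F(p)), 0.0, upper)
    return val

lam = 2.0
F_exp = lambda p: 1.0 - np.exp(-lam * p)  # exponential CDF, rate lambda
F_uni = lambda p: p                       # Uniform[0, 1] CDF

for k in (1, 3, 10):
    print("exp:", S_numeric(k, F_exp, np.inf), "vs", 1 / ((k + 1) * lam))
    print("uni:", S_numeric(k, F_uni, 1.0), "vs", 1 / ((k + 1) * (k + 2)))
```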

Costly Entry

In this scenario, we assume every solver must pay an entry cost $c_i$ if they bid. What might this represent? Real-world expenses like setting up infrastructure, or the cost of developing or licensing the software needed to participate in intent markets. The costs are distributed according to a distribution $F_C$.

Entry costs let us characterize participation through a threshold cost $\bar{c}$: if a solver's cost exceeds the threshold, entry is not profitable. So the highest-cost entrant is the one who exactly meets the threshold, i.e., whose expected profit equals their cost. We can write this as

$$\sum_{k=0}^n \binom{n}{k} F_C^k(\bar{c})\,(1-F_C(\bar{c}))^{n-k}\, S(k) = \bar{c}$$

Once you have a threshold $\bar{c}$, the expected number of entrants into the solver market can be computed as:

$$k^* = n F_C(\bar{c})$$

where $F_C(\bar{c})$ is the cumulative probability that a solver's cost falls below the threshold.

In the intent market model, we have $n$ potential solvers, each independently deciding whether to enter. The probability of a solver entering is $F_C(\bar{c})$, the cumulative probability that their cost is below the threshold $\bar{c}$. Since each solver's decision is independent with the same probability $F_C(\bar{c})$, we can apply the binomial distribution directly: the expected number of successes (entrants) in $n$ independent trials, each with success probability $p$, is $n \times p$. Therefore, the expected number of entrants is $k^* = n \times F_C(\bar{c})$. This calculation assumes that each solver's decision to enter is independent of the others, a fundamental property of the binomial distribution.
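The threshold equation can be solved numerically. A sketch under two assumptions not in the text: entry costs uniform on $[0, 1]$ (so $F_C(\bar{c}) = \bar{c}$) and uniform prices (so $S(k) = 1/((k+1)(k+2))$):

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import brentq

def S(k):
    """Interim profit closed form for prices uniform on [0, 1]."""
    return 1.0 / ((k + 1) * (k + 2))

def expected_profit(c_bar: float, n: int) -> float:
    """LHS of the threshold equation: expected profit when each of the n
    potential solvers enters independently with probability F_C(c_bar)."""
    q = min(c_bar, 1.0)  # F_C(c_bar) for costs uniform on [0, 1]
    ks = np.arange(n + 1)
    return float(np.sum(binom.pmf(ks, n, q) * S(ks)))

for n in (10, 100, 1000, 10_000):
    c_bar = brentq(lambda c: expected_profit(c, n) - c, 1e-9, 1.0)
    k_star = n * c_bar  # expected entrants, k* = n * F_C(c_bar)
    print(n, round(c_bar, 5), round(k_star, 2), "~n^(1/3):", round(n ** (1/3), 2))
```

The third column grows roughly like the fourth, matching the $k^* = O(n^{1/3})$ scaling derived for uniform prices below.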

Exponential Distribution

For the case of exponentially distributed prices with rate $\lambda$, the expected number of entrants $k^*$ is derived as follows. The left-hand side of the equation determining the threshold cost $\bar{c}$ is:

$$\sum_{k=0}^{n} \binom{n}{k} F_C^{k}(\bar{c})(1 - F_C(\bar{c}))^{n-k}\, S(k) = \frac{1 - (1 - F_C(\bar{c}))^{n+1}}{(n+1)\lambda F_C(\bar{c})}$$

This leads to the equation:

$$\frac{1}{(n+1)\lambda} = \frac{\bar{c}\, F_C(\bar{c})}{1 - (1 - F_C(\bar{c}))^{n+1}}$$

Using Taylor approximations for large $n$, one derives $\bar{c} = O(1/\sqrt{n})$, implying $k^* = O(\sqrt{n})$. The number of entrants grows only with the square root of the total number of potential solvers, showing that entry costs significantly reduce the number of solvers willing to enter the auction.

Uniform Distribution

For prices uniformly distributed on $[0, 1]$, the left-hand side of the equation determining the threshold cost $\bar{c}$ is:

$$\sum_{k=0}^{n} \binom{n}{k} F_C^{k}(\bar{c})(1 - F_C(\bar{c}))^{n-k}\, S(k) = \frac{1 - (1 - F_C(\bar{c}))^{n+2} - (n+2)F_C(\bar{c})(1 - F_C(\bar{c}))^{n+1}}{(n+1)(n+2)F_C^2(\bar{c})}$$

This leads to the equation:

$$\frac{1}{(n+1)(n+2)} = \frac{\bar{c}\, F_C^2(\bar{c})}{1 - (1 - F_C(\bar{c}))^{n+2} - (n+2)F_C(\bar{c})(1 - F_C(\bar{c}))^{n+1}}$$

Similar arguments show that $\bar{c} = O(n^{-2/3})$, implying $k^* = O(n^{1/3})$. The expected number of entrants grows even more slowly than in the exponential case, further highlighting how entry costs reduce the number of solvers.

Pareto Distribution

For the Pareto distribution with infinite mean, the expected profit $S(0)$ is infinite, leading to:

$$(1 - F_C(\bar{c}))^n S(0) \leq \bar{c}$$

Since $S(0) = \infty$, the only solution is $\bar{c} = \infty$. Therefore $F_C(\bar{c}) \to 1$ as $\bar{c} \to \infty$, implying $k^* = n$: all potential solvers may enter the market in anticipation of very large prices. This case is unique because the expected profit of a single solver facing no competition is infinite.

Extreme spacings refer to the gaps between the highest and second-highest values in a sample. In the context of the Pareto distribution, these spacings can be quite large due to the distribution's heavy tail. In auctions, extreme spacings can lead to significant differences between the winning bid and the second-highest bid. This can result in substantial bid shading, as solvers may strategically bid lower to increase their expected profits.

The Pareto case, while allowing full participation, may still not provide optimal outcomes for users due to bid shading by solvers: the user's realized price can be far below the highest potential price. Significant bid shading may limit the user's ability to realize large welfare gains, as the ratio of the expected second-highest price to the expected highest price approaches zero as $n$ increases:

$$\frac{E[p_{n-1:n}]}{E[p_{n:n}]} \to 0 \text{ as } n \to \infty$$

This suggests that even with a large number of solvers present in the auction, the user can realize a price that is only a small fraction of the theoretical highest price available to any of the solvers.

Costly Effort

In the previous scenario, we considered the impact of entry costs on market participation. Now, let's look at the dynamics within the market itself. Once solvers have entered the auction, they face another challenge: they must invest costly effort to find the best prices with which to win and satisfy the user's order. This effort modifies the distribution of prices to $F(p,e)$, where $e$ is the effort level chosen by the solver. The effort is not only costly but also congestive, meaning its cost increases with the number of competitors. This is modeled by $c(k^*, e)$, where $c$ is a cost function increasing and convex in both the number of entrants $k^*$ and the effort level $e$.

Ultimately, what we explore here is the consideration of how solver effort affects their ability to offer competitive prices and how this effort might be influenced by the number of participants in the market (congestion effects).

(Note: readers who don't want to follow the math can skip ahead to the conclusions.)

Effort-Dependent Prices and Costs

Suppose $k^*$ solvers have entered the auction. Each solver $i$ chooses an investment level $e_i \in \mathbb{R}_+$, influencing the price $p_i \sim F(p, e_i)$. Higher investments lead to better prices, satisfying $F(p, e) \leq F(p, e')$ for $e > e'$.

Cost of Effort and Congestion

The cost of effort $e$ with $k^*$ entrants is $c(k^*, e)$, increasing and convex in both arguments. A special case is no congestion cost, when $c(\cdot)$ is constant in $k^*$.

Equilibrium Effort

To find the equilibrium investment $e^*$, consider solver $i$, assuming the other $k^* - 1$ solvers draw $p \sim F(x, e^*)$. The expected revenue of solver $i$ with price $p$ is:

$$S(p, k^*, e^*) := \int_{0}^{p} F(x, e^*)^{k^*-1}\, dx$$

Given this, solver $i$ chooses their investment level $e_i$ to maximize their expected profit:

$$e_i = \arg\max_{e \in \mathbb{R}_+} \int S(p, k^*, e^*) f(p, e)\, dp - c(k^*, e)$$

The first-order condition at $e = e^*$ is:

$$\left( \int S(p, k^*, e^*) \frac{\partial f(p, e)}{\partial e}\, dp - \frac{\partial c(k^*, e)}{\partial e} \right)\bigg|_{e=e^*} = 0$$

This defines $e^*$ as a function of $k^*$, allowing us to analyze how congestion and effort impact net revenue.

Specific Example: Uniform Distribution

Consider a specific example where $F(p, e) = p^e$ for $p \in [0, 1]$, which implies $f(p, e) = e p^{e-1}$. Suppose the cost function is $c(k^*, e) = \alpha(k^*) \frac{e^2}{2}$, where $\alpha(\cdot)$ is an increasing function that parametrizes the "congestiveness" of the market. The expected revenue becomes:

$$S(p, k^*, e^*) = \frac{p^{(k^*-1)e^*+1}}{(k^*-1)e^*+1}$$

Substituting this into the first-order condition, we get:

$$\int_{0}^{1} \frac{p^{(k^*-1)e^*+1}}{(k^*-1)e^*+1} \left(p^{e^*-1} + e^* p^{e^*-1} \ln p\right) dp - \alpha(k^*)\, e^* = 0$$

Solving this integral yields:

$$\frac{1}{(1 + e^* k^*)^2} - \alpha(k^*)\, e^* = 0 \implies \alpha(k^*)\, e^* (1 + e^* k^*)^2 = 1$$

Since $\alpha(\cdot)$ is nondecreasing, $e^*$ must be decreasing in $k^*$. If $\alpha(k^*)$ is linear, $\alpha(k^*) = \Theta(k^*)$, then $e^* k^*$ is constant in $k^*$. If $\alpha(k^*)$ grows faster (slower) than linearly, $e^* k^*$ is decreasing (increasing) in $k^*$. For $k^* = \omega(1)$, this implies $e^* = o(1)$ in $k^*$, indicating vanishing equilibrium effort as $e^* k^* \to 0$.

This means that as the market becomes more congested, solvers will invest less effort in finding better prices, leading to a lower equilibrium effort level.
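The fixed-point condition $\alpha(k^*)\, e^* (1 + e^* k^*)^2 = 1$ is easy to solve numerically. A sketch with two assumed congestion functions, $\alpha(k) = k$ (linear) and $\alpha(k) = k^2$ (superlinear), showing that $e^* k^*$ stays roughly constant in the first case and shrinks in the second:

```python
from scipy.optimize import brentq

def equilibrium_effort(k: int, alpha_k: float) -> float:
    """Unique positive root of alpha(k) * e * (1 + e*k)^2 = 1 in e."""
    return brentq(lambda e: alpha_k * e * (1.0 + e * k) ** 2 - 1.0, 1e-12, 10.0)

for k in (2, 10, 100, 1000):
    e_lin = equilibrium_effort(k, float(k))       # alpha(k) = k
    e_sup = equilibrium_effort(k, float(k) ** 2)  # alpha(k) = k^2
    print(k, round(e_lin * k, 4), round(e_sup * k, 4))  # e*k per case
```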

Welfare as a Function of Effort

User welfare is maximized when the expected revenue of the auction is maximized, as it is the (expected) price the user receives. By revenue equivalence, the expected revenue equals the revenue of a second-price auction:

$$\text{Rev}(e^*, k^*) = \int_{0}^{1} p\, k^* (k^*-1)\, (1 - F(p, e^*))\, f(p, e^*)\, F(p, e^*)^{k^*-2}\, dp$$

For the specific example, this becomes:

$$\text{Rev}(e^*, k^*) = \frac{e^* (k^*-1)}{1 + e^* (k^*-1)} \times \frac{e^* k^*}{1 + e^* k^*}$$

Two claims are made about this formula:

  1. If $\alpha(k^*) = o(k^*)$, then $\text{Rev}(e^*, k^*)$ increases in $k^*$. If the congestion cost is sublinear in $k^*$, allowing more solvers to enter the market benefits user welfare: each additional solver contributes to higher total revenue without significantly increasing the cost per solver.

  2. If $\alpha(k^*) = \omega(k^*)$, then $\text{Rev}(e^*, k^*)$ decreases in $k^*$. If the congestion cost is superlinear in $k^*$, allowing more solvers to enter is detrimental to user welfare: the increased cost per solver outweighs the additional revenue generated, lowering total revenue.

Essentially summing up:

  1. In markets with low congestion costs, encouraging more participation (higher k*) can improve user welfare.

  2. In markets with high congestion costs, restricting entry might actually improve user welfare by incentivizing higher effort from each solver.

This analysis shows that the presence of effort costs can reduce user welfare through congestion, and a welfare-maximizing planner may prefer to limit entry into the market. The specific example with a uniform distribution illustrates how the equilibrium effort $e^*$ and user welfare $\text{Rev}(e^*, k^*)$ are influenced by the number of entrants $k^*$ and the congestion cost function $\alpha(k^*)$.
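Both claims can be checked by plugging the equilibrium effort into the revenue formula. A self-contained sketch with assumed congestion functions $\alpha(k) = \sqrt{k}$ (sublinear) and $\alpha(k) = k^2$ (superlinear):

```python
from math import sqrt
from scipy.optimize import brentq

def effort(k: int, alpha_k: float) -> float:
    """Equilibrium effort: positive root of alpha(k)*e*(1 + e*k)^2 = 1."""
    return brentq(lambda e: alpha_k * e * (1.0 + e * k) ** 2 - 1.0, 1e-12, 10.0)

def revenue(k: int, alpha_k: float) -> float:
    """Rev(e*, k*) = [e*(k-1) / (1 + e*(k-1))] * [e*k / (1 + e*k)]."""
    e = effort(k, alpha_k)
    return (e * (k - 1) / (1 + e * (k - 1))) * (e * k / (1 + e * k))

for k in (2, 10, 100, 1000):
    print(k,
          round(revenue(k, sqrt(k)), 4),   # sublinear: rises with k
          round(revenue(k, k ** 2.0), 6))  # superlinear: falls with k
```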

Since this model is not as intuitive as the next one, here is an analogy about chefs competing in a cooking competition.

Imagine a cooking competition where chefs (solvers) compete to sell a meal to a single judge (user). Each chef has a unique set of ingredients and recipes that determine the quality of the meal they can prepare. The distribution of meal qualities across chefs is represented by $F(p)$, similar to the distribution of prices across solvers in the intent-based market.

Before the competition begins, each chef must decide whether to pay an entry fee (entry cost) to participate. These entry fees, denoted $c_i$, vary for each chef and are drawn from a distribution $F_C$. This is analogous to the heterogeneous entry costs faced by solvers, such as infrastructure or setup costs. The entry fee is a sunk cost once paid, influencing the chefs' decision to participate based on their expected return from the competition.

After paying the entry fee and observing their own meal quality $p_i$, each chef decides how to price their meal in the competition. They consider their expected profit (interim profit $S(p_i, k)$), which depends on their meal quality and the number of other chefs $k$ they expect to compete against. This mirrors the solver's decision-making process.

In equilibrium, chefs will enter the competition until the expected profit of the marginal chef equals the entry fee. This determines the equilibrium number of chefs $k^*$ who participate.

Once the competition begins, chefs can exert effort to improve their meals, but this effort is costly. Moreover, if many chefs are in the competition, they may face congestion in the kitchen, reducing everyone's efficiency. This is similar to solvers incurring effort costs to find better prices and facing congestion when many solvers are searching simultaneously.

During the competition, chefs may engage in "bid shading" by pricing their meals below their true quality. They do this to increase their chances of winning while still making a profit. This is analogous to solvers shading their bids in the auction to balance the trade-off between winning probability and profit margin.

The judge's welfare, like the user's welfare in an intent-based market, depends on the quality of the winning meal and the price they pay for it. This welfare is determined by the chefs' entry and pricing decisions, which are shaped by the same economic forces as in the solver market: entry costs, effort costs, congestion, and strategic bid shading.

The key insights from the auction model hold in the cooking competition analogy:

  • Entry costs can lead to limited participation and reduced competition among chefs.

  • Costly effort and congestion can result in chefs underinvesting in meal quality, similar to solvers underinvesting in price improvement.

  • Chefs will strategically shade their prices based on the level of competition they face, just like solvers shading their bids.


Optimization Model

The probabilistic model presented in the previous sections provides a robust framework for understanding the strategic behavior of solvers in intent-based markets. However, it relies heavily on assumptions about the distribution of prices and the costs associated with entry and effort. While this approach is powerful, it may not capture the full complexity of real-world scenarios where solvers exhibit heterogeneous behaviors and utilities. An alternative deterministic approach, grounded in optimization theory, offers a complementary perspective. This approach allows for explicitly modeling utility and cost functions, which can be tailored to specific data and scenarios, providing a more flexible and potentially more accurate representation of solver behavior.

Framework and Assumptions

The deterministic model begins by assuming that each solver $k$ is equipped with a utility function $u_k$ and a cost function $c_k$. The utility function reflects the solver's preference for holding tokens, while the cost function captures the expenses incurred in providing liquidity or executing trades. The user's utility is derived from the maximum output of token $T_2$ achievable using a combination of solvers and on-chain Constant Function Market Makers (CFMMs), encapsulated in an aggregate forward exchange function $G$.

Essentially, each solver $k$ is characterized by the following:

  1. For a given amount $x$ of token $T_1$, the solver provides $\alpha_k x$ of token $T_2$.

  2. The solver incurs a cost $c_k(x)$ and gains utility $u_k(x)$.

  3. The cost function $c_k: \mathbb{R}_+ \to \mathbb{R}_+ \cup \{\infty\}$ is convex.

  4. The utility function $u_k: \mathbb{R}_+ \to \mathbb{R}_+$ is concave.

  5. Both functions are defined over non-negative reals and are closed and proper.

The net utility for solver $k$ is given by:

$$U_k(x_k) = -\alpha_k x_k - c_k(x_k) + u_k(x_k)$$

where this utility is measured in terms of token $T_2$, which serves as the numeraire.

In the user model, the user wishes to trade $\delta$ units of token $T_1$ for a maximum amount of token $T_2$, using a combination of solvers and on-chain CFMMs. The model introduces an aggregate forward exchange function $G: \mathbb{R}_+ \to \mathbb{R}_+$, which maps an input of token $T_1$ to the maximum output of token $T_2$ obtainable across all on-chain CFMMs. This function is concave and nondecreasing.

The user's utility is given by:

$$U_p(x) = G\left(\delta - \sum_{k=1}^K x_k\right) + \sum_{k=1}^K \alpha_k x_k$$
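To make $G$ and the user's utility concrete, here is a sketch with a single constant-product pool standing in for the aggregate forward exchange function (the paper's $G$ aggregates all on-chain CFMMs; the reserve numbers here are made up):

```python
def G(z: float, x0: float = 1_000.0, y0: float = 3_000_000.0) -> float:
    """Forward exchange function of one constant-product pool with reserves
    (x0 of T1, y0 of T2): the T2 output for a T1 input z. Concave and
    nondecreasing, as the model requires."""
    return 0.0 if z <= 0 else y0 * z / (x0 + z)

def user_utility(x: list[float], alphas: list[float], delta: float) -> float:
    """U_p(x) = G(delta - sum(x_k)) + sum(alpha_k * x_k): route some T1 to
    solvers at their quoted rates, send the remainder through the pool."""
    return G(delta - sum(x)) + sum(a * xk for a, xk in zip(alphas, x))

# Trade delta = 10 T1; one solver quotes alpha = 2990 T2 per unit of T1.
print(user_utility([4.0], alphas=[2990.0], delta=10.0))
```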

Social Welfare Maximization

This framework's central problem is maximizing social welfare, which is defined as the sum of the user’s utility and the solvers’ net utilities. This problem can be formulated as:

$$\begin{aligned}
&\text{maximize} && G(\delta - y) + \sum_{k=1}^{n} \left( u_k(x_k) - c_k(x_k) \right) \\
&\text{subject to} && y = \sum_{k=1}^{n} x_k
\end{aligned}$$

Here, $y$ represents the total amount of token $T_1$ traded, and $x_k$ is the amount traded with solver $k$. The objective seeks to maximize the user's utility from trading $\delta$ units of token $T_1$ while accounting for the solvers' utilities and costs.

Dual Approach and Dutch Auction Mechanism

The Lagrangian of the social welfare maximization problem is:

$$L(x, y, \nu) = G(\delta - y) + \nu y + \sum_{k=1}^{n} \left( u_k(x_k) - c_k(x_k) - \nu x_k \right)$$

The dual function $h(\nu)$ is then:

$$h(\nu) = \sup_{y \geq 0} \{G(\delta - y) + \nu y\} + \sum_{k=1}^{n} \sup_{x_k \geq 0} \{u_k(x_k) - c_k(x_k) - \nu x_k\}$$

The dual problem is to minimize $h(\nu)$, which can be solved iteratively using a method akin to a Dutch auction. Starting with a high price $\nu$, the mechanism queries each solver for their optimal trade $\tilde{x}_k$ given the price. The price is then adjusted based on the gradient $h'(\nu)$, computed as:

$$h'(\nu) = \tilde{y} - \sum_{k=1}^{n} \tilde{x}_k$$

where $\tilde{y}$ is the optimizer of $G(\delta - y) + \nu y$ and the $\tilde{x}_k$ are the optimizers of the solvers' subproblems. The process continues until the gradient is sufficiently close to zero, indicating that the optimal solution has been reached.

The mechanism elicits each solver's true price for a given order size and terminates when the solvers' marginal price equals that of the CFMM. Instead of proposing an order size to each solver, the mechanism proposes a single price, which it broadcasts to all solvers; the solvers respond to this price with a proposed order size.
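Here is a sketch of that loop on a toy instance. Everything below is assumed for illustration: one constant-product pool for $G$, and quadratic solver surpluses $u_k(x) - c_k(x) = a_k x - \tfrac{b_k}{2} x^2$, which make both subproblems solvable in closed form. Since $\tilde{y}$ is increasing and $\sum_k \tilde{x}_k$ decreasing in $\nu$, the gradient $h'(\nu)$ is monotone and bisection on the price works:

```python
import numpy as np

x0, y0, delta = 1_000.0, 3_000_000.0, 10.0  # pool reserves, order size
a = np.array([3200.0, 3100.0, 2900.0])       # solver marginal quotes at x = 0
b = np.array([50.0, 20.0, 80.0])             # curvature of solver surpluses

def y_tilde(nu: float) -> float:
    """argmax over y in [0, delta] of G(delta - y) + nu*y, with
    G(z) = y0*z/(x0+z). The optimal residual z solves G'(z) = x0*y0/(x0+z)^2 = nu."""
    z = np.clip(np.sqrt(x0 * y0 / nu) - x0, 0.0, delta)
    return delta - z

def x_tilde(nu: float) -> np.ndarray:
    """Each solver's best response: argmax of a_k*x - (b_k/2)*x^2 - nu*x."""
    return np.maximum(0.0, (a - nu) / b)

lo, hi = 1e-6, 10_000.0  # bisect on the price nu, since h'(nu) is increasing
for _ in range(100):
    nu = 0.5 * (lo + hi)
    grad = y_tilde(nu) - x_tilde(nu).sum()  # h'(nu)
    lo, hi = (lo, nu) if grad > 0 else (nu, hi)

print(round(nu, 2), x_tilde(nu).round(3), round(y_tilde(nu), 3))
```

At convergence the clearing price sits where the pool's marginal price matches the marginal solver quotes, echoing the termination condition described above.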

The deterministic approach provides several advantages. It allows for the parameterization of utility and cost functions, making it suitable for empirical studies and model fitting with real-world data. The model allows for heterogeneity among solvers through individual cost and utility functions. The introduction of the aggregate forward exchange function $G$ provides a way to incorporate on-chain CFMMs into the model. The dual approach to solving the social welfare maximization problem naturally leads to a Dutch auction-like mechanism, providing a clear link between theory and practical implementation. Furthermore, this approach can be extended to more complex scenarios, such as multiple assets with correlated prices, by employing coordinate descent techniques. The deterministic framework complements the probabilistic model and offers a robust tool for analyzing and designing intent-based markets. The congestion extension demonstrates how the model can be adapted to account for more complex market dynamics, reinforcing findings from the probabilistic model.

Extension: Congestion

When the authors extend this model to congestion, they find that if solver cost functions are not independent but positively correlated, user welfare decreases. This mirrors the probabilistic model's result: under congestion costs, user welfare decreases.


In this work, we looked at an analysis of intent-based markets. The authors propose two models of solver behavior and ultimately conclude that, under certain conditions, a designer optimizing for user welfare might choose to limit participation. This can also be seen as the rise of oligopolies in intent-based market systems.

I recently read a quote attributed to Einstein: "In theory, theory and practice are the same. In practice, they are not." Still, practical systems sometimes need theoretical groundings, and there is enough evidence that different solver-like markets in crypto are becoming oligopolies. Studying and modeling systems like this gives us the theoretical grounding to explore alternatives and motivates future improvements.
