On March 13, 2023, nearly $200 million was siphoned from Euler Finance in what became DeFi’s largest exploit of Q1. By all appearances, it was a flash loan attack. But the root cause wasn’t speed or leverage. It was Euler’s own protocol logic.
Euler let users mint debt with internal leverage, donate collateral without triggering a solvency check, and self-liquidate at a profit—all within a single block.
This wasn’t a zero-day or a reentrancy bug. It was a design flaw hiding in plain sight. A dust-cleanup function quietly broke the protocol’s most critical invariant: that no user could end up with more debt than collateral. A single missing checkLiquidity() call, in a function introduced after the main audit, gave attackers permissionless access to a synthetic liquidation arbitrage loop.
And then the attacker gave it back.
In a rare twist, the hacker, who signed messages as “Jacob”, returned the stolen assets, apologized on-chain, and closed the loop on one of DeFi’s cleanest technical exploits. No ransom. No threat. Just a protocol that allowed users to exit solvent by minting unbacked debt and harvesting it as profit.
This article isn’t about the heist. It’s about the flaw. Because Euler’s failure should permanently reshape how lending protocols think about mint mechanics, liquidation design, and the danger of assumptions that audits don’t check and tests don’t catch.
The Exploit Flow
The attack wasn’t complex in its execution. It was complex in how cleanly it bypassed every defense Euler thought it had.
At the core was Euler’s donateToReserves function. Introduced in eIP-14, the function was meant to clean up user dust by letting anyone donate their eTokens to the protocol’s reserves. The problem was simple and fatal: it didn’t call checkLiquidity.
Euler allows leveraged minting. Users can deposit collateral, mint eTokens, and immediately use those eTokens to borrow more assets in a recursive loop. Normally, the health factor is enforced at the end of the transaction to ensure solvency. But donateToReserves let a user burn their eToken collateral without checking if that pushed them underwater. Their dToken debt remained untouched.
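Euler’s contracts are written in Solidity, but the flaw is easy to model in a few lines. The sketch below is a simplified, hypothetical account model, not Euler’s actual code: every mutation except the donate path re-checks solvency.

```python
# Simplified, hypothetical model of the flaw -- not Euler's Solidity.
# eTokens are collateral, dTokens are debt; check_liquidity() is the
# solvency gate that donateToReserves skipped.

class Account:
    def __init__(self):
        self.e_tokens = 0.0  # collateral
        self.d_tokens = 0.0  # debt

    def check_liquidity(self):
        if self.e_tokens < self.d_tokens:
            raise RuntimeError("account would be insolvent")

    def mint(self, amount):
        # recursive leverage: collateral and debt grow together
        self.e_tokens += amount
        self.d_tokens += amount
        self.check_liquidity()  # enforced here...

    def donate_to_reserves(self, amount):
        self.e_tokens -= min(amount, self.e_tokens)
        # ...but NOT here: debt is untouched, solvency never re-checked

acct = Account()
acct.e_tokens = 20.0            # initial deposit
acct.mint(180.0)                # 200 collateral vs 180 debt: passes
acct.donate_to_reserves(100.0)  # 100 collateral vs 180 debt: no revert
```

The asymmetry is the whole bug: one state-mutating path runs the gate, the other does not.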
That was the opening.
The attacker took a flash loan, used it to mint leveraged eTokens, then called donateToReserves to burn part of that collateral and tank their health factor. From there, a separate contract stepped in as the liquidator and harvested the bad position. Because Euler’s liquidation logic gives the liquidator a discount proportional to how underwater the borrower is, the attacker captured the full 20 percent liquidation reward.
Euler’s math allowed it. Its checks never ran. Its assumptions never held.
Critically, this was all internal to the protocol. No price oracle was manipulated. No reentrancy was triggered. The attacker simply used Euler’s core functions in the intended order, with unintended consequences.
Let’s walk through the flow in steps.
1. Flash Loan
The attacker took a 30 million DAI flash loan from Aave to fund the exploit. This capital was used both to initiate the leveraged mint and to pay down part of the resulting debt, which increased the allowed borrow amount further.
2. Leveraged Mint
The attacker deposited a chunk of the flash loan into Euler, receiving eDAI. Then they recursively minted more eDAI using the deposited collateral, leveraging their position well beyond 10x. Euler’s own mint logic enabled this without requiring external flash loan loops.
3. Collateral Burn
With the position leveraged, the attacker called donateToReserves. This burned a significant portion of their eDAI collateral but left their dDAI debt untouched. The result was a massively underwater account.
4. Self-Liquidation
A separate contract stepped in as a liquidator. It repaid only the discounted portion of the underwater debt and received the full amount of remaining eDAI collateral in return. The protocol granted the maximum 20 percent discount due to how far underwater the position had fallen.
5. Profit Extraction
The liquidator contract withdrew the eDAI and redeemed it for real DAI from the pool. After repaying the original flash loan and fees, the attacker walked away with a clean eight-figure profit in under one minute.
This loop was repeated across assets—DAI, USDC, stETH, and wBTC—until Euler’s pools were drained.
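The five steps reduce to simple arithmetic. The figures below are illustrative round numbers, not the actual on-chain amounts, but the structure mirrors one pass of the loop.

```python
# Toy arithmetic for one pass of the loop. All figures are
# illustrative, not the real on-chain amounts.

MAX_DISCOUNT = 0.20            # Euler's cap on the liquidation discount

# 1. Flash loan (funds the deposit; repaid at the end)
flash_loan = 30_000_000

# 2. Leveraged mint: ~10x the initial deposit
deposit = 20_000_000
collateral = 10 * deposit      # 200M eDAI
debt = collateral - deposit    # 180M dDAI

# 3. Collateral burn via donateToReserves: debt untouched
collateral -= 100_000_000      # 100M eDAI now backs 180M dDAI

assert collateral < debt       # deeply underwater

# 4. Self-liquidation: the attacker's second contract pays a
#    discounted price for the remaining collateral
paid = collateral * (1 - MAX_DISCOUNT)   # repay 80M...
seized = collateral                      # ...to claim 100M

# 5. Profit: redeem seized eDAI for DAI, repay the flash loan
profit = seized - paid                   # 20M per pass in this toy model
```

The loss lands on the pool’s remaining depositors: the seized collateral is redeemed against real assets, while the unbacked debt stays behind.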
This was not an exploit of a loophole. It was an exploit of Euler’s design. A lending protocol that allowed synthetic bad debt to be created and monetized internally. No manipulation required.
Root Cause: Broken Invariant, Not a Code Bug
Euler did not get hacked because of a low-level coding mistake. It got hacked because it violated its own foundational rule: that every account must remain solvent at all times.
That invariant (risk-adjusted collateral must always equal or exceed debt) is the single assumption that underpins the entire lending protocol. Euler enforces it at the end of most operations using a checkLiquidity call. But donateToReserves skipped it. The protocol assumed no user would intentionally reduce their collateral while leaving debt untouched. That assumption failed.
Here is what this function actually enabled:
- Let users mint collateralized debt far beyond their actual deposits using Euler’s recursive leverage design
- Burn that collateral through donation with no check on solvency
- Self-liquidate the now undercollateralized position for a guaranteed 20 percent profit
Each step obeyed protocol rules. But together, they broke the protocol’s guarantees.
There was no constraint enforcing the systemic invariant that the sum of all eTokens should meaningfully back the outstanding dTokens. Once donateToReserves let users burn eTokens without touching dTokens, that link was gone. Users could manufacture unbacked debt positions, trigger liquidation, and get paid by the system to clean up after themselves.
Euler’s liquidation math assumed undercollateralization was the result of volatility. It treated deep insolvency as rare and deserving of maximum incentive. But when insolvency could be synthetically manufactured in a controlled way, the liquidation discount became a money printer.
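Euler’s dynamic discount grew with how far underwater the borrower was, capped at 20 percent. A hypothetical sketch of that pricing (the exact on-chain formula differs) makes the incentive inversion obvious:

```python
def liquidation_discount(health_factor: float, cap: float = 0.20) -> float:
    # Hypothetical sketch: discount grows as the health factor falls
    # below 1.0, capped at 20%. Euler's on-chain formula differs in detail.
    return min(cap, max(0.0, 1.0 - health_factor))

# Volatility-driven insolvency is usually shallow: small discount.
shallow = liquidation_discount(0.97)

# A manufactured insolvency can be made as deep as desired,
# so it always captures the full cap.
forced = liquidation_discount(0.50)
```

Priced against random market moves, the curve is a reasonable incentive. Priced against an attacker who chooses the health factor, it is a payout schedule.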
And here’s the real problem: Euler’s logic was technically correct. Its math worked. The exploit wasn’t a bug in execution—it was a failure of constraint design. The protocol had no invariant enforcement mechanism for what it believed to be true.
This is what makes the exploit devastating, not just for Euler, but for every protocol that builds without modeling what must always be true.
Why Audits Missed It
The donateToReserves function was audited. The protocol passed multiple reviews. And yet the most catastrophic exploit in Euler’s history was greenlit by everyone who looked.
This is not about negligence. It’s about scope and mindset.
The function that introduced the flaw—donateToReserves—was added in eIP-14 after the main Omniscia audit. Sherlock later audited the function itself, but like most audit scopes, the review focused on correctness, not consequence. The code burned eTokens. It didn’t touch dTokens. No reentrancy. No permission escalation. No unexpected return values. It passed.
But audits rarely model invariant drift. They don’t simulate adversarial state transitions across protocol modules. They validate functions in isolation. They assume the rest of the system continues to enforce its core rules.
Euler’s liquidation logic depended on the idea that no user could voluntarily make themselves insolvent. Once that assumption broke, the liquidation incentive became exploitable.
And the audits weren’t set up to catch that.
This is the fundamental gap: audits are snapshots of correctness under assumed conditions. But exploits like this one target the assumptions themselves. They exploit the space between modules—the interstitial logic where trust lives but validation doesn’t.
Even sophisticated testing setups miss this. Unit tests assert behavior. Property-based tests assert bounds. But invariant-based fuzzing—where the system asserts what must always hold true—is still rare, especially across mutative edge cases like donation or self-liquidation.
Security coverage without system modeling is shallow defense. Euler’s architecture required constraints on internal leverage, solvency preservation, and liquidation integrity. Those constraints were never explicitly defined. So no test could break them. No audit could question them. And no alert could fire when they failed.
This Was a Free Option Disguised as UX Sugar
Euler didn’t call donateToReserves a core protocol function. It was meant to help users clean up dust balances. On the surface, it looked like a harmless quality-of-life addition.
But under the hood, it granted users a free financial option: the ability to burn collateral, reduce their health factor, and force the protocol to pay a liquidation reward. No oracle spoofing required. No governance exploit. Just a function that bypassed solvency checks.
Euler’s design amplified the risk.
The protocol allowed recursive minting—where users could mint tokens using their own leveraged positions as collateral. This created massive synthetic exposure from small initial deposits. Combine that with a donate function that silently removed collateral and a liquidation engine that rewarded maximum undercollateralization, and the entire protocol became a liquidation arbitrage loop.
It was riskless from the attacker’s point of view. They controlled both the violator and the liquidator. The violator created a junk position. The liquidator harvested it at a discount and walked away with more than they repaid. The exploit wasn’t just profitable—it was structurally guaranteed.
Euler’s liquidation discount logic assumed that extreme insolvency was rare and externally driven. It was built to reward risk, not contain it. But by enabling internal leverage and unverified collateral burn, Euler handed that risk to anyone who could read the code and deploy a pair of contracts.
This is what happens when protocols add features without modeling game-theoretic consequences. Even small changes—like skipping a health check in a donation function—can introduce free options that break your economics.
Smart contract bugs drain funds. Protocol-level free options collapse systems.
How Olympix Would Have Flagged Euler
The Euler exploit wasn’t hidden. It was untreated. Olympix would have caught the underlying vulnerabilities before mainnet deployment, because it’s engineered to surface silent failures that standard audits miss.
1. Lacking Input Validation Detector
donateToReserves accepted user-triggered balance reduction with zero validation. No check on health, no constraint on debt. Olympix’s input validation detector scans for public functions that modify core protocol state without boundary checks. It would have escalated this immediately: a function that burns collateral without validating solvency violates protocol safety assumptions.
2. Unchecked Returned Value Detector
Euler relied on functions that returned critical collateral balances without enforcing their correctness. Olympix’s detector targets precisely this category—unverified state reads that drive liquidation logic. It would have flagged the use of raw uint collateral balances in the liquidation path, along with missing enforcement on those values. This isn’t just a coding hygiene issue. It’s a latent security risk.
3. Oracle Vulnerabilities Detector
While not the attack vector in this case, this detector reinforces Olympix’s approach: protocol behavior must be constrained not just by code correctness, but by external trust boundaries and internal invariants. If Euler had used oracle reads to gate solvency, multiple reads without validation would have been flagged. The same modeling applies internally: if you mutate collateral, you validate health. Period.
Olympix doesn’t just compare code to a signature database. It learns the rules the protocol is supposed to follow, and then checks if the protocol itself ever stops enforcing them.
Euler broke its own rules. Olympix would have raised the alarm.
Lessons for Protocol Builders
Euler’s collapse was not a failure of Solidity. It was a failure of systems thinking. The protocol operated exactly as written. The attacker simply walked through the door Euler left open.
Here are the tactical lessons every DeFi protocol team should extract from this:
1. Every mint and burn function is protocol-critical. If a function changes token balances or reserve states, it needs the same level of scrutiny as a borrow or repay. donateToReserves looked like a UI helper. In practice, it removed the bound on leverage and broke solvency.
2. Enforce invariants post-mutation. Security checks that only trigger on major actions miss the state transitions in between. Euler deferred checkLiquidity() and assumed it would catch issues at the end. But donateToReserves created a solvency failure in the middle of a valid transaction path.
3. Liquidation logic must price risk, not just balance deltas. Euler priced liquidation discounts based on how underwater a position was. It never questioned whether that position was adversarially constructed. As a result, it paid attackers for insolvency they caused themselves.
4. Treat self-liquidation as an adversarial strategy. Euler treated it as an edge case. The attacker turned it into an exploit. Any system that allows a user to become a violator and liquidator in the same block must account for manipulation of the liquidation price curve.
5. Mutation testing and invariant fuzzing are non-negotiable. Tests that assert expected behavior won’t catch behavior that shouldn’t exist. If Euler had an invariant asserting that “total protocol debt must be backed by at least X collateral,” it would have failed during the donation call. It never ran that test.
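Lesson 2 is mechanically enforceable. In Solidity the idiom is a modifier applied to every state-mutating function; the hypothetical Python sketch below uses a decorator so that a donate-style function cannot skip the gate.

```python
from functools import wraps

def enforce_solvency(fn):
    """Re-check the invariant after EVERY mutation (Solidity: a modifier)."""
    @wraps(fn)
    def wrapped(account, *args, **kwargs):
        result = fn(account, *args, **kwargs)
        if account["collateral"] < account["debt"]:
            # A real EVM revert would also roll back the state write;
            # this toy only raises to show the gate firing.
            raise RuntimeError(f"{fn.__name__} left account insolvent")
        return result
    return wrapped

@enforce_solvency
def donate_to_reserves(account, amount):
    # same flawed logic as the vulnerable function, but now it
    # cannot silently leave the account underwater
    account["collateral"] -= min(amount, account["collateral"])

account = {"collateral": 200.0, "debt": 180.0}
try:
    donate_to_reserves(account, 100.0)
except RuntimeError as exc:
    print("reverted:", exc)
```

The design point: the check lives in one wrapper that every mutation passes through, so a new function added after audit inherits the invariant by default instead of opting into it.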
Security isn’t just about logic correctness. It’s about constraint enforcement. The best way to defend against protocol-level exploits is to define what must never happen and then build tooling to catch any path that violates it.
The Real Failure Was Structural
Euler didn’t fail because someone found an edge case. It failed because no one enforced the core rules the system was built on. The protocol assumed that collateral burns would always be safe, that users couldn’t create unbacked debt, and that liquidation incentives couldn’t be gamed.
All of those assumptions were wrong. And none of them were checked.
The attacker didn’t need an exploit in the traditional sense. They just needed Euler to work as written. What they exploited was the absence of constraints.
This is the new frontier of DeFi security. Not just bugs, but architectural entropy. Features added for UX or efficiency that quietly undercut safety guarantees. Logic that passes tests but fails under adversarial composition. Invariants that are assumed but never enforced.
If your protocol allows internal leverage, self-liquidation, or dynamic pricing based on user state, your system is only secure if those mechanisms can’t be used to extract guaranteed profit. Anything else is just yield farming with better tooling.
Euler built something sophisticated. But it never constrained the system it deployed.
Don’t make the same mistake.
Sources
Euler, Sentiment, Safemoon: What These Exploits Reveal About DeFi’s Weak Points
Euler Finance Incident Post-Mortem by Omniscia
Euler Compromise Investigation - Part 1 - The Exploit by Coinbase
$197 Million Stolen: Euler Finance Flash Loan Attack Explained by Chainalysis