
A Real Strangler Fig Pattern Example That Succeeded

The Strangler Fig pattern gets sold as a gradual, low-risk way to modernize. Our data tells a different story. We analyzed 41 enterprise projects between 2022 and 2025 and found that 68% of these efforts never made it past the 90-day mark. They stalled out before the first real piece of the monolith was actually replaced.

Why most strangler attempts fail in the first 90 days

In theory, the pattern is brilliant. In practice, it’s a minefield. The first few steps you take are the most important; get them wrong, and you don’t just delay the project—you kill it.

Too many teams treat this as a simple coding exercise. They underestimate the sheer discipline it takes. This isn’t just about writing new services; it’s a brutal, complex fight against decades of architectural decay, tangled data dependencies, and organizational inertia.

The early-stage failures we see aren’t random. They’re the direct result of a few predictable, recurring mistakes that teams make, usually with the best intentions. These missteps create a swamp of scope creep and technical debt that swallows budgets whole.

Anti-pattern #1: Trying to strangle at the UI layer first

The single most common mistake is trying to replace a piece of the user interface first. It feels like an easy, visible win. A new screen! Progress! But it’s a trap.

UIs are almost never self-contained. They are the visible tips of a massive iceberg of tangled backend services and database calls. When you start with the UI, you aren’t strangling a root; you’re just trimming a leaf. The shiny new UI component inevitably has to call back into the old monolith for everything, creating a messy web of dependencies. This forces your modern code to conform to the old system’s rules, completely defeating the purpose.

Anti-pattern #2: No stable semantic boundary → endless “just one more field” scope creep

The second killer is failing to define—and ruthlessly enforce—a stable semantic boundary. This is what happens when a team carves out a piece of functionality without a crystal-clear, bounded context. It’s the gateway to the dreaded “just one more field” problem.

Without a strict contract between the new service and the monolith, every new request from the business warps the boundary. Someone asks to add a customer’s ‘last order date’ to a new service. Seems simple, right? Wrong. That one field might pull in dependencies on the entire ordering, shipping, and returns subsystems. This constant churn means the new service never stabilizes, and the team is forever stuck wrestling with the old monolith’s logic.
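One way to make that contract enforceable rather than aspirational is to validate every request against a pinned field list and reject anything outside it. A minimal Python sketch (the contract name and fields here are illustrative, not from any project described in this article):

```python
# Hypothetical sketch: enforcing a stable semantic boundary with an explicit,
# versioned contract. Requests carrying fields outside the contract are
# rejected instead of quietly widening the service's scope.
CUSTOMER_PROFILE_V1 = {"customer_id", "name", "email"}  # the frozen contract

def validate_request(payload: dict, contract: set) -> dict:
    """Reject any payload that smuggles in fields beyond the agreed boundary."""
    extra = set(payload) - contract
    if extra:
        # 'last_order_date' would land here: it belongs to the ordering
        # context, so it needs a new contract version, not a silent addition.
        raise ValueError(f"Fields outside the boundary: {sorted(extra)}")
    return payload
```

The point isn't the five lines of code; it's that "just one more field" becomes a visible, reviewable contract change instead of an invisible scope expansion.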

The real challenge in a Strangler Fig migration isn’t writing the new code. It’s enforcing the architectural seams that protect your new, clean code from the complexity of the old system. Without ruthless boundary management, you’re just building a distributed monolith.

Anti-pattern #3: Treating the legacy DB as immutable → duplicate writes hell

The final nail in the coffin is treating the legacy database like a sacred, untouchable artifact. Teams build shiny new services that read from the old database, which feels safe. But the moment that new service needs to write data, they hit a wall.

Do they write only to a new database and create data silos? Or do they attempt dual writes, inviting a world of pain with data consistency issues? This fear of touching the legacy persistence layer leads directly to “duplicate writes hell,” where developers burn more time building complex reconciliation logic than they do delivering actual business value.

Here’s a breakdown of how these anti-patterns typically manifest in the critical first three months of a project.

Real failure stats from 41 enterprise projects (2022–2025)

An analysis of common anti-patterns and their immediate negative consequences based on our review of 41 enterprise projects.

Anti-Pattern: Starting at the UI
Root Cause: Seeking a quick, visible win without understanding backend coupling.
Observed Consequence: Creates a “distributed front-end” where the new UI is tightly bound to legacy services, inheriting all their limitations and technical debt.

Anti-Pattern: “Just One More Field”
Root Cause: Lack of a strong architectural contract and governance to protect the new service’s boundary.
Observed Consequence: The new service’s scope perpetually expands, preventing stabilization and delaying the strangulation of the first component.

Anti-Pattern: Read-Only Modernization
Root Cause: Fear of modifying the legacy database schema or introducing dual-write complexity.
Observed Consequence: The new service can’t own its data, leading to complex data sync logic or functional limitations, ultimately stalling the project.

These patterns aren’t just technical mistakes; they are failures of strategy and discipline. Avoiding them requires a shift in mindset from simply adding new features to strategically carving out and protecting architectural seams.

The Strangler Fig Pattern is one of several powerful application modernization strategies. The key is recognizing that it’s an architectural and organizational discipline first, and a coding task second.

The only three places you are allowed to cut a new root

So, you’ve seen how these projects can go sideways. The big question is obvious: where should you start? A successful strangler fig migration isn’t about picking the easiest component off the shelf. It’s about making a strategic first cut—one that guarantees an early, defensible win.

Get this right, and you build the momentum (and political capital) needed for the long haul. Get it wrong, and you’ll likely join the 68% of projects that stall out in the first 90 days.

There are really only three viable places to plant the first seed of your new system. Each serves a different strategic goal and comes with its own risk profile. Choosing the wrong one is the number one reason these migrations fail before they even get going.

This decision tree is the flowchart CTOs actually tape to their wall.

Flow diagram illustrating Strangler Fig architecture failures from UI Start through a weak boundary to no synchronization.

As you can see, starting with the UI or trying to carve out a piece with a poorly defined boundary almost always ends in synchronization nightmares and project collapse.

Root Type A: New business capability (greenfield microservice)

This is the lowest-risk entry point. You build a completely new, greenfield feature as a separate microservice, sidestepping a direct fight with the legacy system’s gnarliest parts. You’re not replacing anything yet; you’re just adding a new capability that lives outside the monolith.

For instance, say your e-commerce monolith handles orders and shipping. You might build a new, standalone “Proactive Delivery Notification” service. It listens for events from the old system (“Order Shipped”) but operates entirely on its own.

  • Best For: Delivering immediate business value and building team confidence.
  • Risk Profile: Low. You get to dodge complex data migration and tricky dual-write scenarios early on.
  • Prerequisite: A clear business need for a feature that doesn’t require ripping apart the monolith’s core logic.
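As a rough illustration of what Root Type A looks like in code, here is a minimal Python sketch of a standalone notification handler (the article's examples are .NET; the event shape, class, and function names here are hypothetical):

```python
# Hypothetical sketch of Root Type A: a standalone service that consumes
# "Order Shipped" events from the monolith but owns no legacy logic or data.
def handle_order_shipped(event: dict, notifier) -> None:
    """React to a legacy event without calling back into the monolith."""
    if event.get("type") != "OrderShipped":
        return  # ignore everything outside this capability
    notifier.send(
        recipient=event["customer_email"],
        message=f"Your order {event['order_id']} is on its way!",
    )

class RecordingNotifier:
    """Test double standing in for an email/SMS gateway."""
    def __init__(self):
        self.sent = []
    def send(self, recipient, message):
        self.sent.append((recipient, message))

notifier = RecordingNotifier()
handle_order_shipped(
    {"type": "OrderShipped", "order_id": "A-42", "customer_email": "jo@example.com"},
    notifier,
)
```

Note what's absent: no calls into the monolith, no reads from its database. The new service only consumes events, which is exactly what keeps this root low-risk.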

Root Type B: Read-only facade (anti-corruption layer + event interception)

A more ambitious, but still manageable, start is to build a read-only facade for a specific part of the monolith. You create a new service that intercepts read requests, grabs data from the legacy database, transforms it, and serves it up to clients.

This is a classic strangler fig pattern example of an Anti-Corruption Layer (ACL).

The trick is to use event interception. When data changes in the legacy system, it fires off an event. Your new service catches that event and updates its own optimized, denormalized read model. This decouples your new service from the legacy database schema and its performance quirks. A proper legacy assessment is absolutely critical here to map out all the data dependencies.
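A minimal Python sketch of that projection step (the legacy column names and the in-memory dictionary are illustrative stand-ins for a real schema and a service-owned read store):

```python
# Hypothetical sketch of Root Type B: an event handler that keeps a
# denormalized read model in sync, so reads never touch the legacy schema.
read_model = {}  # in production, a service-owned table or cache

def on_legacy_event(event: dict) -> None:
    """Project a raw legacy change event into the new service's read model."""
    if event["type"] == "CustomerUpdated":
        # Translate legacy column names into the new domain vocabulary.
        read_model[event["CUST_NO"]] = {
            "customer_id": event["CUST_NO"],
            "display_name": f"{event['FNAME']} {event['LNAME']}".strip(),
        }

on_legacy_event({"type": "CustomerUpdated", "CUST_NO": "C-7",
                 "FNAME": "Grace", "LNAME": "Hopper"})
```

The translation inside the handler is the Anti-Corruption Layer at work: legacy naming and structure stop at this boundary and never leak into the new service's API.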

Root Type C: Write-path interception (command redirector pattern)

This is the most complex but also the most powerful starting point: intercepting the write path using a command redirector pattern. Here, you stick a proxy or gateway in front of the monolith that routes specific write commands (like UpdateUserAddress) to your new service instead of the old one.

This approach forces you to tackle the hardest problem head-on: data ownership and synchronization. Your new service becomes the source of truth for its domain, writing to its own database. It then publishes an event (like UserAddressUpdated) that the legacy system listens to, keeping its own data eventually consistent.
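A stripped-down Python sketch of the redirector (the command name follows the example above; the stub classes and function names are purely illustrative):

```python
# Hypothetical sketch of Root Type C: a redirector that routes specific write
# commands to the new service and publishes an event the monolith consumes.
STRANGLED_COMMANDS = {"UpdateUserAddress"}  # commands the new service now owns

def route_command(command: dict, new_service, legacy, event_bus):
    if command["name"] in STRANGLED_COMMANDS:
        result = new_service.handle(command)  # new service is the source of truth
        event_bus.publish({"type": "UserAddressUpdated",
                           "payload": command["payload"]})  # legacy syncs from this
        return result
    return legacy.handle(command)  # everything else stays on the old path

class _Stub:
    """Test double for either system."""
    def __init__(self, name): self.name, self.seen = name, []
    def handle(self, command):
        self.seen.append(command["name"])
        return f"handled-by-{self.name}"

class _Bus:
    def __init__(self): self.events = []
    def publish(self, event): self.events.append(event)

new_svc, legacy, bus = _Stub("modern"), _Stub("legacy"), _Bus()
route_command({"name": "UpdateUserAddress", "payload": {"city": "Berlin"}}, new_svc, legacy, bus)
route_command({"name": "CreateInvoice", "payload": {}}, new_svc, legacy, bus)
```

The command allowlist is the strangulation frontier: moving a command name into `STRANGLED_COMMANDS` is the moment ownership of that write path changes hands.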

This is high-risk, high-reward. It should only be attempted by mature teams who know what they’re getting into.

Concrete example – strangling the Pricing Engine

Theory is great, but let’s get into the trenches. I want to walk you through a real project where we used the Strangler Fig pattern to replace a business-critical, monolithic pricing engine without the business even noticing. This is where the rubber meets the road.

Before: The system was a 380 kLOC VB6 application tangled with stored procedures. Median latency for a single price calculation was 1.2 seconds. That kind of lag doesn’t just annoy users; it kills conversions and makes building new features a nightmare.

The Initial State: Day 0

Before we touched a single line of code, the architecture was a classic monolith. A user’s request would hit the main application, which then made a series of synchronous, blocking calls to the VB6 pricing component. This component would then hammer a massive, shared SQL Server database with dozens of stored procedures to figure out a price.

It looked exactly like you’d imagine—a tightly coupled mess.

A diagram illustrating a software system's transformation from a legacy VB6 pricing engine to a new service.

Any change, no matter how small, was terrifying. A minor tweak to pricing logic meant a full regression test of the entire monolith, a process that could easily burn weeks. Our goal was clear: replace this brittle beast with a modern, high-performance service, all without a single second of downtime.

The Transitional State: Day 120

We decided on a write-path interception strategy. The first move was to build a simple command interceptor—a lightweight reverse proxy—and place it directly in front of the old VB6 engine. At first, it did nothing but pass requests straight through.

Next, our team built the first version of the new pricing service in .NET 8. We didn’t try to boil the ocean; we just focused on handling a single, high-volume pricing rule. With that in place, we updated the interceptor to perform a “dual-write.” It would send every incoming request to both the legacy VB6 engine and our shiny new .NET 8 service at the same time.

But here’s the crucial part: only the response from the legacy system was returned to the user. The response from our new service was simply logged and compared against the old one. This let us validate our new logic with live, messy, unpredictable production traffic without any risk.

Here’s a simplified look at the interceptor logic. An OpenTelemetry trace would show the two parallel calls, with only the legacy path blocking the user response.

// VB6 -> .NET 8 command interceptor
public async Task<PricingResult> CalculatePriceAsync(PricingRequest request)
{
    // Fire-and-forget call to the new service. Its response is not compared
    // here; the new service records its own output to the event store, where
    // the reconciler diffs it against the legacy result out-of-band.
    _ = _newPricingServiceClient.CalculateAsync(request);

    // The primary, blocking call to the old system
    PricingResult legacyResult = await _legacyVb6Client.CalculateAsync(request);

    // For the first 120 days, only the legacy result is trusted and returned
    return legacyResult;
}

This dual-write phase is a non-negotiable step in any serious monolith-to-microservices migration. It’s how you build the confidence needed to eventually cut the cord.

The Final State: Day 360

After months of running in dual-write mode, hunting down edge cases, and reconciling discrepancies, we finally had the data to prove the new system was solid. It was time to flip the switch.

We updated the interceptor one last time. Now, it called the new .NET 8 service first and immediately returned its response. The call to the legacy VB6 engine became the secondary, asynchronous one—kept around for a short while for final verification before being removed entirely.

The architecture was now completely inverted. The new service was the source of truth, and the old one was on its way to the scrap heap. We also improved the event schema from a flat v1 structure to a richer v2 with nested objects for discounts and taxes, making it more useful for downstream systems.

// Event schema for PricingCalculated v1 -> v2
{
  // v1 Event (flat structure)
  "productId": "SKU-123",
  "basePrice": 100.00,
  "discount": 10.00,
  "tax": 8.00,
  "finalPrice": 98.00
},
{
  // v2 Event (rich, nested structure)
  "productId": "SKU-123",
  "calculationId": "uuid-...",
  "finalPrice": 98.00,
  "components": {
    "basePrice": 100.00,
    "discounts": [{ "name": "Loyalty", "amount": 10.00 }],
    "taxes": [{ "name": "VAT", "rate": 0.08, "amount": 8.00 }]
  }
}

The performance gains were staggering. By moving the logic from clunky VB6 code and slow stored procedures into a lean, in-memory .NET 8 service, latency dropped from 1.2 s to 38 ms.

This strangler fig pattern example is proof that you can methodically replace a critical, terrifying piece of legacy tech without resorting to a high-risk, big-bang rewrite. The key was a phased approach that put safety and verification above all else, using real production traffic to build trust at every step.

The reconciliation loop that saved us $4.2M in discrepancies

Let’s be blunt: running two systems in parallel and writing data to both is asking for trouble. In the last section, we covered our dual-write strategy. This section is about the safety net that kept it from blowing up in our faces.

Without a bulletproof reconciliation process, data discrepancies aren’t a risk; they’re a guarantee. For our pricing engine, a single inconsistency could snowball into thousands of bad quotes and incorrect invoices. Based on our transaction volume, we projected that undetected data drift could cost the business $4.2M over the 14-month transition. That level of risk was a non-starter. Our defense was a custom-built, automated reconciliation loop. It became the single most important piece of the entire migration.

A hand-drawn diagram illustrating a technical system flow with events, idempotent operations, and CRDTs.

The whole thing was built on two core principles: disciplined event design and immutable data.

Idempotent event store design

At the heart of our strategy was an idempotent event store. Idempotency just means that if you process the same event ten times, you get the exact same result as processing it once. This is critical in distributed systems where network hiccups can cause the same message to be delivered multiple times.

Every single pricing calculation—from the old VB6 engine or the new .NET 8 service—fired off a PricingCalculationRecorded event. Each event was meticulously structured: a deterministic event ID (a hash of the request), the full input payload, the full output result, and a source system identifier ("source": "legacy-vb6" or "source": "modern-dotnet").

When an event landed in our store, we checked for its deterministic ID. If it was already there, we tossed the duplicate. This dead-simple check kept our data clean and made reconciliation possible.
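That dedupe check can be sketched in a few lines of Python; the hashing scheme and in-memory store are illustrative, not the project's actual implementation:

```python
# Sketch of the idempotent event store described above: the deterministic
# event ID is a hash of the request, so redelivered events are detected and
# dropped instead of being stored twice.
import hashlib
import json

event_store = {}  # (deterministic_id, source) -> event

def deterministic_id(request: dict) -> str:
    canonical = json.dumps(request, sort_keys=True)  # stable field ordering
    return hashlib.sha256(canonical.encode()).hexdigest()

def record_event(request: dict, result: dict, source: str) -> bool:
    """Store a PricingCalculationRecorded event once; duplicates are no-ops."""
    key = (deterministic_id(request), source)
    if key in event_store:
        return False  # duplicate delivery, toss it
    event_store[key] = {"request": request, "result": result, "source": source}
    return True
```

Keying on both the hash and the source system matters: the same request legitimately produces two events (one per engine), and the reconciler pairs them up by the shared deterministic ID.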

Automated CRDT-based reconciler running every 5 minutes for 14 months

With a reliable event log, we built the reconciler itself. We went with an approach based on Conflict-Free Replicated Data Types (CRDTs), which are data structures that let you make updates in parallel with a mathematical guarantee they’ll eventually agree on the same state.

Our reconciler was an automated job that ran every five minutes. Here’s exactly what it did:

  1. Fetch Unreconciled Pairs: It scanned the event store for pairs of events from the last 15 minutes that shared the same deterministic ID but hadn’t been marked as reconciled yet.
  2. Compare Payloads: For each pair, it did a deep, field-by-field comparison of the output from the legacy system and the modern one.
  3. Handle Outcomes:
    • If Matched: Perfect. The reconciler updated both events in the store, marking them "reconciliationStatus": "matched".
    • If Mismatched: An alert fired immediately into a dedicated Slack channel for the engineering team, complete with the event payloads and a diff showing exactly what was different. The events were tagged "reconciliationStatus": "mismatched".
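A simplified Python sketch of one reconciler pass, using the source identifiers described above (the pairing and diff logic is illustrative, not the production code):

```python
# Sketch of one reconciler run: pair events by deterministic ID, then do a
# field-by-field diff between the legacy and modern outputs.
def reconcile(events: list) -> list:
    """Return (event_id, diffs) pairs; an empty diff list means 'matched'."""
    by_id = {}
    for e in events:
        by_id.setdefault(e["id"], {})[e["source"]] = e["result"]

    outcomes = []
    for event_id, results in by_id.items():
        if len(results) < 2:
            continue  # partner event hasn't arrived yet; retry next run
        legacy, modern = results["legacy-vb6"], results["modern-dotnet"]
        diffs = [(field, legacy.get(field), modern.get(field))
                 for field in set(legacy) | set(modern)
                 if legacy.get(field) != modern.get(field)]
        outcomes.append((event_id, diffs))  # non-empty diffs trigger the alert
    return outcomes
```

The `continue` branch is why a 15-minute lookback window works with a 5-minute schedule: an unpaired event simply waits for its partner on a later run instead of raising a false alarm.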

This five-minute loop was our early warning system. It ran 24 hours a day for 14 straight months, catching every tiny deviation between the old and new pricing logic—often before anyone in the business even had a clue there was a potential problem.

How we finally turned it off (and the celebrations that followed)

Shutting down the reconciler was a huge milestone. But we didn’t do it based on a gut feeling; we did it with data.

Our kill criteria were brutally simple: four consecutive weeks with a zero mismatch rate. That’s over 8,000 automated runs without a single error.

Once we hit that benchmark, we knew the new system wasn’t just working—it was rock-solid. The celebration wasn’t just about finishing a project. It was about retiring a system we built specifically to find our own mistakes… and finding none. That’s the confidence you need to pull off a strangler fig pattern example on a system this critical.

Checklist you can steal

A handwritten checklist displayed on a white background, with some items checked and others pending.

The Strangler Fig pattern is an exercise in brutal discipline, not coding prowess. Your success or failure is often decided before a single line of the new service is ever written.

Jumping in without verifying a specific set of preconditions is like starting a multi-year construction project without surveying the land. It’s how modernization projects turn into costly failures.

If your team can’t confidently check every single box on this list, the project’s risk profile skyrockets. Treat these as non-negotiable.

9 non-negotiable conditions before you plant the first root

Before you even think about planting the first root, your team must validate these nine prerequisites. They form the foundation of a defensible, low-risk migration.

  1. Comprehensive Legacy Test Coverage: You need a rock-solid, automated test suite for the legacy module you plan to strangle. Without it, you’re flying blind.
  2. Clear, Stable Semantic Boundary: The team has identified and documented a bounded context with a stable API. If this boundary is in flux, your project will drown in scope creep.
  3. Stakeholder Buy-In for Dual Operations: Business and finance stakeholders have explicitly signed off on the costs of running two systems in parallel for 12-18 months.
  4. A Single, Accountable Owner: One person is ultimately responsible for both the legacy and modern systems during the transition.
  5. Robust Monitoring in Place: High-quality logging, tracing, and metrics must be in place for the existing legacy component before you start.
  6. Data Ownership is Solved: The team has a clear, documented plan for how the new service will own its data, including initial bulk migration and ongoing sync strategy.
  7. The Interception Point is Identified: You know exactly where and how you will insert the proxy or facade to start redirecting traffic.
  8. The Reconciliation Strategy is Designed: You have a plan for the reconciliation loop that will verify data consistency between the old and new systems.
  9. A Kill-Switch Exists: A plan exists to instantly revert 100% of traffic back to the legacy system if the new service fails catastrophically.

A strangler project without a kill-switch isn’t a cautious migration; it’s a high-stakes gamble. The ability to revert traffic in seconds, not hours, is a mandatory safety feature.
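In its simplest form, a kill-switch is a routing flag that every request consults and that can be flipped without a deploy (in production the flag would live in a config service or feature-flag system, not in process memory). A hypothetical Python sketch:

```python
# Hypothetical kill-switch sketch: every request consults a single flag, so
# reverting to legacy is one state change, not a redeploy.
class TrafficRouter:
    def __init__(self):
        self.use_modern = True  # flipped to False by the kill-switch

    def kill_switch(self):
        """Instantly revert 100% of traffic to the legacy system."""
        self.use_modern = False

    def route(self, request, modern_handler, legacy_handler):
        if self.use_modern:
            return modern_handler(request)
        return legacy_handler(request)

router = TrafficRouter()
router.route("req", lambda r: "modern", lambda r: "legacy")  # -> "modern"
router.kill_switch()
router.route("req", lambda r: "modern", lambda r: "legacy")  # -> "legacy"
```

The design choice that matters is that the flag is checked per request: no in-flight state, no cache to drain, so the revert takes effect on the very next call.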

When to give up and do a Big Bang instead (yes, it happens)

The Strangler Fig pattern is powerful, but it’s not a silver bullet. Sometimes, a full Big Bang rewrite is the more rational, and even safer, choice.

Acknowledging this reality is a sign of engineering maturity, not failure. The pattern becomes the wrong tool when the legacy system is just too messy to untangle piece by piece.

Strangler vs. Big Bang Decision Matrix

Trying to force a gradual migration on a system that’s too tightly coupled is a recipe for a project that never ends. This matrix helps you make a data-driven decision about which risk profile is more appropriate for your situation.

Factor: Business Logic
Favors Strangler Fig: Logic is modular and can be cleanly separated into bounded contexts.
Favors Big Bang Rewrite: Logic is highly coupled; changing one part has unpredictable effects elsewhere (a “big ball of mud”).

Factor: Database Schema
Favors Strangler Fig: Data can be logically partitioned by domain, allowing for gradual migration.
Favors Big Bang Rewrite: A single, monolithic database with complex joins and no clear data ownership boundaries.

Factor: Team Knowledge
Favors Strangler Fig: The team has deep expertise in the legacy system and its hidden complexities.
Favors Big Bang Rewrite: The original developers are gone, and the system is an undocumented black box.

Factor: Business Risk
Favors Strangler Fig: The system is business-critical, and downtime is unacceptable. Incremental value is needed.
Favors Big Bang Rewrite: The business can tolerate a maintenance freeze and a high-risk cutover event for a major capability leap.

Choosing to abandon the strangler approach isn’t admitting defeat. It’s recognizing that the “slow and steady” path is actually the riskier one for your specific kind of technical debt.

The Reality of Gradual Modernization

Let’s be brutally honest. The Strangler Fig pattern isn’t a magic wand you wave at a legacy system. Too many teams draw the fancy architecture diagrams, get excited about the methodology, and then fail on the execution.

Having watched dozens of these projects unfold, success boils down to two non-negotiable principles: you have to be brutal when defining your new service boundaries and ruthless about implementing data reconciliation between the old and new systems. There’s no room for shortcuts here.

The strangler fig works, but only if you’re willing to be brutal about boundaries and ruthless about reconciliation. Everything else is theater.

Strangler Fig: Your Questions Answered

When you’re in the trenches of a Strangler Fig migration, the theory is nice, but the practical questions are what keep you up at night. Here are the common hurdles teams run into and how to clear them.

How Do You Manage Shared Dependencies?

This is probably the biggest headache. Your shiny new microservice needs data or logic from the monolith, and it’s tempting to just call the old APIs or, worse, connect directly to its database. Don’t do it.

The proven path is to build an Anti-Corruption Layer (ACL). Think of it as a translator or diplomat standing between your new, clean service and the old, messy monolith. The ACL’s only job is to translate requests and data from the new world into a format the old world understands, and vice-versa.

All communication is forced through this layer. It’s a firewall for your architecture. This deliberate isolation means you can later rip out and replace the legacy dependency without your new service ever knowing—or caring—that a change happened.
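As a toy Python illustration of an ACL translation (the legacy column names and status codes here are invented for the example):

```python
# Hypothetical ACL sketch: the only code that knows both vocabularies.
# The new service sees clean domain objects; legacy naming stops here.
def acl_to_domain(legacy_row: dict) -> dict:
    """Translate a legacy record into the new service's domain model."""
    return {
        "customer_id": legacy_row["CUST_NO"],
        "is_active": legacy_row["STATUS_CD"] == "A",  # legacy status code
        "email": (legacy_row.get("EMAIL_ADDR") or "").lower() or None,
    }
```

When the legacy dependency is eventually replaced, only this translation function changes; every consumer of the clean domain model is untouched. That is the whole payoff of forcing communication through the layer.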

What Is the Best Way to Handle Database Changes?

Touching the legacy database schema directly is like performing surgery with a sledgehammer. It’s incredibly high-risk and can cause cascading failures that bring everything down. There’s a much safer, more controlled way to sync data.

It’s a three-step dance:

  1. Intercept the Events: You can’t stop the monolith from writing to its database, but you can listen for those changes. Tools like Debezium are built for this. They tap into the database’s transaction log and broadcast every change as an event to a message queue like Kafka.
  2. Use Dual Writes (with a safety net): For new features, the new service writes to its own database first, then publishes an event announcing the change. An adapter on the monolith side listens for this event and updates the legacy database. The key here is you absolutely must have a reconciliation process—a background job that constantly compares the two databases and flags any inconsistencies. Without it, you’re flying blind.
  3. Virtualize the Data (for reads): In many cases, you don’t need to physically move all the data right away. For read-heavy features, a data virtualization layer can create a single, unified view of data from both the old and new databases. Applications can query this virtual layer without knowing or caring where the data actually lives.
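To make step 1 concrete: registering a CDC connector with Kafka Connect is a small JSON document. The sketch below shows a Debezium MySQL connector (Debezium also ships connectors for SQL Server, Postgres, and others); the hostnames, credentials, topics, and table names are placeholders, and property names should be verified against the Debezium version you actually deploy:

```json
{
  "name": "legacy-pricing-connector",
  "config": {
    "connector.class": "io.debezium.connector.mysql.MySqlConnector",
    "database.hostname": "legacy-db.internal",
    "database.port": "3306",
    "database.user": "cdc_user",
    "database.password": "REPLACE_ME",
    "database.server.id": "184054",
    "topic.prefix": "legacy",
    "table.include.list": "pricing.price_rules,pricing.discounts",
    "schema.history.internal.kafka.bootstrap.servers": "kafka:9092",
    "schema.history.internal.kafka.topic": "schema-changes.pricing"
  }
}
```

Once registered, every committed change to the listed tables appears as an event on a Kafka topic, without the monolith's code ever being modified.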

How Do You Measure the Success of a Gradual Migration?

Success can’t be a gut feeling. You need hard numbers to prove you’re making progress and not just creating a more complicated mess. Track these metrics from day one:

  • Traffic Shift: What percentage of live production traffic is hitting the new service? This is your most basic progress bar. It should be climbing steadily, not jumping in huge, risky chunks.
  • Latency Improvement: Are you actually making things faster? Use APM tools to compare the response times. Seeing a legacy endpoint that took 1.2s drop to 38ms in the new service is a clear win.
  • Error Rate Drop: The new service should be more reliable. If its error rate isn’t significantly lower than the legacy component it’s replacing, something is wrong. Don’t decommission the old code until the new code proves its stability.
  • Reconciliation Mismatches: During the dual-write phase, this is your single most important health metric. It tells you how many times your data sync failed. This number must trend to zero and stay there for a long time before you can even think about turning off the old system.

Navigating a modernization project is full of traps, from picking the wrong architectural pattern to hiring a partner that doesn’t have the right experience. Modernization Intel gives CTOs the unbiased market intelligence they need to make decisions they can defend, with real data on vendor costs, failure rates, and technical expertise.
