Cloud Migration Strategies Guide: Choosing the Right Technical Approaches and Execution Methods
Key Takeaways
- There are six technical approaches to migration, known as the six R's. Each one has different implications for cost, complexity and timeline. The approach you choose shapes every other decision that follows.
- Execution strategy is about when systems move relative to each other, not just a date. Big bang, phased and gradual each carry different trade-offs. The right choice depends on how much downtime you can absorb.
- Most migrations mix strategies, and different workloads need different approaches.
Every cloud migration depends on getting the who, what, when, where, why and how right.
There’s what you’re migrating. Why you’re migrating it. Where you’re migrating it to. Who’s responsible for planning and executing the migration. But the final two pieces of the puzzle are arguably the most critical decisions you’ll need to make:
- How you migrate.
- When you execute the migration.
These decisions, while separate, are deeply connected, and they influence each other more than you might realise. How you approach them will determine whether your cloud migration succeeds or fails.
“How” is about choosing the right technical path, often guided by the six R’s of migration. “When” is about pulling the trigger in a way that minimises disruption and risk.
Both decisions should reflect your organisation’s unique environment and operational reality. In Australia, this also means navigating a complex regulatory landscape, including the Privacy Act and other sector-specific obligations. A well-structured cloud migration framework helps bring all this together, making it easier to balance risk, performance and compliance while supporting a smooth cloud migration.
In this guide, we break down the different technical approaches and execution methods for cloud migration. By understanding these options, you can confidently choose an approach that best suits your organisation and set your migration up for success.
Technical migration approaches: The six R’s
The six R’s are a set of technical approaches to cloud migration.
They originated from Gartner’s “5 R’s” model, which emerged in the early 2010s, when enterprises were grappling with the prospect of their first cloud migrations. The 5 R’s offered a framework to assess which parts of a legacy on-prem environment were worth migrating, modernising, replacing or leaving behind. AWS later built on Gartner’s original model, combining it with practical insights gained from real-world migration projects. This led to the formalisation of the six R’s, today a widely recognised framework for guiding cloud migration decisions.
The technical approach you choose will have implications across the entire cloud migration project, shaping its cost, complexity and long-term outcomes. So, make sure this is the first decision you make.
The six approaches – Rehost, Replatform, Refactor, Retire, Retain and Repurchase – each offer a different path. The best one for your organisation’s cloud migration depends on what you’re trying to achieve. For example, rehosting (lift-and-shift) is often suited to faster migrations with minimal change. Refactoring, on the other hand, sets the foundation for deeper modernisation and long-term optimisation.
The 6 R’s are:
Rehosting (lift-and-shift migration)
Rehosting (aka lift-and-shift) involves moving systems to the cloud as-is, without changing the underlying code or architecture. It’s the fastest way to migrate.
Think of rehosting like publishing Shakespeare’s original works on the web. Same story, just in a new place.
Advantages:
- Fastest migration path, typically days to weeks.
- Lowest initial cost and complexity compared to other migration approaches.
- Minimal risk of disrupting or breaking existing functionality.
- Teams can migrate using their existing skills and processes.
- Supports the rapid movement of workloads into Australian cloud regions to meet data residency requirements.
Limitations:
- Doesn’t take full advantage of cloud-native capabilities such as auto scaling or serverless services.
- May lead to higher ongoing operational costs without further optimisation.
- Existing tech debt is carried into the new environment.
- Performance improvements are typically limited.
- Will still require future optimisation work to realise the full benefits of cloud infrastructure.
When to choose lift-and-shift migration:
If your systems work fine as they are, and you simply want to move them to the cloud, rehosting is often the ideal choice. By moving systems as-is, you save time, money and much of the complexity of a typical cloud migration. As such, it’s a good way to test the viability of the cloud before committing to larger-scale changes. For change-wary teams, rehosting offers stability, as existing processes can continue with minimal disruption. But, like moving outdated furniture to a new, modern office, rehosting’s convenience comes with a long-term cost. Cloud migrations are golden opportunities for modernisation, and you’ll need to upgrade your legacy systems at some point. An inevitable post-migration modernisation may end up costing more than if you’d combined it with the migration.
Replatform (cloud replatforming)
Cloud replatforming involves making small, targeted changes to optimise your existing systems for the cloud, without changing their core application architecture. It’s the middle ground between moving your systems as-is and completely redesigning them.
Think of replatforming like a modern-language edition of Shakespeare’s works, such as the popular No Fear Shakespeare series. The story’s unchanged, but the language is updated so it’s easier for modern readers to understand.
Advantages:
- Delivers a better cost-to-performance outcome than a pure lift-and-shift.
- Cuts down your operational workload by allowing you to leverage managed cloud services.
- Requires only a moderate amount of time and investment.
- Gives you some cloud-native benefits without a full rebuild.
- Lower risk and disruption compared to a complete refactor.
Limitations:
- Doesn’t fully take advantage of cloud-native architecture.
- Some tech debt still comes across.
- Requires testing to make sure the system works in the new environment.
- Not suitable for systems that need a major architectural overhaul.
When to choose cloud replatforming
Cloud replatforming is the first step towards optimising your systems for the cloud. It’s a good fit when your existing system can be made cloud compatible with only minor changes. It lets you access managed services and improve performance while keeping costs, complexity and disruption low. Compared to a lift-and-shift, the migration takes a little longer, usually weeks to months, and may require some performance tuning or database adjustments to ensure everything runs smoothly. In return, you gain practical cloud benefits that make the additional investment worthwhile.
Refactor/re-architect (cloud refactoring)
Cloud refactoring means redesigning an application’s internal architecture for the cloud, without changing its core functionality. Instead of lifting the system as it is, the application is rebuilt to be faster, more scalable and easier to maintain. This approach is used when an organisation wants a major step up in performance or when the current system is holding them back.
Think of refactoring like West Side Story. It keeps the heart of Shakespeare’s Romeo and Juliet, but reimagines the entire production for a modern world. That’s what refactoring does for your systems.
Refactoring often involves breaking a large application into smaller parts so they can run and scale independently. It can also include replacing background jobs with cloud automation or moving to modern deployment methods so updates are easier and more reliable. The goal is a system that performs better, costs less to run over time and can evolve quickly as business needs change.
Advantages:
- Access to the full benefits of cloud infrastructure, such as improved performance and automatic scaling.
- Better long-term cost efficiency.
- More reliable and resilient applications.
- Enables faster development cycles and easier updates post-migration.
- Supports innovation and change.
Limitations:
- Highest cost and longest timeline.
- Requires strong development capability.
- More complex to deliver and manage.
- Teams may need new skills.
- Potential operational disruption during the rebuild.
Refactoring is the most intensive pathway, but it provides the strongest long-term outcome for critical applications. When a system is core to operations or future growth, investing in a full redesign can deliver a meaningful uplift in capability, performance and scalability.
When to refactor
Cloud refactoring is the right approach when legacy tech debt is slowing teams down and limiting your ability to innovate. Rather than adding more workarounds, it gives you a chance to modernise the foundation of your systems and commit to a cloud model built for long-term growth. With a modern architecture in place, teams can ship new features faster, respond to change more easily and keep pace with rising customer expectations. For organisations prioritising performance, agility and ongoing improvement, refactoring becomes the most sustainable path forward.
The other three R’s
The first three R’s we’ve covered in detail focus on migrating existing systems. The next three focus on the systems that don’t make the cut. They are:
Repurchase: You stop using the old system entirely and switch to a cloud-native SaaS product. This is ideal if you want a clean break from an old system. While there is still some data migration involved (such as importing existing customer records to a new CRM), repurchasing allows you to take advantage of modern, cloud-native functionality out-of-the-box, rather than trying to retrofit old systems on new infrastructure.
Retain: Leave the system on-prem. This is the right choice when the system is heavily dependent on on-prem hardware or when the cost of moving it to the cloud outweighs its benefits. Retaining can be a practical option for stable, low-change systems that still serve a purpose and don’t justify the migration effort.
Retire: Decommission the old system entirely. This is the right choice when new systems or updated business processes have made the existing system redundant. However, redundancy doesn’t always mean the system’s no longer in use. Before making the call, check for shadow IT processes or informal workarounds that still depend on the system. Hidden dependencies on legacy systems are more common than you might think, particularly when the system’s been in place for many years.
Legacy migration and private cloud considerations
Working through a legacy migration can feel like cleaning out your wardrobe: you find things tucked away that you haven’t thought of in years, and suddenly need to decide whether they’re worth keeping. But legacy systems are more complicated than an old shirt. They often run on outdated technology stacks that don’t translate cleanly to the cloud. Some have undocumented dependencies, handwritten configurations and quiet workarounds that nobody’s touched in years. Others carry compliance obligations that demand long-term data retention, which limits what you can change.
Legacy systems might also rely on skills that are no longer common in the workforce, making ongoing maintenance risky and expensive. Together, these factors make legacy migration less of a technical exercise and more of a gradual discovery process. Each step uncovers another layer of decisions about what to keep, what to update and what to retire.
But often, there’s a difference between “this will be hard to move” and “this isn’t worth doing”. Here’s how you can tell whether your legacy systems need a clean-up or a complete overhaul before moving to the cloud.
When to replatform legacy systems
Replatforming is the smart choice when the system itself still does the job, but the technology underneath it is holding you back. If there’s a cloud-friendly version available, or you can move parts of it to managed services without too much effort, replatforming gives you a straightforward upgrade path. It’s a practical way to improve the system’s performance and stability without committing to a full rebuild.
When to refactor legacy systems
Refactoring becomes necessary when tech debt is slowing your organisation down, and small fixes no longer move the needle. If maintenance keeps getting more expensive, making changes takes too long or your team’s crying out for features the old system can’t support, rebuilding becomes the smarter long-term option. Refactoring future-proofs your system by aligning it with the standards of modern, cloud-native infrastructure. So, the re-architecture holds its value even if you decide to repatriate the workload in the future. In fact, by combining the proven efficiencies of your legacy system with the performance and scalability of the cloud, you could unlock a powerful competitive advantage.
Private cloud migration considerations
Not all legacy systems can go public. If a system relies on specific hardware, handles sensitive data or has integration requirements that don’t translate well to public cloud, private cloud becomes the obvious destination. Australian private cloud also helps you maintain the data residency and control of your legacy environment. A hybrid model can offer the right balance, keeping sensitive workloads in private infrastructure while using public cloud for scalable, variable demand. The decision comes down to how much control you need, your compliance requirements and whether the cost of migrating the system outweighs the risk of keeping it where it is.
Cloud migration decision quiz
Answer each question with A, B or C.
Keep track of your answers to see which migration path best fits your system.
If your answers were mostly A: The system is functional as-is, and simply needs to be rehosted in the cloud.
If your answers were mostly B: The system is mostly fine, but needs some small tweaks to ensure it remains fit-for-purpose in its new cloud environment.
If your answers were mostly C: The system, while still functional, needs an overhaul to meet the expectations of a cloud environment and remain fit-for-purpose long-term.
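The scoring logic above can be sketched as a small helper. The mapping from majority answer to approach follows the descriptions above; the tie-breaking rule (favouring the less invasive option) is an assumption for this sketch, not part of the quiz.

```python
from collections import Counter

# Map the majority answer to the technical approach described above.
APPROACH = {"A": "Rehost", "B": "Replatform", "C": "Refactor"}

def recommend(answers):
    """Tally A/B/C answers and return the suggested migration approach.

    Ties break toward the less invasive option (A over B over C) --
    an assumed convention, since the quiz doesn't specify one.
    """
    counts = Counter(a.strip().upper() for a in answers)
    # max() returns the first maximal key, so listing A before B before C
    # implements the tie-break toward the less invasive approach.
    majority = max(["A", "B", "C"], key=lambda k: counts[k])
    return APPROACH[majority]

print(recommend(["A", "B", "A", "C", "A"]))  # mostly A -> Rehost
```

In practice you’d run this per system, since, as the guide notes, different workloads often warrant different approaches.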
Cloud migration execution strategies: timing and risk management
Now that you know how your systems are moving, it’s time to figure out when. By when, we don’t mean the date and time. Instead, we mean when your systems will move relative to each other. Should you move them all at once? Or space them out to reduce risk and minimise operational disruption? The answer will depend on your organisation. Here’s an overview of the different migration execution strategies, and when to choose each.
Big bang migration
A big bang migration involves moving your entire workload in one coordinated event during a planned maintenance window. It’s a focused, high-pressure change where your environment moves in a single cutover. It’s a clean break that avoids the need to run your old and new environments at the same time. If it works, you start the next business day in the new platform. If it doesn’t, the impact is immediate and far-reaching.
Advantages
- The shortest overall migration timeline.
- Removes the complexity and cost of keeping old and new systems synchronised.
- Simplifies project management and migration logistics.
Limitations
- The cutover carries higher risk, as any issue immediately affects the entire environment.
- You must accept a downtime window.
- Because everything moves at once, you have fewer opportunities to learn gradually.
- Migration day puts significant pressure on your team.
- Rollback can be complex if you need to use it.
When to choose big bang migration
The ultimate litmus test of a big bang approach is this: Would the cost of running parallel environments outweigh the risk of not having them? That’s often the case when workloads are small to medium, you understand every application dependency and you can put up with a planned downtime window. If so, a big bang approach makes sense. Of course, you’ll still need a team that can coordinate a concentrated migration effort, and clear rollback procedures that are tested and documented. Without those in place, even the simplest of moves can become complex and costly.
Cutover planning for big bang migration
Big bang migrations are all about careful timing and airtight procedures. First, schedule the migration during periods of low (or no) use. For most organisations that run during normal Australian business hours, that’s typically early in the morning on a weekend. Your migration timeline should also include a buffer for unexpected issues during the cutover. Make sure your migration plan has pre-planned rollback decision points. These are the safeguards that stop you moving too far if something goes wrong, letting you pause and safely revert without causing major disruption. When you reach each decision point, pause and give rollback the consideration it deserves. An oversight or hasty decision can come back to bite you if things aren’t going as well as you thought.
Test DNS switching and traffic routing in advance so you can be sure everything points to the right place. And while you can hope for the best, it’s important to plan for the opposite. Prepare communications that inform staff and customers of extended downtime windows. You hope you never have to use them, but if you do, they will be the difference between chaos and control.
Rollback approach
Your rollback plan should be built on fully automated, pre-tested procedures. It should define clear go or no-go points ahead of time. With these on hand, the migration team will always know when to continue and when to revert, keeping the big bang cutover controlled and predictable.
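A go or no-go decision point can be sketched as a simple threshold check. The metric names and limits below (`error_rate`, `replication_lag_s`, `smoke_test_failures`) are illustrative assumptions, not prescribed values:

```python
# Illustrative go/no-go evaluation for a cutover decision point.
# Metric names and thresholds are hypothetical examples only.

THRESHOLDS = {
    "error_rate": 0.01,        # max acceptable fraction of failed requests
    "replication_lag_s": 30,   # max acceptable data-sync lag, in seconds
    "smoke_test_failures": 0,  # any failed smoke test forces a rollback
}

def decision(metrics):
    """Return 'go' if every metric is within its threshold, else 'no-go'.

    A missing metric is treated as a breach (fail safe): if you can't
    measure it at the decision point, you shouldn't proceed.
    """
    breaches = [
        name for name, limit in THRESHOLDS.items()
        if metrics.get(name, float("inf")) > limit
    ]
    return "no-go" if breaches else "go"

print(decision({"error_rate": 0.002, "replication_lag_s": 5, "smoke_test_failures": 0}))  # go
print(decision({"error_rate": 0.05, "replication_lag_s": 5, "smoke_test_failures": 0}))   # no-go
```

The fail-safe default matters: pre-agreeing what happens when a metric is unavailable removes one source of hasty mid-cutover judgment calls.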
Risk mitigation strategies
Big bang migrations are high-pressure. Effective risk management means making the unknown familiar through preparation, rehearsal and tight coordination. Run pilot migrations with production-like data to validate your system’s behaviour ahead of time. During the cutover, stand up a “war room”, so issues can be triaged and resolved quickly from a central point. Finally, define checkpoints throughout the migration so you can pause progress before risk escalates.
Phased migration
A phased migration moves your workloads in planned waves rather than shifting everything at once. It’s a more cautious approach that lets you validate and refine your method as you progress. Each wave gives you clearer insight into performance, tooling and team capability. Those early learnings shape the later waves, which usually involve your most critical systems. This approach suits more complex environments that need stability, where you can’t gamble the entire workload on a single cutover.
Advantages
- Incremental moves reduce risk.
- You gain continuous learning and improvement from each wave.
- You can roll back individual phases without impacting the entire environment.
- Effort and budget are distributed over time, and your team develops skills gradually.
- Suited to regulated workloads (e.g. APRA) that require staged validation and controlled change.
Limitations
- Phased migrations have a longer overall timeline, often extending over months or even years.
- You must manage dependencies between waves to keep systems aligned.
- Dual running environments increase cost and operational burden.
- Careful sequencing is required to avoid disruption as workloads move.
- Integration testing takes longer, as you need to validate interactions between migrated and non-migrated systems.
When to choose phased migration
A phased migration is the best fit for large, complex environments that contain many applications and interdependencies. As it avoids the downtime of a single move, it’s ideal when you need operational continuity. It also works well when applications vary in their importance to everyday operations, as you can sequence them in an order that minimises disruption. The trade-off is a longer migration period, but this can be an advantage if you’re happy to wait. A phased migration also lets you spread the migration’s cost and workload across a longer period.
Typical wave structure
Wave 1: Your first wave is the pilot. It should include non-critical applications with a small user base and well-understood architecture. Getting through this first wave helps you establish the migration patterns, tooling and automation. It will also validate the performance of the new environment, building your team’s confidence in their ability to execute the later, more complex waves.
Wave 2: The next wave targets medium-criticality applications, applying the lessons from the first wave and refining your processes to suit.
Wave 3: The third wave handles your most critical systems, where security, compliance and performance matter most. By this stage, if you’ve applied the lessons from previous waves, you’ll have a proven approach that maximises your team’s capability and confidence.
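The wave structure above amounts to grouping applications by criticality, lowest risk first. A minimal sketch of that grouping follows; the application names and criticality labels are illustrative assumptions:

```python
# Group applications into migration waves by criticality,
# following the pilot-first structure described above.
# Application names and criticality ratings are illustrative only.

CRITICALITY_TO_WAVE = {"low": 1, "medium": 2, "high": 3}

def plan_waves(apps):
    """Return a dict mapping wave number -> list of app names.

    `apps` is a list of (name, criticality) pairs, where criticality
    is one of 'low', 'medium' or 'high'.
    """
    waves = {1: [], 2: [], 3: []}
    for name, criticality in apps:
        waves[CRITICALITY_TO_WAVE[criticality]].append(name)
    return waves

apps = [
    ("intranet-wiki", "low"),    # Wave 1: pilot candidate
    ("reporting-db", "medium"),  # Wave 2
    ("core-banking", "high"),    # Wave 3: most critical, migrated last
    ("staff-portal", "low"),
]
print(plan_waves(apps))
```

Real wave planning also accounts for dependencies between systems, which this sketch deliberately omits; it illustrates only the criticality-first sequencing the guide describes.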
Rollback migration approach
With a phased migration, rollback is contained within each wave. If something goes wrong, you revert only the systems included in that phase, while the rest of your environment keeps operating normally. This isolates the risk, limits the impact of any issue and gives you controlled points to stop, reset and correct before moving on to the next wave.
Gradual migration (strangler fig pattern)
A gradual migration replaces your system piece by piece while the legacy and new environments run in parallel. Instead of switching everything in one move, you build new capability around the old system and let it take over function by function until the legacy system becomes unnecessary. It’s called the strangler fig pattern because the new architecture slowly surrounds and replaces the old one, without forcing downtime or a disruptive cutover.
Because you need to run two environments at once for an extended period, a gradual migration is typically the most expensive approach. You’re paying for complete operational continuity, so if you can’t afford any disruption, even minor, it’s more than worth the money.
How gradual migration works
Gradual migration means deploying new cloud-native components alongside your legacy system. New features route to the cloud while existing functionality stays on premises. Over time, you migrate functionality piece by piece: once each cloud version is stable, you safely retire its legacy equivalent. This continues until your entire environment is running in the cloud, at which point you can confidently decommission the old environment.
Enabling technologies
Without the right tools, a gradual approach can get messy. If your parallel environments don’t communicate properly, data and workflows can start diverging across systems. To avoid this, use API gateways to route traffic between old and new components. Feature flags help control distribution, database synchronisation keeps both environments consistent, and load balancers let you shift traffic gradually without impacting users.
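As a minimal sketch of the feature-flag routing described above: the gateway consults a flag per capability and sends each request to the cloud or legacy backend. The flag names and backends are illustrative assumptions, not a specific product’s API:

```python
# Strangler fig routing sketch: each capability carries a feature flag
# deciding whether requests go to the new cloud service or the legacy
# system. Flag names and backend identifiers are illustrative only.

FLAGS = {
    "checkout": True,    # already migrated -> route to cloud
    "invoicing": False,  # still served by the legacy system
}

def route(capability):
    """Return which backend should serve a request for this capability.

    Unknown capabilities default to legacy -- a safe fallback while
    the migration is in flight.
    """
    return "cloud" if FLAGS.get(capability, False) else "legacy"

print(route("checkout"))   # cloud
print(route("invoicing"))  # legacy
print(route("reports"))    # legacy (not yet flagged)
```

Flipping a flag back to `False` is exactly the component-level rollback this strategy relies on: traffic returns to the legacy system without touching anything else.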
Advantages
- You maintain zero downtime throughout the migration and deliver value continuously.
- Risk stays low because changes are small and incremental.
- You can roll back individual components immediately if required, without impacting the rest of the environment.
- Modernisation happens as part of the migration rather than as a separate project.
- Helps meet Australian regulatory requirements for critical infrastructure to remain online 24/7/365.
Limitations
- It’s the most complex migration strategy, requiring specialised expertise and significant resourcing.
- Higher costs due to dual running environments.
- The longest overall migration timeline.
- Requires strong tooling and disciplined architecture practices.
- Testing becomes more complex across the hybrid environment.
- Your team needs mature DevOps capability to manage the increased coordination.
When to choose a gradual migration
Choose a gradual migration when the system must stay online 24/7 and can’t tolerate interruption. It also lets you modernise your architecture as you go, rather than waiting until after cutover. This approach works well for large, tightly coupled applications. If you prefer incremental investment, continuous value delivery and the ability to adjust course as you learn, a gradual approach keeps risk manageable.
Rollback migration approach
With a gradual migration, rollback happens at the component level. If a new service or feature doesn’t behave as expected, you can immediately redirect traffic back to the legacy component without affecting the rest of your environment. Because the old and new environments run in parallel, the impact stays contained and you avoid large-scale reversions. This gives you controlled recovery points throughout the migration and keeps your end user experience stable while you modernise in the background.
Migration execution strategy: quick decision guide
 
Big Bang
Choose a big bang migration when your workload is small, you can accept a planned downtime window and you want to move fast without paying for dual running environments. This approach is the quickest way to move when speed and cost matter more than fine-tuning.
Phased Migration
Choose a phased migration when you have a large, complex environment and want a balance of speed and agility. This approach works best when your applications vary in how critical they are to operations. It also helps you distribute your budget and reduce risk.
Gradual Migration
Choose a gradual migration when downtime is not acceptable and you want to modernise the architecture as you migrate. This approach demands strong DevOps capability and a willingness to manage higher complexity in exchange for stability and continuous operation.
Pilot First
While not a full strategy on its own, a pilot-first approach is a valuable starting point. It involves moving a small workload to the cloud as a separate project before the main migration. Whether it precedes a big bang, phased or gradual migration, a pilot gives you the chance to validate your approach and confirm everything works as intended before committing to the full move. It’s a great way to start your first major cloud migration, as a successful pilot builds confidence in the overall migration strategy.
Real-world migration scenarios: combining technical and execution strategies
Scenario 1: Lift and shift + big bang migration
This approach makes for the quickest, most straightforward cloud migration (if you get it right, that is). It’s a practical choice when you’re looking to replace ageing infrastructure before it reaches end of life. These migrations happen fast, typically in days to weeks, making them ideal when speed matters more than anything else. The risk is medium, as you’re trading safeguards for speed. Overall, it’s a solid fit for small to medium workloads where a planned downtime window isn’t a deal-breaker.
Scenario 2: Replatform + phased migration
This approach makes sense for moving large database estates that need some optimisation along the way. A typical example is migrating a big set of SQL servers into Azure SQL in structured waves. Breaking the work into these waves keeps risk low to medium and makes the process far more predictable. The timeline’s moderate, usually landing between three and six months. It’s an ideal migration strategy for enterprises that want each stage controlled, measurable and easy to monitor.
Scenario 3: Refactor + gradual migration
This migration strategy lets you modernise complex, business-critical systems without taking them offline. A common example is breaking an e-commerce platform into microservices running in public cloud. Timelines usually range from 12 to 24 months, allowing for a careful, considered approach. The risk is medium, but it’s manageable as changes are small and incremental. It’s ideal for customer-facing systems that can’t turn off, even for a split second.
Scenario 4: Legacy + pilot + phased migration
This is the right path for migrating complex systems, where compliance and control drive every decision. A typical example is an APRA-regulated mutual bank migrating its core banking platform into Australian private cloud. These migrations usually span 18 to 36 months and carry low risk due to extensive testing and validation at every stage.
Choosing the right cloud migration strategy for your organisation
Now we’ve gone through each migration approach and execution strategy, here’s the catch: it’s not usually as simple as picking one of each and going on your way. In reality, most cloud migrations combine multiple approaches and execution strategies, tailored to the organisation’s unique needs.
The right combination depends on your size, risk tolerance and timeline. And if you’re not sure where to start, we can help. With four decades of experience in large-scale IT infrastructure transformation, Interactive will ensure a simple, straightforward and secure cloud migration. Whatever your approach, our team has the expertise to guide you from start to finish.