AWS is the best platform… if you're building infrastructure
Most teams on AWS inherited its complexity. Here's why that operational burden compounds in ways you don't see until it's too late.
Here is a question that rarely gets asked directly: Who is AWS actually built for?
Not in a marketing sense. In a product sense. If you look at what AWS is, not the managed services at the edges but the core of what it gives you, it is a platform for assembling infrastructure. Raw compute, raw networking, raw storage, raw identity management. The primitives from which infrastructure products are built.
That is genuinely powerful. It is the reason Airbnb, Netflix, and Stripe could scale to planetary levels without owning a data centre. It is the reason every serious infrastructure company builds on top of it. AWS gives you the ability to construct virtually any infrastructure topology you can imagine, and then manage it, operate it, and own it indefinitely.
The key word is own.
That word carries weight that most teams do not fully reckon with when they make the decision to run on AWS. Ownership is not a one-time act. It is an ongoing commitment. Every service you provision, every policy you write, every cluster you configure, becomes part of a system your team is responsible for operating from that day forward. There is no expiry date on that responsibility. There is no point at which AWS takes over.
Understanding that is the starting point for an honest conversation about whether AWS is the right platform for a product engineering team.
What AWS is selling you
When you provision EC2 instances, configure VPCs, write IAM policies, set up Auto Scaling Groups, and wire together ECS clusters, you are not buying a hosting service. You are buying the materials to build one.
The difference matters enormously.
A hosting service absorbs operational complexity on your behalf. You describe what you want to run and the service makes it run. A platform for building infrastructure gives you the components to construct that service yourself. The construction and the operation are your problem.
AWS is the second thing. It is brilliant at being the second thing. But a remarkable number of product companies, companies whose business has nothing to do with infrastructure, have adopted it as if it were the first.
The result is predictable. You end up operating a custom-built hosting service, assembled from AWS primitives, maintained by your engineering team, carrying the full operational burden that a hosting service would otherwise absorb. You have built the thing AWS was designed to help infrastructure companies build, and now you are running it alongside the product you actually exist to ship.
The operational surface area that comes with a self-assembled AWS setup is not trivial. A reasonably complete production deployment of a Laravel application might involve an Application Load Balancer, EC2 instances or ECS containers, RDS for the database, ElastiCache for Redis, SQS for queues, S3 for file storage, CloudWatch for logs and alerting, Route 53 for DNS, ACM for certificates, and an IAM structure that ties it all together with the right permissions. Every one of those services is something your team configures, monitors, patches, and owns.
That is not a hosting service someone else built for you. That is a hosting service you built for yourself, using AWS as the construction material.
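To put a rough number on that surface area, here is a sketch of the inventory such a team ends up owning. The service names mirror the list above; the chores attached to each are illustrative examples of the recurring work the component implies, not an exhaustive audit:

```python
# Illustrative inventory of a self-assembled AWS stack for a Laravel app.
# Each entry maps a service the team provisions to examples of the
# recurring work it implies. The chore lists are illustrative, not complete.
STACK = {
    "Application Load Balancer": ["listener rules", "target group health checks"],
    "EC2 / ECS": ["AMI or image patching", "task definition versions", "scaling policy"],
    "RDS": ["minor version upgrades", "parameter groups", "backup windows"],
    "ElastiCache": ["engine upgrades", "eviction policy"],
    "SQS": ["dead-letter queues", "visibility timeouts"],
    "S3": ["bucket policies", "lifecycle rules"],
    "CloudWatch": ["alarm thresholds", "log retention"],
    "Route 53": ["DNS records"],
    "ACM": ["certificate renewal validation"],
    "IAM": ["policy audits on joiners and leavers", "access key rotation"],
}

chores = sum(len(c) for c in STACK.values())
print(f"{len(STACK)} services to own, {chores} recurring chores before day two")
```

Even this deliberately shallow tally lands at ten services and twenty standing responsibilities, and every one of them predates the first line of product code.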
The companies AWS was designed for
AWS became what it is because Amazon needed to solve infrastructure at a scale and complexity that no existing product could handle. They built primitives. Then they made those primitives available to others.
The companies that get the most from raw AWS are the ones with genuine infrastructure complexity. They are building something that demands direct control over networking topology, custom container orchestration, specialised compute configurations, or infrastructure that is itself part of their product. They have dedicated platform engineering teams whose job is infrastructure. They treat infrastructure as a core competency, not a supporting function.
These companies are not using AWS because it is the industry default. They are using it because the level of control it provides is actually necessary for what they are building. The primitives are the point.
If that describes your company, AWS is the right tool. The control is worth the cost.
Most Laravel teams are not those companies. Most are building web applications and APIs where the infrastructure requirements are entirely conventional. A web process, a database, a cache, a queue, background workers, scheduled tasks. Those requirements have been solved, thoroughly and reliably, by platforms built specifically to run them. AWS is overkill for most product teams. The control it provides is control your team will spend years managing without it ever becoming a competitive advantage.
The distinction is worth sitting with. There is a category of company for which AWS infrastructure complexity is inseparable from the product. And there is a much larger category of company for which it is entirely incidental. The second category includes the vast majority of product companies running Laravel applications in production today.
And here is the part that rarely gets said out loud: most teams on AWS did not make a deliberate choice to accept that complexity. They picked AWS because it was the industry default, because the first senior engineer on the team knew it, because the job posting said AWS experience preferred. You did not choose AWS. You inherited its complexity. The operational burden arrived incrementally, one reasonable decision at a time, until it was simply the water you swim in.
The cost of the wrong abstraction level
There is a real cost to operating at the wrong abstraction level, and it compounds in ways that are slow to become visible.
When you build your hosting layer on AWS primitives, you take on a continuous maintenance obligation. Security group rules need reviewing when you add new services. IAM policies need auditing when someone joins or leaves the team. The AMIs your EC2 instances run on need updating when vulnerabilities are patched. The Auto Scaling Group configuration that handled your traffic six months ago needs revisiting as you grow. The RDS minor-version upgrade you have been deferring will eventually become urgent. The CloudWatch alarm thresholds that made sense at your previous load profile may no longer be calibrated correctly.
None of this is exceptional. It is just the steady background cost of owning infrastructure you assembled from components. It does not announce itself as a crisis. It arrives as a permanent low-grade drain on engineering time.
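To make one of those chores concrete, here is a minimal sketch of what a security-group review actually involves, assuming rule data shaped like the response of `boto3`'s `describe_security_groups` call. The sample groups are hypothetical:

```python
# Flag ingress rules open to the entire internet on ports other than HTTP/HTTPS.
# Input mirrors the shape of boto3's describe_security_groups() response;
# the sample data below is hypothetical.
ALLOWED_PUBLIC_PORTS = {80, 443}

def risky_ingress(security_groups):
    """Return (group id, port) pairs unexpectedly exposed to 0.0.0.0/0."""
    findings = []
    for sg in security_groups:
        for rule in sg.get("IpPermissions", []):
            open_to_world = any(
                r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
            )
            port = rule.get("FromPort")
            if open_to_world and port not in ALLOWED_PUBLIC_PORTS:
                findings.append((sg["GroupId"], port))
    return findings

sample = [
    {"GroupId": "sg-web", "IpPermissions": [
        {"FromPort": 443, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ]},
    {"GroupId": "sg-db", "IpPermissions": [
        {"FromPort": 5432, "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    ]},
]

print(risky_ingress(sample))  # flags the database group, not the web group
```

The check itself is trivial. The cost is that someone on the team has to know it needs running, run it on a schedule, and interpret the findings, indefinitely.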
Behind that operational cost is a deeper one that is harder to quantify. Your senior engineers are carrying a mental model of how your infrastructure fits together. They know which deploy step updates which ECS task definition. They know why the cron job runs on that specific instance. They know how the security groups are structured and what breaks if you change them without also updating the relevant IAM policy. That knowledge is not passive. It occupies cognitive space that cannot simultaneously be occupied by harder problems in your product.
The institutional knowledge required to operate a self-assembled AWS setup does not document itself. Runbooks are written once and drift. Terraform state tells you what exists, not why decisions were made. The Slack threads where key configuration choices were debated are long buried. The understanding that actually keeps the system running lives in people, and people leave.
When a senior engineer with deep AWS context leaves a team, the knowledge transfer problem is almost always worse than it looks on paper. You cannot fully document what someone carries in their head after two years of debugging the same infrastructure. What gets written down is the surface layer. The judgment, the pattern recognition, the awareness of which changes are safe and which are not, leaves with the person.
That is a structural fragility that most teams do not account for until they are experiencing it.
The trap of sunk cost familiarity
There is a specific dynamic that keeps teams on AWS longer than the analysis would support. Call it sunk cost familiarity.
Your team has spent significant time learning how your AWS setup works. Your senior engineers have real expertise in it. The runbooks exist, even if they are imperfect. The deploy pipeline mostly works. Migrating feels expensive and disruptive, and the current setup is known, in the sense that you know where things break and roughly how to fix them.
This is a real cost and it is worth acknowledging. Migration is not free.
But the framing of migration as the expensive option and staying as the safe one gets the comparison backwards. Staying is not free either. Staying means continuing to pay the operational cost, continuing to concentrate infrastructure knowledge in a small number of people, continuing to have your senior engineers' attention split between product and infrastructure, continuing to carry the fragility that comes with a self-assembled system.
The question is not whether migration costs something. It does. The question is whether the ongoing cost of staying is higher than the one-time cost of moving, and whether the team you have today is the right team to be operating what you built three years ago.
For most product engineering teams, the honest answer is that the operational cost of self-managed AWS infrastructure is not proportionate to what it provides. They are not getting infrastructure-level competitive advantage. They are getting infrastructure-level maintenance obligations.
A different kind of platform
There is a category of product that sits between raw AWS and consumer hosting. Platforms built for specific application patterns, purpose-engineered to absorb the infrastructure layer that those patterns require.
Sevalla is one of them. It exists for the 90% of teams who should not be running AWS at all. It is built specifically for Laravel applications running in production.
A typical AWS Laravel setup requires you to configure and maintain the full stack described earlier: the load balancer, compute, database, cache, queues, file storage, logging and alerting, DNS, certificates, and the IAM policies that connect all of it. With Sevalla, you do not provision or manage any of those services. Sevalla handles the infrastructure. Everything a production Laravel application needs is handled at the platform level: a managed database, Redis, queue workers, a scheduler, environment management, zero-downtime deployments. You are not assembling them from primitives. You are describing what your application does and the platform runs it.
The operational surface area is your application. Not your infrastructure.
That distinction changes what your team owns. There are no IAM policies to write or audit. No security groups to configure. No ECS task definitions to version. No container image registry to manage. No on-call rotation for infrastructure incidents, because those incidents are not yours to respond to. When something goes wrong, it is almost certainly in your application code, which means the person best equipped to diagnose it is a Laravel developer, not someone who has memorised the structure of your AWS account.
The knowledge your team needs to carry shrinks to the knowledge that was always relevant: how your application works.
The question worth asking
If your team is running on AWS right now, the question is not whether your setup works. It probably does work, most of the time. The question is whether the work required to keep it working is work your team should be doing.
AWS gives you enormous control. Control over networking, over compute configuration, over how every layer of your infrastructure fits together. That control is genuinely valuable if your product requires it.
For most Laravel teams, it does not. The control you have purchased is control over things that were never part of your competitive advantage. You are maintaining a hosting service when you could be buying one, paying for that maintenance in engineering hours that could be building the product instead.
AWS is a platform for companies building infrastructure products. If that is not what you are building, you are using the most powerful and most operationally demanding tool available for a job it was not designed for.
Sevalla was designed for the job you actually have. It is worth understanding what that difference looks like in practice.