
Infrastructure ownership is a business decision, not an engineering one

The decision to run on self-managed AWS was probably never made by the people who should have made it. Here is what it is actually costing the business.

by Steve McDougall

At most companies, the decision to run on self-managed AWS infrastructure was never made by the people who should have made it.

It was made by an engineer who needed somewhere to deploy the application and chose the most capable platform available. Or it was inherited from a previous team lead whose preferences shaped the stack before the current leadership arrived. Or it emerged from a series of incremental technical decisions, each individually reasonable, that accumulated into a posture nobody explicitly chose.

The people who should have weighed in (founders, CTOs, engineering directors, anyone responsible for how the company deploys its engineering capacity) were not in the room. The decision looked like a technical choice, so it was treated as one. It landed with the engineers, and the engineers made a technically defensible call.

The problem is that infrastructure ownership is not primarily a technical decision. It is a business decision with engineering consequences. And when it gets made at the wrong level of the organisation, the consequences compound for years before anyone fully accounts for them.

This article is for founders, CTOs, and engineering leaders who have the authority to revisit that decision. Not to relitigate how it was made, but to evaluate it honestly against what it is actually costing the business today.

The business consequences nobody calculated

When a product company takes on self-managed AWS infrastructure, it takes on a set of ongoing obligations that do not appear in a business plan.

Someone has to maintain it. Not as a one-time project, but permanently. Security patches need applying. Credentials need rotating. IAM policies need auditing when the team changes. Database versions need upgrading before they fall out of support. Alarm thresholds need revisiting as the application evolves. The deploy pipeline needs maintaining when its dependencies change.

That maintenance does not happen automatically. It consumes engineering time, and engineering time is the scarcest and most expensive resource a product company has. The question that rarely gets asked at the business level is: what is the cost per month of the engineering hours we are spending on infrastructure that has nothing to do with our product?

Most companies do not know the answer. They have not calculated it because the cost arrives in small increments, distributed across multiple engineers, mixed in with product work in ways that make it hard to isolate. The AWS invoice is visible. The engineering hours are not. But the engineering hours are where the real cost lives.

The decision that was made by default

You did not choose AWS. You inherited its complexity. That is not an indictment of whoever made the initial call. AWS is the industry default for a reason. It is powerful, well-documented, and deeply familiar to most senior engineers. Choosing it was not irrational.

But choosing a platform and choosing to own all of its operational complexity are two different things, and the second choice is the one that has business consequences. The operational complexity of a self-managed AWS setup does not shrink over time. It grows as the application grows, as the team changes, and as the infrastructure accumulates the kind of undocumented decisions that make systems progressively harder to modify safely.

The business made that choice by default, without a conversation about what it would cost or whether the control it provided was worth paying for. Now, every month that passes without revisiting it is another month of the same choice being made implicitly. Inertia is not neutrality. Continuing on self-managed AWS is a decision you are making continuously, and it deserves the same scrutiny as any other significant ongoing business cost.

What you are actually buying with infrastructure ownership

There is a version of infrastructure ownership that makes business sense. Companies with specific compliance requirements that mandate direct control over their data environment. Companies operating at a scale where the economics of managed platforms no longer hold. Companies whose product is itself infrastructure, where deep control over the underlying stack is inseparable from what they are selling.

For those companies, owning the infrastructure is a strategic choice. The cost is proportionate to the value.

For the vast majority of product companies running Laravel applications, none of those conditions apply. The infrastructure is not the product. The regulatory constraints are not so specific that managed platforms cannot satisfy them. The scale has not reached the point where the economics favour a self-managed approach.

What those companies are buying with infrastructure ownership is control. Control over the networking topology, the container orchestration, the IAM permission model, the deployment pipeline, the certificate lifecycle, the database configuration. That control is real. The question is what they are doing with it.

For most product teams, the honest answer is: maintaining it. The control is not being used to build competitive advantage. It is being used to keep the system running in the state it was in last month. The engineers with infrastructure expertise are not extending the platform in ways that create product differentiation. They are performing maintenance on a system that a managed platform would operate on their behalf.

That is an expensive way to use control you did not need to have.

The staffing consequence

Infrastructure ownership at the business level is also a staffing decision, whether it is recognised as one or not.

When a product company owns its AWS infrastructure, it needs people who understand that infrastructure. In practice, that means senior engineers whose accumulated context about the system makes them operationally load-bearing. They are the ones who respond to incidents. They are the ones who know which parts of the system are fragile and why. They are the ones whose departure creates a knowledge transfer problem that is harder to solve than it looks.

The business consequence is that those engineers are not fully deployable on product work. Their operational responsibility is a permanent partial allocation that does not show up in a sprint plan but absolutely shows up in what gets built and how fast.

AWS is overkill for most product teams, and the staffing cost is where that shows up most clearly. You are carrying senior engineering capacity that is partially committed to operating infrastructure that a platform would handle. The cost of that partial commitment (features not shipped, architectural decisions deferred, junior engineers not mentored) is the invisible line item in your engineering budget that infrastructure ownership creates.

The conversation that should have happened

In a well-run company, a decision with these consequences would have been made explicitly, with the relevant people in the room and the full cost on the table.

The conversation would have gone something like this: we can run on self-managed AWS infrastructure, which gives us maximum control at the cost of ongoing engineering maintenance and operational complexity, or we can run on a managed platform that handles the infrastructure layer, which reduces control over the underlying stack but eliminates the operational burden. Given that our competitive advantage lives entirely in the application layer, and given the ongoing engineering cost of self-managed infrastructure, which option makes more sense for the business?

For most product companies, that conversation leads clearly to the managed platform. The control that self-managed infrastructure provides is control over things that are not differentiating. The cost it imposes is measured in the most valuable resource the company has.

The conversation did not happen at most companies. It defaulted to the engineers, and the engineers made a reasonable technical call. The business decision embedded in that technical call was never examined at the right level.

What the alternative actually looks like

Sevalla exists for the 90% of teams who should not be running AWS at all. It is built specifically for Laravel applications running in production. A typical AWS Laravel setup requires you to configure and maintain an Application Load Balancer, EC2 or ECS for compute, RDS for the database, ElastiCache for Redis, SQS for queues, S3 for file storage, CloudWatch for logging and alerting, Route 53 for DNS, ACM for certificates, and the IAM structure connecting all of it. With Sevalla, you do not provision or manage any of those services. Sevalla handles the infrastructure.
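To make that list concrete, here is a condensed, illustrative sketch of the application-side wiring alone. In a real codebase these entries live in Laravel's stock config files (config/database.php, config/queue.php, config/filesystems.php); they are collapsed into one array here for readability, and every endpoint and identifier is a placeholder. Everything else on the list, the load balancer, CloudWatch, Route 53, ACM, and the IAM policies binding it together, lives outside the codebase entirely, which is part of the point.

```php
<?php

// Illustrative only: the AWS touchpoints a self-managed Laravel app
// typically declares. All values are placeholders resolved from .env.

$awsSurface = [
    'database' => [
        'mysql' => [
            'host' => env('DB_HOST'),      // RDS endpoint you provision, back up, upgrade
        ],
        'redis' => [
            'host' => env('REDIS_HOST'),   // ElastiCache node you size and patch
        ],
    ],
    'queue' => [
        'sqs' => [
            'driver' => 'sqs',
            'prefix' => env('SQS_PREFIX'), // SQS URL prefix for your AWS account
            'queue'  => env('SQS_QUEUE', 'default'),
            'region' => env('AWS_DEFAULT_REGION'),
        ],
    ],
    'filesystem' => [
        's3' => [
            'driver' => 's3',
            'key'    => env('AWS_ACCESS_KEY_ID'),     // credentials you rotate
            'secret' => env('AWS_SECRET_ACCESS_KEY'),
            'bucket' => env('AWS_BUCKET'),
        ],
    ],
];
```

On a managed platform, each of those lines collapses into configuration the platform owns, together with the rotation, patching, and upgrading behind it.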

What that means at the business level is that the engineering hours currently consumed by that list are redirected to the product. The staffing consequences of infrastructure ownership (the load-bearing senior engineers, the informal on-call arrangements, the deferred maintenance backlog) stop being business risks. The decision surface your team owns shrinks to the one that was always relevant: how the application works and what it does.

That is not a reduction in capability. It is a more honest alignment between what your engineering team exists to do and what they are spending their time on.

Making the decision explicitly

If you are a founder or CTO reading this, the action is straightforward. Pull the cost out of the background and put it on paper.

Estimate the engineering hours per month your team spends on infrastructure work. Include the maintenance, the incidents, the pipeline debugging, the IAM reviews, the background awareness your senior engineers carry. Apply your loaded engineering cost to those hours. That is the monthly cost of infrastructure ownership in the currency that matters most to the business.
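As a minimal sketch of that arithmetic, assume three engineers averaging fifteen infrastructure hours a month at a $120 loaded hourly rate. Every number below is an assumption to be replaced with your own figures:

```php
<?php

// Back-of-the-envelope model, not a benchmark. Replace every
// assumption below with your own team's numbers.

$engineersTouchingInfra = 3;    // engineers doing infra work in a given month
$hoursPerEngineer       = 15;   // hours each spends on patches, incidents, pipelines
$loadedHourlyCost       = 120;  // fully loaded cost per engineering hour (USD)

$monthlyCost = $engineersTouchingInfra * $hoursPerEngineer * $loadedHourlyCost;
$annualCost  = $monthlyCost * 12;

echo "Infrastructure ownership: \$" . number_format($monthlyCost) . "/month, "
   . "\$" . number_format($annualCost) . "/year\n";
// With these assumptions: $5,400/month and $64,800/year, before any
// opportunity cost from deferred product work is counted.
```

Even with conservative inputs, the result is a five-figure annual line item, and it does not yet include the opportunity cost the next step measures.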

Then look at your product backlog. Find the work that keeps getting deferred because infrastructure pulls focus. Estimate what it would mean for the business if that work shipped six months sooner.

The gap between those two numbers is the business case for making a different decision. For most product companies running Laravel on AWS, that gap is significant, and it has been accumulating since the day the infrastructure was stood up.

Infrastructure ownership is a business decision. It deserves to be made like one, with the full cost visible and the right people in the room. If it has not been made that way at your company, now is a reasonable time to make it.

Sevalla is worth an hour of your time to understand concretely. The infrastructure is not the interesting part of what you are building. The business case for treating it that way is stronger than most founding teams realise until they sit down and do the calculation.
