
What production ready really means

Ask five engineers what "production ready" means and you'll get five different answers. Here's the definition that actually matters for Laravel teams.

by Steve McDougall

Ask five engineers what "production ready" means and you'll get five different answers. Most of them will be wrong in the same way. One will talk about uptime SLAs. Another will mention load balancers and auto-scaling. A third will pull up a checklist involving Docker, multi-AZ databases, and a Kubernetes deployment. The conversation spirals into infrastructure architecture before anyone has asked the more important question: does the application itself actually behave correctly under real conditions?

This is for teams already running production Laravel applications where downtime affects real users and revenue. Not a checklist of infrastructure components to provision, but a genuine definition of what production ready requires and, just as importantly, what it does not.

The definition that actually matters

Production ready means the application behaves correctly and observably under real conditions, and the team can operate it without heroics.

That definition has no opinion on whether you are running containers. It does not care how many availability zones your database spans. It does not require Kubernetes, no matter how often teams assume it does. It requires that your application handles failure gracefully, that your team knows when something is wrong before users do, and that any developer on the team can reason about what is happening without needing a guided tour from the person who set it up.

Most teams do not start with this definition. They start with what production infrastructure looks like at companies much larger than them, and they work backwards. The result is engineering teams spending weeks building deployment pipelines and configuring cloud services before they have asked whether any of it addresses the actual failure modes their application is likely to encounter.

The problem with borrowing someone else's production standards

There is a pattern I keep seeing. A team reaches the point where they need to take their Laravel application seriously, someone looks at how a large, well-regarded company runs their infrastructure, and the team starts building toward that model.

It sounds reasonable. It's usually wrong. The reasoning goes: if it works for a company operating at massive scale, it must be the right starting point, and it is better to build it properly now than to retrofit it later. Neither assumption holds for most teams.

What gets missed is that those companies built that infrastructure because they needed it. Usually after they were already in production with real traffic. Usually with dedicated platform engineers whose entire job is operating it. That infrastructure was a response to scale, not a requirement for reliability.

When a team at ten thousand monthly active users builds the same infrastructure as a company at ten million, they are not being more rigorous. They are taking on operational complexity they don't need and won't maintain properly. And while they are maintaining it, they are not working on the application-layer reliability that would actually protect their users.

What the application layer actually requires

The things that make a Laravel application genuinely production ready live almost entirely in the application code and its immediate configuration, not in the surrounding infrastructure.

A health check that tells the truth is one of the most valuable things you can add to a Laravel application, and one of the things teams most often get wrong. A route that returns {"status": "ok"} unconditionally is not a health check. It gives you a false sense of security right up until something breaks. A useful health check verifies that the application can actually reach its critical dependencies:

// routes/web.php
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Route;

Route::get('/health', function () {
    $checks = [];

    // Verify the database connection by forcing a PDO handshake.
    try {
        DB::connection()->getPdo();
        $checks['database'] = 'ok';
    } catch (\Throwable $e) {
        $checks['database'] = 'error';
    }

    // Verify the cache backend responds to a read.
    try {
        Cache::store()->has('health-check');
        $checks['cache'] = 'ok';
    } catch (\Throwable $e) {
        $checks['cache'] = 'error';
    }

    $allHealthy = collect($checks)->every(fn ($status) => $status === 'ok');

    // A 503 tells load balancers and uptime monitors to treat the app as down.
    return response()->json([
        'status' => $allHealthy ? 'ok' : 'degraded',
        'checks' => $checks,
    ], $allHealthy ? 200 : 503);
});

Queue reliability is the other area where application-layer decisions matter far more than infrastructure decisions. Queued jobs will fail. That is not a production problem. Failing jobs that disappear silently are a production problem. Your application needs a failed_jobs table, monitored by something that alerts when the count crosses a threshold. It needs retry logic appropriate to each job type. It needs the team to have thought intentionally about what happens when a job exhausts its retries, not discovered it under pressure.
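The retry side of that thinking lives directly on the job class. Here is a minimal sketch using a hypothetical SendInvoiceEmail job; the $tries property and the backoff() and failed() hooks are standard Laravel job features, but the specific values and the logging decision are illustrative assumptions, not a prescription:

```php
<?php

namespace App\Jobs;

use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Support\Facades\Log;
use Throwable;

// Hypothetical job, used here only to illustrate retry configuration.
class SendInvoiceEmail implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable;

    // Give transient failures (a mail provider blip) a few chances.
    public int $tries = 3;

    // Wait progressively longer between attempts: 10s, 60s, 5min.
    public function backoff(): array
    {
        return [10, 60, 300];
    }

    public function handle(): void
    {
        // ... send the email ...
    }

    // Runs once retries are exhausted, just before the job lands in
    // failed_jobs. This is where the intentional decision lives:
    // alert, compensate, or consciously ignore.
    public function failed(Throwable $exception): void
    {
        Log::error('Invoice email permanently failed', [
            'exception' => $exception->getMessage(),
        ]);
    }
}
```

The failed() hook handles the per-job decision; the alerting side still needs something external watching the failed_jobs count, whether that is a scheduled count query or your monitoring platform.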

Zero-downtime deployments are an application concern as much as an infrastructure one. The infrastructure can handle the routing cutover, but the application has to be written to survive it. That means backwards-compatible migrations, because old and new code will run simultaneously during the transition. It means not making assumptions about schema state that has not yet been applied. A platform can give you zero-downtime deployments, but only if the application code cooperates.
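In practice, backwards compatibility means splitting destructive changes across deploys. A sketch of the additive first step, assuming a hypothetical users.username column backfilled from an existing handle column (both names are invented for illustration); a later deploy, once no old code remains, would drop the original column:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\DB;
use Illuminate\Support\Facades\Schema;

// Deploy 1: additive only. Old code that ignores the new column
// keeps working while both versions run during the cutover.
return new class extends Migration
{
    public function up(): void
    {
        Schema::table('users', function (Blueprint $table) {
            // Nullable, so inserts from the old code do not fail.
            $table->string('username')->nullable();
        });

        // Backfill from the existing (hypothetical) column.
        DB::table('users')
            ->whereNull('username')
            ->update(['username' => DB::raw('handle')]);
    }

    public function down(): void
    {
        Schema::table('users', function (Blueprint $table) {
            $table->dropColumn('username');
        });
    }
};
```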

Structured logging matters because production incidents get diagnosed through logs, and logs that are hard to find or read extend incidents. Setting LOG_CHANNEL=stderr is a single environment variable change that routes your Laravel logs to standard output, where your platform captures and forwards them. No log files to rotate. No disk space to monitor. Logs that are immediately queryable when something goes wrong.
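The reason a single variable is enough is that Laravel's default config/logging.php already ships a stderr channel. The definition looks roughly like this in recent Laravel versions, though the exact shape is worth verifying against your own config/logging.php:

```php
// config/logging.php (excerpt of the default 'channels' array)
'stderr' => [
    'driver' => 'monolog',
    'level' => env('LOG_LEVEL', 'debug'),
    'handler' => Monolog\Handler\StreamHandler::class,
    // Set LOG_STDERR_FORMATTER to a formatter class (e.g. Monolog's
    // JsonFormatter) to emit structured JSON instead of plain lines.
    'formatter' => env('LOG_STDERR_FORMATTER'),
    'with' => [
        'stream' => 'php://stderr',
    ],
],
```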

Environment parity is the quiet one that causes the most surprises. If your staging environment is running a different PHP version, different queue driver, or different cache backend than production, you are not testing what you think you are testing. Every divergence between environments is a potential deployment surprise.

None of these things require sophisticated infrastructure to implement. They require intentional application design.

Where the attention goes instead

The reason these application-layer concerns get skipped is that infrastructure work is more visible. Configuring a load balancer produces something you can point to. Setting up a multi-AZ database produces an architecture diagram you can show in a meeting. Writing a health check that actually tests your cache connection produces the absence of a future incident, which is much harder to attribute to anything.

Infrastructure work also feels like the serious, professional way to approach production. It isn't. There is a narrative that says complex infrastructure equals rigorous engineering. That narrative is expensive and slows teams down. Teams that invest heavily in infrastructure sophistication before sorting out application-layer reliability end up with a system that looks production ready from the outside and fails in application-specific ways that the infrastructure cannot see.

A Laravel application with a genuine health check, observable queue processing, backwards-compatible migrations, and structured logging is more production ready than one running on Kubernetes without those things. The platform does not confer readiness. The application does.

What this looks like in practice

Here is what a genuinely production-ready Laravel 12 application on PHP 8.5 looks like when deployed on Sevalla:

app:
  name: my-laravel-app
  runtime: php
  version: "8.5"
 
build:
  buildpacks: true
  run:
    - composer install --no-dev --optimize-autoloader
    - php artisan config:cache
    - php artisan route:cache
    - php artisan view:cache
    - php artisan migrate --force
 
workers:
  - name: queue
    command: php artisan queue:work --sleep=3 --tries=3 --max-time=3600
 
crons:
  - name: scheduler
    schedule: "* * * * *"
    command: php artisan schedule:run
 
environment:
  - APP_ENV=production
  - LOG_CHANNEL=stderr
  - QUEUE_CONNECTION=redis
  - CACHE_STORE=redis

The migration runs before the new application version receives traffic. The queue worker restarts every hour via --max-time=3600, which prevents the class of bugs that come from long-running workers accumulating stale state. The scheduler fires once, on a single managed cron, rather than on every instance you are running. Logs go to stderr. The platform captures them.

What is not in that configuration is as important as what is. There is no load balancer to configure, no TLS certificate to provision, no IAM policy to write, no container networking to debug. Sevalla handles the infrastructure, deployment, scaling, and operational overhead. The configuration represents the application's actual operational requirements. That is the right level of abstraction for a product engineering team. No Kubernetes. No cloud service sprawl. No infrastructure team required.

The question worth asking

Before your next infrastructure review, try asking a different question than the one most teams ask. Instead of "is our infrastructure sophisticated enough to be production ready," ask this: if something goes wrong tonight, which developer on the team gets woken up, what tools do they have to diagnose it, and how long before they know what is wrong?

If the honest answer involves a specific person who built the infrastructure and is the only one who knows where to look, the system is not as production ready as it appears. Production ready means any developer on the team can diagnose common incidents using tools and information that are already available. It means alerts fire before users notice. It means the failure modes are known and handled.

Those things come from application design and operational discipline. They are not produced by infrastructure complexity, and they are not prevented by infrastructure simplicity.

For most Laravel teams, the cleaner path to production ready is to get the application-layer work right and deploy on a platform that handles everything else. Sevalla is built for exactly that. The managed infrastructure, the Git-based deployments, the built-in queue workers and cron management, all of it exists so that production ready is something your application achieves, not something your infrastructure performs.
