Continuous delivery without the CI/CD plumbing
Most Laravel teams end up with a 300-line GitHub Actions workflow nobody fully understands. Here's why that complexity is costing you more than you think, and what the alternative looks like.
There is a file sitting in most Laravel repositories that nobody is proud of. It lives in .github/workflows/ and it is somewhere between 200 and 400 lines of YAML. It was written incrementally, over months, by different people, each of whom added what they needed and moved on. Nobody owns it. Nobody fully understands it. And when it breaks on a Friday afternoon, the entire team quietly hopes it is someone else's problem to fix.
That file is your deployment pipeline. And it is costing you more than you think in time, risk, and engineering focus.
This post is for teams already running production applications where deployments affect real users, not side projects.
## How the pipeline got this way
The pipeline did not start out complicated. Most of them begin the same way: a simple workflow that runs tests on pull requests. Five, maybe ten lines. Clean, readable, understood by everyone.
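That starting point looks something like this — a sketch, with the PHP setup step and versions as illustrative choices:

```yaml
# A minimal starting workflow: run the test suite on every pull request.
name: Tests

on: pull_request

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: shivammathur/setup-php@v2
        with:
          php-version: "8.3"
      - run: composer install --no-interaction --prefer-dist
      - run: php artisan test
```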
Then comes the first deployment step. You need to build the application and push it somewhere. Now you are configuring AWS credentials as repository secrets. You add an IAM user, generate access keys, store them in GitHub. Simple enough.
Then you need the build to warm the caches before deploying. So you add php artisan config:cache, php artisan route:cache, php artisan view:cache. Reasonable. Then you realise deployments need to run database migrations, but only after the new code is live, and only if the tests passed, and only on the main branch, not on staging, unless the branch is named release/*, in which case...
And now you have conditions. YAML conditionals are where pipelines start to break down.
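A single step ends up carrying all of that branching. Here is a sketch of the kind of `if:` expression that accumulates — the step, branch names, and exact conditions are illustrative:

```yaml
# Migrations: only after tests pass, only on main, never on staging,
# unless it is a release/* branch...
- name: Run migrations
  if: |
    needs.test.result == 'success' &&
    (github.ref == 'refs/heads/main' ||
     startsWith(github.ref, 'refs/heads/release/'))
  run: php artisan migrate --force
```

Every clause in that expression is a rule somebody once needed, and none of it is visible from the step name alone.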
You add environment-specific secrets. Then you need a step that only runs on rollback. Then someone adds Slack notifications for failed deploys, which requires another secret and a different action from the marketplace that was pinned to a specific SHA because unpinned actions are a security risk. Then a new team member joins and asks how the pipeline works and you realise you cannot explain it in under ten minutes.
The file grew one reasonable decision at a time. That's how most teams end up with infrastructure they don't fully understand. The result is infrastructure code that your application developers did not sign up to maintain.
## The real problem with pipeline complexity
The obvious problem is that complex pipelines break in complex ways. A failed deploy can stem from an expired AWS credential, a changed IAM policy, a misconfigured environment variable, a Docker layer cache miss, an ECS health check that times out because the new container takes four seconds longer to boot than the health check allows.
Each of those failure modes lives in a different part of the stack. Debugging them requires context that most application developers do not have and should not need to acquire.
But there is a less obvious problem that I think matters more: complex pipelines create deployment anxiety.
When deploying is simple and predictable, developers deploy often. Small changes, fast feedback, low risk per deploy. When deploying is complicated and occasionally mysterious, developers batch changes together to reduce the number of deploys. Larger batches mean larger diffs. Larger diffs mean harder reviews and harder rollbacks. The blast radius of any individual deploy grows as the deployment frequency drops.
This is the opposite of what continuous delivery is supposed to achieve. The pipeline meant to make delivery safer ends up making it riskier, because the complexity of the pipeline itself changes developer behaviour.
## What your pipeline is actually doing
Let me make this concrete. Here is a representative GitHub Actions workflow for a Laravel application deploying to AWS ECS. This is not a worst-case example; it is roughly what a thoughtful team ends up with after a year of iteration, and it is what "standard" looks like when you run your own CI/CD on AWS.
```yaml
name: Deploy to Production

on:
  push:
    branches: [main]

env:
  AWS_REGION: us-east-1
  ECR_REPOSITORY: my-laravel-app
  ECS_SERVICE: my-laravel-service
  ECS_CLUSTER: my-cluster
  CONTAINER_NAME: app

jobs:
  test:
    runs-on: ubuntu-latest
    services:
      mysql:
        image: mysql:8.0
        env:
          MYSQL_ROOT_PASSWORD: password
          MYSQL_DATABASE: testing
        ports:
          - 3306:3306
        options: >-
          --health-cmd="mysqladmin ping"
          --health-interval=10s
          --health-timeout=5s
          --health-retries=3
    steps:
      - uses: actions/checkout@v4
      - uses: shivammathur/setup-php@v2
        with:
          php-version: "8.5"
          extensions: mbstring, pdo_mysql, redis
      - name: Install dependencies
        run: composer install --no-interaction --prefer-dist
      - name: Prepare environment
        run: |
          cp .env.testing .env
          php artisan key:generate
      - name: Run migrations
        run: php artisan migrate --force
      - name: Run tests
        run: php artisan test

  build-and-deploy:
    needs: test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v4
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v2
      - name: Build, tag, and push image to ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build \
            --build-arg APP_ENV=production \
            --build-arg COMPOSER_FLAGS="--no-dev --optimize-autoloader" \
            -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "image=$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG" >> $GITHUB_OUTPUT
      - name: Download task definition
        run: |
          aws ecs describe-task-definition \
            --task-definition my-laravel-task \
            --query taskDefinition > task-definition.json
      - name: Update ECS task definition with new image
        id: task-def
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: ${{ env.CONTAINER_NAME }}
          image: ${{ steps.build-image.outputs.image }}
      - name: Deploy to ECS
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: ${{ env.ECS_SERVICE }}
          cluster: ${{ env.ECS_CLUSTER }}
          wait-for-service-stability: true
      - name: Run migrations
        run: |
          aws ecs run-task \
            --cluster ${{ env.ECS_CLUSTER }} \
            --task-definition my-laravel-migrate \
            --launch-type FARGATE \
            --network-configuration "awsvpcConfiguration={subnets=[subnet-xxx],securityGroups=[sg-xxx],assignPublicIp=ENABLED}" \
            --overrides '{"containerOverrides":[{"name":"app","command":["php","artisan","migrate","--force"]}]}'
```

Count the moving parts. This is what teams are expected to own just to deploy code. Two IAM credentials stored as secrets. An ECR repository. A Docker build. An ECS cluster, service, and task definition. A separate task definition for running migrations. Subnet IDs and security group IDs hardcoded into a shell command buried in a YAML file.
When this works, it works invisibly. When it breaks, and it will break, you need to know all of it.
## The alternative removes the problem
Here is the same application deployed on Sevalla, without managing the infrastructure yourself. This is the entire deployment configuration:
```yaml
app:
  name: my-laravel-app
  runtime: php
  version: "8.5"
build:
  buildpacks: true
  run:
    - composer install --no-dev --optimize-autoloader
    - php artisan config:cache
    - php artisan route:cache
    - php artisan view:cache
workers:
  - name: queue
    command: php artisan queue:work --sleep=3 --tries=3
crons:
  - name: scheduler
    schedule: "* * * * *"
    command: php artisan schedule:run
environment:
  - APP_ENV=production
  - LOG_CHANNEL=stderr
```

You push to Git. That's the deployment. Sevalla runs your build steps, deploys the application, manages TLS, handles routing, runs your queue worker, and executes your scheduler. Migrations run as part of your build process or as a release command. There are no AWS credentials to rotate, no IAM policies to audit, no Docker image registry to manage, no ECS task definitions to version.
The only thing left to debug is your application. When something goes wrong, a Laravel developer can diagnose it without needing to understand container orchestration.
That is not a toy setup. It is a more honest match between what a product engineering team needs and what they should be responsible for operating. This is what it looks like when infrastructure stops being your problem.
## The seconds-to-understand test
Here is a test I think is worth applying to any piece of infrastructure your team owns: how long does it take a developer who did not write it to understand what it does and why?
For a 300-line GitHub Actions workflow with multiple AWS service dependencies, the honest answer is probably an hour. Maybe more, depending on how familiar they are with ECS specifically.
For the Sevalla config above, it is about sixty seconds. The build steps run during deployment. The queue worker keeps running. The scheduler fires every minute. Environment variables go in the dashboard. Done.
That difference compounds across every new hire, every incident, every on-call rotation. The cognitive overhead of owning a complex pipeline is not a one-time cost. It is a recurring one, paid every time someone needs to touch it.
## What you get back
The teams I have seen move away from self-managed CI/CD pipelines do not primarily talk about the time saved setting things up. They talk about the change in how their developers relate to deploying.
When deployment is boring, developers stop thinking about it as a risky event. They deploy smaller changes more often. They get feedback faster. They catch regressions earlier. The feedback loop that makes continuous delivery valuable actually closes.
The engineering lead stops being the person who gets paged when the pipeline fails at 6pm. The senior developer who held all the AWS context in their head can redirect that attention to harder problems in the application. The junior developer who was quietly afraid to trigger a deploy because they did not understand what would happen can now just push to main and watch it work.
These are not small gains. They are the difference between a team that ships with confidence and one that treats every deployment as a calculated risk.
## A practical way to think about this
If you are running a self-managed pipeline right now, the question is not whether it works. It probably does work, most of the time. The question is what it costs to keep it working and whether that cost is justified by what you are getting in return.
Start by counting the secrets in your repository. Every secret is a rotation risk and an audit obligation. Then look at the last five pipeline failures and how long each one took to diagnose. Then ask your developers, honestly, whether they think about the pipeline when they are about to push.
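Counting those secrets does not require anything fancy. A rough audit, assuming your workflows live in `.github/workflows/`:

```shell
# Count the distinct secrets referenced by your workflow files.
# Each one is a credential to rotate and an audit obligation.
# (stderr is silenced so this stays quiet if the directory is missing)
grep -rhoE 'secrets\.[A-Za-z0-9_]+' .github/workflows/ 2>/dev/null \
  | sort -u \
  | wc -l
```

If the number surprises you, the pipeline has probably grown past what anyone on the team is tracking.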
The answers will tell you whether the complexity you are carrying is proportionate to the control it gives you.
For most teams, the math does not hold up. The pipeline is complex because infrastructure pipelines accumulate complexity, not because the complexity is earning its keep. The control it provides is control over things that were never part of your competitive advantage.
Sevalla is built for teams that no longer want to manage this complexity. Git-based deployments, managed infrastructure, no pipeline plumbing to maintain. The interesting part of your deployment is your application code. That is what should be getting your attention.