Building AI-powered applications in Laravel

This guide walks through prompt organization, structured outputs, and AI-discoverable capabilities using Prism and Laravel MCP.

by Steve McDougall

I’ve been exploring AI integration patterns for Laravel applications and want to share practical approaches for building maintainable, provider-agnostic AI features.

In this article, we’ll build a simple weather intelligence system that demonstrates the core concepts behind production-ready, AI-powered Laravel applications.

The goal isn’t just to get something working — it’s to establish patterns that continue to hold up as AI features grow in scope and complexity.

The problem with direct LLM integration

Before building, it helps to understand why this matters. The most straightforward way to add AI to a Laravel application is to make direct HTTP calls to an LLM API from your controllers. For prototypes or small features, that approach can work just fine.

The challenges appear as your application grows:

  • You want to test a different model, which means updating code in multiple places.
  • You need structured outputs for reliability, so you start parsing JSON strings and handling edge cases.
  • You want to expose AI capabilities to other agents, which leads to building custom APIs on top of APIs.

Each new requirement compounds the complexity. It quickly becomes clear that you need proper abstractions from the start.

That’s where Prism PHP and Laravel MCP come in.

Prism provides a clean, consistent interface across LLM providers, whether you're working with Ollama, Anthropic, OpenAI, or others, making it possible to switch models with minimal changes.

Laravel MCP (Model Context Protocol) makes it possible to expose AI capabilities as structured tools that other AI agents can discover and use. Think of it as an API designed specifically for AI consumption.

Let’s build something real.

Setting up your environment

Start with a fresh Laravel application. This assumes PHP 8.4+ and Composer are already installed:

laravel new ai-weather-app
cd ai-weather-app

Install the dependencies:

composer require prism-php/prism
composer require laravel/mcp

Why Prism specifically? After evaluating several options, Prism stands out because it handles provider-specific differences while giving you a Laravel-native experience. You're not learning a new paradigm. You're using familiar patterns.

Configuring Ollama

For this tutorial, we're using Ollama. It runs locally, costs nothing to experiment with, and performs well for most tasks. Plus, it keeps us provider-neutral.

Install Ollama from ollama.ai, then pull a model:

ollama pull llama3.2

In this example, llama3.2 is used because it’s fast and capable for this type of workload, but any supported model can be substituted.

Publish Prism’s configuration:

php artisan vendor:publish --tag=prism-config

In config/prism.php, verify the Ollama configuration:

'providers' => [
    'ollama' => [
        'url' => env('OLLAMA_URL', 'http://localhost:11434'),
    ],
    // ... other providers
],

Add to your .env:

OLLAMA_URL=http://localhost:11434

Organizing prompts as files

One pattern that significantly improves maintainability is keeping prompts in dedicated files instead of embedding them as inline strings throughout the codebase.

Prompts evolve over time. You may want to A/B test variations, involve non-developers in refining tone or behavior, or track changes through version control. Inline strings make this difficult; file-based prompts make it straightforward.

Create the directory structure:

mkdir -p resources/prompts/system
mkdir -p resources/prompts/user

Create resources/prompts/system/weather_analyst.txt:

You are an expert meteorological analyst with a gift for explaining weather patterns in clear, practical terms.

Your role is to:
- Interpret weather data accurately
- Provide context about what the conditions mean for daily activities
- Offer insights about trends or unusual patterns
- Communicate in a friendly, accessible tone

Always prioritize accuracy while remaining conversational.

The specificity here is intentional. Generic prompts like "you are a helpful assistant" rarely produce focused results. Give your AI a clear role.

Create resources/prompts/user/analyze_weather.blade.php:

I need you to analyze the weather conditions for {{ $city }}.

Current conditions:
- Temperature: {{ $temperature }}°C
- Conditions: {{ $conditions }}
- Humidity: {{ $humidity }}%
- Wind Speed: {{ $wind_speed }} km/h

Provide a practical summary that helps someone plan their day. What should they know about these conditions?

Using Blade templates for user prompts gives you the full power of Laravel's templating engine. Variables, conditionals, loops. Everything you already know.
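
Because these are ordinary Blade templates, prompts can adapt to their inputs. As a quick illustration, here's a hypothetical variant that only calls out wind when it's strong; the @if block and its threshold are illustrative additions, not part of the prompt above:

I need you to analyze the weather conditions for {{ $city }}.

Current conditions:
- Temperature: {{ $temperature }}°C
- Conditions: {{ $conditions }}
@if ($wind_speed > 30)
- Warning: winds above 30 km/h may disrupt outdoor plans
@endif

Provide a practical summary that helps someone plan their day.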

Building your first AI service

Let's create a service to handle the AI interaction. Keeping this logic separate from your controllers is important for testing and reusability.

php artisan make:class Services/WeatherAnalysisService

In app/Services/WeatherAnalysisService.php:

<?php

namespace App\Services;

use Prism\Prism\Enums\Provider;
use Prism\Prism\Prism;

class WeatherAnalysisService
{
  public function analyzeConditions(array $weatherData): string
  {
    // Load the static system prompt from its text file.
    $systemPrompt = file_get_contents(
      resource_path('prompts/system/weather_analyst.txt')
    );

    // Render the Blade-based user prompt with the weather data.
    $userPrompt = view('prompts.user.analyze_weather', [
      'city' => $weatherData['city'],
      'temperature' => $weatherData['temperature'],
      'conditions' => $weatherData['conditions'],
      'humidity' => $weatherData['humidity'],
      'wind_speed' => $weatherData['wind_speed'],
    ])->render();

    $response = Prism::text()
      ->using(Provider::Ollama, 'llama3.2')
      ->withSystemPrompt($systemPrompt)
      ->withPrompt($userPrompt)
      ->asText();

    return $response->text;
  }
}

Look at how clean this is. The service orchestrates the pieces. The prompts live in maintainable files. The LLM provider is explicitly specified but easily changed.
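
For example, switching the same call from Ollama to Anthropic is a one-line change. A sketch (the model ID below is illustrative, and you'd also need Anthropic credentials configured in config/prism.php):

$response = Prism::text()
  ->using(Provider::Anthropic, 'claude-3-5-sonnet-latest') // illustrative model ID
  ->withSystemPrompt($systemPrompt)
  ->withPrompt($userPrompt)
  ->asText();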

Let's test it with a simple route in routes/web.php:

use App\Services\WeatherAnalysisService;
use Illuminate\Support\Facades\Route;

Route::get('/test-weather', function (WeatherAnalysisService $service) {
  $analysis = $service->analyzeConditions([
    'city' => 'London',
    'temperature' => 18,
    'conditions' => 'Partly cloudy',
    'humidity' => 65,
    'wind_speed' => 15,
  ]);

  return response()->json(['analysis' => $analysis]);
});

Visit /test-weather in your browser. You should get back an AI-generated weather analysis. If you're getting connection errors, make sure Ollama is running in the background.
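
One quick way to verify Ollama is reachable before digging into your Laravel code is to query its HTTP API directly, which lists your installed models on the default port:

curl http://localhost:11434/api/tags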

Understanding MCP: AI-discoverable capabilities

Now let's talk about something powerful: making your AI features discoverable and usable by other AI agents.

The Model Context Protocol (MCP) is a standardized way for AI agents to discover and invoke tools. Think of it like this: instead of building a REST API that returns JSON for humans to parse, you're building an interface that AI agents can understand natively.

This matters if you're building Claude Desktop integrations, creating custom tools for VS Code, or enabling AI-to-AI communication in your applications.

Laravel MCP implements this protocol in a Laravel-native way. First, publish the MCP routes:

php artisan vendor:publish --tag=ai-routes

This creates routes/ai.php. Just like routes/api.php defines your HTTP API, routes/ai.php defines your AI capabilities.

Creating an MCP server

An MCP Server groups related AI capabilities together. Let's create one for weather analysis:

php artisan make:mcp-server WeatherServer

This generates app/Mcp/Servers/WeatherServer.php:

<?php

namespace App\Mcp\Servers;

use Laravel\Mcp\Server;

class WeatherServer extends Server
{
  protected string $name = 'Weather Intelligence';

  protected string $version = '1.0.0';

  protected string $instructions =
    'Provides AI-powered weather analysis and insights using local LLM capabilities.';

  protected array $tools = [
    \App\Mcp\Tools\AnalyzeWeatherTool::class,
  ];
}

Register it in routes/ai.php:

use App\Mcp\Servers\WeatherServer;
use Laravel\Mcp\Facades\Mcp;

Mcp::web('/mcp/weather', WeatherServer::class);
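
Laravel MCP also supports local servers that communicate over STDIN/STDOUT instead of HTTP, which is what desktop clients such as Claude Desktop typically expect. A sketch, assuming the package's Mcp::local registration:

Mcp::local('weather', WeatherServer::class);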

Defining an MCP tool

Tools are the actual capabilities your AI exposes. Let's create the weather analysis tool:

php artisan make:mcp-tool AnalyzeWeatherTool

In app/Mcp/Tools/AnalyzeWeatherTool.php:

<?php

namespace App\Mcp\Tools;

use App\Services\WeatherAnalysisService;
use Laravel\Mcp\Server\Tool;

class AnalyzeWeatherTool extends Tool
{
  protected string $name = 'analyze_weather';

  protected string $description =
    'Analyzes weather conditions for a city and provides practical insights.';

  public function __construct(
    private WeatherAnalysisService $weatherService
  ) {}

  public function schema(): array
  {
    return [
      'type' => 'object',
      'properties' => [
        'city' => [
          'type' => 'string',
          'description' => 'The city to analyze weather for',
        ],
        'temperature' => [
          'type' => 'number',
          'description' => 'Current temperature in Celsius',
        ],
        'conditions' => [
          'type' => 'string',
          'description' => 'Current weather conditions (e.g., sunny, cloudy, rainy)',
        ],
        'humidity' => [
          'type' => 'number',
          'description' => 'Humidity percentage',
        ],
        'wind_speed' => [
          'type' => 'number',
          'description' => 'Wind speed in km/h',
        ],
      ],
      'required' => ['city', 'temperature', 'conditions'],
    ];
  }

  public function handle(array $arguments): array
  {
    $analysis = $this->weatherService->analyzeConditions($arguments);

    return [
      'success' => true,
      'city' => $arguments['city'],
      'analysis' => $analysis,
      'timestamp' => now()->toIso8601String(),
    ];
  }
}

The schema method is crucial. It tells AI agents exactly what parameters they need to provide. The handle method does the actual work. Any AI agent that understands MCP can now discover this tool, understand its requirements, and invoke it.
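
Under the hood, MCP is JSON-RPC 2.0. When an agent invokes this tool, the request on the wire looks roughly like this (an illustrative payload, trimmed to the required fields):

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_weather",
    "arguments": {
      "city": "London",
      "temperature": 18,
      "conditions": "Partly cloudy"
    }
  }
}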

Enforcing response structure

One challenge with LLMs is output consistency. Sometimes you get JSON, sometimes prose, sometimes a mix. Prism's structured outputs solve this by enforcing a schema.

Add this method to your WeatherAnalysisService, along with these imports at the top of the file:

use Prism\Prism\Schema\NumberSchema;
use Prism\Prism\Schema\ObjectSchema;
use Prism\Prism\Schema\StringSchema;

public function analyzeConditionsStructured(array $weatherData): array
{
  $schema = new ObjectSchema(
    name: 'weather_analysis',
    description: 'Structured weather analysis response',
    properties: [
      new StringSchema(
        name: 'summary',
        description: 'Brief weather summary'
      ),
      new StringSchema(
        name: 'recommendations',
        description: 'Practical recommendations for the day'
      ),
      new StringSchema(
        name: 'notable_conditions',
        description: 'Any unusual or noteworthy conditions'
      ),
      new NumberSchema(
        name: 'comfort_index',
        description: 'Comfort rating from 1-10'
      ),
    ],
    requiredFields: ['summary', 'recommendations', 'comfort_index']
  );

  $systemPrompt = file_get_contents(
    resource_path('prompts/system/weather_analyst.txt')
  );

  $userPrompt = view('prompts.user.analyze_weather', $weatherData)->render();

  $response = Prism::structured()
    ->using(Provider::Ollama, 'llama3.2')
    ->withSystemPrompt($systemPrompt)
    ->withPrompt($userPrompt)
    ->withSchema($schema)
    ->asStructured();

  return $response->structured;
}

Now you're guaranteed to receive an array with exactly the keys you specified.
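
To see the difference, here's a hypothetical test route in the spirit of the earlier one. Because the schema is enforced, the keys can be read without defensive checks:

Route::get('/test-weather-structured', function (WeatherAnalysisService $service) {
  $analysis = $service->analyzeConditionsStructured([
    'city' => 'London',
    'temperature' => 18,
    'conditions' => 'Partly cloudy',
    'humidity' => 65,
    'wind_speed' => 15,
  ]);

  // Both keys are guaranteed by the schema.
  return response()->json([
    'summary' => $analysis['summary'],
    'comfort_index' => $analysis['comfort_index'],
  ]);
});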

Streaming for better user experience

When your AI features face users, waiting for a complete response creates a poor experience. Streaming displays the response as it's generated:

public function analyzeConditionsStream(array $weatherData): \Generator
{
  $systemPrompt = file_get_contents(
    resource_path('prompts/system/weather_analyst.txt')
  );

  $userPrompt = view('prompts.user.analyze_weather', $weatherData)->render();

  $stream = Prism::text()
    ->using(Provider::Ollama, 'llama3.2')
    ->withSystemPrompt($systemPrompt)
    ->withPrompt($userPrompt)
    ->asStream();

  foreach ($stream as $chunk) {
    yield $chunk->text;
  }
}

Combine this with Server-Sent Events or WebSockets on your frontend for a responsive, engaging user experience.

Testing without API calls

Your test suite shouldn't make real LLM calls. It would be slow, potentially expensive, and non-deterministic. Prism provides fakes for this:

use App\Services\WeatherAnalysisService;
use Prism\Prism\Prism;
use Prism\Prism\Testing\TextResponseFake;
use Tests\TestCase;

class WeatherAnalysisServiceTest extends TestCase
{
  public function test_it_analyzes_weather_conditions(): void
  {
    $fake = Prism::fake([
      TextResponseFake::make()->withText(
        'This is a beautiful day in London with perfect conditions for outdoor activities.'
      ),
    ]);

    $service = app(WeatherAnalysisService::class);

    $result = $service->analyzeConditions([
      'city' => 'London',
      'temperature' => 22,
      'conditions' => 'Sunny',
      'humidity' => 55,
      'wind_speed' => 10,
    ]);

    $this->assertStringContainsString('beautiful day', $result);

    // Confirm the rendered prompt actually included the city.
    $fake->assertRequest(function (array $requests) {
      $this->assertStringContainsString('London', $requests[0]->prompt());
    });
  }
}
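
The structured method can be tested the same way. A sketch, assuming Prism's StructuredResponseFake test helper:

use Prism\Prism\Testing\StructuredResponseFake;

public function test_it_returns_structured_analysis(): void
{
  Prism::fake([
    StructuredResponseFake::make()->withStructured([
      'summary' => 'Mild and dry all afternoon.',
      'recommendations' => 'A light jacket is plenty.',
      'comfort_index' => 8,
    ]),
  ]);

  $result = app(WeatherAnalysisService::class)->analyzeConditionsStructured([
    'city' => 'London',
    'temperature' => 18,
    'conditions' => 'Partly cloudy',
    'humidity' => 55,
    'wind_speed' => 10,
  ]);

  $this->assertSame(8, $result['comfort_index']);
}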

Putting it together

Here's a complete controller demonstrating both traditional HTTP endpoints and streaming:

<?php

namespace App\Http\Controllers;

use App\Services\WeatherAnalysisService;
use Illuminate\Http\Request;

class WeatherAnalysisController extends Controller
{
  public function __construct(
    private WeatherAnalysisService $service
  ) {}

  public function analyze(Request $request)
  {
    $validated = $request->validate([
      'city' => 'required|string',
      'temperature' => 'required|numeric',
      'conditions' => 'required|string',
      'humidity' => 'nullable|numeric',
      'wind_speed' => 'nullable|numeric',
    ]);

    $analysis = $this->service->analyzeConditionsStructured($validated);

    return response()->json([
      'data' => $analysis,
    ]);
  }

  public function analyzeStream(Request $request)
  {
    $validated = $request->validate([
      'city' => 'required|string',
      'temperature' => 'required|numeric',
      'conditions' => 'required|string',
      'humidity' => 'nullable|numeric',
      'wind_speed' => 'nullable|numeric',
    ]);

    return response()->stream(function () use ($validated) {
      foreach ($this->service->analyzeConditionsStream($validated) as $chunk) {
        echo "data: " . json_encode(['chunk' => $chunk]) . "\n\n";
        if (ob_get_level() > 0) {
          ob_flush();
        }
        flush();
      }
    }, 200, [
      'Content-Type' => 'text/event-stream',
      'Cache-Control' => 'no-cache',
      'X-Accel-Buffering' => 'no',
    ]);
  }
}
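
Wire the controller up with two routes (the paths are illustrative; this assumes API routing is enabled and lives in routes/api.php):

use App\Http\Controllers\WeatherAnalysisController;
use Illuminate\Support\Facades\Route;

Route::post('/weather/analyze', [WeatherAnalysisController::class, 'analyze']);
Route::post('/weather/analyze/stream', [WeatherAnalysisController::class, 'analyzeStream']);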

Why these patterns matter

Let me break down what makes this architecture work:

  • Separation of concerns: Prism handles LLM communication. MCP handles capability exposure. Your application logic stays focused on business problems.
  • Provider flexibility: Switching from Ollama to OpenAI or Anthropic? Change one line. A/B testing different models? Easy.
  • Prompt maintainability: Your prompts live in version-controlled files. Anyone can iterate on them without touching code.
  • Testability: Mock LLM responses for reliable, fast tests.
  • Scalability: Adding new AI capabilities means adding new tools and prompts. The structure doesn't fight you.
  • Interoperability: AI agents can discover and use your capabilities through MCP without custom integration work.

Where to go from here

This foundation supports increasingly sophisticated features:

  • Multi-step workflows: Chain AI calls together. Have one AI analyze data, another synthesize it, another present it.
  • Dynamic tool selection: Let the AI choose which tools to use based on context.
  • Multi-modal capabilities: Add image analysis, document processing, or audio transcription.
  • Response caching: Reduce costs by caching expensive AI responses (see the sketch after this list).
  • Usage monitoring: Track token consumption, response times, and costs across your application.
  • Rate limiting: Protect your resources from runaway AI loops or abuse.
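
As an example of the response-caching idea, here's a minimal sketch that could be added to WeatherAnalysisService: identical inputs reuse a cached analysis instead of triggering a new LLM call. The key scheme and TTL are arbitrary choices:

use Illuminate\Support\Facades\Cache;

public function analyzeConditionsCached(array $weatherData): string
{
  // Identical inputs hash to the same cache key.
  $key = 'weather-analysis:' . md5(json_encode($weatherData));

  return Cache::remember($key, now()->addMinutes(30), function () use ($weatherData) {
    return $this->analyzeConditions($weatherData);
  });
}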

The patterns you've learned here scale as your needs grow.

Wrapping up

Building AI features into Laravel applications doesn't have to be complicated. With Prism and MCP, you get clean abstractions that keep your options open.

File-based prompts make iteration easy. Structured outputs ensure reliability. Provider agnosticism means you can experiment freely. MCP integration enables AI-to-AI communication without custom API work.

When you want to switch providers, test new models, or expose your capabilities to other agents, these patterns have you covered. They give you a solid foundation for building reliable, scalable AI features in Laravel.
