1. Why AI Workflows Matter for SaaS
Most AI integrations are single-shot: user asks a question, AI answers. That's a feature. But the real value — the kind that justifies a monthly subscription — comes from automated multi-step workflows that save users hours of manual work.
Examples of AI workflows that SaaS products charge for:
- Lead qualification — New form submission → AI scores the lead → enriches company data → routes to the right sales rep → drafts a personalized email
- Content repurposing — Blog post uploaded → AI generates social media posts → email newsletter version → Twitter thread → meta description — all at once
- Customer feedback analysis — Batch of reviews → sentiment analysis → category tagging → priority scoring → summary report for the product team
- Invoice processing — PDF uploaded → OCR extraction → AI structures the data → validates against rules → creates entries in the accounting system
- Support ticket triage — New ticket → AI classifies urgency → identifies product area → suggests solutions from knowledge base → auto-responds or escalates
Each of these is a pipeline of 3-5 AI calls chained together, with conditional logic, error handling, and background processing. Laravel's queue system and Pipeline class make this surprisingly clean to build.
2. Three Patterns for AI Pipelines in Laravel
There are three ways to chain AI calls in Laravel. Each has a sweet spot:
Pattern 1: Laravel Pipeline (synchronous)
Use Illuminate\Pipeline\Pipeline for fast workflows (under 30 seconds) where the user waits for results. Each "pipe" transforms the data and passes it to the next.
- Best for: 2-4 quick AI calls, real-time responses
- Example: Classify input → extract entities → generate response
Pattern 2: Job Chains (asynchronous)
Use Bus::chain() for long-running workflows that process in the background. Each job is a step in the pipeline, and the chain ensures they run in order.
- Best for: 3-10 steps, heavy processing, batched operations
- Example: Upload document → extract text → chunk → embed → generate summary → email results
Pattern 3: Agent with Tools (autonomous)
Let the AI agent decide which steps to take using tools. The agent loops until the task is complete.
- Best for: Dynamic workflows where the steps aren't predetermined
- Example: Research agent that searches, reads, and synthesizes until it has enough information
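The shape of Pattern 3 is a loop rather than a fixed chain. A conceptual sketch, using only the Ai facade methods shown elsewhere in this tutorial (the runTool() helper and the JSON action protocol are illustrative assumptions, not the package's real tool API):

```php
<?php

use Laravel\Ai\Facades\Ai;

class ResearchAgent
{
    public function run(string $task, int $maxIterations = 10): string
    {
        $transcript = "Task: {$task}";

        for ($i = 0; $i < $maxIterations; $i++) {
            // Ask the model to pick the next step as a JSON action
            $response = Ai::agent('claude-sonnet-4-5-20250514')
                ->system('Decide the next step. Reply with JSON: {"action": "search|read|finish", "input": "...", "answer": "..."}')
                ->prompt($transcript);

            $step = json_decode($response->text, true);

            if (($step['action'] ?? null) === 'finish') {
                return $step['answer'] ?? '';
            }

            // Execute the chosen tool and feed the result back into the loop
            $result = $this->runTool($step['action'] ?? '', $step['input'] ?? '');
            $transcript .= "\n{$step['action']}: {$result}";
        }

        return $transcript; // Iteration budget exhausted
    }

    protected function runTool(string $action, string $input): string
    {
        // Hypothetical: dispatch to your real tools (web search, page fetch, ...)
        return "tool output for {$action}({$input})";
    }
}
```

The key difference from Patterns 1 and 2: the model, not your code, chooses the sequence of steps.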
This tutorial focuses on Patterns 1 and 2 — they cover 90% of SaaS AI workflows. For Pattern 3, see our AI agents guide.
3. The Workflow Data Model
Every workflow needs tracking. Users want to see what's happening, and you need observability for debugging. Here's a lightweight workflow tracking model:
php artisan make:model AiWorkflow -m
php artisan make:model AiWorkflowStep -m
<?php
// database/migrations/xxxx_create_ai_workflows_table.php
// (both tables shown in a single migration for brevity)
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;
return new class extends Migration
{
public function up(): void
{
Schema::create('ai_workflows', function (Blueprint $table) {
$table->id();
$table->foreignId('team_id')->constrained()->cascadeOnDelete();
$table->foreignId('user_id')->constrained()->cascadeOnDelete();
$table->string('type'); // feedback_analysis, content_repurpose, etc.
$table->string('status')->default('pending'); // pending, running, completed, failed
$table->json('input')->nullable();
$table->json('output')->nullable();
$table->integer('total_steps')->default(0);
$table->integer('completed_steps')->default(0);
$table->integer('total_tokens_used')->default(0);
$table->decimal('estimated_cost_usd', 8, 6)->default(0);
$table->text('error_message')->nullable();
$table->timestamp('started_at')->nullable();
$table->timestamp('completed_at')->nullable();
$table->timestamps();
$table->index(['team_id', 'type', 'status']);
});
Schema::create('ai_workflow_steps', function (Blueprint $table) {
$table->id();
$table->foreignId('ai_workflow_id')->constrained()->cascadeOnDelete();
$table->string('name');
$table->integer('order');
$table->string('status')->default('pending');
$table->json('input')->nullable();
$table->json('output')->nullable();
$table->integer('tokens_used')->default(0);
$table->integer('duration_ms')->default(0);
$table->text('error_message')->nullable();
$table->timestamps();
$table->index(['ai_workflow_id', 'order']);
});
}
};
<?php
namespace App\Models;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\BelongsTo;
use Illuminate\Database\Eloquent\Relations\HasMany;
class AiWorkflow extends Model
{
protected $guarded = [];
protected $casts = [
'input' => 'array',
'output' => 'array',
'started_at' => 'datetime',
'completed_at' => 'datetime',
];
public function team(): BelongsTo
{
return $this->belongsTo(Team::class);
}
public function user(): BelongsTo
{
return $this->belongsTo(User::class);
}
public function steps(): HasMany
{
return $this->hasMany(AiWorkflowStep::class)->orderBy('order');
}
public function markRunning(): void
{
$this->update(['status' => 'running', 'started_at' => now()]);
}
public function markCompleted(array $output = []): void
{
$this->update([
'status' => 'completed',
'output' => $output,
'completed_at' => now(),
]);
}
public function markFailed(string $error): void
{
$this->update([
'status' => 'failed',
'error_message' => $error,
'completed_at' => now(),
]);
}
public function advanceStep(string $stepName, array $output, int $tokens = 0): void
{
// Update via the model (not the query builder) so the JSON cast on `output` is applied
$this->steps()->where('name', $stepName)->first()?->update([
'status' => 'completed',
'output' => $output,
'tokens_used' => $tokens,
]);
$this->increment('completed_steps');
$this->increment('total_tokens_used', $tokens);
}
public function progressPercent(): int
{
if ($this->total_steps === 0) return 0;
return (int) round(($this->completed_steps / $this->total_steps) * 100);
}
}
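The companion step model is a thin Eloquent model. A minimal version, assuming no behavior beyond casts and the inverse relationship is needed:

```php
<?php

namespace App\Models;

use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Relations\BelongsTo;

class AiWorkflowStep extends Model
{
    protected $guarded = [];

    // Mirror the JSON columns from the migration
    protected $casts = [
        'input' => 'array',
        'output' => 'array',
    ];

    public function workflow(): BelongsTo
    {
        return $this->belongsTo(AiWorkflow::class, 'ai_workflow_id');
    }
}
```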
4. Building Pipelines with Laravel's Pipeline Class
Laravel's Illuminate\Pipeline\Pipeline is perfect for synchronous AI workflows. Each pipe receives data, transforms it with an AI call, and passes it along.
<?php
namespace App\Ai\Workflows;
use Illuminate\Pipeline\Pipeline;
use Illuminate\Support\Facades\App;
abstract class AiPipeline
{
/**
* Define the pipes (steps) for this workflow.
*/
abstract protected function pipes(): array;
/**
* Run the pipeline with the given input data.
*/
public function run(array $data): array
{
return App::make(Pipeline::class)
->send($data)
->through($this->pipes())
->thenReturn();
}
}
Each pipe is a simple class with a handle method:
<?php
namespace App\Ai\Workflows\Pipes;
use Closure;
use Laravel\Ai\Facades\Ai;
class ClassifySentiment
{
public function handle(array $data, Closure $next): mixed
{
$response = Ai::agent('claude-sonnet-4-5-20250514')
->system('Classify the sentiment of the given text. Respond with exactly one word: positive, negative, or neutral.')
->prompt($data['text']);
$data['sentiment'] = strtolower(trim($response->text));
return $next($data);
}
}
The pattern is simple: read from $data, make an AI call, add results to $data, call $next($data). Each pipe can access everything the previous pipes produced.
5. Real-World Example: Customer Feedback Analyzer
Let's build a pipeline that takes a customer review and produces a structured analysis: sentiment, category, key themes, priority score, and a suggested response.
<?php
namespace App\Ai\Workflows;
use App\Ai\Workflows\Pipes\ClassifySentiment;
use App\Ai\Workflows\Pipes\CategorizeIssue;
use App\Ai\Workflows\Pipes\ExtractThemes;
use App\Ai\Workflows\Pipes\ScorePriority;
use App\Ai\Workflows\Pipes\DraftResponse;
class FeedbackAnalysisPipeline extends AiPipeline
{
protected function pipes(): array
{
return [
ClassifySentiment::class,
CategorizeIssue::class,
ExtractThemes::class,
ScorePriority::class,
DraftResponse::class,
];
}
}
Pipe: Categorize the Issue
<?php
namespace App\Ai\Workflows\Pipes;
use Closure;
use Laravel\Ai\Facades\Ai;
class CategorizeIssue
{
public function handle(array $data, Closure $next): mixed
{
$response = Ai::structured('claude-sonnet-4-5-20250514')
->system('Categorize this customer feedback into one or more categories.')
->schema([
'primary_category' => 'string: billing, bug, feature_request, ux, performance, documentation, other',
'secondary_category' => 'string|nullable',
'confidence' => 'float: 0.0 to 1.0',
])
->prompt("Feedback: {$data['text']}\nSentiment: {$data['sentiment']}");
$data['category'] = $response['primary_category'];
$data['secondary_category'] = $response['secondary_category'];
$data['category_confidence'] = $response['confidence'];
return $next($data);
}
}
Pipe: Extract Key Themes
<?php
namespace App\Ai\Workflows\Pipes;
use Closure;
use Laravel\Ai\Facades\Ai;
class ExtractThemes
{
public function handle(array $data, Closure $next): mixed
{
$response = Ai::structured('claude-sonnet-4-5-20250514')
->system('Extract the key themes and specific issues from this feedback.')
->schema([
'themes' => 'array of strings: 2-5 short theme descriptions',
'specific_issues' => 'array of strings: concrete problems mentioned',
'feature_requests' => 'array of strings: features or improvements requested',
])
->prompt($data['text']);
$data['themes'] = $response['themes'];
$data['specific_issues'] = $response['specific_issues'];
$data['feature_requests'] = $response['feature_requests'];
return $next($data);
}
}
Pipe: Score Priority
<?php
namespace App\Ai\Workflows\Pipes;
use Closure;
use Laravel\Ai\Facades\Ai;
class ScorePriority
{
public function handle(array $data, Closure $next): mixed
{
$context = json_encode([
'sentiment' => $data['sentiment'],
'category' => $data['category'],
'themes' => $data['themes'],
'specific_issues' => $data['specific_issues'],
]);
$response = Ai::structured('claude-sonnet-4-5-20250514')
->system('Score the priority of this customer feedback from 1-10 based on the analysis provided. Consider: revenue impact, user churn risk, number of users likely affected, and alignment with product roadmap.')
->schema([
'priority_score' => 'integer: 1-10',
'reasoning' => 'string: one-sentence explanation',
'churn_risk' => 'string: low, medium, high',
])
->prompt($context);
$data['priority_score'] = $response['priority_score'];
$data['priority_reasoning'] = $response['reasoning'];
$data['churn_risk'] = $response['churn_risk'];
return $next($data);
}
}
Pipe: Draft a Response
<?php
namespace App\Ai\Workflows\Pipes;
use Closure;
use Laravel\Ai\Facades\Ai;
class DraftResponse
{
public function handle(array $data, Closure $next): mixed
{
$context = "Original feedback: {$data['text']}\n" .
"Sentiment: {$data['sentiment']}\n" .
"Category: {$data['category']}\n" .
"Priority: {$data['priority_score']}/10\n" .
"Churn risk: {$data['churn_risk']}";
$response = Ai::agent('claude-sonnet-4-5-20250514')
->system('Draft a professional, empathetic customer response. Acknowledge their feedback, address their specific concerns, and if applicable, mention that the team is looking into it. Keep it under 150 words. Do not make promises about timelines.')
->prompt($context);
$data['suggested_response'] = $response->text;
return $next($data);
}
}
Running the Pipeline
// In a controller or Livewire component
$pipeline = new FeedbackAnalysisPipeline;
$result = $pipeline->run([
'text' => 'Your billing page is really confusing. I was charged twice last month and it took 3 days to get a refund. The support team was helpful but the process needs improvement.',
]);
// $result now contains:
// [
// 'text' => '...',
// 'sentiment' => 'negative',
// 'category' => 'billing',
// 'secondary_category' => 'ux',
// 'category_confidence' => 0.92,
// 'themes' => ['confusing billing interface', 'duplicate charges', 'slow refund process'],
// 'specific_issues' => ['charged twice', '3 day refund delay'],
// 'feature_requests' => ['clearer billing page', 'faster refund processing'],
// 'priority_score' => 8,
// 'priority_reasoning' => 'Billing issues directly impact revenue and trust...',
// 'churn_risk' => 'high',
// 'suggested_response' => 'Thank you for sharing this feedback...',
// ]
Five AI calls, each building on the previous result, all in a clean and testable structure. The entire pipeline runs in 3-5 seconds with claude-sonnet-4-5.
6. Queued Job Chains for Long-Running Workflows
For workflows that process large amounts of data or take longer than 30 seconds, use Laravel's Bus::chain(). Each step is a queued job that runs in the background.
<?php
namespace App\Ai\Workflows;
use App\Jobs\Workflows\Steps;
use App\Models\AiWorkflow;
use Illuminate\Support\Facades\Bus;
class ContentRepurposePipeline
{
public static function dispatch(AiWorkflow $workflow): void
{
$steps = [
'analyze_content',
'generate_social_posts',
'generate_email_newsletter',
'generate_twitter_thread',
'generate_meta_description',
'compile_results',
];
// Register steps in the database for tracking
foreach ($steps as $index => $step) {
$workflow->steps()->create([
'name' => $step,
'order' => $index + 1,
'status' => 'pending',
]);
}
$workflow->update(['total_steps' => count($steps)]);
$workflow->markRunning();
Bus::chain([
new Steps\AnalyzeContent($workflow),
new Steps\GenerateSocialPosts($workflow),
new Steps\GenerateEmailNewsletter($workflow),
new Steps\GenerateTwitterThread($workflow),
new Steps\GenerateMetaDescription($workflow),
new Steps\CompileResults($workflow),
])
->onQueue('ai-workflows')
->catch(function (\Throwable $e) use ($workflow) {
$workflow->markFailed($e->getMessage());
})
->dispatch();
}
}
7. Real-World Example: Content Repurposing Engine
A user pastes a blog post, and the system generates a full social media content package in the background. Here's one of the step jobs:
<?php
namespace App\Jobs\Workflows\Steps;
use App\Models\AiWorkflow;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Laravel\Ai\Facades\Ai;
class AnalyzeContent implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
public int $tries = 3;
public int $backoff = 15;
public function __construct(
public AiWorkflow $workflow,
) {}
public function handle(): void
{
$input = $this->workflow->input;
$response = Ai::structured('claude-sonnet-4-5-20250514')
->system('Analyze this blog post content. Extract the key message, target audience, main arguments, and tone of voice.')
->schema([
'key_message' => 'string: the core message in one sentence',
'target_audience' => 'string: who this content is for',
'main_points' => 'array of strings: 3-5 key arguments or takeaways',
'tone' => 'string: professional, casual, technical, persuasive, educational',
'word_count' => 'integer',
'suggested_hashtags' => 'array of strings: 5-8 relevant hashtags',
])
->prompt($input['content']);
$this->workflow->advanceStep('analyze_content', $response, $response->usage->totalTokens ?? 0);
}
}
<?php
namespace App\Jobs\Workflows\Steps;
use App\Models\AiWorkflow;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;
use Laravel\Ai\Facades\Ai;
class GenerateSocialPosts implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
public int $tries = 3;
public int $backoff = 15;
public function __construct(
public AiWorkflow $workflow,
) {}
public function handle(): void
{
// Get the analysis from the previous step
$analysis = $this->workflow->steps()
->where('name', 'analyze_content')
->first()
->output;
$response = Ai::structured('claude-sonnet-4-5-20250514')
->system('Generate social media posts based on the content analysis provided. Create platform-specific posts that maintain the original message but are optimized for each platform.')
->schema([
'linkedin' => 'string: professional LinkedIn post, 150-200 words, with line breaks',
'instagram' => 'string: engaging Instagram caption with emoji and call-to-action',
'facebook' => 'string: conversational Facebook post, 100-150 words',
])
->prompt("Content analysis:\n" . json_encode($analysis, JSON_PRETTY_PRINT) .
"\n\nOriginal content:\n" . $this->workflow->input['content']);
$this->workflow->advanceStep('generate_social_posts', $response, $response->usage->totalTokens ?? 0);
}
}
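The chain's final CompileResults job isn't shown above. A minimal sketch, assuming each earlier step stored its output on its AiWorkflowStep row: it needs no AI call, it just gathers the step outputs and closes the workflow.

```php
<?php

namespace App\Jobs\Workflows\Steps;

use App\Models\AiWorkflow;
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;
use Illuminate\Queue\InteractsWithQueue;
use Illuminate\Queue\SerializesModels;

class CompileResults implements ShouldQueue
{
    use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;

    public function __construct(
        public AiWorkflow $workflow,
    ) {}

    public function handle(): void
    {
        // Collect every completed step's output, keyed by step name
        $output = $this->workflow->steps()
            ->where('status', 'completed')
            ->get()
            ->mapWithKeys(fn ($step) => [$step->name => $step->output])
            ->all();

        $this->workflow->advanceStep('compile_results', $output);
        $this->workflow->markCompleted($output);
    }
}
```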
Triggering the Workflow
<?php
namespace App\Http\Controllers;
use App\Ai\Workflows\ContentRepurposePipeline;
use App\Models\AiWorkflow;
use Illuminate\Http\Request;
class ContentRepurposeController extends Controller
{
public function store(Request $request)
{
$request->validate([
'content' => 'required|string|min:100|max:50000',
'title' => 'required|string|max:255',
]);
$workflow = AiWorkflow::create([
'team_id' => $request->user()->currentTeam->id,
'user_id' => $request->user()->id,
'type' => 'content_repurpose',
'input' => [
'content' => $request->content,
'title' => $request->title,
],
]);
ContentRepurposePipeline::dispatch($workflow);
return response()->json([
'workflow_id' => $workflow->id,
'status' => 'running',
'message' => 'Content repurposing started. Check back for results.',
], 202);
}
public function show(AiWorkflow $workflow)
{
$this->authorize('view', $workflow);
return response()->json([
'id' => $workflow->id,
'status' => $workflow->status,
'progress' => $workflow->progressPercent(),
'steps' => $workflow->steps->map(fn ($step) => [
'name' => $step->name,
'status' => $step->status,
'output' => $step->output,
]),
'output' => $workflow->output,
'total_tokens' => $workflow->total_tokens_used,
'estimated_cost' => $workflow->estimated_cost_usd,
]);
}
}
8. Error Handling, Retries & Fallbacks
AI API calls fail. Rate limits, timeouts, model errors — they all happen in production. Build resilience into every step:
<?php
namespace App\Ai\Workflows\Concerns;
use App\Models\AiWorkflow;
use Illuminate\Support\Facades\Log;
use Laravel\Ai\Facades\Ai;
trait HandlesAiFailures
{
/**
* Execute an AI call with fallback provider.
*/
protected function aiWithFallback(
string $systemPrompt,
string $userPrompt,
?array $schema = null,
): mixed {
try {
$agent = $schema
? Ai::structured('claude-sonnet-4-5-20250514')
: Ai::agent('claude-sonnet-4-5-20250514');
$query = $agent->system($systemPrompt);
if ($schema) {
$query = $query->schema($schema);
}
return $query->prompt($userPrompt);
} catch (\Throwable $e) {
Log::warning('Primary AI provider failed, trying fallback', [
'error' => $e->getMessage(),
]);
// Fallback to OpenAI
$agent = $schema
? Ai::structured('gpt-4o')
: Ai::agent('gpt-4o');
$query = $agent->system($systemPrompt);
if ($schema) {
$query = $query->schema($schema);
}
return $query->prompt($userPrompt);
}
}
/**
* Mark a workflow step as failed and decide whether to continue.
*/
protected function handleStepFailure(
AiWorkflow $workflow,
string $stepName,
\Throwable $e,
bool $critical = true,
): void {
$workflow->steps()->where('name', $stepName)->update([
'status' => 'failed',
'error_message' => $e->getMessage(),
]);
Log::error("Workflow step failed: {$stepName}", [
'workflow_id' => $workflow->id,
'error' => $e->getMessage(),
]);
if ($critical) {
$workflow->markFailed("Step '{$stepName}' failed: {$e->getMessage()}");
throw $e; // Stop the chain
}
// Non-critical: log and continue
$workflow->advanceStep($stepName, [
'skipped' => true,
'reason' => $e->getMessage(),
]);
}
}
Use the trait in your step jobs:
class GenerateSocialPosts implements ShouldQueue
{
use Dispatchable, InteractsWithQueue, Queueable, SerializesModels;
use HandlesAiFailures;
public function handle(): void
{
try {
$response = $this->aiWithFallback(
systemPrompt: 'Generate social media posts...',
userPrompt: $this->workflow->input['content'],
schema: ['linkedin' => 'string', 'instagram' => 'string'],
);
$this->workflow->advanceStep('generate_social_posts', $response);
} catch (\Throwable $e) {
// Social posts are non-critical — workflow can continue without them
$this->handleStepFailure($this->workflow, 'generate_social_posts', $e, critical: false);
}
}
}
9. Real-Time Progress Tracking
Users want to see their workflow progressing. Broadcast step completions over WebSockets using Laravel's event broadcasting:
<?php
namespace App\Events;
use App\Models\AiWorkflow;
use Illuminate\Broadcasting\Channel;
use Illuminate\Broadcasting\InteractsWithSockets;
use Illuminate\Contracts\Broadcasting\ShouldBroadcast;
use Illuminate\Foundation\Events\Dispatchable;
use Illuminate\Queue\SerializesModels;
class WorkflowStepCompleted implements ShouldBroadcast
{
use Dispatchable, InteractsWithSockets, SerializesModels;
public function __construct(
public AiWorkflow $workflow,
public string $stepName,
public int $progressPercent,
) {}
public function broadcastOn(): Channel
{
return new Channel("team.{$this->workflow->team_id}.workflows");
}
public function broadcastWith(): array
{
return [
'workflow_id' => $this->workflow->id,
'step' => $this->stepName,
'progress' => $this->progressPercent,
'status' => $this->workflow->status,
];
}
}
Fire the event from your advanceStep method:
// In AiWorkflow model
public function advanceStep(string $stepName, array $output, int $tokens = 0): void
{
// Update via the model so the JSON cast on `output` is applied
$this->steps()->where('name', $stepName)->first()?->update([
'status' => 'completed',
'output' => $output,
'tokens_used' => $tokens,
]);
$this->increment('completed_steps');
$this->increment('total_tokens_used', $tokens);
event(new \App\Events\WorkflowStepCompleted(
$this, $stepName, $this->progressPercent()
));
}
On the frontend, listen with Echo:
// resources/js/workflow-tracker.js
Echo.channel(`team.${teamId}.workflows`)
.listen('WorkflowStepCompleted', (event) => {
const progressBar = document.getElementById(`workflow-${event.workflow_id}-progress`);
progressBar.style.width = `${event.progress}%`;
progressBar.textContent = `${event.progress}%`;
if (event.status === 'completed') {
showNotification('Workflow complete! Your content is ready.');
loadWorkflowResults(event.workflow_id);
}
});
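One caveat: new Channel(...) above is a public channel, so anyone who guesses a team ID could subscribe. For customer data you'll likely want a private channel. A sketch of the authorization callback, assuming a teams() relationship on the User model:

```php
<?php
// routes/channels.php

use Illuminate\Support\Facades\Broadcast;

Broadcast::channel('team.{teamId}.workflows', function ($user, int $teamId) {
    // Only members of the team may listen
    return $user->teams()->whereKey($teamId)->exists();
});
```

With this in place, broadcastOn() returns new PrivateChannel("team.{$this->workflow->team_id}.workflows") instead of Channel, and the frontend listener becomes Echo.private(`team.${teamId}.workflows`).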
10. Conditional Branching & Fan-Out
Some workflows need different paths based on AI output. For example: if sentiment is negative and churn risk is high, trigger an urgent alert. Otherwise, just log it.
<?php
namespace App\Ai\Workflows\Pipes;
use Closure;
class RouteByPriority
{
public function handle(array $data, Closure $next): mixed
{
if ($data['priority_score'] >= 8 && $data['churn_risk'] === 'high') {
// High priority: notify team immediately
$data['actions'][] = 'urgent_alert';
$data['actions'][] = 'auto_escalate';
} elseif ($data['priority_score'] >= 5) {
// Medium priority: add to review queue
$data['actions'][] = 'add_to_review_queue';
} else {
// Low priority: auto-respond and log
$data['actions'][] = 'auto_respond';
}
return $next($data);
}
}
For fan-out (running multiple AI steps in parallel), use job batches:
use Illuminate\Bus\Batch;
use Illuminate\Support\Facades\Bus;
// Fan-out: generate all social media content in parallel
Bus::batch([
new GenerateLinkedInPost($workflow),
new GenerateTwitterThread($workflow),
new GenerateInstagramCaption($workflow),
new GenerateEmailNewsletter($workflow),
])
->name("workflow-{$workflow->id}-social-content")
->onQueue('ai-workflows')
->then(function (Batch $batch) use ($workflow) {
// All parallel jobs completed — compile the results
CompileResults::dispatch($workflow);
})
->catch(function (Batch $batch, \Throwable $e) use ($workflow) {
$workflow->markFailed($e->getMessage());
})
->dispatch();
This turns a four-step sequential process (roughly 20 seconds) into a single parallel batch that takes only as long as its slowest job (roughly 5 seconds). The then() callback fires only after every job in the batch has completed.
11. Cost Control & Rate Limiting
AI workflows can be expensive if unchecked. A 6-step pipeline with claude-sonnet-4-5 costs roughly $0.01-0.05 per run. At 10,000 runs/month, that's $100-500 in API costs alone. Build cost controls into your pipeline:
<?php
namespace App\Ai\Workflows\Middleware;
use Closure;
class EnforceWorkflowBudget
{
public function handle(array $data, Closure $next): mixed
{
$workflow = $data['_workflow'] ?? null;
if (!$workflow) return $next($data);
$team = $workflow->team;
// Check monthly AI spend
$monthlySpend = $team->aiWorkflows()
->where('created_at', '>=', now()->startOfMonth())
->sum('estimated_cost_usd');
$budgetLimit = match (true) {
$team->onBusinessPlan() => 50.00,
$team->onProPlan() => 10.00,
default => 1.00,
};
if ($monthlySpend >= $budgetLimit) {
throw new \App\Exceptions\AiBudgetExceededException(
"Monthly AI workflow budget of \${$budgetLimit} reached. Upgrade for more."
);
}
return $next($data);
}
}
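The middleware expects the workflow under a _workflow key, so it has to be seeded into the pipeline data and the guard prepended as the first pipe. One way to wire it up, as a hypothetical extension of the earlier AiPipeline base class:

```php
<?php

namespace App\Ai\Workflows;

use App\Ai\Workflows\Middleware\EnforceWorkflowBudget;
use App\Models\AiWorkflow;
use Illuminate\Pipeline\Pipeline;
use Illuminate\Support\Facades\App;

abstract class BudgetedAiPipeline extends AiPipeline
{
    public function runFor(AiWorkflow $workflow, array $data): array
    {
        // Seed the workflow so EnforceWorkflowBudget can read it,
        // and run the budget check before any AI call is made
        $data['_workflow'] = $workflow;

        $result = App::make(Pipeline::class)
            ->send($data)
            ->through([EnforceWorkflowBudget::class, ...$this->pipes()])
            ->thenReturn();

        unset($result['_workflow']); // Don't leak the model into the output

        return $result;
    }
}
```

Because the budget guard is just another pipe, a budget breach aborts the workflow before any tokens are spent.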
// Rate limit workflows per team using Laravel's rate limiter
use Illuminate\Support\Facades\RateLimiter;
class WorkflowController extends Controller
{
public function store(Request $request)
{
$teamId = $request->user()->currentTeam->id;
$executed = RateLimiter::attempt(
key: "ai-workflow:{$teamId}",
maxAttempts: 20, // 20 workflows per hour
callback: function () use ($request) {
// Create and dispatch the workflow
return $this->createWorkflow($request);
},
decaySeconds: 3600,
);
if (!$executed) {
return response()->json([
'error' => 'Workflow rate limit reached. Please try again later.',
'retry_after' => RateLimiter::availableIn("ai-workflow:{$teamId}"),
], 429);
}
return $executed;
}
}
Also add cost tracking to each step so you can show users their spend in the dashboard:
// After each AI call, estimate cost
$inputCost = ($response->usage->inputTokens ?? 0) / 1_000_000 * 3.00; // Claude Sonnet input
$outputCost = ($response->usage->outputTokens ?? 0) / 1_000_000 * 15.00; // Claude Sonnet output
$totalCost = $inputCost + $outputCost;
$workflow->increment('estimated_cost_usd', $totalCost);
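To avoid scattering pricing constants across step jobs, you can centralize them in a small helper. The per-million-token rates below reflect published prices at the time of writing; verify them against your providers' current price lists before relying on them:

```php
<?php

// USD per million tokens — check current provider pricing before relying on these
function estimateAiCost(string $model, int $inputTokens, int $outputTokens): float
{
    $pricing = [
        'claude-sonnet-4-5-20250514' => ['input' => 3.00, 'output' => 15.00],
        'gpt-4o'                     => ['input' => 2.50, 'output' => 10.00],
    ];

    // Fall back to the most expensive known rates for unknown models
    $rates = $pricing[$model] ?? ['input' => 3.00, 'output' => 15.00];

    return ($inputTokens / 1_000_000) * $rates['input']
         + ($outputTokens / 1_000_000) * $rates['output'];
}
```

Usage inside a step job might look like $workflow->increment('estimated_cost_usd', estimateAiCost('claude-sonnet-4-5-20250514', $inputTokens, $outputTokens)).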
12. Testing AI Workflows
Test each pipe individually with faked AI responses, then test the full pipeline as an integration test.
<?php
use App\Ai\Workflows\FeedbackAnalysisPipeline;
use App\Ai\Workflows\Pipes\ClassifySentiment;
use App\Ai\Workflows\Pipes\ScorePriority;
use Laravel\Ai\Facades\Ai;
// Unit test: individual pipe
test('ClassifySentiment pipe detects negative sentiment', function () {
Ai::fake(['negative']);
$pipe = new ClassifySentiment;
$result = $pipe->handle(
['text' => 'This product is terrible and the support is useless.'],
fn ($data) => $data,
);
expect($result['sentiment'])->toBe('negative');
Ai::assertPrompted();
});
// Unit test: priority scoring uses context from previous pipes
test('ScorePriority considers sentiment and category', function () {
Ai::fake([json_encode([
'priority_score' => 9,
'reasoning' => 'Billing bug with high churn risk',
'churn_risk' => 'high',
])]);
$pipe = new ScorePriority;
$result = $pipe->handle([
'text' => 'I was charged twice!',
'sentiment' => 'negative',
'category' => 'billing',
'themes' => ['double charge'],
'specific_issues' => ['charged twice'],
], fn ($data) => $data);
expect($result['priority_score'])->toBe(9)
->and($result['churn_risk'])->toBe('high');
});
// Integration test: full pipeline
test('feedback analysis pipeline produces complete analysis', function () {
Ai::fake([
'negative', // ClassifySentiment
json_encode(['primary_category' => 'billing', // CategorizeIssue
'secondary_category' => 'ux',
'confidence' => 0.91]),
json_encode(['themes' => ['billing confusion'], // ExtractThemes
'specific_issues' => ['double charge'],
'feature_requests' => ['clearer invoices']]),
json_encode(['priority_score' => 8, // ScorePriority
'reasoning' => 'Direct revenue impact',
'churn_risk' => 'high']),
'Thank you for bringing this to our attention...', // DraftResponse
]);
$pipeline = new FeedbackAnalysisPipeline;
$result = $pipeline->run([
'text' => 'Your billing page charged me twice last month.',
]);
expect($result)
->toHaveKey('sentiment', 'negative')
->toHaveKey('category', 'billing')
->toHaveKey('priority_score', 8)
->toHaveKey('churn_risk', 'high')
->toHaveKey('suggested_response');
Ai::assertPromptedTimes(5);
});
// Integration test: queued workflow
test('content repurpose workflow completes all steps', function () {
Ai::fake([
json_encode(['key_message' => 'AI is transforming SaaS',
'target_audience' => 'SaaS founders',
'main_points' => ['AI agents', 'workflows', 'automation'],
'tone' => 'technical',
'word_count' => 1200,
'suggested_hashtags' => ['#AI', '#SaaS', '#Laravel']]),
json_encode(['linkedin' => 'AI is transforming...', 'instagram' => 'New post!', 'facebook' => 'Check out...']),
json_encode(['subject' => 'AI in SaaS', 'body' => 'This week...']),
json_encode(['tweets' => ['Tweet 1', 'Tweet 2', 'Tweet 3']]),
json_encode(['meta_description' => 'Learn how AI workflows...']),
]);
$user = User::factory()->withTeam()->create();
$workflow = AiWorkflow::create([
'team_id' => $user->currentTeam->id,
'user_id' => $user->id,
'type' => 'content_repurpose',
'input' => ['content' => 'A blog post about AI workflows...', 'title' => 'AI Workflows'],
]);
ContentRepurposePipeline::dispatch($workflow);
// Process the whole chain: each job queues the next, so --once would
// only run the first step. --stop-when-empty drains the queue instead.
$this->artisan('queue:work', ['--stop-when-empty' => true]);
$workflow->refresh();
expect($workflow->status)->toBe('completed')
->and($workflow->completed_steps)->toBe($workflow->total_steps);
});
13. Conclusion
You now have the patterns to build production-ready AI workflows in Laravel 12. Let's recap:
- Pipeline pattern — Synchronous chains for fast, real-time workflows (under 30s)
- Job chains — Asynchronous background workflows with ordered step execution
- Job batches — Parallel fan-out for independent steps that can run simultaneously
- Workflow tracking — Database model for status, progress, cost, and step-level output
- Error handling — Provider fallback, critical vs. non-critical failures, automatic retries
- Real-time progress — WebSocket broadcasts for step-by-step updates in the UI
- Conditional branching — Route workflows based on AI output (urgency, sentiment, category)
- Cost control — Per-team budgets, rate limiting, and cost tracking per workflow
- Testing — Faked AI responses for deterministic, zero-cost testing of each pipe and the full pipeline
AI workflows are what turn a simple chatbot into a product worth paying for. The customer feedback analyzer, content repurposing engine, support ticket triage, lead qualification — these are the features that justify $29/mo or $79/mo subscription plans.
The foundation under all of this — authentication, team-based multi-tenancy, subscription billing with plan limits, queue infrastructure, and admin panel — is a non-trivial amount of code. LaraSpeed ships with all of it pre-built. Plug in your AI agents and pipelines, and you have a billable SaaS product.