1. The Three Contenders at a Glance
Before we dive into code, let's establish what each package is and who maintains it:
OpenAI PHP (openai-php/client + openai-php/laravel) is the community-maintained, de facto standard PHP client for the OpenAI API, created by Nuno Maduro (Laravel core team member) and Sandro Gehri. With 5,700+ GitHub stars and 21+ million Packagist downloads, it's the most widely adopted option. It maps 1:1 to the OpenAI API surface — chat completions, embeddings, images (DALL-E), audio (Whisper, TTS), the newer Responses API, and more. The trade-off: it only speaks OpenAI.
Prism (prism-php/prism) is a multi-provider AI abstraction layer for Laravel, created by TJ Miller at Echo Labs. Think of it as the Vercel AI SDK for PHP. It normalizes the API differences between OpenAI, Anthropic, Gemini, Groq, Mistral, DeepSeek, xAI, Ollama, and more behind a single fluent interface. With 2,300+ stars and approaching v1.0 (currently v0.99), it's the go-to choice for developers who want provider flexibility.
Laravel AI SDK (laravel/ai) is the official, first-party AI package released by the Laravel team on February 5, 2026. Here's the key insight that most developers miss: the Laravel AI SDK uses Prism as a dependency under the hood. It requires prism-php/prism: ^0.99.0 in its composer.json. The SDK doesn't reinvent multi-provider abstraction — it delegates that to Prism and adds Laravel-specific features on top: the Agent pattern with Artisan scaffolding, database-backed conversation persistence, SSE streaming with Vercel AI SDK protocol support, queue integration, broadcasting, and comprehensive testing fakes.
2. Architecture: How They Relate to Each Other
Understanding the architecture helps everything else click. These three packages sit at different layers of the stack:
┌─────────────────────────────────────────────────┐
│ Laravel AI SDK (laravel/ai) │
│ Agents, Artisan commands, DB persistence, │
│ queues, broadcasting, testing fakes │
├─────────────────────────────────────────────────┤
│ Prism (prism-php/prism) │
│ Multi-provider abstraction, fluent builder, │
│ tools, structured output, embeddings │
├──────────────────┬──────────────────────────────┤
│ OpenAI driver │ Anthropic, Gemini, Groq... │
└──────────────────┴──────────────────────────────┘
┌─────────────────────────────────────────────────┐
│ OpenAI PHP (openai-php/client) │
│ Direct 1:1 mapping to OpenAI API │
│ Chat, Embeddings, Images, Audio, Assistants, │
│ Responses API, Fine-tuning, Batches │
├─────────────────────────────────────────────────┤
│ OpenAI API only │
└─────────────────────────────────────────────────┘
The Laravel AI SDK wraps Prism, which wraps individual provider APIs. The OpenAI PHP package sits independently — it talks directly to OpenAI's REST API with zero abstraction. This is why comparing them isn't an apples-to-apples exercise. They solve different problems at different levels of abstraction.
A common misconception: you don't need to choose between Prism and the Laravel AI SDK. If you install laravel/ai, you get Prism automatically. The question is whether you need the SDK's higher-level features or whether Prism's lower-level fluent builder is enough.
3. Installation and Setup
OpenAI PHP
composer require openai-php/laravel
php artisan openai:install
# .env
OPENAI_API_KEY=sk-...
OPENAI_ORGANIZATION=org-... # optional
Publishes config/openai.php. You get a Facade and can inject OpenAI\Client via the service container. Simple.
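Beyond the facade, the Laravel integration binds the client into the service container, so a service class can type-hint it instead. A minimal sketch, assuming the package's container bindings; the SummaryService class and model choice are illustrative:

```php
use OpenAI\Contracts\ClientContract;

// Hypothetical service class: the container injects the bound OpenAI client.
class SummaryService
{
    public function __construct(private ClientContract $client) {}

    public function summarize(string $text): string
    {
        $response = $this->client->chat()->create([
            'model' => 'gpt-4o-mini',
            'messages' => [
                ['role' => 'user', 'content' => "Summarize in one sentence: {$text}"],
            ],
        ]);

        return $response->choices[0]->message->content;
    }
}
```

Constructor injection keeps the AI dependency explicit and makes the service easy to swap or mock in tests.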
Prism
composer require prism-php/prism
php artisan vendor:publish --tag=prism-config
# .env — configure whichever providers you need
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
# No key needed for Ollama (local)
Publishes config/prism.php where you configure providers, default models, and timeouts. Add as many providers as you want.
Laravel AI SDK
composer require laravel/ai
php artisan install:ai
The install:ai command publishes config, runs migrations for conversation persistence tables (agent_conversations and agent_conversation_messages), and sets up your .env. Since it depends on Prism, your provider API keys go in the same .env variables.
4. Basic Chat Completion — Side by Side
The simplest test: send a prompt, get a response. Here's the same task in all three packages.
OpenAI PHP
use OpenAI\Laravel\Facades\OpenAI;
$response = OpenAI::chat()->create([
    'model' => 'gpt-4o',
    'messages' => [
        ['role' => 'system', 'content' => 'You are a helpful assistant.'],
        ['role' => 'user', 'content' => 'What is Laravel?'],
    ],
]);

echo $response->choices[0]->message->content;
echo $response->usage->totalTokens;
Direct, explicit, and verbose. You control every parameter because you're writing the raw API payload. If you've used the OpenAI API in Python or JavaScript, this feels immediately familiar.
Prism
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withSystemPrompt('You are a helpful assistant.')
    ->withPrompt('What is Laravel?')
    ->asText();

echo $response->text;
echo $response->usage->promptTokens;
Fluent builder pattern. Swap Provider::OpenAI for Provider::Anthropic and change the model name — everything else stays the same. That's Prism's core value proposition.
Laravel AI SDK
php artisan make:agent Assistant
<?php

namespace App\Ai\Agents;

use Laravel\Ai\Contracts\Agent;
use Laravel\Ai\Promptable;

#[Provider(Lab::OpenAI)]
#[Model('gpt-4o')]
class Assistant implements Agent
{
    use Promptable;

    public function instructions(): string
    {
        return 'You are a helpful assistant.';
    }
}

// Usage:
$response = (new Assistant)->prompt('What is Laravel?');
echo $response->text;
More code upfront to define the agent class, but every subsequent call is a one-liner. The agent encapsulates model, provider, system prompt, and behavior in a reusable, testable class. For a one-off API call, this is overkill. For a SaaS feature that gets called hundreds of times, it's clean architecture.
5. Tool Calling / Function Calling
Tool calling (formerly "function calling") is where the AI model decides to invoke a function you define, gets the result, and continues generating. Here's how each package handles it.
OpenAI PHP
$response = OpenAI::chat()->create([
    'model' => 'gpt-4o',
    'messages' => [
        ['role' => 'user', 'content' => 'What is the weather in Paris?'],
    ],
    'tools' => [
        [
            'type' => 'function',
            'function' => [
                'name' => 'get_weather',
                'description' => 'Get current weather for a location',
                'parameters' => [
                    'type' => 'object',
                    'properties' => [
                        'city' => [
                            'type' => 'string',
                            'description' => 'The city name',
                        ],
                    ],
                    'required' => ['city'],
                ],
            ],
        ],
    ],
]);

// You must manually handle the tool call loop:
$toolCall = $response->choices[0]->message->toolCalls[0];
$args = json_decode($toolCall->function->arguments, true);
$result = getWeather($args['city']); // your function

// Send the result back in a follow-up request
// (serialize the response object back into the raw API payload shape)
$followUp = OpenAI::chat()->create([
    'model' => 'gpt-4o',
    'messages' => [
        ['role' => 'user', 'content' => 'What is the weather in Paris?'],
        ['role' => 'assistant', 'tool_calls' => [$toolCall->toArray()]],
        ['role' => 'tool', 'tool_call_id' => $toolCall->id, 'content' => $result],
    ],
]);
With OpenAI PHP, you manage the tool-calling loop yourself. You parse the tool call from the response, execute your function, then send the result back in a new API call. This gives you full control but requires significant boilerplate for multi-step tool chains.
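For multi-step chains, that boilerplate typically grows into a loop. A hedged sketch of what such a loop might look like — it assumes the $tools definitions and getWeather() helper from above, and elides dispatching by tool name:

```php
$messages = [['role' => 'user', 'content' => 'What is the weather in Paris?']];

do {
    $response = OpenAI::chat()->create([
        'model' => 'gpt-4o',
        'messages' => $messages,
        'tools' => $tools, // the same tool definitions as above
    ]);

    $message = $response->choices[0]->message;

    if (empty($message->toolCalls)) {
        break; // the model produced a final text answer
    }

    // Echo the assistant turn back, then append one result per tool call
    $messages[] = [
        'role' => 'assistant',
        'tool_calls' => array_map(fn ($call) => $call->toArray(), $message->toolCalls),
    ];

    foreach ($message->toolCalls as $call) {
        $args = json_decode($call->function->arguments, true);
        $messages[] = [
            'role' => 'tool',
            'tool_call_id' => $call->id,
            // Real code would dispatch on $call->function->name
            'content' => getWeather($args['city']),
        ];
    }
} while (true);

echo $message->content;
```

This is roughly the loop that Prism and the Laravel AI SDK run for you internally.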
Prism
use Prism\Prism\Facades\Prism;
use Prism\Prism\Facades\Tool;
use Prism\Prism\Enums\Provider;
$weatherTool = Tool::as('get_weather')
    ->for('Get current weather for a location')
    ->withStringParameter('city', 'The city name')
    ->using(function (string $city): string {
        return getWeather($city);
    });

$response = Prism::text()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withPrompt('What is the weather in Paris?')
    ->withTools([$weatherTool])
    ->withMaxSteps(3)
    ->asText();

echo $response->text;
Prism handles the tool-calling loop automatically. You define tools with a fluent builder, set withMaxSteps(), and Prism runs the back-and-forth until the model produces a final text response. Same behavior across all providers.
Laravel AI SDK
php artisan make:tool GetWeather
<?php

namespace App\Ai\Tools;

use Illuminate\Contracts\JsonSchema\JsonSchema;
use Laravel\Ai\Contracts\Tool;
use Laravel\Ai\Tools\Request;

class GetWeather implements Tool
{
    public function description(): string
    {
        return 'Get current weather for a location';
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'city' => $schema->string()->description('The city name')->required(),
        ];
    }

    public function handle(Request $request): string
    {
        return getWeather($request['city']);
    }
}

// In your agent class:
class WeatherAssistant implements Agent, HasTools
{
    use Promptable;

    public function instructions(): string
    {
        return 'You help with weather queries.';
    }

    public function tools(): iterable
    {
        return [new GetWeather];
    }
}
The SDK scaffolds tool classes with make:tool. Tools are proper classes with typed schemas, dependency injection support, and reusability across multiple agents. The tool loop is handled automatically, just like Prism.
6. Structured Output
Structured output forces the AI to return data in a specific JSON schema instead of free-form text. Essential for building reliable SaaS features.
OpenAI PHP
$response = OpenAI::chat()->create([
    'model' => 'gpt-4o',
    'messages' => [
        ['role' => 'user', 'content' => 'Extract: John Doe, age 30, engineer'],
    ],
    'response_format' => [
        'type' => 'json_schema',
        'json_schema' => [
            'name' => 'person',
            'strict' => true,
            'schema' => [
                'type' => 'object',
                'properties' => [
                    'name' => ['type' => 'string'],
                    'age' => ['type' => 'integer'],
                    'job' => ['type' => 'string'],
                ],
                'required' => ['name', 'age', 'job'],
                'additionalProperties' => false,
            ],
        ],
    ],
]);

$data = json_decode($response->choices[0]->message->content, true);
// $data['name'] = 'John Doe', $data['age'] = 30, $data['job'] = 'engineer'
You write the JSON Schema by hand. Strict mode guarantees the output matches. The result is a JSON string that you json_decode yourself. OpenAI-only feature.
Prism
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;
use Prism\Prism\Schema\ObjectSchema;
use Prism\Prism\Schema\StringSchema;
use Prism\Prism\Schema\NumberSchema;
$schema = new ObjectSchema(
    name: 'person',
    description: 'Extracted person data',
    properties: [
        new StringSchema('name', 'Full name'),
        new NumberSchema('age', 'Age in years'),
        new StringSchema('job', 'Job title'),
    ],
    requiredFields: ['name', 'age', 'job'],
);

$response = Prism::structured()
    ->using(Provider::OpenAI, 'gpt-4o')
    ->withSchema($schema)
    ->withPrompt('Extract: John Doe, age 30, engineer')
    ->asStructured();

$data = $response->structured;
// $data['name'] = 'John Doe', $data['age'] = 30, $data['job'] = 'engineer'
Provider-agnostic schema definitions. Prism translates the schema to each provider's native format (OpenAI's json_schema, Anthropic's tool-calling mode, etc.). Same code works across providers.
Laravel AI SDK
class PersonExtractor implements Agent, HasStructuredOutput
{
    use Promptable;

    public function instructions(): string
    {
        return 'Extract structured person data from text.';
    }

    public function schema(JsonSchema $schema): array
    {
        return [
            'name' => $schema->string()->required(),
            'age' => $schema->integer()->required(),
            'job' => $schema->string()->required(),
        ];
    }
}

// Usage:
$data = (new PersonExtractor)->prompt('Extract: John Doe, age 30, engineer');
// $data['name'] = 'John Doe', $data['age'] = 30, $data['job'] = 'engineer'
The schema lives inside the agent class using Laravel's illuminate/json-schema package. The result is already decoded — you access fields with array syntax directly. Schema validation and retries are handled by the SDK.
7. Streaming Responses
OpenAI PHP
$stream = OpenAI::chat()->createStreamed([
    'model' => 'gpt-4o',
    'messages' => [['role' => 'user', 'content' => 'Tell me a story.']],
]);

foreach ($stream as $chunk) {
    echo $chunk->choices[0]->delta->content ?? '';
}
Returns a PHP iterator of chunks. Simple and direct, but you handle SSE framing and HTTP response headers yourself if you want to stream to a browser.
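One way to bridge that gap yourself is Symfony's StreamedResponse with hand-rolled SSE framing. A sketch, not part of the package's API; the route path, headers, and JSON payload shape are all choices you'd make:

```php
use Illuminate\Support\Facades\Route;
use OpenAI\Laravel\Facades\OpenAI;
use Symfony\Component\HttpFoundation\StreamedResponse;

Route::post('/ai/stream', function () {
    return new StreamedResponse(function () {
        $stream = OpenAI::chat()->createStreamed([
            'model' => 'gpt-4o',
            'messages' => [['role' => 'user', 'content' => 'Tell me a story.']],
        ]);

        foreach ($stream as $chunk) {
            $delta = $chunk->choices[0]->delta->content ?? '';
            if ($delta !== '') {
                // SSE frame: "data: ...\n\n"
                echo 'data: ' . json_encode(['text' => $delta]) . "\n\n";
                if (ob_get_level() > 0) {
                    ob_flush();
                }
                flush();
            }
        }

        echo "data: [DONE]\n\n";
    }, 200, [
        'Content-Type' => 'text/event-stream',
        'Cache-Control' => 'no-cache',
        'X-Accel-Buffering' => 'no', // stop nginx from buffering the stream
    ]);
});
```

Workable, but it's exactly the plumbing the Laravel AI SDK's stream() method handles for you.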
Prism
$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-sonnet-4-5-20250929')
    ->withPrompt('Tell me a story.')
    ->asStream();

foreach ($response as $chunk) {
    echo $chunk->text;
}
Unified streaming interface across all providers. You can also access $chunk->usage for token counts on supported providers.
Laravel AI SDK
// In a route — returns a proper SSE response
Route::post('/ai/chat', function (Request $request) {
    return (new StoryAgent)->stream($request->input('prompt'));
});

// With Vercel AI SDK data protocol (for React/Next.js frontends)
return (new StoryAgent)->stream($prompt)->usingVercelDataProtocol();

// Broadcast to multiple clients via WebSockets
(new StoryAgent)->broadcastOnQueue($prompt, new Channel('stories'));
The SDK handles SSE response headers, supports the Vercel AI SDK data protocol out of the box, and can broadcast streaming responses over WebSocket channels via Laravel Reverb. This is where the SDK pulls significantly ahead for real-time SaaS features.
8. Multi-Provider Support
This is the single biggest differentiator between the three packages.
| Provider | OpenAI PHP | Prism | Laravel AI SDK |
|---|---|---|---|
| OpenAI (GPT-4o, o1, etc.) | Yes | Yes | Yes |
| Anthropic (Claude) | No | Yes | Yes |
| Google Gemini | No | Yes | Yes |
| Groq | No | Yes | Yes |
| Mistral | No | Yes | Yes |
| DeepSeek | No | Yes | Yes |
| xAI (Grok) | No | Yes | Yes |
| Ollama (local) | No | Yes | Yes |
| OpenRouter | No | Yes | Yes |
| Provider failover | No | No | Yes |
OpenAI PHP is locked to OpenAI. If you want to add Claude as a fallback or use a cheaper model from Groq for low-priority tasks, you need a second package.
Prism supports 9+ providers through a unified interface. Switching is a one-line change:
// Switch from OpenAI to Claude — only these two lines change:
->using(Provider::Anthropic, 'claude-sonnet-4-5-20250929')
Laravel AI SDK inherits all of Prism's providers and adds automatic failover. Pass an array of providers and the SDK falls back to the next one if the primary is down:
#[Provider([Lab::OpenAI, Lab::Anthropic, Lab::Gemini])]
class MyAgent implements Agent { ... }
9. Agent Pattern and Conversation Persistence
This is where the Laravel AI SDK has no competition. Neither OpenAI PHP nor Prism offer a built-in agent abstraction or conversation persistence.
OpenAI PHP — Manual Implementation
// You'd build this yourself:
// 1. Create a conversations table
// 2. Store messages manually after each API call
// 3. Load history before each request
// 4. Manage token limits and context windows

$history = Message::where('conversation_id', $id)
    ->orderBy('created_at')
    ->get()
    ->map(fn ($m) => ['role' => $m->role, 'content' => $m->content])
    ->toArray();

$history[] = ['role' => 'user', 'content' => $newMessage];

$response = OpenAI::chat()->create([
    'model' => 'gpt-4o',
    'messages' => $history,
]);

Message::create([
    'conversation_id' => $id,
    'role' => 'assistant',
    'content' => $response->choices[0]->message->content,
]);
Prism — Manual Implementation
// Same situation — Prism handles the API call but not persistence:
use Prism\Prism\ValueObjects\Messages\UserMessage;

$messages = loadConversationHistory($conversationId); // you build this
$messages[] = new UserMessage($newMessage);

$response = Prism::text()
    ->using(Provider::Anthropic, 'claude-sonnet-4-5-20250929')
    ->withMessages($messages)
    ->asText();

saveMessage($conversationId, 'assistant', $response->text); // you build this
Laravel AI SDK — Built-In
class SupportAgent implements Agent, Conversational, HasTools
{
    use Promptable, RemembersConversations;

    public function instructions(): string
    {
        return 'You are a customer support agent for our SaaS platform.';
    }

    public function tools(): iterable
    {
        return [new SearchDocs];
    }
}

// Start a conversation:
$response = (new SupportAgent)->forUser($user)->prompt('How do I reset my password?');
$conversationId = $response->conversationId;

// Continue it later (history is loaded from the database automatically):
$response = (new SupportAgent)
    ->continue($conversationId, as: $user)
    ->prompt('What about two-factor authentication?');
Two lines to start, two lines to continue. The SDK handles message storage, history loading, and context management in the agent_conversations and agent_conversation_messages tables. For a SaaS application with customer-facing AI chat, this saves days of development time.
10. Testing
OpenAI PHP
use OpenAI\Laravel\Facades\OpenAI;
use OpenAI\Resources\Chat;
use OpenAI\Responses\Chat\CreateResponse;

OpenAI::fake([
    CreateResponse::fake([
        'choices' => [
            ['message' => ['content' => 'Mocked response']],
        ],
    ]),
]);

// Run your code...

OpenAI::assertSent(Chat::class, function (string $method, array $params): bool {
    return $method === 'create' && $params['model'] === 'gpt-4o';
});
Solid fake/assert system. You mock full response objects with CreateResponse::fake(). Works well but requires knowledge of the OpenAI response structure.
Prism
use Prism\Prism\Facades\Prism;
use Prism\Prism\Testing\TextResponseFake;
use Prism\Prism\ValueObjects\Usage;

$fake = Prism::fake([
    TextResponseFake::make()
        ->withText('Mocked response')
        ->withUsage(new Usage(10, 20)),
]);

// Run your code...

$fake->assertCallCount(1);
$fake->assertPrompt('What is Laravel?');
Clean fakes with PrismFake. Provider-agnostic assertions — you test the prompt and response, not the provider-specific API structure.
Laravel AI SDK
use App\Ai\Agents\SupportAgent;

test('support agent responds correctly', function () {
    SupportAgent::fake(['I can help you with that.']);

    $response = $this->postJson('/ai/chat', ['message' => 'Help me']);

    $response->assertOk();
    SupportAgent::assertPrompted(fn ($prompt) =>
        str_contains($prompt->prompt, 'Help me')
    );
});

test('no stray AI calls in tests', function () {
    SupportAgent::fake()->preventStrayPrompts();
    // Any un-faked AI call now throws an exception
});
Agent-level fakes are the cleanest. You fake the agent, not the HTTP layer. The preventStrayPrompts() method ensures no test accidentally makes a real API call — add it to your base test case for safety. The SDK also provides fakes for Image, Audio, Transcription, and Embeddings.
11. Feature Comparison Table
| Feature | OpenAI PHP | Prism | Laravel AI SDK |
|---|---|---|---|
| Maintainer | Nuno Maduro | TJ Miller | Laravel Team |
| GitHub Stars | 5,700+ | 2,300+ | New (Feb 2026) |
| Multi-provider | OpenAI only | 9+ providers | 9+ (via Prism) |
| Provider failover | No | No | Yes |
| Chat completions | Yes | Yes | Yes |
| Tool calling | Manual loop | Auto loop | Auto + scaffolding |
| Structured output | Yes (raw JSON) | Yes (typed) | Yes (typed) |
| Streaming | Yes (iterator) | Yes (unified) | SSE + Vercel + broadcast |
| Agent pattern | No | No | Yes + Artisan |
| Conversation persistence | Manual | Manual | Built-in (DB) |
| Queue integration | Manual | Manual | Built-in |
| Testing fakes | Yes | Yes | Agent-level + preventStray |
| Image generation | DALL-E | No | Yes |
| Audio / TTS | Whisper + TTS | No | Yes |
| Embeddings | Yes | Yes | Yes + vector stores |
| Fine-tuning API | Yes | No | No |
| Responses API | Yes | No | No |
| Abstraction level | Low (raw API) | Medium (fluent) | High (agents) |
| Min PHP version | 8.2 | 8.2 | 8.3 |
| Min Laravel version | Any (via DI) | 11+ | 12+ |
12. When to Use Which — Decision Framework
Use OpenAI PHP when:
- You're committed to OpenAI and don't need provider switching. If GPT-4o is your model and you have no plans to change, the abstraction layers of Prism and the SDK add complexity without benefit.
- You need OpenAI-specific features like DALL-E image generation, Whisper transcription, the Responses API, fine-tuning management, batch processing, or real-time streaming with ephemeral tokens. These APIs aren't covered by Prism or the SDK.
- You want maximum control. Every API parameter is exposed. No magic, no abstraction. You see exactly what's sent and what comes back.
- You're not on Laravel. OpenAI PHP works with any PHP framework. The base client (openai-php/client) is framework-agnostic.
Use Prism when:
- You want provider flexibility without the full SDK. Prism is leaner and more focused — text generation, tools, structured output, embeddings. No agents, no migrations, no extra database tables.
- You're on Laravel 11. The official AI SDK requires Laravel 12+. If you're not ready to upgrade, Prism works with Laravel 11.
- You need a lightweight solution. If you're building a simple feature (translate this text, summarize this document, classify this support ticket) and don't need conversation persistence or agent classes, Prism's fluent builder is the sweet spot.
- You want to evaluate providers. Testing the same prompt across OpenAI, Claude, Gemini, and Groq is a one-line change per provider. Great for benchmarking cost vs. quality.
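A rough benchmarking harness along those lines might look like this (the candidate model names are illustrative examples, not recommendations):

```php
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

// Example candidates; swap in whatever provider/model pairs you're evaluating
$candidates = [
    [Provider::OpenAI, 'gpt-4o'],
    [Provider::Anthropic, 'claude-sonnet-4-5-20250929'],
    [Provider::Groq, 'llama-3.3-70b-versatile'],
];

foreach ($candidates as [$provider, $model]) {
    $response = Prism::text()
        ->using($provider, $model)
        ->withPrompt('Summarize Laravel in one sentence.')
        ->asText();

    // Log the text plus token usage to eyeball cost vs. quality
    logger()->info("{$provider->value}/{$model}", [
        'text' => $response->text,
        'prompt_tokens' => $response->usage->promptTokens,
        'completion_tokens' => $response->usage->completionTokens,
    ]);
}
```

Because only the using() call differs, the comparison stays honest: identical prompt, identical parsing of the result.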
Use the Laravel AI SDK when:
- You're building production AI features in a SaaS app. Customer support chatbots, AI writing assistants, analytics agents — anything that needs conversation persistence, rate limiting per user, queue-based processing, and proper testing. The SDK was designed for exactly this.
- You want the Agent pattern. Encapsulating AI behavior in classes with make:agent scaffolding, PHP attributes for configuration, middleware for logging, and RemembersConversations for persistence is the most maintainable architecture for AI features.
- You need streaming to a frontend. Built-in SSE response handling, Vercel AI SDK protocol support, and WebSocket broadcasting via Reverb are significant time-savers.
- You want first-party support. The SDK is maintained by the Laravel team, documented on laravel.com, and will evolve alongside the framework. For long-term projects, this matters.
- You want to build AI agents with tool-calling, structured output, and multi-step reasoning. The SDK's agent architecture handles the complexity.
Can you use them together?
Yes. A common pattern: use the Laravel AI SDK for your core AI agent features (customer support, content generation, analytics) and keep OpenAI PHP alongside it for OpenAI-specific features like image generation with DALL-E, audio transcription with Whisper, or the Responses API. They coexist in the same project without conflict.
You can also use Prism directly even when the SDK is installed — for quick one-off calls where creating an agent class would be overkill:
// Quick one-off translation — no agent needed
use Prism\Prism\Facades\Prism;
use Prism\Prism\Enums\Provider;

$translated = Prism::text()
    ->using(Provider::Anthropic, 'claude-haiku-4-5-20251001')
    ->withPrompt("Translate to French: {$text}")
    ->asText()
    ->text;
13. Conclusion
The best AI package for Laravel in 2026 depends on what you're building:
- OpenAI PHP — the battle-tested workhorse when you're all-in on OpenAI and need maximum API coverage. 21 million downloads speak for themselves.
- Prism — the lean, flexible middleware layer when you want multi-provider abstraction without framework-level opinions. Approaching v1.0 and production-proven.
- Laravel AI SDK — the full-stack solution when you're building serious AI-powered SaaS features. Agents, persistence, queues, streaming, testing — all first-party.
Remember: the SDK doesn't replace Prism — it builds on it. And none of these replace OpenAI PHP for OpenAI-specific features like DALL-E, Whisper, and fine-tuning. Choose by use case, not by hype.
For most Laravel 12 SaaS projects starting today, our recommendation is clear: start with the Laravel AI SDK. You get Prism's provider flexibility, the agent pattern for clean architecture, and all the Laravel integrations (queues, broadcasting, testing) that make production features possible. If you later need DALL-E or Whisper, add openai-php/laravel alongside it.