Conversation

@JustSteveKing
This pull request introduces a new proposed standard, PSR-AI, which defines a common interface for interacting with AI models in PHP. The document outlines the motivation, principles, and a set of interfaces for model communication, streaming, tool invocation, and response handling. It aims to provide a framework-agnostic, provider-neutral abstraction for AI integration, similar to prior PSRs for logging and HTTP clients.

Core interface definitions:

  • Introduces ClientInterface, MessageInterface, StreamInterface, ToolInterface, and ResponseInterface in the Psr\Ai namespace, specifying required methods and behaviors for each to standardize AI model interaction in PHP.

Standardization principles and compatibility:

  • Establishes basic principles for interoperability, immutability, streaming, and tool serialization, with explicit compatibility with the Model Context Protocol (MCP).

Usage examples and ABNF definitions:

  • Provides example usage for synchronous calls, streaming responses, and tool registration, along with ABNF grammar for message and streaming semantics.

Reference materials:

  • Includes an extensive reference list covering related RFCs, PSRs, protocols, APIs, and PHP language features to guide implementation and ensure broad compatibility.

@JustSteveKing JustSteveKing self-assigned this Nov 10, 2025
@JustSteveKing JustSteveKing requested a review from a team as a code owner November 10, 2025 12:15
@alexander-schranz
ping @wachterjohannes, who has a lot of knowledge about this topic as creator of https://github.com/modelflow-ai/.github/, an abstraction around multiple AI clients.

Also @chr-hertel, creator of llm-chain and now driving part of Symfony AI.

Contributor

@KorvinSzanto KorvinSzanto left a comment

I like the idea, and I think these concepts are mature enough to warrant a PSR, but I worry about this specific implementation and how it might stunt implementors' ability to use LLMs as they exist today.

One extra question I had (I think I raised it in Discord): is this intended to be used with embedding requests?

This PSR aims to standardize how PHP applications and frameworks:
- Send messages to AI models.
- Receive structured and streamed responses.
- Register callable tools or functions for model invocation.
Contributor

Where is tool registration codified?

- Send messages to AI models.
- Receive structured and streamed responses.
- Register callable tools or functions for model invocation.
- Maintain interoperability with the [Model Context Protocol (MCP)](https://modelcontextprotocol.io/).
Contributor

How is this PSR facilitating MCP? Wouldn't an MCP server be implemented using the HTTP PSRs? I suppose `ToolInterface` would be the only thing from here that'd make sense to use, unless your MCP-exposed tools need to interact with an LLM.

*/
public function stream(MessageInterface|array $message): StreamInterface;

public function invokeTool(ToolInterface $tool, array $arguments): mixed;
Contributor

Why is this here? What benefit does the client add by running `$tool->execute($arguments)` for you?

**Rules:**
- `complete()` MUST perform a synchronous model call and return a finalized `ResponseInterface`.
- `stream()` MUST return a `StreamInterface` that yields incremental output as available.
- `invokeTool()` MUST execute a callable tool in the same process or over MCP.
Contributor

Huh? So this `ClientInterface` is both a client for an MCP server and for an LLM API? We probably want to separate the MCP client out into its own interface or its own PSR.

As currently written, the client can't really decide how to invoke a tool: the tool's execution is encompassed by `ToolInterface::execute`, so the tool decides how and where to actually run. You'd probably be better served splitting the value of a tool from the actual execution of a tool.
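
One possible shape for that split (all names here are illustrative, not part of the proposal) might be:

```php
namespace Psr\Ai;

// Hypothetical split: the definition carries only metadata and schema,
// so it can be serialized for the model or advertised over MCP...
interface ToolDefinitionInterface
{
    public function name(): string;

    public function description(): string;

    /** JSON-Schema-style description of the accepted arguments. */
    public function inputSchema(): array;
}

// ...while execution lives behind a separate contract, letting the
// implementor decide whether a tool runs in-process, over MCP, or not at all.
interface ToolExecutorInterface
{
    public function execute(ToolDefinitionInterface $tool, array $arguments): mixed;
}
```

With this shape, the client only ever sees definitions; whoever wires up the executor decides where execution actually happens.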

{
public function onToken(callable $callback): void;
public function onError(callable $callback): void;
public function onComplete(callable $callback): void;
Contributor

IMO these methods should go away; if we want this functionality codified, I'd recommend using the event dispatcher PSR (PSR-14).
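
As a hedged sketch, a stream built on PSR-14 instead of bespoke callbacks could look roughly like this (the event and class names are invented for illustration):

```php
namespace Psr\Ai;

use Psr\EventDispatcher\EventDispatcherInterface;

// Invented event object carrying one incremental token.
final class TokenReceived
{
    public function __construct(public readonly string $token)
    {
    }
}

final class DispatchingStream
{
    public function __construct(private readonly EventDispatcherInterface $dispatcher)
    {
    }

    // Instead of onToken()/onError()/onComplete(), the stream dispatches
    // events that any PSR-14 compatible listener provider can subscribe to.
    public function push(string $token): void
    {
        $this->dispatcher->dispatch(new TokenReceived($token));
    }
}
```

That keeps the callback-wiring concern in an existing PSR rather than re-inventing it here.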

**Rules:**
- Tools MUST define their input/output schema for validation and serialization.
- Implementations SHOULD align schema definitions with the [Model Context Protocol](https://modelcontextprotocol.io/).
- `execute()` MUST return serializable output or throw an exception.
Contributor

Executing a tool should modify the message list. A tool will typically output one or more messages that represent the tool output, but we probably also want a path for tools to remove messages from the list. A "tool" might also take the form of prompting the user for more information, or a long-running, multi-process action that completes after a significant amount of time.

We probably want to provide more context to the execute function wherever it ends up; just giving it the arguments and nothing else will stunt the ability of the tools. We might consider a ToolContextInterface that includes the client, an identifier for the model invoking the tool, the current list of messages, etc.
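
A rough sketch of such a context object (every name here is hypothetical):

```php
namespace Psr\Ai;

interface ToolContextInterface
{
    /** The client, so a tool can make follow-up model calls. */
    public function client(): ClientInterface;

    /** Identifier of the model invocation that triggered this tool call. */
    public function invocationId(): string;

    /** @return MessageInterface[] the current conversation state */
    public function messages(): array;
}

// execute() would then receive the context alongside the raw arguments:
//
//     public function execute(array $arguments, ToolContextInterface $context): mixed;
```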

```php
namespace Psr\Ai;

interface MessageInterface
```
Contributor

This needs to also encompass tool use and tool results. The actual message used will be different depending on the model/API in use.
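
One illustrative way to model that (these interfaces are assumptions, not part of the draft) is to make tool calls and tool results first-class message kinds:

```php
namespace Psr\Ai;

// A model asking for a tool to be run...
interface ToolCallMessageInterface extends MessageInterface
{
    public function toolName(): string;

    /** Provider-assigned id used to correlate the call with its result. */
    public function callId(): string;

    public function arguments(): array;
}

// ...and the serializable result fed back into the conversation.
interface ToolResultMessageInterface extends MessageInterface
{
    public function callId(): string;

    public function result(): mixed;
}
```

Implementations could then map these onto whatever wire format the provider expects.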


$response = $client->complete($message);

echo $response->message()->content(); // "Paris"
Contributor

Wouldn't this pretty much always be something like `[['type' => 'text', 'text' => 'Paris']]`? We can't really rely on strings being output here, because the content can be images, audio, video, text, or whatever else people think up in the future.
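
A sketch of that typed-parts idea, loosely modeled on how current provider APIs shape content (the interfaces are assumptions, not part of the draft):

```php
namespace Psr\Ai;

interface ContentPartInterface
{
    /** e.g. "text", "image", "audio"; open-ended for future modalities. */
    public function type(): string;
}

interface TextPartInterface extends ContentPartInterface
{
    public function text(): string;
}

// MessageInterface::content() could then return ContentPartInterface[]
// instead of a bare string, so non-text output has somewhere to live.
```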

$stream = $client->stream(new UserMessage('Write a haiku about PHP.'));

$stream->onToken(fn ($token) => print $token); // print, not echo: echo is a statement and cannot be an arrow-fn body
$stream->onComplete(fn () => print "\nDone.");
Contributor

Again, I don't think this is a very ergonomic way to interact with the stream; instead it should be `foreach ($stream as $chunk) { ... }` or `$client->eventDispatcher->addListener('some_codified_event', fn($chunk) => ...);`.

At the very least you're missing something that actually causes the stream to be read.
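
A minimal sketch of the iteration-based alternative, assuming the stream were declared Traversable (the interface name is invented for illustration):

```php
namespace Psr\Ai;

// Hypothetical: make the stream iterable, so iterating it is the thing
// that actually reads from the underlying response.
interface IterableStreamInterface extends \IteratorAggregate
{
    /** @return \Traversable<int, string> yields chunks as they arrive */
    public function getIterator(): \Traversable;
}

// Usage: the foreach drives the underlying I/O.
//
//     foreach ($client->stream($message) as $chunk) {
//         echo $chunk;
//     }
```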

```
stream = 1*(token / event)
token = 1*CHAR
event = "[" event-type ":" data "]"
```
Contributor

This doesn't really match up with my experience using streaming responses from LLMs, FWIW.

@chr-hertel

Hey, thanks for the ping! For me this comes too early, as I couldn't possibly predict yet what an abstracted standardization would look like - but that's potentially a matter of perspective; I'm not saying there is no purpose or benefit in this. It's rather a matter of timing and approach for me.

I'm currently exploring how we build interfaces for Symfony AI, and I can tell that some of my assumptions have been invalidated over and over again in recent months. As for MCP, I'd rather advise decoupling, since the MCP project itself is evolving super fast and "maintain interoperability" is quite an optimistic goal.

I still see some pain points, that I'd love to address as one PHP ecosystem instead of inventing the wheel multiple times:

  • Native Model Inference - would be a bummer to have multiple competing extensions => ort
  • MCP SDK - based on the way the MCP project is set up, it would be great to come together in one SDK, but that's also currently not that easy - time will tell ...

Everything else is tough ... even with players like OpenAI explicitly trying to overthrow "standards" they created themselves (renaming the `system` prompt to `developer`, deprecating `chat/completions` in favor of `responses`), or Google suddenly introducing server tools and changing yet another unexpected dimension...

Lastly, I know that for some it works to come from an academic and conceptual point of view, but I prefer to level up an abstraction from proven implementations - so I would rather wait two years or so and discuss the commonalities of InstructorPHP, Prism, Drupal AI, LLPhant, Symfony AI, and others, to derive standards and create synergies bottom-up. It will help with acceptance anyway.

Cheers!
