
fix(go): compat_oai plugin returns incomplete history when streaming #5239

Open

MikeRez0 wants to merge 3 commits into genkit-ai:main from MikeRez0:go-compat_oai-stream-history

Conversation

@MikeRez0 commented May 6, 2026

These changes fix issue #4683.


@google-cla bot commented May 6, 2026

Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

View this failed invocation of the CLA check for more information.

For the most up to date status, view the checks section at the bottom of the pull request.

@github-actions bot added the go label May 6, 2026
@gemini-code-assist bot (Contributor) left a comment


Code Review

This pull request updates the generateStream and convertChatCompletionToModelResponse functions in go/plugins/compat_oai/generate.go to accept and utilize the ai.ModelRequest object, ensuring the request context is preserved in the model response. A review comment identifies that the signature change for convertChatCompletionToModelResponse breaks the generateComplete function, which was not updated in this PR. The feedback includes a code suggestion to fix the compilation error and improve error handling by wrapping errors with contextual information.
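
The bot's inline suggestion itself isn't reproduced on this page. As a rough sketch of the kind of fix it describes, assuming this PR's two-argument converter signature (which the later review asks to revert) and a hypothetical generateComplete body:

```go
// Hedged sketch, not the PR diff: chatCompletion is a placeholder for the
// plugin's real OpenAI call, and the two-argument converter follows this
// PR's signature change. The fmt.Errorf wrapping with %w is the standard
// Go idiom for the contextual error handling the bot suggests.
func (g *generator) generateComplete(ctx context.Context, req *ai.ModelRequest) (*ai.ModelResponse, error) {
	completion, err := g.chatCompletion(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("chat completion request failed: %w", err)
	}
	resp, err := convertChatCompletionToModelResponse(req, completion)
	if err != nil {
		return nil, fmt.Errorf("converting chat completion to model response: %w", err)
	}
	return resp, nil
}
```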

Comment thread on go/plugins/compat_oai/generate.go (outdated)
@MichaelDoyle requested a review from apascal07 on May 6, 2026 at 15:15
@MichaelDoyle (Contributor) commented:

Thanks @MikeRez0! Can you get the CLA taken care of? In the meantime we'll get the review done.

@MikeRez0 (Author) commented May 6, 2026

Already signed

@MikeRez0 (Author) commented:

@apascal07 Hi, Alex! Can you review the changes, please? I want to use streaming mode in my project.

@apascal07 (Collaborator) commented:

A few requested changes:

  1. Keep convertChatCompletionToModelResponse pure: instead of threading req through it, set resp.Request = req in generateStream after the call, mirroring what generateComplete already does (see the sketch after this list). That avoids changing the converter's signature and keeps a single pattern for both paths.
  2. Delete the commented-out // Request: &ai.ModelRequest{}, line rather than leaving it in.
  3. With the above, the existing resp.Request = req in generateComplete stays correct. As written, it's now redundant with the assignment inside the converter.
  4. Add a regression test for the streaming path that asserts resp.Request.Messages (or resp.History()) preserves the input messages — TestGenerator_Stream currently discards the response with _, so this bug could silently return.
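
A minimal sketch of the pattern items 1 to 3 describe. The import path, the generator type, and the streamChatCompletion helper are assumptions standing in for the plugin's real code; only the placement of resp.Request = req comes from the review itself:

```go
package compat_oai

import (
	"context"

	"github.com/firebase/genkit/go/ai" // import path assumed
)

// generator and streamChatCompletion are stand-ins for the plugin's real
// types; convertChatCompletionToModelResponse keeps its original pure,
// single-argument shape.
type generator struct{}

// streamChatCompletion is a placeholder for the real streaming call to the
// OpenAI-compatible API.
func (g *generator) streamChatCompletion(ctx context.Context, req *ai.ModelRequest) (any, error) {
	return nil, nil
}

// convertChatCompletionToModelResponse stays a pure conversion: it never
// touches the request.
func convertChatCompletionToModelResponse(completion any) (*ai.ModelResponse, error) {
	return &ai.ModelResponse{}, nil
}

func (g *generator) generateStream(ctx context.Context, req *ai.ModelRequest) (*ai.ModelResponse, error) {
	completion, err := g.streamChatCompletion(ctx, req)
	if err != nil {
		return nil, err
	}
	resp, err := convertChatCompletionToModelResponse(completion)
	if err != nil {
		return nil, err
	}
	resp.Request = req // set here, mirroring generateComplete
	return resp, nil
}
```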

@apascal07 (Collaborator) left a comment


Comments above.

- Revert the signature change to convertChatCompletionToModelResponse so
  it stays a pure conversion. Set resp.Request = req in generateStream
  after the call, mirroring the pattern already used in generateComplete.
- Drop the misleading Request: &ai.ModelRequest{} default that caused the
  original bug; callers are now responsible for setting it.
- Add a regression test for genkit-ai#4683 that asserts streaming responses
  preserve the input messages on resp.Request.Messages / resp.History()
  (see the test sketch below).
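
A sketch of what that regression test could look like. newTestGenerator is a hypothetical setup helper, and ai.NewUserTextMessage is assumed to match genkit's Go message constructors; none of this is code from the PR:

```go
// TestGenerator_StreamPreservesHistory asserts that a streamed response
// keeps the request attached, so resp.History() still includes the input.
func TestGenerator_StreamPreservesHistory(t *testing.T) {
	g := newTestGenerator(t) // hypothetical helper mirroring TestGenerator_Stream's setup
	req := &ai.ModelRequest{
		Messages: []*ai.Message{ai.NewUserTextMessage("hello")},
	}
	resp, err := g.generateStream(context.Background(), req)
	if err != nil {
		t.Fatal(err)
	}
	// The streaming bug left resp.Request empty, dropping the input from
	// the history; assert the request messages survive the round trip.
	if resp.Request == nil || len(resp.Request.Messages) != len(req.Messages) {
		t.Fatalf("streaming response lost request history: %+v", resp.Request)
	}
}
```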