
refactor(js)!: remove Chat class and associated session methods#5248

Draft
pavelgj wants to merge 5 commits into pj/init-transport from pj/remove-chat

Conversation

Member

pavelgj commented May 6, 2026

Parent PR: #5247

BREAKING: the Chat class and the associated session chat methods are being replaced by the new agents API.

pavelgj changed the title from "refactor!: remove Chat class and associated session methods" to "refactor(js)!: remove Chat class and associated session methods" on May 6, 2026
Contributor

gemini-code-assist (bot) left a comment


Code Review

This pull request removes the legacy ai.chat and Session.chat implementation, deprecating the preamble-based agent handoff system. It introduces a new orchestration pattern for multi-agent systems using interrupts and manual conversation loops. Review feedback identifies significant issues in the new manual loop implementation within multiAgentMultiModel.ts, where conversation history is managed incorrectly, leading to duplicate user messages and incomplete context. Refactoring is suggested to use the response messages as the source of truth for history and to ensure proper tool response handling during agent transfers.

Comment on lines 31 to 57

 async (input) => {
-  const chat = ai.chat(triageAgent);
-  const response = await chat.send(input.userInput);
-  return response.text;
+  let currentAgent: ExecutablePrompt<any> = triageAgent;
+  let textInput = input.userInput;
+  const history: any[] = [];
+
+  while (true) {
+    history.push({ role: 'user' as const, content: [{ text: textInput }] });
+    const response = await currentAgent(textInput, { messages: history });
+
+    if (response.finishReason === 'interrupted') {
+      const interrupt = response.interrupts.find(
+        (i) => i.toolRequest?.name === 'transferToAgent'
+      );
+      if (interrupt) {
+        const agentName = (interrupt.toolRequest.input as any).agentName;
+
+        // Resolve the target specialist agent prompt
+        currentAgent = ai.prompt(agentName);
+        textInput = 'Please continue with the new specialist.';
+        continue;
+      }
+    }
+
+    return response.text;
+  }
 }
);
Contributor


high

The current implementation of the conversation loop has a few issues with history management:

  1. Duplicate User Messages: On line 37, history.push() adds the user's message to the history array. Then, on line 38, currentAgent(textInput, { messages: history }) is called. This also passes textInput as a prompt, which the generate function converts into another user message. This results in duplicate user messages being sent to the model.
  2. Incomplete History: The history array only accumulates user messages (textInput) from each turn. It doesn't include the model's responses, leading to an incomplete conversation history for subsequent turns.
  3. Missing Tool Response: When an agent transfer occurs, the conversation history for the next agent is missing a tool response message to correspond to the transferToAgent toolRequest.
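To make issue 1 concrete, here is a self-contained sketch in plain TypeScript (no Genkit dependency; assembleMessages is a hypothetical stand-in for how a generate call combines the prompt argument with the messages option):

```typescript
type MessageData = {
  role: 'user' | 'model' | 'tool';
  content: { text?: string }[];
};

// Hypothetical stand-in for request assembly: the supplied history first,
// then the prompt appended as a fresh user message.
function assembleMessages(prompt: string, history: MessageData[]): MessageData[] {
  return [...history, { role: 'user', content: [{ text: prompt }] }];
}

// The buggy pattern from the loop: push the user turn into history,
// then also pass the same text as the prompt.
const history: MessageData[] = [];
history.push({ role: 'user', content: [{ text: 'hello' }] });
const sent = assembleMessages('hello', history);

// The same user message now appears twice in the assembled request.
console.log(sent.filter((m) => m.role === 'user').length); // 2
```

Under that assumption, every turn of the loop sends the user's text twice, which is why the history should come from a single source of truth instead.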

I've suggested a refactoring of the loop to correctly manage the conversation history by using response.messages as the source of truth and manually adding the tool response for the interrupt.

  async (input) => {
    let currentAgent: ExecutablePrompt<any> = triageAgent;
    let textInput: string | undefined = input.userInput;
    let history: MessageData[] = [];

    while (true) {
      const response = await currentAgent(textInput, { messages: history });
      history = response.messages;
      textInput = undefined;

      if (response.finishReason === 'interrupted') {
        const interrupt = response.interrupts.find(
          (i) => i.toolRequest?.name === 'transferToAgent'
        );
        if (interrupt) {
          const agentName = (interrupt.toolRequest.input as any).agentName;

          history.push({
            role: 'tool',
            content: [
              {
                toolResponse: {
                  name: 'transferToAgent',
                  ref: interrupt.toolRequest.ref,
                  output: 'Transferring to ' + agentName + '.',
                },
              },
            ],
          });

          currentAgent = ai.prompt(agentName);
          textInput = 'Please continue with the new specialist.';
          continue;
        }
      }

      return response.text;
    }
  }
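As a sanity check on the tool-response shape used above, here is a plain-TypeScript sketch (makeTransferResponse is an illustrative helper, not part of the Genkit API) showing that the response echoes the interrupt's ref so the model can pair it with the pending transferToAgent toolRequest:

```typescript
type ToolResponsePart = {
  toolResponse: { name: string; ref?: string; output: unknown };
};
type ToolMessage = { role: 'tool'; content: ToolResponsePart[] };

// Illustrative helper: build the tool message that answers a
// transferToAgent toolRequest, carrying the same ref as the request.
function makeTransferResponse(ref: string | undefined, agentName: string): ToolMessage {
  return {
    role: 'tool',
    content: [
      {
        toolResponse: {
          name: 'transferToAgent',
          ref,
          output: 'Transferring to ' + agentName + '.',
        },
      },
    ],
  };
}

const msg = makeTransferResponse('ref-1', 'billingAgent');
console.log(msg.content[0].toolResponse.ref); // ref-1
console.log(msg.content[0].toolResponse.output); // Transferring to billingAgent.
```

Without this message, the next agent would see a toolRequest with no matching toolResponse, which many models reject as malformed history.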
 References
   1. When a tool fails or requires a state update in the conversation history, add a message to the history queue to inform the LLM and return control.
 */

-import { z } from 'genkit';
+import { ExecutablePrompt, z } from 'genkit';
Contributor


medium

To support strongly-typed conversation history in the refactored logic below, please import MessageData.

Suggested change

-import { ExecutablePrompt, z } from 'genkit';
+import { ExecutablePrompt, MessageData, z } from 'genkit';
