fix: prevent response_model from being passed in ReAct flow when LLM lacks function calling #4698
Open
devin-ai-integration[bot] wants to merge 3 commits into main from
Conversation
…lacks function calling

When an LLM does not support function calling (`supports_function_calling()` returns `False`), the executor falls back to the ReAct text-based pattern. Previously, `response_model` (set from `task.output_pydantic`) was still passed to `get_llm_response` in the ReAct path, which caused `InternalInstructor` to force structured output via instructor's TOOLS mode before the agent could reason through Action/Observation cycles.

This fix sets `response_model=None` in both `_invoke_loop_react` and `_ainvoke_loop_react`, allowing the ReAct loop to work normally. The output schema is already embedded in the prompt text for guidance, and the final conversion to pydantic/json happens in `task._export_output()` after the agent finishes.

Fixes #4695

Co-Authored-By: João <joao@crewai.com>
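A minimal, self-contained sketch of the behavior change described above. The function names here are illustrative stand-ins for the executor pieces mentioned in the PR, not CrewAI's actual implementation:

```python
# Illustrative stand-ins for the executor pieces named in this PR;
# this is a sketch of the fix, not CrewAI's real code.

def fake_get_llm_response(messages, response_model=None):
    # Before the fix, a non-None response_model made InternalInstructor
    # force structured output before any Action/Observation cycle ran.
    if response_model is not None:
        raise RuntimeError("structured output forced before ReAct could run")
    return 'Final Answer: {"answer": 42}'

def invoke_loop_react(messages, response_model):
    # The fix: the ReAct path always passes response_model=None.
    # The schema is already embedded in the prompt text; conversion to
    # pydantic happens later, in task._export_output().
    raw = fake_get_llm_response(messages, response_model=None)
    return raw.removeprefix("Final Answer: ").strip()

print(invoke_loop_react(["prompt with embedded schema"], response_model=dict))
```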
- Remove unused asyncio import from test file
- Route FC-capable LLMs with no tools + `response_model` to `_invoke_loop_native_no_tools` (preserves structured output)
- FC-capable LLMs with no tools and no `response_model` still fall through to ReAct path (no regression)
- Add tests for both FC+no-tools routing scenarios

Co-Authored-By: João <joao@crewai.com>
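The routing this commit describes can be sketched as a standalone decision function. This is hypothetical: the real executor dispatches to private `_invoke_loop_*` methods rather than returning their names:

```python
def choose_loop(supports_fc: bool, tools: list, response_model) -> str:
    """Sketch of the three-way routing described in this PR."""
    if supports_fc and tools:
        return "_invoke_loop_native_tools"
    if supports_fc and not tools and response_model is not None:
        # FC-capable, no tools, structured output requested:
        # a native single-call path preserves structured output.
        return "_invoke_loop_native_no_tools"
    # Non-FC LLMs, and FC-capable LLMs with no tools and no
    # response_model, fall through to the text-based ReAct loop.
    return "_invoke_loop_react"

assert choose_loop(True, ["search"], None) == "_invoke_loop_native_tools"
assert choose_loop(True, [], dict) == "_invoke_loop_native_no_tools"
assert choose_loop(True, [], None) == "_invoke_loop_react"   # no regression
assert choose_loop(False, [], dict) == "_invoke_loop_react"
```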
Cursor Bugbot has reviewed your changes and found 1 potential issue.
Address Bugbot concern: `self.tools` includes internal tools (delegation, human input) while `self.original_tools` only has user-defined tools. Only route to `native_no_tools` when there are truly no tools at all, so agents with internal tools still use the ReAct loop. Add test for FC+internal-tools scenario.

Co-Authored-By: João <joao@crewai.com>
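The concern can be illustrated with a toy model of the two attribute sets. The field names mirror the discussion above, but the class itself is hypothetical:

```python
class ExecutorState:
    """Toy model of the distinction: self.tools = user tools + internal tools."""

    def __init__(self, original_tools, internal_tools=()):
        self.original_tools = list(original_tools)  # user-defined only
        self.tools = list(original_tools) + list(internal_tools)

# Agent with delegation enabled but no user-defined tools:
agent = ExecutorState(original_tools=[], internal_tools=["delegate_work"])

# Routing on original_tools would wrongly treat this agent as tool-less;
# routing on self.tools keeps it in the ReAct loop for Action/Observation.
assert not agent.original_tools   # looks tool-less from the user's view
assert agent.tools                # but internal tools still need ReAct
```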
fix: don't pass response_model in ReAct flow for non-FC LLMs
Summary
Fixes #4695. When an LLM does not support function calling (`supports_function_calling()` returns `False`), the executor falls back to the ReAct text-based pattern. Previously, `response_model` (derived from `task.output_pydantic`) was still passed to `get_llm_response` in this path, which caused `InternalInstructor` to force structured output via instructor's TOOLS mode, before the agent could reason through Action/Observation cycles.

The fix sets `response_model=None` in both `_invoke_loop_react()` and `_ainvoke_loop_react()`. The output schema is already embedded in the prompt text for guidance (via `build_task_prompt_with_schema`), and the final conversion to pydantic/json happens downstream in `task._export_output()`.

This also removes the now-dead `if self.response_model is not None` early-exit block from both methods, simplifying the ReAct response handling to always go through `process_llm_response`.

Changes to `_invoke_loop`/`_ainvoke_loop` routing

The routing logic was refactored to handle three cases explicitly:
- FC-capable LLM with tools → `_invoke_loop_native_tools`
- FC-capable LLM with no tools (`self.tools` empty) + `response_model` → `_invoke_loop_native_no_tools`
- Non-FC LLM (or FC-capable with no tools and no `response_model`) → `_invoke_loop_react`, with `response_model=None` to avoid forcing structured output

The check uses `self.tools` (which includes internal tools like delegation/human input) rather than `self.original_tools` (user-defined only), so agents with delegation enabled still use the ReAct loop even if they have no user-defined tools.

Updates since last revision

- Use `self.tools` instead of `self.original_tools` for the no-tools routing check (flagged by Bugbot): `self.original_tools` only includes user-defined tools, but `self.tools` also includes internal tools (delegation, human input). Using `self.tools` ensures agents with internal tools still use the ReAct loop for Action/Observation cycles.

Review & Testing Checklist for Human
- Verify `task._export_output()` reliably converts ReAct string output to pydantic: the fix relies on downstream `convert_to_model` (which may re-call the LLM) to produce the pydantic output. The old code had an early-exit that accepted raw valid JSON matching the schema directly; this path is now removed. Test with a real non-FC LLM (e.g., Ollama) to confirm end-to-end pydantic output works.
- ReAct responses now always go through `process_llm_response`, which expects the `Final Answer:` prefix. Confirm the prompt instructs the LLM to use this format, and that models don't produce raw JSON without the prefix.
- Verify the `self.tools` vs `self.original_tools` distinction is correct: the routing uses `not self.tools` to check for truly no tools. Confirm that `self.tools` reliably includes all internal tools (delegation, human input, memory) and that there's no edge case where `self.tools` is empty but internal tools are still expected.
- `_invoke_loop_native_no_tools` does a single LLM call (no iteration loop). Confirm this is acceptable when an FC-capable LLM has `output_pydantic` but no tools.

Suggested manual test: Run a crew with `output_pydantic` set on a task, using an Ollama model (or any model where `supports_function_calling()` returns `False`), and confirm the agent completes the ReAct loop and produces valid pydantic output.
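The downstream-conversion step that the manual test exercises can also be approximated offline. The `Report` model and `export_output` helper below are hypothetical stand-ins (a dataclass substitutes for the pydantic model); the real conversion lives in `task._export_output()` / `convert_to_model`:

```python
import json
from dataclasses import dataclass

@dataclass
class Report:  # stand-in for a hypothetical output_pydantic model
    title: str
    score: int

def export_output(final_answer: str) -> Report:
    # Stand-in for task._export_output(): the ReAct loop now ends with a
    # plain string; conversion to the model happens only at this point.
    return Report(**json.loads(final_answer))

# What a non-FC model might emit after completing the ReAct loop:
report = export_output('{"title": "Q3 summary", "score": 7}')
assert report.score == 7
```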