[Bug] OpenAIMessageConverter produces assistant messages with null content when using vLLM + Qwen3 thinking mode
## Environment

- AgentScope Java: 1.0.10
- Model: Qwen3.5-122B-A10B (via vLLM, OpenAI-compatible API)
- Model Client: `OpenAIChatModel`
- Agent: `ReActAgent` with `enablePlan()` and tool calling
## Description

When using `OpenAIChatModel` with a vLLM-hosted Qwen3 model that has thinking/reasoning enabled, the API returns `400 Bad Request` with:

```
Input error. Field required: input.messages.N.content
```

This happens because `OpenAIMessageConverter.convertAssistantMessage()` does not set the `content` field when the assistant message contains only a `ThinkingBlock` + `ToolUseBlock` (no `TextBlock`).
## Root Cause

In `OpenAIMessageConverter.java` (around line 267):

```java
private OpenAIMessage convertAssistantMessage(Msg msg) {
    OpenAIMessage.Builder builder = OpenAIMessage.builder().role("assistant");
    String textContent = textExtractor.apply(msg);
    if (textContent != null && !textContent.isEmpty()) {
        builder.content(textContent); // only set when non-empty
    }
    // ... handle ThinkingBlock, ToolUseBlock ...
    return builder.build();
}
```
When the model responds with reasoning (`ThinkingBlock`) + tool calls (`ToolUseBlock`) but no text content, `textContent` is null, so `content` is never set on the builder. The resulting JSON message has no `content` field at all.

The official OpenAI API tolerates `content: null` for assistant messages with `tool_calls`, but vLLM's OpenAI-compatible endpoint requires `content` to be present (even as an empty string `""`).
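For illustration, the rejected vs. accepted payload shapes look roughly like this (field names follow the OpenAI chat format; the tool-call details are placeholders, not actual output from this setup):

```
Rejected by vLLM — "content" is missing entirely:
{"role": "assistant",
 "tool_calls": [{"id": "call_1", "type": "function",
                 "function": {"name": "some_tool", "arguments": "{}"}}]}

Accepted — "content" present, even as an empty string:
{"role": "assistant", "content": "",
 "tool_calls": [{"id": "call_1", "type": "function",
                 "function": {"name": "some_tool", "arguments": "{}"}}]}
```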
## Steps to Reproduce

1. Deploy a Qwen3/Qwen3.5 model via vLLM with `--reasoning-parser qwen3`
2. Configure `OpenAIChatModel` pointing to the vLLM endpoint
3. Create a `ReActAgent` with tools (e.g., `enablePlan()`)
4. Send a message that triggers tool calling
5. After the first tool call + result round, the next LLM call fails with `Field required: input.messages.N.content`
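The deployment step can be sketched as follows; the model name and port are placeholders, and only `--reasoning-parser qwen3` is taken from this report:

```
# Hypothetical vLLM launch; substitute the actual model path/name.
vllm serve Qwen/Qwen3-32B \
  --reasoning-parser qwen3 \
  --port 8000
```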
## Actual Behavior

```
io.agentscope.core.model.exception.BadRequestException:
OpenAI API error in streaming response: Input error. Field required: input.messages.6.content
```

The error index varies (`messages.2`, `messages.6`, `messages.22`) depending on how many rounds of tool calling occurred before the failure.
## Expected Behavior

The `convertAssistantMessage` method should ensure `content` is always set, at minimum as an empty string `""`, especially when `tool_calls` are present.
## Suggested Fix

```java
private OpenAIMessage convertAssistantMessage(Msg msg) {
    OpenAIMessage.Builder builder = OpenAIMessage.builder().role("assistant");
    String textContent = textExtractor.apply(msg);
    if (textContent != null && !textContent.isEmpty()) {
        builder.content(textContent);
    } else {
        // Ensure content is never null for vLLM/Qwen3 compatibility
        builder.content("");
    }
    // ... rest unchanged ...
}
```
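The fallback pattern can be exercised in isolation with a simplified stand-in (everything below is hypothetical scaffolding, not the actual AgentScope types): serializing an assistant message whose text content is null or empty should still emit a `content` field.

```java
// Minimal sketch of the null-content fallback from the suggested fix.
// The real logic lives in OpenAIMessageConverter; this stand-in only
// demonstrates that "content" is always emitted.
public class ContentFallbackDemo {

    // Builds the JSON for an assistant message, mirroring the fix:
    // non-empty text passes through, otherwise content falls back to "".
    static String assistantJson(String textContent, boolean hasToolCalls) {
        String content = (textContent != null && !textContent.isEmpty())
                ? textContent
                : ""; // fallback: never leave content unset/null
        StringBuilder sb = new StringBuilder();
        sb.append("{\"role\":\"assistant\",\"content\":\"")
          .append(content).append("\"");
        if (hasToolCalls) {
            sb.append(",\"tool_calls\":[]"); // tool-call details elided
        }
        sb.append("}");
        return sb.toString();
    }

    public static void main(String[] args) {
        // Thinking + tool calls, no text: content must still be present.
        System.out.println(assistantJson(null, true));
        // Normal text answer: content passes through unchanged.
        System.out.println(assistantJson("Done.", false));
    }
}
```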
## Additional Context

- This is a known issue with vLLM's Qwen3 reasoning parser (see the vLLM forum thread "Qwen3.5 only output reasoning and no content"): vLLM may return `content=None` with all output in the `reasoning` field for Qwen3 series models.
- The same `ReActAgent` configuration works correctly with Qwen3.5-122B-A10B-Thinking (the `-Thinking` variant), because that model's output format is correctly parsed by vLLM, producing non-null `content`.
- Even when vLLM correctly separates reasoning and content, the assistant message after a tool call may legitimately have empty text content (only `ThinkingBlock` + `ToolUseBlock`), which still triggers this bug.
## Workaround

Use `Qwen3.5-*-Thinking` model variants, which produce correctly formatted output with vLLM's reasoning parser.