
Conversation

@jimoosciuc (Collaborator) commented Nov 11, 2025

CLOSES: #331

Motivation

  • Provide reliable, machine-parseable outputs for downstream systems (APIs, automations, UI forms) without brittle post-processing.
  • Reduce hallucinations by constraining generation to valid shapes (JSON Schema), formats (EBNF), or patterns (regex).
  • Unify a single feature across OpenAI-compatible endpoints and the SRT engine.
  • Support modern tool/function-calling by validating and constraining arguments, while preserving streaming and performance.

Modifications

  • Adds structured output via grammar-constrained decoding (JSON Schema, regex, EBNF, structural tags) powered by llguidance.
  • Introduces JAX/TPU-friendly vocab bitmasking and applies masks in the sampler for token filtering.
  • Extends scheduler pipeline: async grammar compilation + caching, timeout/invalid handling, grammar termination checks, and mask propagation (incl. overlap mode).
  • Updates OpenAI-serving path to accept response_format schemas and pass them into sampling params.
  • Adds utils to serialize JSON Schema, new server args to control JSON whitespace behavior.
  • Adds dependency llguidance~=1.3.0.
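The vocab-bitmasking step can be sketched in NumPy (the PR's actual kernels target JAX/TPU; the function name here is illustrative, and the one-bit-per-token int32 packing is a layout commonly used by grammar backends, assumed rather than verified for sgl_jax):

```python
import numpy as np

def apply_vocab_bitmask(logits: np.ndarray, bitmask: np.ndarray) -> np.ndarray:
    """Mask out grammar-disallowed tokens by setting their logits to -inf.

    `bitmask` packs one bit per vocab token into int32 words:
    bit (i % 32) of word (i // 32) is 1 iff token i is allowed.
    """
    token_ids = np.arange(logits.shape[-1])
    words = bitmask[token_ids // 32]
    allowed = ((words >> (token_ids % 32)) & 1).astype(bool)
    return np.where(allowed, logits, -np.inf)

# Toy vocab of 8 tokens; allow only tokens 1 and 6.
logits = np.zeros(8, dtype=np.float32)
bitmask = np.array([(1 << 1) | (1 << 6)], dtype=np.int32)
masked = apply_vocab_bitmask(logits, bitmask)
# Tokens 1 and 6 keep their logits; all others become -inf.
```

In the JAX version the same `where` over an unpacked bitmask stays jittable, which is what makes this approach TPU-friendly.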

Accuracy Tests

Launch Server:

MODEL_NAME="Qwen/Qwen3-8B"
JAX_COMPILATION_CACHE_DIR=/tmp/jit_cache \
python3 -u -m sgl_jax.launch_server \
--model-path ${MODEL_NAME} \
--trust-remote-code \
--tp-size=4 \
--device=tpu \
--mem-fraction-static=0.8 \
--chunked-prefill-size=2048 \
--download-dir=/tmp \
--dtype=bfloat16 \
--max-running-requests 256 \
--skip-server-warmup \
--page-size=128 \
--disable-radix-cache
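Once the server is up, a structured-output request can be sent to the OpenAI-compatible endpoint. A minimal sketch of the request body follows; the `response_format`/`json_schema` field names follow the OpenAI convention this PR wires into the sampling params, while the schema itself and the port are illustrative:

```python
import json

# Illustrative JSON Schema constraining the model's reply shape.
schema = {
    "type": "object",
    "properties": {
        "answer": {"type": "string"},
        "confidence": {"type": "number"},
    },
    "required": ["answer"],
}

payload = {
    "model": "Qwen/Qwen3-8B",
    "messages": [{"role": "user", "content": "Is the sky blue? Reply as JSON."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "reply", "schema": schema},
    },
}

body = json.dumps(payload)
# POST `body` to the server's /v1/chat/completions endpoint with any
# HTTP client; the constrained decoder guarantees the reply parses
# against `schema`.
```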

Result:
gsm8k (bsz=128): [results screenshot]
mmlu (bsz=128): [results screenshot]

Benchmarking and Profiling

Testing methodology reference: qwen3_benchmark
Launch Server:

MODEL_NAME="Qwen/Qwen3-8B"
JAX_COMPILATION_CACHE_DIR=/tmp/jit_cache \
python3 -u -m sgl_jax.launch_server \
--model-path ${MODEL_NAME} \
--trust-remote-code \
--tp-size=4 \
--device=tpu \
--mem-fraction-static=0.8 \
--chunked-prefill-size=2048 \
--download-dir=/tmp \
--dtype=bfloat16 \
--max-running-requests 256 \
--skip-server-warmup \
--page-size=128 \
--disable-radix-cache

Result:

All figures measured on SGL_JAX.

ISL/OSL    Batch Size  TTFT(ms)  ITL(ms)  Input Tput(tok/s)  Output Tput(tok/s)
1024/1024       8        111.08     6.47        1211.24           1211.24
1024/1024      16        222.74     7.00        2211.26           2211.26
1024/1024      32        446.10     8.94        3401.68           3401.68
1024/1024      64        892.66    12.72        4699.88           4699.88
1024/1024     128       1783.95    17.67        6686.18           6686.18
1024/1024     256       3571.60    20.23       10217.53          10217.53
4096/1024       8        470.00     6.75        4434.91           1108.73
4096/1024      16        940.60     7.53        7552.57           1888.14
4096/1024      32       1881.49     9.63       11101.33           2775.33
4096/1024      64       3763.20    14.13       14027.13           3506.78
4096/1024     128       7526.70    25.14       14315.07           3578.77
4096/1024     256      15063.07    25.15       14730.04           3682.51
8192/1024       8       1031.10     6.70        8266.00           1033.25
8192/1024      16       2062.76     8.57       12063.21           1507.90
8192/1024      32       4125.87    13.37       14671.70           1833.96
8192/1024      64       8250.86    23.17       16355.67           2044.46
8192/1024     128      16500.94    24.04       16123.97           2015.50
8192/1024     256      33000.86    24.13       16150.66           2018.83

Test command

E2E tests:

python3 test/srt/openai_server/features/test_ebnf.py
python3 test/srt/openai_server/features/test_json_mode.py
python3 test/srt/openai_server/features/test_structural_tag.py

Unit test:

python3 python/sgl_jax/test/constrained/test_bitmask_ops.py

Checklist

  • Please use English, otherwise it will be closed.
  • The purpose of the PR, or link existing issues this PR will resolve.
  • The test plan, such as providing test command.
  • (Optional) The necessary documentation update.

@gemini-code-assist

Summary of Changes

Hello @jimoosciuc, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant feature to support structured output generation, addressing the need for reliable and machine-parseable responses from language models. By integrating grammar-constrained decoding, the system can now enforce specific output formats like JSON, regex, or EBNF, which helps in reducing hallucinations and ensuring valid data structures for APIs, automations, and UI forms. The changes span across the core sampling logic, scheduler, and API entrypoints, with a focus on JAX/TPU compatibility and efficient vocabulary masking.

Highlights

  • Structured Output Support: Implemented grammar-constrained decoding using the llguidance library, enabling reliable, machine-parseable outputs for various downstream systems. This supports JSON Schema, regex, EBNF, and structural tags.
  • JAX/TPU Vocabulary Bitmasking: Introduced JAX-compatible vocabulary bitmasking operations for efficient token filtering on TPUs, ensuring that generated tokens adhere to the specified grammar constraints.
  • Enhanced Scheduler Pipeline: The scheduler now includes asynchronous grammar compilation with caching, robust timeout and invalid grammar handling, and grammar termination checks. It also propagates vocabulary masks to the sampler for constrained decoding.
  • OpenAI API Integration: The OpenAI-compatible serving path has been updated to accept response_format schemas (including JSON Schema and structural tags) and pass them into the sampling parameters, facilitating modern tool and function calling.
  • New Dependency: The project now includes llguidance~=1.3.0 as a new dependency to power the grammar-constrained decoding features.
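To make the constraint kinds concrete, here is a sketch of what each looks like on the wire. The key names (`json_schema`, `regex`, `ebnf`) mirror sglang's native sampling-params API and are assumptions for sgl_jax, as are the example grammars themselves:

```python
import json

# JSON Schema: output must parse as an object with a required "name" field.
json_schema = json.dumps({
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name"],
})

# Regex: output must match the pattern exactly.
regex = r"(yes|no)"

# EBNF: output must derive from the grammar's root rule.
ebnf = 'root ::= "yes" | "no"'

# Only one constraint kind may be active on a given request.
sampling_params = {"max_new_tokens": 64, "json_schema": json_schema}
```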

@jimoosciuc force-pushed the feat/support-structured-output branch 6 times, most recently from 92da5ee to 3b0402b on November 14, 2025.
@jimoosciuc requested a review from aolemila on November 15, 2025.
@jimoosciuc force-pushed the feat/support-structured-output branch from 3b0402b to 7ef752b on November 17, 2025.
@aolemila (Collaborator)

Hi, @jimoosciuc. I think we also need to abort requests waiting in the grammar queue when abort_request(self, recv_req: AbortReq) is called.

Refer to https://github.com/sgl-project/sglang/blob/1dcde5392857fef386d10c24e66b75dbe0551847/python/sglang/srt/managers/scheduler.py#L2455

@jimoosciuc force-pushed the feat/support-structured-output branch from 7ef752b to d667eaf on November 17, 2025.
@jimoosciuc (Collaborator, Author) commented Nov 17, 2025

Hi, @jimoosciuc . I think we need to abort requests in grammar queue when calling def abort_request(self, recv_req: AbortReq):.

Refer to https://github.com/sgl-project/sglang/blob/1dcde5392857fef386d10c24e66b75dbe0551847/python/sglang/srt/managers/scheduler.py#L2455

nice catch!

@jimoosciuc force-pushed the feat/support-structured-output branch from d667eaf to a2b7f3b on November 17, 2025.
@jimoosciuc merged commit 098aa6f into sgl-project:main on Nov 17, 2025. 16 checks passed.

Development

Successfully merging this pull request may close these issues.

[Feature] Structured output code development
