adding autogen dashboard docs + pydantic dashboard docs edits #2185
Conversation
I'll analyze this and get back to you.

@makeavish Claude Code is running for 2 hrs+, expected?
Claude finished @gkarthi-signoz's task — Review Complete

I've reviewed the documentation changes for the AutoGen dashboard and Pydantic AI dashboard edits. The changes are mostly good, with proper navigation setup and component integration. I've identified several issues that need attention:

Summary of Findings

- AutoGen Dashboard (…)
- Pydantic AI Dashboard (…)
- Navigation and Components (…)

Technical Accuracy Verification

- ✅ AutoGen: Confirmed official Microsoft framework for agentic AI with agents and tools terminology
- ✅ Pydantic AI: Confirmed official Python agent framework (not an SDK)

PR Checklist (from CONTRIBUTING.md)

Reviewing against the Docs PR Checklist: (…)

Recommendations

All inline comments have been posted to specific lines with detailed suggestions and code examples.
Excerpt from the new AutoGen dashboard doc under review:

> description: Monitor Autogen usage and performance including token consumption, model distribution, error rates, request volumes, and latency trends for optimal AI workload observability.

> This dashboard offers a clear view into Autogen usage and performance. It highlights key metrics such as model distribution, error rates, request volumes, and latency trends. Teams can also track detailed records of errors, to better understand adoption patterns and optimize reliability and efficiency.
Missing Prerequisites/Data Source Setup Link
Per the CONTRIBUTING.md dashboard template guidelines (lines 277-283), dashboard templates should "Always include a short prerequisite or info note near the top that links to setting up the data source and sending telemetry to SigNoz."
This dashboard needs a Prerequisites section or an Admonition note linking to the AutoGen observability/instrumentation guide. If an AutoGen observability guide doesn't exist yet, consider adding one first (similar to the Pydantic AI observability guide at /docs/pydantic-ai-observability-with-opentelemetry), or at minimum include links to:
- Generic OpenTelemetry Python instrumentation
- SigNoz Cloud ingestion endpoints: https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint
- SigNoz Cloud keys: https://signoz.io/docs/ingestion/signoz-cloud/keys/
Suggested addition after line 9:
## Prerequisites
Before using this dashboard, ensure you have:
- Instrumented your AutoGen applications with OpenTelemetry. See the [AutoGen observability guide](#) for setup instructions.
- Configured telemetry export to SigNoz using the appropriate [endpoint](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint) and [ingestion key](https://signoz.io/docs/ingestion/signoz-cloud/keys/).

Or as an Admonition:
<Admonition type="info">
Before using this dashboard, instrument your AutoGen applications with OpenTelemetry and configure export to SigNoz. See the [AutoGen observability guide](#) for complete setup instructions, or refer to [SigNoz Cloud ingestion endpoints](https://signoz.io/docs/ingestion/signoz-cloud/overview/#endpoint) and [keys](https://signoz.io/docs/ingestion/signoz-cloud/keys/).
</Admonition>
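To make the export configuration those links describe concrete, here is a minimal sketch using the standard OpenTelemetry Python SDK with OTLP over gRPC. The region endpoint, the `signoz-ingestion-key` header, the service name, and the span name are all illustrative assumptions drawn from the SigNoz ingestion docs linked above; substitute your own values.

```python
# Minimal sketch: export traces to SigNoz Cloud over OTLP/gRPC.
# Endpoint region and the "signoz-ingestion-key" header follow the
# SigNoz ingestion docs linked above; substitute your own values.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "autogen-app"})  # illustrative name
)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://ingest.us.signoz.cloud:443",   # region-specific endpoint
            headers={"signoz-ingestion-key": "<your-key>"},  # from the SigNoz Cloud keys page
        )
    )
)
trace.set_tracer_provider(provider)

# Any OpenTelemetry-instrumented AutoGen code now exports spans to SigNoz.
tracer = trace.get_tracer("autogen-demo")
with tracer.start_as_current_span("agent-run"):
    pass  # run your AutoGen agents here
```

The same values can also be supplied without code changes through the standard `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_HEADERS` environment variables.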
Excerpt from the Pydantic AI dashboard doc, where the intro paragraph was changed:

Before:
> This dashboard offers a clear view into Pydantic AI usage and performance. It highlights key metrics such as token consumption, model distribution, error rates, request volumes, and latency trends. Teams can also track which services and languages are leveraging the DeepSeek API, along with detailed records of errors, to better understand adoption patterns and optimize reliability and efficiency.

After:
> This dashboard offers a clear view into Pydantic AI usage and performance. It highlights key metrics such as token consumption, model distribution, error rates, request volumes, and latency trends. Teams can also track which services and languages are leveraging the Pydantic SDK, along with detailed records of errors, to better understand adoption patterns and optimize reliability and efficiency.
Missing Prerequisites/Data Source Setup Link
Per the CONTRIBUTING.md dashboard template guidelines (lines 277-283), dashboard templates should include a link to the relevant instrumentation/observability guide near the top.
This dashboard is missing a link to the Pydantic AI observability guide. Add a Prerequisites section or Admonition note after the intro paragraph.
Suggested change (keep the intro paragraph and add an Admonition after it):

> This dashboard offers a clear view into Pydantic AI usage and performance. It highlights key metrics such as token consumption, model distribution, error rates, request volumes, and latency trends. Teams can also track which services and languages are leveraging the Pydantic SDK, along with detailed records of errors, to better understand adoption patterns and optimize reliability and efficiency.
>
> <Admonition type="info">
> Before using this dashboard, ensure you have instrumented your Pydantic AI applications with OpenTelemetry. See the [Pydantic AI observability guide](https://signoz.io/docs/pydantic-ai-observability-with-opentelemetry/) for complete setup instructions.
> </Admonition>
Source: https://signoz.io/docs/pydantic-ai-observability-with-opentelemetry/
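For the guide linked above, this is a minimal sketch of what instrumented Pydantic AI code might look like. It assumes Pydantic AI's built-in OpenTelemetry support (`Agent.instrument_all()`) and a tracer provider already configured to export to SigNoz, as in the AutoGen sketch earlier; the model identifier and prompt are illustrative.

```python
# Minimal sketch, assuming Pydantic AI's built-in OTel instrumentation
# and an OTLP exporter already configured to send to SigNoz (see above).
from pydantic_ai import Agent

Agent.instrument_all()  # emit OpenTelemetry spans for all agent runs

agent = Agent("openai:gpt-4o")  # model identifier is illustrative
result = agent.run_sync("Summarize today's error-rate trend.")
print(result.output)  # .output on recent releases (.data on older ones)
```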
Diff hunk on the frontmatter: @@ -1,19 +1,23 @@
Incorrect date in frontmatter
The frontmatter date was changed to 2025-11-07, which is today's date; per the git diff, the original date was 2025-10-16 (October 16, 2025).
The date field should represent when the content was originally created or last significantly updated. If this is just a minor edit (formatting and small text fixes), consider keeping the original date 2025-10-16 rather than updating it to today's date.
Only update the date if the content has been substantially revised or if it's standard practice in this repo to update dates for all edits.
Terminology: "Pydantic SDK" vs "Pydantic AI"
The official Pydantic AI documentation consistently refers to it as a "Python agent framework" or "GenAI Agent Framework," not as an "SDK."
Suggested revision:
Before:
> This dashboard offers a clear view into Pydantic AI usage and performance. It highlights key metrics such as token consumption, model distribution, error rates, request volumes, and latency trends. Teams can also track which services and languages are leveraging the Pydantic SDK, along with detailed records of errors, to better understand adoption patterns and optimize reliability and efficiency.

After:
> This dashboard offers a clear view into Pydantic AI usage and performance. It highlights key metrics such as token consumption, model distribution, error rates, request volumes, and latency trends. Teams can also track which services and languages are leveraging Pydantic AI, along with detailed records of errors, to better understand adoption patterns and optimize reliability and efficiency.
Source: https://ai.pydantic.dev/
Claude API was down for some time and then the action got stuck. Restarting the action worked.


No description provided.