@yike5460 commented on Aug 10, 2025

Description

Introducing intelligent context management capabilities that optimize tool selection and context windows for multi-agent interactions.

Related Issues

Documentation PR

Architecture Overview

The context management system operates at the agent/workflow level and consists of four core components:

DynamicToolManager - Intelligently selects tools based on:

  • Task relevance scoring
  • Historical performance metrics
  • Required capabilities matching
  • Configurable selection criteria
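The capability-matching and threshold steps above can be sketched as follows. This is a minimal illustration under assumed data shapes, not the actual DynamicToolManager logic; select_candidates and its parameters are hypothetical names:

```python
def select_candidates(tools, required, scores, min_score, max_tools):
    """Filtering sketch: keep tools that advertise every required
    capability and clear the relevance threshold, then cap the list.

    tools:  {tool_name: set of capability tags}  (assumed shape)
    scores: {tool_name: relevance score in [0, 1]}
    """
    required = set(required)
    eligible = [
        name for name, caps in tools.items()
        if required <= caps and scores.get(name, 0.0) >= min_score
    ]
    # Highest-relevance tools first, truncated to the configured cap.
    eligible.sort(key=lambda n: scores[n], reverse=True)
    return eligible[:max_tools]
```

The real component presumably also folds in historical performance; this sketch isolates only the capability/threshold filter.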

ContextOptimizer - Manages context windows by:

  • Pruning low-relevance content
  • Compressing large items
  • Maintaining required context
  • Optimizing for token limits
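The prune-and-compress behavior described above can be sketched as a greedy loop. This is an illustrative sketch, not the ContextOptimizer implementation; token counts are crudely approximated by word counts, and "compression" is modeled as truncation:

```python
def optimize_context(items, relevance, required_keys, max_tokens):
    """Greedy pruning sketch: keep required items, then add the most
    relevant remaining items until the token budget is exhausted."""
    def tokens(text):
        return len(text.split())  # crude token estimate

    # Required context is always retained.
    kept = {k: items[k] for k in required_keys}
    budget = max_tokens - sum(tokens(v) for v in kept.values())

    # Consider the rest in descending relevance order.
    rest = sorted(
        (k for k in items if k not in kept),
        key=lambda k: relevance.get(k, 0.0),
        reverse=True,
    )
    for key in rest:
        cost = tokens(items[key])
        if cost <= budget:
            kept[key] = items[key]
            budget -= cost
        elif budget > 0:
            # "Compress" an oversized item by truncating it to fit.
            kept[key] = " ".join(items[key].split()[:budget])
            budget = 0
    return kept
```

Low-relevance items that do not fit are simply pruned; required keys are never dropped even if they consume most of the budget.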

ToolUsageAnalytics - Tracks and analyzes:

  • Tool success rates
  • Execution times
  • Relevance scores
  • Context optimization metrics

RelevanceScorer - Provides flexible scoring with:

  • Multiple algorithms (Jaccard, Levenshtein)
  • Text and tool-specific scorers
  • Partial match support
  • Configurable thresholds
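To make the Jaccard-with-partial-match idea concrete, here is a self-contained sketch (assumed behavior, not the RelevanceScorer source): with partial matching on, a query token also counts as matched when it is a prefix of a text token, or vice versa.

```python
def jaccard_score(query, text, partial=True):
    """Token-level Jaccard similarity sketch. With partial=True,
    "analy" matches "analysis" because one is a prefix of the other."""
    q = set(query.lower().split())
    t = set(text.lower().split())
    if not q or not t:
        return 0.0
    if partial:
        # Count query tokens with at least one (partial) match in text.
        inter = sum(
            1 for a in q
            if any(a.startswith(b) or b.startswith(a) for b in t)
        )
    else:
        inter = len(q & t)
    return inter / len(q | t)
```

A Levenshtein-based scorer would instead normalize edit distance over string length, trading the set semantics above for character-level tolerance of typos.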

Key Design

Modular Architecture

  • Each component is independent and can be used separately
  • Clean interfaces allow easy extension and customization
  • No dependencies on Phase 1 memory management

Performance-First Design

  • Efficient algorithms with sub-second processing for 1000+ items
  • Lazy evaluation and caching where appropriate
  • Comprehensive performance benchmarks included

Flexible Scoring System

  • Pluggable relevance algorithms
  • Support for custom scoring implementations
  • Partial word matching for better results

Analytics-Driven Optimization

  • Real-time performance tracking
  • Historical data influences future selections
  • Continuous improvement through usage patterns
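One plausible shape for this feedback loop (a hedged sketch; the actual weighting used by ToolUsageAnalytics and DynamicToolManager may differ, and rank_tools, alpha, and min_calls are hypothetical names): blend each tool's task-relevance score with its historical success rate, so past performance nudges future selections.

```python
def rank_tools(relevance, history, alpha=0.7, min_calls=5):
    """Blend relevance with historical success rate.

    relevance: {tool_name: score in [0, 1]}
    history:   {tool_name: (successes, calls)}
    Tools with fewer than min_calls recorded calls fall back to a
    neutral 0.5 success rate so new tools are not starved.
    """
    def success_rate(name):
        successes, calls = history.get(name, (0, 0))
        return successes / calls if calls >= min_calls else 0.5

    scored = {
        name: alpha * rel + (1 - alpha) * success_rate(name)
        for name, rel in relevance.items()
    }
    # Best candidates first.
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)
```

With this weighting, a slightly less relevant tool that succeeds consistently can outrank a more relevant one that keeps failing.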

Type of Change

New feature

Testing

How have you tested the change? Verify that the changes do not break functionality or introduce warnings in consuming repositories: agents-docs, agents-tools, agents-cli

Added 78 new unit tests across all components (python -m pytest tests/strands/agent/context/ -v), all passing, with thorough error-handling and boundary-condition coverage.

  • [Y] I ran hatch run prepare

Checklist

  • [Y] I have read the CONTRIBUTING document
  • [Y] I have added any necessary tests that prove my fix is effective or my feature works
  • [Y] I have updated the documentation accordingly
  • [Y] I have added an appropriate example to the documentation to outline the feature, or no new docs are needed
  • [Y] My changes generate no new warnings
  • [Y] Any dependent changes have been merged and published

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

- Add dynamic tool selection based on context relevance and performance
- Implement context window optimization with intelligent pruning
- Add tool usage analytics for performance tracking
- Support multiple relevance scoring algorithms (Jaccard, Levenshtein)
- Include comprehensive test suite with 93% code coverage

Key components:
- DynamicToolManager: Selects optimal tools based on task requirements
- ContextOptimizer: Manages context window size with compression
- ToolUsageAnalytics: Tracks performance metrics for optimization
- RelevanceScorer: Flexible scoring system for content matching
@yike5460 (Author) commented:
Sample code for your reference:

Basic Tool Selection

from strands.agent.context import DynamicToolManager, ToolSelectionCriteria

# Initialize manager
manager = DynamicToolManager()

# Define selection criteria
criteria = ToolSelectionCriteria(
    task_description="analyze customer data and generate insights",
    required_capabilities=["data", "analysis"],
    max_tools=5,
    min_relevance_score=0.3
)

# Select optimal tools (available_tools: the agent's registered tool
# specs, assumed to be defined elsewhere)
result = manager.select_tools(available_tools, criteria)

# Access results
for tool in result.selected_tools:
    print(f"{tool.tool_name}: {result.relevance_scores[tool.tool_name]:.2f}")

Context Window Optimization

from strands.agent.context import ContextOptimizer

# Initialize optimizer with 4K token limit
optimizer = ContextOptimizer(max_context_size=4096)

# Optimize context for a specific task
# (large_document and dataset are placeholder variables)
context_window = optimizer.optimize_context(
    context_items={"doc1": large_document, "data": dataset},
    task_description="summarize key findings",
    required_keys=["data"]  # Ensure data is included
)

# Use optimized context
print(f"Optimized from {context_window.optimization_stats['original_items']} "
      f"to {len(context_window.items)} items")
print(f"Context utilization: {context_window.utilization:.1%}")

Performance Tracking

from strands.agent.context import ToolUsageAnalytics

# Track tool performance
analytics = ToolUsageAnalytics()

# Record usage
analytics.record_tool_usage(
    tool_name="data_analyzer",
    success=True,
    execution_time=0.5,
    relevance_score=0.85
)

# Get performance insights
rankings = analytics.get_tool_rankings(min_calls=5)
report = analytics.get_summary_report()

Custom Relevance Scoring

from strands.agent.context import TextRelevanceScorer, SimilarityMetric

# Use different similarity algorithms
jaccard_scorer = TextRelevanceScorer(metric=SimilarityMetric.JACCARD)
levenshtein_scorer = TextRelevanceScorer(metric=SimilarityMetric.LEVENSHTEIN)

# Score relevance
score1 = jaccard_scorer.score("machine learning model", "ML model training")
score2 = levenshtein_scorer.score("machine learning model", "ML model training")

yike5460 commented Sep 8, 2025

Closing the PR for now until it receives the necessary attention.

@yike5460 closed this on Sep 8, 2025