An AI-powered conversational agent that provides comprehensive cryptocurrency analysis using real-time data, multiple analysis dimensions, and intelligent conversation management.
- Multi-dimensional Analysis: Fundamental, Price, Sentiment, and Technical analysis
- Real-time Data: Integration with CoinGecko API for live cryptocurrency data
- Conversational AI: Natural language interface with context memory using LangChain
- Autonomous Analysis Selection: Intelligently chooses relevant analysis types based on user queries
- Multi-turn Conversations: Maintains context across the conversation for follow-up questions
- Analysis Memory: Remembers previous analyses for comparisons and references
- Guardrails: Stays focused on cryptocurrency domain with polite redirection
- Ambiguity Handling: Asks clarifying questions when queries are unclear or ambiguous
- General Topic Support: Handles both specific cryptocurrency queries and general blockchain/crypto technology topics
- Multi-level Caching: Combines TTL caching, semantic caching, and in-memory analysis history to reduce latency and API cost
- LangSmith Integration: Optional workflow tracing and observability for debugging and monitoring
- Market capitalization and ranking
- Trading volume and liquidity metrics
- Circulating supply, total supply, and max supply
- Volume/Market Cap ratio analysis
- Fully Diluted Valuation (FDV)
- Tokenomics and supply inflation metrics
- Project description and overview
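To illustrate how the supply and liquidity metrics above combine, here is a minimal Python sketch; the function name and return shape are hypothetical, not the analyzer's actual API:

```python
def fundamental_ratios(price, volume_24h, market_cap, max_supply):
    """Derive simple fundamental ratios from raw market data.

    volume_24h / market_cap is a rough liquidity proxy; FDV prices the
    max supply at today's price. FDV is None when no max supply is
    defined (as with Ethereum).
    """
    vol_mcap = volume_24h / market_cap
    fdv = price * max_supply if max_supply else None
    return {"volume_market_cap": vol_mcap, "fdv": fdv}
```

A Volume/Market Cap ratio in the mid-single-digit percent range is commonly read as healthy liquidity, as in the sample Ethereum analysis later in this README.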
- Current price and 24h/7d/30d price changes
- Historical price trends and patterns
- Volatility assessment and classification
- Support and resistance levels
- All-Time High (ATH) and All-Time Low (ATL) analysis
- Price position analysis
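One simple way to express the "price position" idea above is to normalize the current price against the ATH/ATL range; this is an illustrative sketch, not necessarily how `price_analyzer.py` computes it:

```python
def price_position(current, atl, ath):
    """Position of the current price in its historical range:
    0.0 at the all-time low, 1.0 at the all-time high."""
    if ath == atl:
        return 1.0  # degenerate range; treat as fully recovered
    return (current - atl) / (ath - atl)
```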
- Social media metrics (Twitter, Reddit, Telegram)
- Community engagement indicators
- Fear & Greed Index integration
- Sentiment score calculation
- Market sentiment classification
- Social media activity analysis
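Mapping a Fear & Greed Index value to a label can be sketched as below. The thresholds here are illustrative; the Alternative.me API actually returns its own `value_classification` field alongside the numeric value:

```python
def classify_fear_greed(value):
    """Map a Fear & Greed Index value (0-100) to a sentiment label.
    Bucket boundaries are illustrative, not Alternative.me's official ones."""
    if value < 25:
        return "Extreme Fear"
    if value < 45:
        return "Fear"
    if value < 55:
        return "Neutral"
    if value < 75:
        return "Greed"
    return "Extreme Greed"
```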
- Simple Moving Averages (SMA 7, 20, 50)
- Relative Strength Index (RSI)
- MACD (Moving Average Convergence Divergence)
- Moving average crossover signals
- Overbought/oversold conditions
- Technical indicator summary
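For reference, the SMA and RSI figures above can be computed in a few lines of Python. This is a simplified sketch (plain averages rather than Wilder's smoothing for RSI), not the exact code in `technical_analyzer.py`:

```python
def sma(prices, window):
    """Simple Moving Average over the last `window` prices."""
    if len(prices) < window:
        return None
    return sum(prices[-window:]) / window

def rsi(prices, period=14):
    """Relative Strength Index using the basic average-gain /
    average-loss form over the last `period` price changes."""
    if len(prices) < period + 1:
        return None
    deltas = [prices[i + 1] - prices[i] for i in range(len(prices) - 1)]
    recent = deltas[-period:]
    avg_gain = sum(d for d in recent if d > 0) / period
    avg_loss = sum(-d for d in recent if d < 0) / period
    if avg_loss == 0:
        return 100.0  # pure uptrend: maximally overbought
    rs = avg_gain / avg_loss
    return 100 - 100 / (1 + rs)
```

RSI readings above ~70 are conventionally treated as overbought and below ~30 as oversold, which is how the summary signals above are derived.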
The system is built with a modular architecture:
┌─────────────────────────────────────────────────────────────┐
│ User (CLI Interface) │
└───────────────────────────┬─────────────────────────────────┘
│
┌───────────────────────────▼─────────────────────────────────┐
│ LangChain Agent │
│ - Conversation Memory │
│ - Tool Selection & Orchestration │
│ - Context Management │
└─────────┬────────────────────────────────────────────────────┘
│
│ Uses Tools
▼
┌─────────────────────────────────────────────────────────────┐
│ Analysis Tools │
│ ┌───────────────┐ ┌───────────────┐ ┌─────────────────┐ │
│ │ Fundamental │ │ Price │ │ Sentiment │ │
│ │ Analyzer │ │ Analyzer │ │ Analyzer │ │
│ └───────┬───────┘ └───────┬───────┘ └────────┬────────┘ │
│ │ │ │ │
│ ┌───────▼───────┐ ┌───────▼────────────────────▼────────┐ │
│ │ Technical │ │ Coin Repository & API Clients │ │
│ │ Analyzer │ │ - CoinGecko client │ │
│ └───────┬───────┘ │ - Fear & Greed client │ │
│ │ │ - NewsAPI client │ │
│ └──────────┴──────────────────────────────────────┘ │
└─────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────┐
│ External APIs │
│ - CoinGecko │
│ - Fear & Greed │
│ - NewsAPI │
│ - LangSmith* │
└─────────────────────┘
* LangSmith is used for tracing/observability rather than user-facing analysis
- CLI Interface (`src/ui/cli.py`): Rich console interface with markdown support
- Agent (`src/agents/agent.py`): LangChain-based conversational agent with OpenAI Functions and LangSmith tracing support
- Repository (`src/repositories/coin_repository.py`): API integration and data retrieval with caching
- Analyzers (`src/analyzers/`): Specialized analysis modules for each dimension:
  - `fundamental_analyzer.py`: Fundamental analysis
  - `price_analyzer.py`: Price and trend analysis
  - `sentiment_analyzer.py`: Sentiment and social metrics analysis
  - `technical_analyzer.py`: Technical indicators analysis
- Memory System: Conversation history and analysis history for context retention
- LangSmith Tracing: Optional observability layer for monitoring agent workflows, tool calls, and performance
See ARCHITECTURE.md for detailed architecture documentation.
- Python 3.8 or higher
- OpenAI API key (GPT-4 or GPT-3.5-turbo)
- Internet connection for API access
# Create virtual environment
python -m venv venv
# Activate virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate

# Install dependencies
pip install -r requirements.txt

Create a .env file in the project root:
# Copy the template
cp env.template .env
# Edit .env and add your OpenAI API key
OPENAI_API_KEY=sk-your-actual-api-key-here

Required:
OPENAI_API_KEY: Your OpenAI API key (get it from https://platform.openai.com/api-keys)
Optional:
NEWSAPI_KEY: NewsAPI key for cryptocurrency and blockchain news features (get it from https://newsapi.org/)
OPENAI_MODEL: Model to use (default: gpt-4o-mini)
CACHE_TTL: Cache duration in seconds (default: 300)
SEMANTIC_CACHE_ENABLED: Enable semantic caching (default: true)
SEMANTIC_CACHE_THRESHOLD: Similarity threshold for cache matching (default: 0.85)
SEMANTIC_CACHE_SIZE: Maximum cache entries (default: 1000)
SEMANTIC_CACHE_TTL: Cache time-to-live in seconds (default: 3600)
LANGSMITH_ENABLED: Enable LangSmith tracing (default: false)
LANGSMITH_API_KEY: LangSmith API key for workflow tracing (get it from https://smith.langchain.com/)
LANGSMITH_PROJECT: Project name for organizing traces (optional)

- CoinGecko API is free and doesn't require a key for basic usage
python main.py

You: Tell me about Bitcoin
Agent: [Performs comprehensive analysis across all dimensions]
- Fundamental metrics
- Price trends and volatility
- Sentiment indicators
- Technical analysis
You: What's the price trend of Ethereum?
Agent: [Focuses on price and technical analysis]
- Price changes and volatility
- Support/resistance levels
- Technical indicators
You: Tell me about Bitcoin
Agent: [Provides Bitcoin analysis]
You: Now analyze Ethereum
Agent: [Provides Ethereum analysis]
You: How does Ethereum's sentiment compare to Bitcoin?
Agent: [Retrieves previous analyses and compares sentiment metrics]
You: What's the RSI for Bitcoin?
Agent: [Performs technical analysis and provides RSI value]
You: What about Ethereum?
Agent: [Understands context and provides Ethereum's RSI]
You: Tell me about ETH
Agent: I'd be happy to help! When you say 'Tell me about ETH', would you like:
- A comprehensive analysis (fundamental, price, sentiment, technical)?
- Just the current price and market data?
- Recent news articles?
- A specific type of analysis?
Please let me know what you're most interested in!
You: Comprehensive analysis
Agent: [Performs comprehensive analysis]
You: Provide me some news about blockchain
Agent: [Fetches general blockchain technology news articles]
You: What's happening with DeFi?
Agent: [Searches for DeFi-related news and information]
- CoinGecko API: Primary data source for cryptocurrency information
  - No API key required for basic usage
  - Rate limits: ~50 calls/minute for free tier
  - Provides: price data, market data, historical data, community metrics
- Alternative.me API: Fear & Greed Index
  - Free public API
  - No authentication required
- NewsAPI: News articles for cryptocurrencies and blockchain topics
  - Optional API key (set `NEWSAPI_KEY` in `.env`)
  - Provides: recent news articles, media coverage, blockchain technology news
  - Supports both specific cryptocurrency queries and general blockchain topics
- LangSmith: Observability and tracing platform (optional)
  - Optional API key (set `LANGSMITH_API_KEY` in `.env`)
  - Provides: workflow tracing, performance monitoring, debugging tools
  - Enables comprehensive observability for agent operations and tool calls
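Staying under the free-tier rate limits above is largely a matter of caching. Here is a minimal TTL-cache sketch in the spirit of the repository layer; `make_cached_fetcher` and `fake_fetch` are illustrative names, not the project's actual API:

```python
import time

def make_cached_fetcher(fetch, ttl=300):
    """Wrap a fetch function with a simple TTL cache keyed on its argument.

    Reusing responses for `ttl` seconds helps stay under CoinGecko's
    free-tier rate limit (~50 calls/minute).
    """
    cache = {}  # key -> (expiry_timestamp, value)

    def cached(key):
        now = time.time()
        hit = cache.get(key)
        if hit and hit[0] > now:
            return hit[1]  # still fresh: no network call
        value = fetch(key)
        cache[key] = (now + ttl, value)
        return value

    return cached

# A real fetcher would call e.g.
# https://api.coingecko.com/api/v3/simple/price?ids=<id>&vs_currencies=usd
calls = []
def fake_fetch(coin_id):
    calls.append(coin_id)
    return {coin_id: {"usd": 50000.0}}

get_price = make_cached_fetcher(fake_fetch, ttl=300)
get_price("bitcoin")
get_price("bitcoin")  # served from cache; fake_fetch runs only once
```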
The project includes a comprehensive evaluation framework to assess agent performance and accuracy.
Evaluates agent performance across multiple dimensions:
- Component Testing: Individual analyzer components (fundamental, price, sentiment, technical)
- Integration Testing: Agent conversational responses and tool selection
- Memory Testing: Conversation context and analysis history retention
- Performance Metrics: Response times, success rates, and overall scores
Run Performance Evaluation:
python run_evaluation.py --type performance

Generates `evaluation_report.json` with detailed metrics including:
- Analyzer success rates and response times
- Agent query handling performance
- Memory persistence verification
- Overall performance score
Uses an LLM-as-a-Judge architecture to evaluate response quality:
- Test Questions: Common questions and edge cases from `evaluation/test_questions.json`
- Evaluation Criteria: Accuracy, completeness, relevance, tool selection, clarity
- Scoring: 0.0-1.0 scale (Poor to Excellent)
- Multi-turn Scenarios: Tests conversation context maintenance
Run Accuracy Evaluation:
python run_evaluation.py --type accuracy
# Or run both evaluations
python run_evaluation.py --type both
# Use different judge model
python run_evaluation.py --type accuracy --judge-model gpt-4

Generates `accuracy_evaluation_report.json` with:
- Individual question scores and feedback
- Category breakdown (comprehensive_analysis, price_query, etc.)
- Performance tier distribution
- Overall accuracy statistics
Both evaluations generate detailed JSON reports:
- `evaluation_report.json`: Performance metrics and component tests
- `accuracy_evaluation_report.json`: Accuracy scores and LLM judge feedback
See evaluation/README.md and evaluation/README_ACCURACY.md for detailed documentation.
The agent includes semantic caching to reduce API costs and improve response times by reusing similar responses.
- Query Embedding: Converts queries to embedding vectors using OpenAI's embedding model
- Similarity Search: Finds similar cached queries using cosine similarity
- Threshold Matching: Returns cached response if similarity exceeds threshold (default: 0.85)
- Expiration Validation: Each cache entry has a 1-hour expiration deadline (configurable via `SEMANTIC_CACHE_TTL`)
- Response Validation: Invalid responses (errors, API failures) are automatically filtered and not cached
- Persistence: Cache is saved to `semantic_cache.json` and persists across restarts
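The similarity search described above reduces to cosine similarity over embedding vectors. Here is a minimal sketch; the `lookup` helper and the cache shape are hypothetical, not the exact interface of `src/core/semantic_cache.py`:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def lookup(query_vec, cache, threshold=0.85):
    """Return the cached response whose embedding is most similar to
    query_vec, provided the best similarity clears the threshold."""
    best_score, best_response = 0.0, None
    for entry_vec, response in cache:
        score = cosine_similarity(query_vec, entry_vec)
        if score > best_score:
            best_score, best_response = score, response
    return best_response if best_score >= threshold else None
```

In the real system the embeddings come from OpenAI's embedding model, and each entry additionally carries a timestamp, an expiration deadline, and a hit count.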
- Cost Savings: Reduces OpenAI API calls for similar queries
- Faster Responses: Cached responses return instantly
- Data Quality: Only valid responses are cached (errors and API failures are filtered)
- Automatic Expiration: Entries expire after 1 hour to ensure data freshness
- Persistence: Cache survives restarts - automatically loaded on startup
- Automatic: Works transparently without code changes
Add to your .env file:
# Enable/disable semantic cache
SEMANTIC_CACHE_ENABLED=true
# Similarity threshold (0.0-1.0)
# Higher = more strict matching, Lower = more lenient matching
SEMANTIC_CACHE_THRESHOLD=0.85
# Maximum number of cached entries
SEMANTIC_CACHE_SIZE=1000
# Time-to-live in seconds (default: 1 hour)
SEMANTIC_CACHE_TTL=3600
# Optional: Custom cache file path
# SEMANTIC_CACHE_FILE=./cache/semantic_cache.json

The semantic cache is automatically enabled when SEMANTIC_CACHE_ENABLED=true. It works transparently:
- First Query: "Tell me about Bitcoin"
  - Cache miss → LLM call → response cached
- Similar Query: "What can you tell me about Bitcoin?"
  - Cache hit → instant response (no LLM call)
- Different Query: "What's the price of Ethereum?"
  - Cache miss → LLM call → new entry cached
View cache statistics in the CLI:
> cache
Shows cache size, total hits, average hits per entry, and configuration settings.
Cache Behavior:
- Cache retrieval is only used for standalone queries (first query in a conversation) to maintain context accuracy
- All queries are cached (not just the first one) for future use
- Each cache entry includes an expiration timestamp (1 hour by default)
- Invalid responses (errors, API failures) are automatically filtered and not cached
- Expired entries are automatically removed when accessed
Cache File: The cache file semantic_cache.json is automatically created in the project root when the semantic cache is initialized. It stores query-response pairs with embeddings, timestamps, expiration deadlines, and hit counts.
You: Tell me about Ethereum
Agent: I'll perform a comprehensive analysis of Ethereum (ETH) for you...
**Fundamental Analysis:**
- Market Cap: $221B (Rank #2)
- Trading Volume: $15.2B
- Circulating Supply: 120M ETH
- Volume/Market Cap: 6.8% (healthy liquidity)
**Price Analysis:**
- Current Price: $1,847
- 24h Change: -2.1%
- 7d Change: +3.2%
- Volatility: Moderate
- Support: $1,780 | Resistance: $1,920
**Sentiment Analysis:**
- Sentiment Score: 64/100 (Positive)
- Twitter Followers: 3.2M
- Reddit Subscribers: 1.5M
- Fear & Greed Index: 58 (Greed)
**Technical Analysis:**
- RSI (14): 62 (Neutral to slightly overbought)
- 7-day SMA: Below current price (Bullish)
- MACD: Positive (Bullish momentum)
You: Compare Bitcoin and Ethereum sentiment
Agent: Let me retrieve the sentiment data for both...
Bitcoin Sentiment:
- Score: 71/100 (Very Positive)
- Social mentions up 18%
- Fear & Greed: 62
Ethereum Sentiment:
- Score: 64/100 (Positive)
- Social mentions up 23%
- Fear & Greed: 58
Comparison:
Bitcoin shows slightly stronger positive sentiment, but Ethereum has
faster-growing social engagement...
crypto-analysis-agent/
├── main.py # Application entry point
├── requirements.txt # Python dependencies
├── env.template # Environment variable template
├── .gitignore # Git ignore rules
├── README.md # This file
├── ARCHITECTURE.md # Detailed architecture documentation
├── run_evaluation.py # Evaluation runner script
├── semantic_cache.json # Semantic cache persistence file (auto-generated)
├── evaluation/ # Evaluation framework
│ ├── evaluate_agent.py # Performance evaluation
│ ├── evaluate_accuracy.py # Accuracy evaluation (LLM-as-a-Judge)
│ ├── llm_judge.py # LLM judge implementation
│ ├── test_questions.json # Test questions for accuracy evaluation
│ ├── evaluation_scenarios.json # Test scenarios
│ ├── README.md # Evaluation documentation
│ └── README_ACCURACY.md # Accuracy evaluation documentation
└── src/
├── __init__.py # Package initialization
├── agents/ # Agent implementation
│ ├── agent.py # LangChain agent class
│ ├── tools.py # Agent tools and functions
│ └── prompts.py # System prompts
├── analyzers/ # Analysis modules
│ ├── fundamental_analyzer.py
│ ├── price_analyzer.py
│ ├── sentiment_analyzer.py
│ └── technical_analyzer.py
├── api/ # API clients
│ ├── coingecko_client.py
│ ├── fear_greed_client.py
│ └── newsapi_client.py
├── repositories/ # Data access layer
│ └── coin_repository.py
├── services/ # Business logic
│ ├── coin_service.py
│ └── analysis_service.py
├── ui/ # User interface
│ └── cli.py
└── core/ # Core utilities
├── cache.py # TTL-based cache
├── semantic_cache.py # Semantic caching with embeddings
├── exceptions.py
├── interfaces.py
├── logging_config.py
└── progress.py
The agent supports LangSmith integration for workflow tracing and observability. LangSmith provides comprehensive monitoring and debugging capabilities for LangChain applications.
- Workflow Visualization: See the complete execution flow of agent interactions with detailed step-by-step breakdown
- Performance Monitoring: Track latency, token usage, and costs for each operation
- Debugging: Inspect tool calls, LLM requests, responses, and intermediate states
- Analytics: Analyze patterns in agent behavior, tool usage frequency, and success rates
- Error Tracking: Monitor errors and exceptions across all operations
- Get a LangSmith API Key:
  - Sign up at https://smith.langchain.com/
  - Navigate to Settings → API Keys
  - Create a new API key
- Configure in `.env`:
# Enable LangSmith tracing
LANGSMITH_ENABLED=true
# Your LangSmith API key (required when enabled)
LANGSMITH_API_KEY=lsv2_pt_xxxxxxxxxxxxxxxxxxxxx
# Optional: Project name for organizing traces
# If not set, traces will go to your default project
LANGSMITH_PROJECT=crypto-analysis-agent

- View Traces:
- Once enabled, all agent interactions are automatically traced
- Visit https://smith.langchain.com/ to view your traces
- Filter by project, date, or search for specific queries
- Agent Invocations: Complete agent execution flow
- Tool Calls: All tool invocations including:
  - `get_coin_info`: Coin identification
  - `fundamental_analysis`: Fundamental analysis
  - `price_analysis`: Price trend analysis
  - `sentiment_analysis`: Sentiment metrics
  - `technical_analysis`: Technical indicators
  - `get_previous_analysis`: Historical analysis retrieval
- LLM Requests: All OpenAI API calls with prompts and responses
- Token Usage: Input/output token counts and costs
- Latency Metrics: Response times for each operation
- Error Handling: Exceptions and error messages
- Debug Issues: Quickly identify where problems occur in the agent workflow
- Optimize Performance: Find bottlenecks and optimize slow operations
- Monitor Costs: Track token usage and API costs over time
- Improve Quality: Analyze which tool combinations work best for different queries
- API Rate Limits: CoinGecko free tier has rate limits (~50 calls/minute)
- Data Accuracy: Real-time data depends on external API availability
- Historical Data: Limited to data available from CoinGecko
- Sentiment Analysis: Based on available social metrics (may not include all platforms)
- Semantic Cache: Requires OpenAI API key for embedding generation (small cost per query)
- Cache Expiration: Cached responses expire after 1 hour (configurable) to ensure data freshness
- Not Financial Advice: This tool is for educational purposes only