A language processing system with modular workflows and LLM-agnostic architecture for product information handling.
```bash
# Clone repository
git clone https://github.com/your-repo/product-workflows.git
cd product-workflows

# Create virtual environment
python3 -m venv venv
source venv/bin/activate

# Install dependencies
pip install -e .

# Install langgraph development tools (optional)
pip install "langgraph[dev]"
```

Create a `.env` file in the project root:
```bash
PROJECT_ID="your-gcp-project"
LOCATION="your-gcp-region"
STAGING_BUCKET="gs://your-bucket"
LLM_PROVIDER="vertexai"  # or "openai"
```

```python
from core.workflows.app import SimpleLangGraphApp

# Initialize with VertexAI
app = SimpleLangGraphApp(
    project="your-project",
    location="us-central1",
    provider="vertexai",
)
app.set_up()

# Execute product query
response = app.query("Get details for premium headphones")
print(response)
```

```
src/
├── core/
│   ├── nodes/                  # Processing components
│   │   ├── routing.py          # Message routing logic
│   │   ├── tools/              # Custom tools
│   │   └── output.py           # Response formatting
│   └── workflows/              # Workflow definitions
│       └── product_workflow.py # Main workflow graph
├── providers/                  # LLM integrations
│   └── llm_factory.py          # Provider switching
├── config/                     # Environment setup
├── main.py                     # Main entry point for CLI execution
└── agent.py                    # Entry point for langgraph dev server
```
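Provider switching in `src/providers/llm_factory.py` can be pictured as a small factory keyed on the `LLM_PROVIDER` setting. The sketch below is illustrative only; the names `LLMConfig` and `make_llm`, the default model strings, and the returned tuples are assumptions, not the project's actual API.

```python
# Hypothetical sketch of an LLM provider factory; names are illustrative,
# not the real llm_factory.py API.
from dataclasses import dataclass


@dataclass
class LLMConfig:
    provider: str   # "vertexai" or "openai", typically read from LLM_PROVIDER
    model: str = ""


def make_llm(config: LLMConfig):
    """Return a (provider, model) handle for the configured backend."""
    if config.provider == "vertexai":
        # In the real project this would construct a VertexAI chat model.
        return ("vertexai", config.model or "gemini-1.5-pro")
    if config.provider == "openai":
        # ...and here an OpenAI chat model.
        return ("openai", config.model or "gpt-4o-mini")
    raise ValueError(f"Unknown LLM_PROVIDER: {config.provider!r}")
```

Because callers only depend on `make_llm`, swapping providers is a one-line config change rather than an edit to every workflow node.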
The workflow runs in four stages:

```
1. Input Reception  →  2. Model Routing
                              ↓
4. Output Formatting  ←  3. Tool Execution
```

Key processing stages:
- Model Routing: the LLM decides between a direct response and tool usage
- Tool Execution: product details are retrieved from the knowledge base
- Response Formatting: the final output is standardized
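The stages above can be sketched as plain functions. In the real project each stage is a LangGraph node and the routing decision comes from the LLM; every function name, the state dict shape, and the keyword heuristic below are hypothetical stand-ins for illustration.

```python
# Plain-Python sketch of the four-stage pipeline; all names are illustrative.

def receive_input(user_query: str) -> dict:
    """Stage 1: wrap the raw query into workflow state."""
    return {"query": user_query, "tool_result": None, "output": None}


def route(state: dict) -> str:
    """Stage 2: choose between tool usage and a direct response.
    The real node asks the LLM; this stub uses a keyword heuristic."""
    return "tools" if "details" in state["query"].lower() else "respond"


def run_tool(state: dict) -> dict:
    """Stage 3: fetch product details from a stubbed knowledge base."""
    state["tool_result"] = {"name": "premium headphones", "price": 199}
    return state


def format_output(state: dict) -> dict:
    """Stage 4: standardize the final response."""
    result = state["tool_result"] or state["query"]
    state["output"] = {"status": "ok", "data": result}
    return state


state = receive_input("Get details for premium headphones")
if route(state) == "tools":
    state = run_tool(state)
state = format_output(state)
```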
Key features:
- LLM Flexibility: switch between VertexAI and OpenAI via config
- Modular Design: add new nodes without affecting existing flows
- Structured Outputs: consistent formatting for API consumption
- Tool System: extendable product information base
- Local Testing: console interface for workflow validation
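One way to extend a product information base like the one under `src/core/nodes/tools/` is a decorator-based registry. This is only a sketch under assumed names; `TOOLS`, `register_tool`, and the stubbed knowledge base are invented for illustration.

```python
# Hypothetical decorator-based tool registry; not the project's actual API.
from typing import Callable, Dict

TOOLS: Dict[str, Callable[[str], dict]] = {}


def register_tool(name: str):
    """Decorator that adds a function to the tool registry under `name`."""
    def wrap(fn: Callable[[str], dict]) -> Callable[[str], dict]:
        TOOLS[name] = fn
        return fn
    return wrap


@register_tool("product_details")
def product_details(product_name: str) -> dict:
    """Look up a product in a stubbed in-memory knowledge base."""
    kb = {"premium headphones": {"price": 199, "stock": 12}}
    return kb.get(product_name, {"error": "not found"})
```

With this pattern, adding a new tool is just another decorated function; the routing node can dispatch by name without being modified.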
```bash
# Run main application (CLI mode)
python -m src.main

# Run interactive development server
langgraph dev src.agent:graph

# Execute tests
pytest tests/ -v

# Generate coverage report
coverage run -m pytest tests/
coverage report
```

The project has two main entry points:
- `src/main.py`: command-line interface for running predefined test queries
  - Use this for quick testing and batch processing
  - Run with `python -m src.main`
- `src/agent.py`: exposes the workflow graph for the LangGraph development server
  - Use this for interactive debugging and visualization
  - Run with `langgraph dev src.agent:graph`
  - Access the web interface at http://localhost:3000
Apache 2.0 - See LICENSE for details