# Go-based AI Server with Comprehensive Debug Logging
golang-ai-server is a server application written in Go that provides AI processing capabilities with Ollama integration. It supports multi-modal input including text prompts and images, with comprehensive debug logging for development, troubleshooting, and monitoring.
## Features

- Ollama Integration: Direct integration with the Ollama API for AI processing
- Multi-modal Input: Supports text prompts and image file processing
- Comprehensive Logging: Structured debug logging with multiple levels and formats
- File Processing: Base64 encoding for images and PDFs
- Performance Monitoring: Request timing and performance metrics
- Development-Friendly: Extensive debugging information for development
## Prerequisites

- Go 1.21+ (required for `log/slog` support)
- Ollama running locally on port 11434
- Images or files to process (optional)
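Before running the server, it can help to confirm that Ollama is actually listening on port 11434. The sketch below is illustrative and not part of this repository; it simply issues a GET against the base URL, which Ollama answers when it is up:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// ollamaReachable reports whether an HTTP server answers at baseURL
// within a short timeout.
func ollamaReachable(baseURL string) bool {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get(baseURL)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	if ollamaReachable("http://localhost:11434/") {
		fmt.Println("Ollama is up")
	} else {
		fmt.Println("Ollama is not reachable on port 11434")
	}
}
```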
## Getting Started

Clone the repository:

```bash
git clone https://github.com/bittelc/golang-ai-server.git
cd golang-ai-server
```

Install dependencies:

```bash
go mod tidy
```

Run with default settings:

```bash
go run main.go
```

Run with debug logging:

```bash
LOG_LEVEL=DEBUG go run main.go
```

Run with JSON logging format:

```bash
LOG_LEVEL=INFO LOG_FORMAT=json go run main.go
```

## Logging

This application features a comprehensive logging system for debugging, monitoring, and development purposes.
### Log Levels

- DEBUG: Detailed debugging information, file operations, HTTP details
- INFO: General application flow and status (default)
- WARN: Warning conditions and incomplete responses
- ERROR: Error conditions and failures

### Log Formats

- text: Human-readable text format (default)
- json: Structured JSON format for log aggregation
### Configuration

```bash
# Set log level
export LOG_LEVEL=DEBUG   # DEBUG, INFO, WARN, ERROR

# Set log format
export LOG_FORMAT=json   # text, json
```
### What Gets Logged

- Application Lifecycle
  - Startup and shutdown events
  - Configuration loading
  - Total execution time
- User Input Processing
  - Prompt collection and validation
  - Image path parsing and validation
  - File reading and encoding operations
- File Operations
  - File opening, reading, and encoding
  - File size tracking and performance
  - Base64 encoding progress
- HTTP Operations
  - Ollama API requests and responses
  - Request/response timing and size
  - HTTP status codes and headers
- Error Handling
  - Detailed error context and stack traces
  - Operation failure points
  - Recovery attempts
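The file-encoding step tracked above amounts to reading the file and base64-encoding its bytes. A minimal sketch (function names here are illustrative, not the repository's actual API):

```go
package main

import (
	"encoding/base64"
	"fmt"
	"os"
)

// encodeBytes base64-encodes raw file contents so they can be embedded
// in a JSON request body.
func encodeBytes(data []byte) string {
	return base64.StdEncoding.EncodeToString(data)
}

// encodeFile reads a file and returns its base64 encoding plus the raw
// size, which supports the file-size tracking mentioned above.
func encodeFile(path string) (string, int, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return "", 0, err
	}
	return encodeBytes(data), len(data), nil
}

func main() {
	enc, size, err := encodeFile("test_image.txt")
	if err != nil {
		fmt.Println("read failed:", err)
		return
	}
	fmt.Printf("encoded %d bytes into %d base64 characters\n", size, len(enc))
}
```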
Run the interactive logging demonstration:

```bash
./demo_logging.sh
```

View detailed logs during development:

```bash
LOG_LEVEL=DEBUG go run main.go 2>&1 | tee app.log
```

For production-like monitoring:

```bash
LOG_LEVEL=INFO LOG_FORMAT=json go run main.go
```

See LOGGING.md for complete logging documentation.
## Usage

1. Start the application:

   ```bash
   go run main.go
   ```

2. Enter your prompt when asked.
3. Optionally provide image paths (comma-separated, limit of 5):

   ```text
   /path/to/image1.jpg, /path/to/image2.png
   ```

4. View the AI response and processing logs.
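Parsing the comma-separated path list with the 5-image limit can be sketched as follows (the actual `input` package may differ):

```go
package main

import (
	"fmt"
	"strings"
)

const maxImages = 5

// parseImagePaths splits a comma-separated list, trims whitespace,
// drops empty entries, and enforces the image limit.
func parseImagePaths(input string) ([]string, error) {
	var paths []string
	for _, p := range strings.Split(input, ",") {
		if p = strings.TrimSpace(p); p != "" {
			paths = append(paths, p)
		}
	}
	if len(paths) > maxImages {
		return nil, fmt.Errorf("too many images: %d (limit %d)", len(paths), maxImages)
	}
	return paths, nil
}

func main() {
	paths, err := parseImagePaths("/path/to/image1.jpg, /path/to/image2.png")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println(paths) // [/path/to/image1.jpg /path/to/image2.png]
}
```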
### Example Session

```text
$ LOG_LEVEL=INFO go run main.go
User prompt: Describe this image
Path to images, separated by commas, limit of 5 (optional): test_image.txt
[Processing logs will appear here]
[AI response will appear here]
Completed in 2.5s
```

## Project Structure

- main.go: Application entry point and orchestration
- input/: User input handling and file processing
- ollama/: Ollama API client and request handling
- logger/: Centralized logging utilities and configuration
```text
golang-ai-server/
├── main.go            # Main application
├── input/
│   └── input.go       # User input and file processing
├── ollama/
│   └── server.go      # Ollama API client
├── logger/
│   └── logger.go      # Logging utilities
├── LOGGING.md         # Logging documentation
├── demo_logging.sh    # Logging demonstration
└── test_image.txt     # Test file for logging demo
```
## Development

When adding new functionality, use the logging utilities:

```go
// Log processing steps
logger.LogProcessingStep("operation_name", map[string]interface{}{
    "param1": value1,
    "param2": value2,
})

// Log errors with context
logger.LogError("operation_name", err, map[string]interface{}{
    "context1": value1,
    "context2": value2,
})

// Log file operations
logger.LogFileOperation("read_file", filePath, fileSize)
```