An AI-powered video editing application with a Qt6/C++ frontend and Python backend, featuring node-based workflows and agent-driven automation.
- Timeline Editor: Multi-track timeline with drag-and-drop support
- Video Player: Hardware-accelerated playback with frame-accurate seeking
- Clip Management: Split, trim, move, and arrange clips with ease
- Undo/Redo System: Full command pattern implementation for all operations
- Visual Node Graph: Create complex video processing pipelines
- AI-Powered Nodes: Text-to-video, image-to-video, style transfer, and more
- Custom Nodes: Extensible architecture for adding new processing nodes
- Real-time Preview: See changes as you build your workflow
- Natural Language Commands: Edit videos using plain English instructions
- Smart Suggestions: AI-powered recommendations for edits and effects
- Workflow Automation: Let the agent build node graphs automatically
- Speech-to-Text: Integrated Whisper model for transcription
- Proxy Media: Automatic proxy generation for smooth 4K+ editing
- Preview Cache: Intelligent caching for real-time playback
- Audio Waveform: Visual audio editing with waveform display
- Performance Profiler: Built-in profiling for optimization
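The undo/redo system above is described as a full command pattern implementation. As a minimal illustrative sketch of that pattern (the names `MoveClipCommand` and `UndoManager` here are hypothetical stand-ins, not the project's actual classes, which live in C++ in `undo_manager.cpp/h`):

```python
from abc import ABC, abstractmethod

class Command(ABC):
    """One reversible edit operation."""
    @abstractmethod
    def execute(self): ...
    @abstractmethod
    def undo(self): ...

class MoveClipCommand(Command):
    """Hypothetical example command: move a clip to a new start time."""
    def __init__(self, timeline, clip_id, new_start):
        self.timeline, self.clip_id, self.new_start = timeline, clip_id, new_start
        self.old_start = timeline[clip_id]  # remember state for undo

    def execute(self):
        self.timeline[self.clip_id] = self.new_start

    def undo(self):
        self.timeline[self.clip_id] = self.old_start

class UndoManager:
    """Two stacks: executed commands and undone commands."""
    def __init__(self):
        self._undo, self._redo = [], []

    def run(self, cmd):
        cmd.execute()
        self._undo.append(cmd)
        self._redo.clear()  # a new edit invalidates the redo history

    def undo(self):
        if self._undo:
            cmd = self._undo.pop()
            cmd.undo()
            self._redo.append(cmd)

    def redo(self):
        if self._redo:
            cmd = self._redo.pop()
            cmd.execute()
            self._undo.append(cmd)
```

Because every operation is an object carrying enough state to reverse itself, undo and redo fall out of simple stack manipulation.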
- Qt6 - Cross-platform GUI framework
- C++17 - High-performance core
- QMediaPlayer - Video playback
- QGraphicsView - Node graph and timeline rendering
- Python 3.8+ - AI and processing backend
- FastAPI - REST API server
- Whisper - Speech recognition
- Diffusers - AI model inference
- CMake 3.16+ - Build configuration
- Ninja - Fast build tool
```
.
├── src/
│   ├── ui/                  # Qt/C++ frontend components
│   │   ├── mainwindow.cpp/h
│   │   ├── timeline_widget.cpp/h
│   │   ├── video_player.cpp/h
│   │   ├── node_widget.cpp/h
│   │   └── agent_console.cpp/h
│   ├── engine/              # Core engine components
│   │   ├── timeline_model.cpp/h
│   │   ├── video_decoder.cpp/h
│   │   ├── render_pipeline.cpp/h
│   │   └── undo_manager.cpp/h
│   └── ai_nodes/            # Python backend
│       ├── node_server.py
│       ├── requirements.txt
│       └── nodes/           # AI processing nodes
├── workflows/               # Pre-built workflow templates
├── build/                   # Build output (gitignored)
└── CMakeLists.txt           # CMake configuration
```
- Qt6 (Core, Widgets, Multimedia)
- CMake 3.16+
- Python 3.8+
- C++17 compatible compiler
**Windows:**

```bash
# Clone the repository
git clone https://github.com/Hemanthc-dotcom/Kdenagent.git
cd Kdenagent

# Create build directory
mkdir build
cd build

# Configure with CMake
cmake .. -G Ninja

# Build
ninja

# Run
.\AgenticEditor.exe
```

**Linux/macOS:**

```bash
# Clone the repository
git clone https://github.com/Hemanthc-dotcom/Kdenagent.git
cd Kdenagent

# Create build directory
mkdir build && cd build

# Configure with CMake
cmake .. -G Ninja

# Build
ninja

# Run
./AgenticEditor
```

**Python backend setup:**

```bash
# Create virtual environment
python -m venv .venv

# Activate (Windows)
.venv\Scripts\activate

# Activate (Linux/macOS)
source .venv/bin/activate

# Install dependencies
pip install -r src/ai_nodes/requirements.txt
```

The backend starts automatically when you run the application.

- Import Media: Drag and drop videos or use File > Import
- Add to Timeline: Drag clips from the media pool to the timeline
- Edit: Use the toolbar tools to cut, trim, and arrange clips
- Preview: Play back your edit in the video player
- Export: Render your project to various formats
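To illustrate the bookkeeping behind cut and trim operations, here is a toy clip model (the `Clip` class and its fields are hypothetical, not the editor's real C++ data structures):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Clip:
    source: str   # path to the source media
    start: float  # in point within the source, seconds
    end: float    # out point within the source, seconds

    @property
    def duration(self):
        return self.end - self.start

    def split(self, at):
        """Split into two clips `at` seconds from the clip's start."""
        if not 0 < at < self.duration:
            raise ValueError("split point outside clip")
        cut = self.start + at
        return Clip(self.source, self.start, cut), Clip(self.source, cut, self.end)

    def trim(self, new_start=None, new_end=None):
        """Return a copy with adjusted in/out points."""
        return Clip(self.source,
                    self.start if new_start is None else new_start,
                    self.end if new_end is None else new_end)
```

Splitting never touches the source file; both halves just reference different in/out ranges of the same media.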
- Open the Node Editor from the View menu
- Add nodes from the palette (right-click in the canvas)
- Connect nodes by dragging between ports
- Configure node parameters in the properties panel
- Execute the workflow to process your media
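Conceptually, executing the workflow means running each node after its upstream connections. A dependency-order sketch using the standard library's `graphlib` (Python 3.9+); the `run_graph` helper and the node callables are illustrative assumptions, not the project's API:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_graph(nodes, edges, source_frame):
    """Run each node after its upstream dependencies.

    nodes: {name: callable}; edges: [(upstream, downstream), ...]
    """
    deps = {name: set() for name in nodes}
    for up, down in edges:
        deps[down].add(up)

    results = {}
    for name in TopologicalSorter(deps).static_order():
        upstream = [results[u] for u in sorted(deps[name])]
        # source nodes receive the input frame; others, their upstream results
        results[name] = nodes[name](*(upstream or [source_frame]))
    return results

# Example pipeline: load -> blur -> grade (string tags stand in for frames)
nodes = {
    "load": lambda frame: frame,
    "blur": lambda frame: frame + "+blur",
    "grade": lambda frame: frame + "+grade",
}
edges = [("load", "blur"), ("blur", "grade")]
```

Because the graph is acyclic, a topological order always exists, and each node sees fully computed inputs.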
- Open the Agent Console from the View menu or press `Ctrl+Shift+A`
- Type natural language commands like:
  - "Add a fade-in effect to the first clip"
  - "Create a text overlay saying 'Hello World'"
  - "Generate B-roll for this section"
- The AI will interpret your command and execute the appropriate actions
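The real agent interprets commands with an AI model; purely to illustrate the command-to-action mapping it produces, here is a toy rule-based dispatcher (the patterns and action names are made up):

```python
import re

# Toy dispatcher: each rule maps a phrase pattern to a hypothetical action name.
COMMANDS = [
    (re.compile(r"fade-?in", re.I), "apply_fade_in"),
    (re.compile(r"text overlay saying '([^']+)'", re.I), "add_text_overlay"),
    (re.compile(r"\bb-?roll\b", re.I), "generate_broll"),
]

def interpret(command):
    """Return (action, extracted_arguments) for a natural-language command."""
    for pattern, action in COMMANDS:
        match = pattern.search(command)
        if match:
            return action, match.groups()
    return "unknown", ()
```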
| Shortcut | Action |
|---|---|
| `Space` | Play/Pause |
| `J` | Rewind |
| `K` | Pause |
| `L` | Forward |
| `I` | Set In Point |
| `O` | Set Out Point |
| `Ctrl+Z` | Undo |
| `Ctrl+Shift+Z` | Redo |
| `Ctrl+S` | Save Project |
| `Delete` | Delete Selected |
| `Ctrl+Shift+A` | Open Agent Console |
- Text2Video: Generate video from text descriptions
- Image2Video: Animate static images
- Video2Video: Transform video style using AI
- StyleTransfer: Apply artistic styles to footage
- SuperResolution: Upscale video resolution
- LipSync: Synchronize lips to audio
- AudioReactive: Create visuals that react to audio
- SceneDetection: Automatically detect scene changes
- HighlightDetection: Find exciting moments in footage
- CaptionGenerator: Auto-generate subtitles
- BrollGenerator: Generate contextual B-roll footage
- MagicEraser: Remove unwanted objects from video
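The node set is extensible (see Custom Nodes above). One common way such an architecture exposes an extension point is a registry decorator; a hypothetical sketch, not the project's actual plugin API:

```python
NODE_REGISTRY = {}

def register_node(name):
    """Class decorator: make a processing node available in the palette."""
    def wrap(cls):
        NODE_REGISTRY[name] = cls
        return cls
    return wrap

@register_node("SceneDetection")
class SceneDetectionNode:
    def process(self, frames):
        # Placeholder logic: a real node would compare frame histograms;
        # here, a "scene change" is any frame differing from its predecessor.
        return [i for i in range(1, len(frames)) if frames[i] != frames[i - 1]]
```

With this shape, adding a new node type is one decorated class; the palette enumerates `NODE_REGISTRY` instead of hard-coding node names.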
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Qt Framework for the excellent cross-platform GUI toolkit
- OpenAI Whisper for speech recognition
- Hugging Face Diffusers for AI model inference
- The open-source community for various libraries and tools
For issues, questions, or suggestions, please open an issue on GitHub.
Made with ❤️ by Hemanth