Real-time AI video processing. Open source.

Process live video streams with AI models in real time. A WebRTC pipeline with 200-400 ms latency powers live video effects, face detection, and custom ML models.

Simple architecture: WebRTC + Redis + RabbitMQ



[Ooblex architecture diagram]


What You Get


Real-Time Processing: 200-400ms latency via WebRTC
Built-In Effects: 10 OpenCV effects (no model downloads needed)
Bring Your Own Models: TensorFlow, PyTorch, ONNX, OpenVINO, TensorRT
Horizontal Scaling: Add workers to increase throughput
Docker Ready: One command deployment
Production Guides: AWS, GCP, Azure, Kubernetes
Security Patched: All critical CVEs fixed (November 2024)
Open Source: Apache 2.0 license
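The scaling model above, where adding workers increases throughput, can be sketched with a plain Python work queue. This is a dependency-free illustration of the fan-out pattern, not Ooblex's actual worker API (which distributes jobs over RabbitMQ); the frame payloads and the `invert_effect` function are hypothetical stand-ins.

```python
import queue
import threading

def invert_effect(frame):
    """Toy stand-in for a per-frame effect (real effects use OpenCV)."""
    return [255 - px for px in frame]

def run_pipeline(frames, num_workers):
    """Fan frames out to num_workers consumers, the way Ooblex fans
    work out to its worker processes. Returns frames in input order."""
    tasks, results = queue.Queue(), queue.Queue()
    for item in enumerate(frames):
        tasks.put(item)

    def worker():
        while True:
            try:
                i, frame = tasks.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            results.put((i, invert_effect(frame)))

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    out = [None] * len(frames)
    while not results.empty():
        i, frame = results.get()
        out[i] = frame
    return out

frames = [[0, 128, 255]] * 8
processed = run_pipeline(frames, num_workers=4)
```

Because each frame job is independent, doubling the worker count roughly doubles throughput until the broker or network becomes the bottleneck.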


Get Started in 5 Minutes


Zero-friction demo with OpenCV effects (no AI models required):

docker compose -f docker-compose.simple.yml up

Open http://localhost:8800 and see real-time face detection, background blur, edge detection, and more.

All effects run on CPU at 30-100+ FPS.
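The built-in effects are ordinary per-frame CPU transforms. As a rough illustration of what one looks like, here is a toy horizontal-gradient edge detector written in pure Python; the real effects use OpenCV (e.g. Canny edge detection), and this sketch exists only to show the shape of a frame-in, frame-out transform.

```python
def edge_detect(img):
    """Toy edge detector: absolute horizontal gradient on a 2-D
    grayscale frame (list of rows). A sketch of the idea behind the
    OpenCV-based built-in effects, not the actual implementation."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(1, w - 1):
            # Large neighbour differences mark edges.
            out[y][x] = abs(img[y][x + 1] - img[y][x - 1])
    return out

# A frame with a sharp vertical edge down the middle.
frame = [[0, 0, 0, 255, 255, 255] for _ in range(4)]
edges = edge_detect(frame)
```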


Use Cases


🎥 Live Video Effects: Apply AI to streaming (Twitch, YouTube)
📹 Video Calls: Real-time background blur, filters
🔒 Security Cameras: Person/vehicle detection with low latency
🤖 Robotics: AI-augmented vision for remote control
🧠 Custom ML: Integrate your own models
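Integrating a custom model boils down to wrapping its inference call behind a per-frame interface. The sketch below shows one plausible shape for such an adapter; `FrameModel` and `ThresholdModel` are hypothetical names for illustration, not Ooblex's actual plugin API, and the "model" here is a trivial threshold rather than a real TensorFlow/PyTorch/ONNX network.

```python
from typing import List, Protocol

class FrameModel(Protocol):
    """Hypothetical per-frame model interface. A real TensorFlow,
    PyTorch, or ONNX model would run its inference inside process()."""
    def process(self, frame: List[int]) -> List[int]: ...

class ThresholdModel:
    """Stand-in 'model': binarises pixel values at a threshold."""
    def __init__(self, threshold: int = 128):
        self.threshold = threshold

    def process(self, frame: List[int]) -> List[int]:
        return [255 if px >= self.threshold else 0 for px in frame]

def run_model(model: FrameModel, frames: List[List[int]]) -> List[List[int]]:
    """Apply the model frame by frame, as a worker would per stream."""
    return [model.process(f) for f in frames]

out = run_model(ThresholdModel(), [[0, 100, 200]])
```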
