Multimodal AI Platform with Dynamic Routing & Assistant Framework
✅ Supported Platforms:
- OpenAI (GPT-4o, DALL-E 3, Whisper, TTS)
- Anthropic (Claude 3.5 models)
- Google Gemini (Vision & Language)
- Local Models via Ollama (Llama3, Phi-3, Mistral, etc.)
- Groq (Llama3-70B, Mixtral)
- Cohere (Command R+)
- OpenRouter
✨ Core Capabilities:
- Dynamic conversation routing with SemanticRouter
- Multi-modal interactions (Text/Image/Audio)
- Assistant framework with code interpretation
- Real-time response streaming
- Cross-provider model switching
- Local model support with Ollama integration
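The routing step can be pictured as scoring an incoming message against example phrases for each route and dispatching to the best match. The sketch below is a toy stand-in for the embedding-based SemanticRouter the project actually uses; the route names, phrases, and keyword-overlap scoring are invented for illustration only:

```python
# Toy illustration of semantic routing: score a user message against
# per-route example phrases and pick the best match. The real app uses
# the SemanticRouter library with embeddings; this keyword overlap is
# only a stand-in to show the control flow. Route names are hypothetical.
ROUTES = {
    "image-generation": ["draw", "generate an image", "picture of", "illustration"],
    "vision-analysis": ["what is in this image", "describe the photo", "analyze image"],
    "chat": ["explain", "summarize", "write", "help me"],
}

def route_message(message: str) -> str:
    """Return the route whose example phrases best overlap the message."""
    words = set(message.lower().split())

    def score(examples: list[str]) -> int:
        return sum(len(words & set(phrase.split())) for phrase in examples)

    best = max(ROUTES, key=lambda name: score(ROUTES[name]))
    # Fall back to plain chat when nothing matches at all.
    return best if score(ROUTES[best]) > 0 else "chat"
```

With embeddings, the overlap score is replaced by cosine similarity between the message vector and each route's example vectors, but the dispatch shape stays the same.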
🛠️ Prerequisites:
- Python 3.11+
- Ollama (for local models) - Install Guide
⚡ Installation:

```bash
# Clone the repository
git clone https://github.com/vinhnx/VT.ai.git
cd VT.ai

# Create virtual environment
python -m venv .venv
source .venv/bin/activate   # Linux/macOS
# .venv\Scripts\activate    # Windows

# Install dependencies
pip install -r requirements.txt
```
🔑 Configuration:

```bash
cp .env.example .env
```

Populate `.env` with your API keys:

```env
OPENAI_API_KEY=sk-your-key
GEMINI_API_KEY=your-gemini-key
COHERE_API_KEY=your-cohere-key
ANTHROPIC_API_KEY=your-claude-key

# Local models
OLLAMA_HOST=http://localhost:11434
```
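Providers with no key in `.env` will simply be unavailable at runtime, so it helps to verify the configuration before launching. A minimal, hypothetical startup check (the `missing_providers` helper is not part of VT.ai's codebase):

```python
import os
from collections.abc import Mapping

# Hypothetical startup check: report providers whose API key is absent
# so failures surface before the first request, not during a chat.
PROVIDER_KEYS = {
    "OpenAI": "OPENAI_API_KEY",
    "Gemini": "GEMINI_API_KEY",
    "Cohere": "COHERE_API_KEY",
    "Anthropic": "ANTHROPIC_API_KEY",
}

def missing_providers(env: Mapping[str, str] = os.environ) -> list[str]:
    """Return providers with no key configured; they will be unavailable."""
    return [name for name, var in PROVIDER_KEYS.items() if not env.get(var)]
```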
🚀 Running:

```bash
# Train the semantic router (recommended)
python src/router/trainer.py

# Launch the interface
chainlit run src/app.py -w
```
⌨️ Keyboard Shortcuts:

| Shortcut | Action |
|---|---|
| `Ctrl+/` | Switch model provider |
| `Ctrl+,` | Open settings |
| `Ctrl+L` | Clear conversation history |
💬 Chat Features:
- Multi-LLM conversations
- Dynamic model switching
- Image generation & analysis
- Audio transcription
🤖 Assistant Framework:

```python
# Example assistant capability
async def solve_math_problem(problem: str):
    assistant = MinoAssistant()
    return await assistant.solve(problem)
```
- Code interpreter for complex calculations
- File attachments (PDF/CSV/Images)
- Persistent conversation threads
- Custom tool integrations
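Custom tool integration commonly follows a registry pattern: tools are registered under a name, and the assistant dispatches calls by that name. The sketch below assumes a decorator-based registration; VT.ai's actual tool API may differ, and `word_count` is an invented example tool:

```python
from typing import Callable

# Hypothetical tool registry: maps a tool name to the callable that
# implements it. The assistant layer looks tools up here by name.
TOOLS: dict[str, Callable[..., object]] = {}

def tool(name: str):
    """Decorator that registers a callable as an assistant tool."""
    def register(fn: Callable[..., object]):
        TOOLS[name] = fn
        return fn
    return register

@tool("word_count")
def word_count(text: str) -> int:
    """Example tool: count whitespace-separated words."""
    return len(text.split())

# The assistant layer can then dispatch by name:
# TOOLS["word_count"]("hello world")
```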
📂 Project Structure:

```
VT.ai/
├── src/
│   ├── assistants/       # Custom AI assistant implementations
│   ├── router/           # Semantic routing configuration
│   ├── utils/            # Helper functions & configs
│   └── app.py            # Main application entrypoint
├── public/               # Static assets
├── requirements.txt      # Python dependencies
└── .env.example          # Environment template
```
🧠 Supported Models:

| Category | Models |
|---|---|
| Chat | GPT-4o, Claude 3.5, Gemini 1.5, Llama3-70B, Mixtral 8x7B |
| Vision | GPT-4o, Gemini 1.5 Pro, Llama3.2 Vision |
| Image Gen | DALL-E 3 |
| TTS | OpenAI TTS-1, TTS-1-HD |
| Local | Llama3, Phi-3, Mistral, DeepSeek R1 series |
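Because model access goes through LiteLLM, switching providers usually reduces to changing the model identifier string. The mapping below follows LiteLLM's naming conventions (e.g. the `ollama/` prefix for local models) but the chosen defaults are assumptions for illustration, not VT.ai's configured values:

```python
# Illustrative provider -> LiteLLM-style model string mapping.
# The specific identifiers are assumptions; check each provider's
# model list for the names it actually serves.
DEFAULT_CHAT_MODEL = {
    "openai": "gpt-4o",
    "anthropic": "claude-3-5-sonnet-20240620",
    "groq": "groq/llama3-70b-8192",
    "ollama": "ollama/llama3",
}

def model_for(provider: str) -> str:
    """Resolve a provider name to its default chat model string."""
    try:
        return DEFAULT_CHAT_MODEL[provider]
    except KeyError:
        raise ValueError(f"Unknown provider: {provider}")
```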
🧑‍💻 Development:

```bash
# Install development tools
pip install -r requirements-dev.txt

# Run tests
pytest tests/

# Format code
black .
```
🤝 Contributing:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Add type hints for new functions
4. Update documentation
5. Open a pull request
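As a reference point for the type-hint guideline above, a small example in the expected style (the function itself is invented):

```python
# Example of the annotation style expected for new functions:
# typed parameters, a typed return value, and a docstring.
def truncate(text: str, limit: int = 80) -> str:
    """Return text shortened to at most `limit` characters, with an ellipsis."""
    return text if len(text) <= limit else text[: limit - 3] + "..."
```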
📄 License:
MIT License - see LICENSE for full text.

🙏 Acknowledgments:
- Inspired by Chainlit for the chat interface
- Powered by LiteLLM for model abstraction
- Semantic routing via SemanticRouter