CoHost.AI

A sophisticated AI streaming companion that provides real-time interaction through voice recognition, AI responses, and text-to-speech synthesis. Built for professional streaming environments.

[Screenshot: CoHost.AI streaming interface]

🌟 Key Features

  • Real-time Twitch chat integration via Streamer.bot
  • Character-driven AI responses using Ollama
  • High-quality Google Cloud Text-to-Speech
  • Push-to-talk voice recognition
  • Automatic OBS Studio scene management
  • Intelligent TTS caching for performance
  • Beautiful CLI interface with live monitoring

🧠 Tech Stack

🐍 Python · 🧠 Ollama · 🗣️ Google Cloud TTS · 📺 OBS Studio · 🎮 Streamer.bot · 🎤 Speech Recognition
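
The push-to-talk voice input from the feature list maps onto the SpeechRecognition package named in the stack above. Here is a rough sketch of that capture flow; the F4 hotkey, the keyboard package, and the use of Google's free recognizer are illustrative assumptions, not necessarily what CoHost.AI ships with:

# sketch: hold-a-hotkey voice capture with the SpeechRecognition package
# the 'f4' hotkey and the keyboard package are illustrative assumptions
import keyboard                  # pip install keyboard
import speech_recognition as sr  # pip install SpeechRecognition

recognizer = sr.Recognizer()

def capture_push_to_talk(hotkey: str = "f4") -> str:
    keyboard.wait(hotkey)                      # block until the hotkey is pressed
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source, duration=0.3)
        audio = recognizer.listen(source, phrase_time_limit=10)
    return recognizer.recognize_google(audio)  # transcribe the captured audio

if __name__ == "__main__":
    print("You said:", capture_push_to_talk())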

🏗️ System Architecture

CoHost.AI operates as a real-time streaming companion with multiple integrated components:

Streamer.bot (UDP Sender) → CoHost.AI (Main System) → OBS Studio (Scene Management)
CoHost.AI (Main System) → Google Cloud TTS → Audio Output

Performance at a glance:

  • TTS response time: ~2-3s (~0.1s with caching)
  • Audio latency: <100ms (for cached responses)
  • Memory usage: 50-100MB (typical operation)
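
Conceptually, the entry point of that pipeline is a UDP listener that receives chat events from Streamer.bot. A minimal sketch of that intake step, using only the standard library; the port and payload format are assumptions for illustration and may differ from CoHost.AI's actual configuration:

# sketch: receive chat messages sent by Streamer.bot over UDP
# the port and payload format are illustrative assumptions
import socket

UDP_HOST, UDP_PORT = "127.0.0.1", 5005

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((UDP_HOST, UDP_PORT))
print(f"Listening for Streamer.bot messages on {UDP_HOST}:{UDP_PORT}")

while True:
    data, _addr = sock.recvfrom(4096)          # one chat event per datagram
    chat_message = data.decode("utf-8", errors="replace")
    # hand the message to the AI response / TTS pipeline here
    print("chat:", chat_message)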

🎭 AI Character Customization

CoHost.AI uses a simple text file to define your AI character's personality and behavior. Create anything from a sarcastic gaming buddy to a professional streaming assistant.

🎮 Sarcastic Gaming Buddy

"You are SnarkyBot, a witty gaming co-host who's seen it all. You're sarcastic but never mean-spirited, and you love roasting bad gameplay."

💼 Professional Assistant

"You are StreamAssistant, a professional and knowledgeable co-host. You provide helpful information and maintain a polished stream environment."

⚙️ Quick Setup

Getting started with CoHost.AI takes a few steps: clone the repo, install the Python dependencies, pull an Ollama model, and add your Google Cloud credentials.

# Clone and setup
git clone https://github.com/tompravetz/cohost.ai
cd cohost.ai
python -m venv venv
venv\Scripts\activate  # Windows; use source venv/bin/activate on macOS/Linux
pip install -r requirements.txt

# Setup Ollama
ollama pull mistral

# Configure environment
cp .env.example .env
# Edit .env with your Google Cloud credentials

# Run the application
python run.py
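
Once the Google Cloud credentials in .env are in place, the TTS step reduces to synthesizing speech and reusing audio for lines that repeat, which is where the ~0.1s cached response time above comes from. A minimal sketch with the google-cloud-texttospeech client and a hash-keyed cache directory; the voice name and cache layout are illustrative, not necessarily what CoHost.AI uses:

# sketch: Google Cloud TTS with a simple on-disk cache keyed by text hash
# the voice and cache layout are illustrative assumptions
import hashlib
from pathlib import Path
from google.cloud import texttospeech  # pip install google-cloud-texttospeech

CACHE_DIR = Path("tts_cache")
CACHE_DIR.mkdir(exist_ok=True)
client = texttospeech.TextToSpeechClient()  # reads GOOGLE_APPLICATION_CREDENTIALS

def synthesize(text: str) -> bytes:
    cached = CACHE_DIR / (hashlib.sha256(text.encode()).hexdigest() + ".mp3")
    if cached.exists():                      # cache hit: skip the API round trip
        return cached.read_bytes()
    response = client.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US", name="en-US-Neural2-D"),
        audio_config=texttospeech.AudioConfig(audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    cached.write_bytes(response.audio_content)
    return response.audio_content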

🔮 Future Enhancements