AI-Powered Mock Interview Platform – Voice, Behavioral, Technical & Coding Tracks
Features • Architecture • Installation • Usage
MockFlow-AI is a full-stack AI interview coach. It conducts realistic, voice-based mock interviews across four formats, from general intro calls to live coding challenges with a Monaco editor. Built on LiveKit's real-time infrastructure with a BYOK (Bring Your Own Keys) model.

Launch Video: Watch on YouTube
Full Interview Demo: Watch on YouTube
Get your API keys to use the live site:
- OpenAI API key – platform.openai.com
- Deepgram API key – console.deepgram.com
- LiveKit credentials – cloud.livekit.io

Configure them at mockflow-ai.onrender.com → Settings.
- Speech-to-Text: Deepgram Nova-2 for accurate transcription
- Language Model: OpenAI GPT-4o-mini for context-aware, adaptive responses
- Text-to-Speech: OpenAI TTS with natural voice synthesis
- Voice Activity Detection: Silero VAD for turn-taking
| Track | Stages | Focus |
|---|---|---|
| Intro Call | Welcome → Self-intro → Experience → Company Fit → Closing | General background, motivation, culture fit |
| Behavioral | Welcome → Intro → Questions (STAR) → Closing | Leadership frameworks (Amazon / Google / Meta / Generic), configurable follow-up depth |
| Technical Voice | Welcome → Intro → Experience → Concepts → Closing | Topic-based conceptual questions; resume-aware topic suggestions |
| Technical Coding | Welcome → Warm-up → Problem 1 → Problem 2 → Closing | Live Monaco editor, LLM-generated problems, 15-min timer, 3-attempt retry, real-time code evaluation |
- FSM-driven stages: Explicit state transitions, fallback timers, skip controls
- Resume + JD aware: Uploads parsed and injected into context
- Adaptive follow-ups: Agent probes based on candidate answers
- Speech analytics: Filler word count, WPM, per-turn pace, all included in feedback
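The staged flow with fallback timers can be sketched as a small finite state machine. This is a minimal illustration with hypothetical stage names and limits; the real `fsm.py` covers all four tracks, and the actual limits live in `tracks/`:

```python
from enum import Enum, auto
import time

class Stage(Enum):
    WELCOME = auto()
    INTRO = auto()
    EXPERIENCE = auto()
    CLOSING = auto()

# Hypothetical per-stage fallback limits in seconds (illustrative values only).
STAGE_LIMITS = {Stage.WELCOME: 60, Stage.INTRO: 180,
                Stage.EXPERIENCE: 300, Stage.CLOSING: 60}
ORDER = [Stage.WELCOME, Stage.INTRO, Stage.EXPERIENCE, Stage.CLOSING]

class InterviewFSM:
    def __init__(self):
        self.stage = ORDER[0]
        self.entered_at = time.monotonic()

    def advance(self):
        """Explicit transition to the next stage (agent tool call or skip control)."""
        i = ORDER.index(self.stage)
        if i + 1 < len(ORDER):
            self.stage = ORDER[i + 1]
            self.entered_at = time.monotonic()
        return self.stage

    def check_fallback(self):
        """Force a transition if the current stage overruns its time limit."""
        if time.monotonic() - self.entered_at > STAGE_LIMITS[self.stage]:
            return self.advance()
        return self.stage
```

The key property is that every stage has two exits: an explicit transition driven by the agent, and a timer-driven fallback so the interview can never stall.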
- Live Monaco code editor (Python, JavaScript, Java, C++, Go)
- LLM-generated problems tailored to role + experience level + difficulty
- Per-problem countdown timer with auto-submit
- Up to 3 attempts per problem; real-time AI evaluation with verbal feedback
- Copy/paste disabled for integrity
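The timer and attempt rules above amount to a simple submission gate. A minimal sketch, assuming a monotonic-clock countdown and a per-problem attempt counter (names are illustrative; the real evaluation happens server-side via the LLM):

```python
import time
from dataclasses import dataclass, field

MAX_ATTEMPTS = 3          # up to 3 attempts per problem
TIME_LIMIT_S = 15 * 60    # 15-minute countdown, then auto-submit

@dataclass
class ProblemSession:
    started_at: float = field(default_factory=time.monotonic)
    attempts: int = 0

    def time_left(self) -> float:
        return max(0.0, TIME_LIMIT_S - (time.monotonic() - self.started_at))

    def submit(self, code: str) -> str:
        """Gate a submission before it is sent to the AI evaluator."""
        if self.time_left() == 0:
            return "auto-submitted"   # timer expired: last code is force-submitted
        if self.attempts >= MAX_ATTEMPTS:
            return "rejected"         # attempt budget exhausted
        self.attempts += 1
        return "evaluating"
```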
- Scored breakdown: communication, technical depth, relevance, confidence
- Speech analytics section: filler words, pace (WPM)
- Track-specific evaluation (coding: approach quality, edge cases, complexity)
- Exportable feedback per session
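The scored breakdown rolls up into an overall number. A minimal sketch of the aggregation, with hypothetical weights (the real rubric and weighting live in `prompts.py` and the feedback pipeline):

```python
# Hypothetical rubric weights; illustrative only.
WEIGHTS = {"communication": 0.25, "technical_depth": 0.35,
           "relevance": 0.25, "confidence": 0.15}

def overall_score(breakdown: dict) -> float:
    """Weighted overall score (0-10) from per-dimension scores."""
    missing = WEIGHTS.keys() - breakdown.keys()
    if missing:
        raise ValueError(f"missing dimensions: {sorted(missing)}")
    return round(sum(WEIGHTS[k] * breakdown[k] for k in WEIGHTS), 2)
```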
- Users supply their own OpenAI, Deepgram, and LiveKit keys
- Keys encrypted at rest in Supabase; never logged
- Per-session ephemeral worker; keys are used only during the interview
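The per-session worker pattern can be sketched as follows, assuming the decrypted keys are passed through the child process's environment rather than the parent's (the real logic lives in `worker_manager.py`; the `argv` parameter here is a hypothetical hook for illustration):

```python
import os
import subprocess
import sys

def spawn_worker(session_keys: dict, argv=None) -> subprocess.Popen:
    """Spawn an ephemeral agent worker whose environment carries the
    user's keys only for the lifetime of this session."""
    env = os.environ.copy()
    env.update(session_keys)  # e.g. OPENAI_API_KEY, DEEPGRAM_API_KEY, ...
    # Hypothetical default entry point; the real one is agent_worker.py.
    argv = argv or [sys.executable, "agent_worker.py"]
    return subprocess.Popen(argv, env=env, stdout=subprocess.PIPE, text=True)
```

Because the keys go into the child's environment copy, the parent server process never mutates its own environment, and the keys vanish when the worker exits.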
```
┌───────────────────────────────────────────────────────────────┐
│                          Web Browser                          │
│   ┌─────────────┐    ┌──────────────┐    ┌──────────────┐     │
│   │   Landing   │───▶│  Form Page   │───▶│Interview Room│     │
│   │    Page     │    │  (4 tracks)  │    │  (LiveKit)   │     │
│   └─────────────┘    └──────────────┘    └──────┬───────┘     │
└─────────────────────────────────────────────────┼─────────────┘
                                                  │ WebRTC
                                                  ▼
┌───────────────────────────────────────────────────────────────┐
│                        Flask Web Server                       │
│   • HTML templates, OAuth, token generation                   │
│   • Per-session worker spawning (worker_manager.py)           │
│   • /api/coding/submit, /api/feedback, /api/extract-topics    │
└───────────────────────────────┬───────────────────────────────┘
                                │
                                ▼
┌───────────────────────────────────────────────────────────────┐
│                      LiveKit Agent Worker                     │
│  ┌──────────────┐   ┌───────────────┐   ┌──────────────┐      │
│  │ Multi-Track  │──▶│Interview Agent│──▶│State Verifier│      │
│  │     FSM      │   │    (Tools)    │   │  (Fallback)  │      │
│  └──────────────┘   └───────────────┘   └──────────────┘      │
│                                                               │
│  Voice Pipeline: STT (Deepgram) → LLM (OpenAI) → TTS          │
└───────────────────────────────────────────────────────────────┘
```
| File | Purpose |
|---|---|
| `app.py` | Flask server: OAuth, token generation, worker spawning, feedback endpoints |
| `agent_worker.py` | LiveKit agent: FSM-driven tools, voice pipeline, coding evaluation |
| `fsm.py` | Multi-track FSM: stage enums, time limits, transition logic for all 4 tracks |
| `prompts.py` | Stage instructions, feedback prompts, CODE_EVALUATOR, speech analytics |
| `tracks/` | Per-track config (stage sequences, time limits, availability) |
| `supabase_client.py` | Encrypted API key storage, interview persistence, coding submissions |
| `speech_analytics.py` | Filler word detection, WPM calculation, per-turn pace |
| `document_processor.py` | Resume parsing (PDF, DOCX, TXT) with cache |
| `audio_cache.py` | Pre-generated welcome audio per track |
- Python 3.9+ (< 3.14)
- LiveKit Cloud or self-hosted instance
- OpenAI and Deepgram API keys
```bash
git clone https://github.com/PranavMishra17/MockFlow-AI.git
cd MockFlow-AI
pip install -r requirements.txt
```

Create `.env`:
```env
LIVEKIT_URL=wss://your-livekit-server.livekit.cloud
LIVEKIT_API_KEY=your_api_key
LIVEKIT_API_SECRET=your_api_secret
OPENAI_API_KEY=sk-your-openai-api-key
DEEPGRAM_API_KEY=your-deepgram-api-key
```

```bash
python app.py
# Visit http://localhost:5000
```

In BYOK mode, agent workers are spawned automatically per session. No separate agent process is needed.
```bash
gunicorn app:app --workers 1 --timeout 120
```

Use `--workers 1`: the app manages agent workers via subprocess spawning.
- Visit the site → Sign in with Google OAuth
- Settings page → Add your API keys:
- LiveKit URL, API Key, API Secret
- OpenAI API Key
- Deepgram API Key
- Keys are encrypted and stored; you never need to re-enter them
- Start Interview → Select a track:
- Intro Call: General background and fit
- Behavioral: Choose framework + follow-up depth, optional custom questions
- Technical Voice: Select topics (or auto-suggest from resume)
- Technical Coding: Choose language, problem count, difficulty
- Optionally upload resume / paste job description
- Connect → the interview runs in real time
- After ending → Generate Feedback for a scored report
- Use headphones to prevent echo
- Speak naturally; the AI handles conversational pauses
- For coding: type your solution, click Submit when ready (up to 3 attempts)
| Issue | Fix |
|---|---|
| "Connection failed" | Verify all API keys in Settings; check LiveKit server is reachable |
| Agent doesn't respond | Confirm API keys have credits; check server logs |
| Stage doesn't transition | Wait for the fallback timer (~2–5 min per stage); run with LOG_LEVEL=DEBUG for details |
| Coding problem not showing | Click "I'm Ready" button after agent greeting |
```bash
# Enable debug logs
LOG_LEVEL=DEBUG python app.py

# Health check
curl http://localhost:5000/api/health
```

Supabase migrations are in `supabase-backend/`:
- `patch1_migration.sql` – adds `track` and `track_config` columns to the `interviews` table
- `patch2_migration.sql` – adds the `coding_submissions` table with RLS

Run these in the Supabase SQL editor before deploying.
- Fork the repo and create a feature branch: `git checkout -b feature/your-feature`
- Follow the coding standards in `.claude/rules.md`
- Test thoroughly, especially FSM stage transitions
- Submit a PR with a clear description
SAOUL License – see `LICENSE`.
- LiveKit – real-time communication infrastructure
- OpenAI – LLM and TTS
- Deepgram – speech-to-text
- Silero VAD – voice activity detection
- Monaco Editor – code editor
Built with best practices from industry-leading voice agent architectures