Talexis is an AI-powered interview practice platform that helps candidates prepare for technical interviews with:
- Real-time gesture analysis (attention, eye contact, posture)
- Integrated code editor for solving coding problems
- AI-generated adaptive questions and response evaluation
- ATS Resume Checker for optimizing resumes for applicant tracking systems
- Pipeline Builder for organizing interview prep workflows
- Real-time Video Interview Simulation: Practice with a realistic interview interface
- Gesture Analysis: AI-powered analysis of your:
  - Attention: Tracks face presence and motion to detect if you're focused
  - Eye Contact: Monitors face position to ensure you're looking at the camera
  - Posture: Analyzes face size to determine if you're too close to or too far from the camera
- Code Editor: Integrated Monaco code editor for solving coding problems
- Voice Input: Use speech-to-text to answer questions verbally
- Response Tracking: Practice answering behavioral questions
- Parse and analyze resumes against job descriptions
- Get AI-powered suggestions to improve your resume
- Match your skills with job requirements
- View your interview history with scores
- Track your progress over time
- Access all features from a centralized location
- Build and customize your interview prep pipeline
- Organize questions by category and difficulty
- React 18 - UI framework
- TypeScript - Type safety
- Vite - Build tool
- Tailwind CSS - Styling
- shadcn/ui - UI component library
- Monaco Editor - Code editor
- Recharts - Data visualization
- React Router - Client-side routing
- Framer Motion - Animations
- TanStack Query - Data fetching
- Python - Programming language
- FastAPI - Web framework
- OpenCV - Computer vision
- WebSockets - Real-time communication
- NumPy - Numerical computing
- Supabase - Authentication & Database
- Gemini AI - LLM for question generation and evaluation
- Piston API - Code execution engine
- Web Speech API - Voice input for answers
- Speech-to-Text Integration: Implementing voice input required careful handling of browser permissions and Web Speech API compatibility across browsers (see the sketch after this list).
- Code Execution Security: Running user-submitted code safely required integrating an external code execution service (Piston API) with proper error handling and timeout management.
- Real-time Gesture Analysis: Processing video frames in real time for gesture analysis demanded optimization to maintain performance without blocking the main UI thread.
- AI Question Generation: Ensuring the Gemini API generates relevant, adaptive questions based on user responses required careful prompt engineering and fallback mechanisms.
- Database Integration for Interview History: Saving and retrieving interview results from Supabase to display history on the dashboard required a consistent data schema and error handling.
- Cross-Browser Compatibility: Ensuring features like voice input, camera access, and the code editor work consistently across Chrome, Firefox, and Safari.
- Performance Optimization: Balancing AI API calls, video processing, and code execution while maintaining a smooth user experience during interviews.
- Responsive Design: Creating a seamless experience across different screen sizes for the video interview interface, code editor, and dashboard.
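The Web Speech API challenge above is representative: the constructor is vendor-prefixed in Chromium browsers and absent in others, so the code must feature-detect before asking for microphone access. A minimal sketch in TypeScript (`startVoiceInput` is an illustrative helper, not the actual implementation in this repo):

```typescript
// Minimal sketch: cross-browser speech-to-text with the Web Speech API.
// `startVoiceInput` is a hypothetical helper, not code from this repo.
function startVoiceInput(onResult: (transcript: string) => void): () => void {
  // Chrome/Edge expose the API under a webkit prefix; some browsers lack it.
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  if (!Recognition) {
    throw new Error("Speech recognition is not supported in this browser");
  }

  const recognition = new Recognition();
  recognition.lang = "en-US";
  recognition.interimResults = true; // stream partial transcripts while speaking
  recognition.continuous = true;     // keep listening across pauses

  recognition.onresult = (event: any) => {
    // Concatenate everything recognized so far into one transcript.
    let transcript = "";
    for (let i = 0; i < event.results.length; i++) {
      transcript += event.results[i][0].transcript;
    }
    onResult(transcript);
  };

  recognition.start(); // triggers the browser's microphone permission prompt
  return () => recognition.stop(); // caller can stop listening
}
```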
- Multi-language Support - Expand voice input and AI evaluation to support multiple languages for global users
- Mock Interview Sessions - Add peer-to-peer mock interviews with real-time feedback
- Industry-specific Question Banks - Create specialized question sets for different industries (Finance, Healthcare, Tech)
- Video Recording & Playback - Allow users to review their interview recordings for self-improvement
- AI Coach Assistant - Implement a chat-based AI coach for real-time interview tips and guidance
- Microservices Architecture - Split backend into separate services (AI service, Vision service, User service)
- Caching Layer - Implement Redis for caching AI responses and reducing API costs
- CDN Integration - Use content delivery networks for faster media delivery globally
- Database Optimization - Add read replicas and implement database sharding for high traffic
- WebSocket Scaling - Implement WebSocket clustering for handling more concurrent interviews
- Team Management - Allow companies to create teams and manage candidate pipelines
- Analytics Dashboard - Provide detailed performance analytics for recruiters and hiring managers
- Custom Branding - White-label solution for enterprise clients
- Integration APIs - API endpoints for ATS integration (Greenhouse, Lever, Workday)
- Interview Scheduling - Calendar integration for scheduling live mock interviews
- Fine-tuned Models - Train custom models for better domain-specific question generation
- Sentiment Analysis - Add emotional tone detection to evaluate confidence levels
- Object Detection - Expand gesture analysis to detect hand gestures and facial expressions
- Auto-grading - Automatic code grading with test case validation
- Freemium Model - Basic features free, premium features paid
- Subscription Plans - Monthly/yearly plans for individuals and teams
- Pay-per-interview - Credit-based system for pay-as-you-go usage
- Enterprise Licensing - Custom pricing for large organizations
```
talexis/
├── src/
│   ├── components/
│   │   ├── ui/                 # Reusable UI components (shadcn/ui)
│   │   ├── auth/               # Authentication components
│   │   ├── CodeEditor.tsx      # Monaco code editor
│   │   └── Layout.tsx          # Main layout
│   ├── pages/
│   │   ├── Index.tsx           # Landing page
│   │   ├── Login.tsx           # User login
│   │   ├── Signup.tsx          # User registration
│   │   ├── Dashboard.tsx       # User dashboard
│   │   ├── Interview.tsx       # Main interview interface
│   │   ├── InterviewSetup.tsx  # Interview configuration
│   │   ├── ATSChecker.tsx      # Resume analyzer
│   │   └── PipelineBuilder.tsx # Interview pipeline builder
│   ├── lib/
│   │   ├── gestureAnalysis.ts  # Client-side gesture analysis
│   │   ├── aiService.ts        # AI service integration (Gemini)
│   │   └── supabaseClient.ts   # Supabase client
│   ├── hooks/
│   │   ├── useAuth.ts          # Authentication hook
│   │   └── useUserStats.ts     # User statistics hook
│   └── main.tsx                # Application entry point
├── backend/
│   └── vision_service/
│       ├── app.py              # FastAPI vision service
│       ├── requirements.txt    # Python dependencies
│       └── README.md           # Vision service docs
├── package.json                # Node.js dependencies
├── tailwind.config.ts          # Tailwind configuration
└── vite.config.ts              # Vite configuration
```
- Node.js (v18 or higher)
- Python (v3.10 or higher)
- npm or bun for package management
```bash
git clone <repository-url>
cd talexis
```

Install dependencies:

```bash
npm install
```

Or if using bun:

```bash
bun install
```

Create a virtual environment (optional but recommended):

```bash
cd backend/vision_service
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```

Install Python dependencies:

```bash
pip install -r requirements.txt
```

The project uses the following environment variables. These are already configured in the .env file:
```
# Gemini API Key for ATS analysis
# Get your API key from: https://aistudio.google.com/app/apikey
VITE_GEMINI_API_KEY=your_api_key_here

# Supabase Configuration
VITE_SUPABASE_URL=https://your-project.supabase.co
VITE_SUPABASE_ANON_KEY=your_anon_key_here

# Vision Service WebSocket URL
VITE_GESTURE_WS_URL=ws://localhost:8001/ws/gesture
```

To use your own credentials:
- Create a project at Supabase
- Get your API key from Google AI Studio
- Update the `.env` file with your values
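Vite exposes these variables to client code via `import.meta.env`. For reference, a minimal sketch of how `src/lib/supabaseClient.ts` might consume them, assuming the `@supabase/supabase-js` package (the repo's actual wiring may differ):

```typescript
// Minimal sketch, assuming @supabase/supabase-js; the repo's actual
// supabaseClient.ts may differ.
import { createClient } from "@supabase/supabase-js";

// Vite only exposes variables prefixed with VITE_ to client code.
const supabaseUrl = import.meta.env.VITE_SUPABASE_URL;
const supabaseAnonKey = import.meta.env.VITE_SUPABASE_ANON_KEY;

if (!supabaseUrl || !supabaseAnonKey) {
  // Fail fast with a clear message instead of an opaque network error later.
  throw new Error("Missing VITE_SUPABASE_URL or VITE_SUPABASE_ANON_KEY in .env");
}

export const supabase = createClient(supabaseUrl, supabaseAnonKey);
```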
You need to run two services in separate terminals:
In one terminal, start the frontend:

```bash
npm run dev
```

The frontend will start at http://localhost:8080

In another terminal, start the vision service:

```bash
cd backend/vision_service
uvicorn app:app --host 0.0.0.0 --port 8001 --reload
```

The vision service will be available at http://localhost:8001

If you don't need gesture analysis:

```bash
npm run dev
```

The frontend will start at http://localhost:8080

Build the frontend:

```bash
npm run build
```

The production build will be in the dist/ directory. You can serve it with any static file server:

```bash
npm run preview
```

- Open the application in your browser
- Log in or sign up (if not already authenticated)
- Navigate to the Dashboard
- Click "Start Interview" or "Practice Interview"
- Configure your interview settings:
  - Select interview type
  - Choose difficulty level
  - Set time limits
- Grant camera and microphone permissions
- Click "Start Interview"
- Video Section: Shows your camera feed with gesture analysis overlay
- Code Editor: Solve coding problems in the integrated editor
- Questions Panel: Answer behavioral questions
- Gesture Analysis: Real-time feedback on:
  - Attention (focus level)
  - Eye Contact (looking at camera)
  - Posture (distance from camera)
| Metric | Good | Warning | Poor |
|---|---|---|---|
| Attention | Looking at camera, minimal movement | Slight movement | Looking away, excessive movement |
| Eye Contact | Face centered (dx < 0.08) | Slightly off-center (dx < 0.15) | Not looking at camera |
| Posture | Optimal distance (face area 6-18%) | Slightly too close/far | Too close or too far |
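The table maps directly onto the metrics the vision service reports in `details` (`motion`, `center_dx`, `face_area`). A minimal sketch of the implied threshold logic, using the documented cutoffs for eye contact and posture; the attention cutoffs and the posture "warning" bounds are illustrative assumptions, and the actual logic may differ:

```typescript
// Sketch of the threshold classifier implied by the table above.
// dx and face-area cutoffs come from the docs; the rest are assumptions.
type Status = "good" | "warning" | "poor";

function classifyAttention(motion: number, faceDetected: boolean): Status {
  if (!faceDetected) return "poor";    // looking away / out of frame
  if (motion < 0.05) return "good";    // minimal movement (illustrative cutoff)
  if (motion < 0.15) return "warning"; // slight movement (illustrative cutoff)
  return "poor";                       // excessive movement
}

function classifyEyeContact(centerDx: number): Status {
  if (centerDx < 0.08) return "good";    // face centered
  if (centerDx < 0.15) return "warning"; // slightly off-center
  return "poor";                         // not looking at camera
}

function classifyPosture(faceArea: number): Status {
  if (faceArea >= 0.06 && faceArea <= 0.18) return "good"; // optimal distance
  if (faceArea >= 0.04 && faceArea <= 0.25) return "warning"; // slightly too close/far (illustrative bounds)
  return "poor"; // too close or too far
}
```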
- Navigate to the ATS Checker page
- Paste your resume content
- Paste the job description
- Click "Analyze"
- Review the AI-generated feedback and suggestions
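The ATS analysis is backed by Gemini (see the `VITE_GEMINI_API_KEY` variable above). A minimal sketch of what such a call could look like with the `@google/generative-ai` SDK; the model name and prompt are illustrative, not the repo's actual `aiService.ts`:

```typescript
// Minimal sketch, assuming the @google/generative-ai SDK; model name and
// prompt are illustrative, not the repo's actual aiService.ts.
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(import.meta.env.VITE_GEMINI_API_KEY);

export async function analyzeResume(
  resume: string,
  jobDescription: string
): Promise<string> {
  const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });
  const prompt =
    "Compare the following resume against the job description. " +
    "List matched skills, missing keywords, and concrete suggestions.\n\n" +
    `Resume:\n${resume}\n\nJob description:\n${jobDescription}`;
  const result = await model.generateContent(prompt);
  return result.response.text(); // plain-text feedback for the UI
}
```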
If dependencies are corrupted, reinstall them:

```bash
rm -rf node_modules package-lock.json
npm install
```

If port 8080 is already in use:

```bash
# Find and kill the process using port 8080
lsof -i :8080
kill -9 <PID>
```

If the UI looks stale, clear the browser cache or use a hard refresh: Ctrl+Shift+R (Windows/Linux) or Cmd+Shift+R (Mac).

If OpenCV fails to import in the vision service:

```bash
pip install opencv-python-headless
```

If the vision service hangs or port 8001 is stuck, restart it:

```bash
pkill -f "uvicorn.*8001"
cd backend/vision_service
uvicorn app:app --host 0.0.0.0 --port 8001
```

If gesture analysis shows no results:
- Ensure the vision service is running: `uvicorn app:app --host 0.0.0.0 --port 8001`
- Check that the WebSocket URL in `.env` matches: `VITE_GESTURE_WS_URL=ws://localhost:8001/ws/gesture`

If the camera doesn't work:
- Grant camera permissions in browser settings
- Ensure no other application is using the camera
- Try using a different browser

If gesture analysis isn't updating:
- Ensure the vision service is running on port 8001
- Check the browser console for errors
- Try hard refreshing the page
`/ws/gesture` — real-time gesture analysis WebSocket endpoint.

Connection:

```js
const ws = new WebSocket('ws://localhost:8001/ws/gesture');
```

Message format (send):

```json
{
  "image": "data:image/jpeg;base64,..."
}
```

Message format (receive):

```json
{
  "attention": "good",
  "eyeContact": "good",
  "posture": "good",
  "faceDetected": true,
  "confidence": 0.85,
  "details": ["motion=0.012", "center_dx=0.032", "face_area=0.095"]
}
```
Health check endpoint.

Response:

```json
{
  "status": "ok"
}
```

Interactive API documentation (Swagger UI) is available at FastAPI's default `/docs` path.
- Fork the repository
- Create a feature branch
- Make your changes
- Submit a pull request
The frontend can be deployed to Vercel for free.
```bash
# Install Vercel CLI
npm i -g vercel

# Deploy
vercel
```

Follow the prompts:
- Set up and deploy? Yes
- Which scope? Your Vercel username
- Link to existing project? No
- Project name: talexis
- Directory? ./
- Want to modify settings? No
- Push your code to GitHub
- Go to Vercel Dashboard
- Click "Add New Project"
- Import your GitHub repository
- Configure:
- Framework Preset: Vite
  - Build Command: `npm run build`
  - Output Directory: `dist`
- Add environment variables:
  - `VITE_GEMINI_API_KEY`
  - `VITE_SUPABASE_URL`
  - `VITE_SUPABASE_ANON_KEY`
  - `VITE_GESTURE_WS_URL` (see below)
- Deploy
The vision service needs to run on a server with Python. Options:
- Push code to GitHub
- Go to Railway
- Create new project from GitHub
- Select the repository
- Add the following in Railway dashboard:
  - Build Command: `pip install -r backend/vision_service/requirements.txt`
  - Start Command: `uvicorn app:app --host 0.0.0.0 --port $PORT`
- Deploy
- Push code to GitHub
- Go to Render Dashboard
- Create new Web Service
- Connect your GitHub repository
- Configure:
  - Build Command: `pip install -r backend/vision_service/requirements.txt`
  - Start Command: `uvicorn app:app --host 0.0.0.0 --port $PORT`
- Deploy
- Install Fly CLI
- Create `Dockerfile` in `backend/vision_service/`:
```dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY . .

EXPOSE 8001

CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8001"]
```

- Deploy:

```bash
fly launch
fly deploy
```

After deploying, update your frontend's environment variable:
```
# For Vercel, set this to your vision service URL
VITE_GESTURE_WS_URL=wss://your-vision-service.railway.app/ws/gesture
```

```
         ┌─────────────────┐
         │     Vercel      │
         │   (Frontend)    │
         └────────┬────────┘
                  │
     ┌────────────┼────────────┐
     │            │            │
     ▼            ▼            ▼
┌──────────┐ ┌──────────┐ ┌──────────┐
│ Supabase │ │  Gemini  │ │ Railway  │
│  (Auth/  │ │    AI    │ │ (Vision  │
│   DB)    │ │          │ │ Service) │
└──────────┘ └──────────┘ └──────────┘
```
Create `deploy.sh`:

```bash
#!/bin/bash

# Build frontend
echo "Building frontend..."
npm run build

# Deploy to Vercel (requires CLI)
echo "Deploying to Vercel..."
vercel --prod

echo "Deployment complete!"
```

Run:

```bash
chmod +x deploy.sh
./deploy.sh
```

MIT License
For issues and questions:
- Open an issue on GitHub
- Check the troubleshooting section above