I transform challenges into opportunities, pushing the boundaries of what's possible with AI.
Building intelligent systems that learn, adapt, and solve real-world problems
- 🎓 M.Tech in AI @ IIT Jodhpur | B.Tech in CSE @ CIT Kokrajhar
- 🏆 3x Hackathon Winner
- 💻 AWS Certified Solutions Architect - Associate
- 🔬 Researching: Machine Unlearning, Audio Deepfake Detection, Multimodal Learning
| 🥇 Winner | 🥇 Winner | 🥇 Winner |
|---|---|---|
| **Tradl AI Hackathon**<br/>LangGraph multi-agent system | **Darwix AI Hackathon**<br/>Built in <90 minutes | **Crowdera Hack4RealGood**<br/>Social impact solution |
| 🥈 Top 5 | 🏅 5th Position | ⭐ A Grade* |
|---|---|---|
| **CLASH-OF-T-AI-TANS**<br/>Computer Vision | **HackerRush**<br/>IIT Jodhpur × HackerRank | **GenAI & Foundation Models**<br/>90+ across all evaluations |
```mermaid
graph TB
    subgraph "Data Layer"
        A[Raw Data] --> B[Data Processing]
        B --> C[Feature Engineering]
    end
    subgraph "Model Development"
        C --> D[Model Training<br/>PyTorch/TensorFlow]
        D --> E[Model Evaluation]
        E --> F[Hyperparameter Tuning]
    end
    subgraph "MLOps & Deployment"
        F --> G[Model Registry<br/>MLflow]
        G --> H[Containerization<br/>Docker]
        H --> I[Orchestration<br/>Kubernetes]
        I --> J[Cloud Deployment<br/>AWS/GCP]
    end
    subgraph "Production"
        J --> K[API Gateway<br/>FastAPI/Flask]
        K --> L[Monitoring & Logging]
        L --> M[Model Retraining]
        M --> D
    end
    style D fill:#EE4C2C
    style G fill:#0194E2
    style H fill:#2496ED
    style I fill:#326CE5
    style J fill:#232F3E
```
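The Monitoring → Retraining loop in the pipeline above can be sketched as a simple drift check. The rolling-window size, tolerance threshold, and confidence-based signal below are illustrative assumptions, not the exact mechanism of any particular deployment:

```python
from collections import deque

class DriftMonitor:
    """Toy drift detector: flags retraining when the mean prediction
    confidence over a recent window drops well below the baseline
    confidence observed at training time."""

    def __init__(self, baseline_mean, window=100, tolerance=0.1):
        self.baseline_mean = baseline_mean   # reference confidence from training
        self.window = deque(maxlen=window)   # rolling window of live confidences
        self.tolerance = tolerance           # allowed absolute drop before alerting

    def record(self, confidence):
        """Log one prediction's confidence; return True if retraining is due."""
        self.window.append(confidence)
        if len(self.window) < self.window.maxlen:
            return False                     # not enough data to judge yet
        live_mean = sum(self.window) / len(self.window)
        return (self.baseline_mean - live_mean) > self.tolerance

monitor = DriftMonitor(baseline_mean=0.9, window=5, tolerance=0.1)
for c in [0.91, 0.88, 0.60, 0.55, 0.50]:
    needs_retrain = monitor.record(c)
print(needs_retrain)  # True: window mean 0.688, drop 0.212 > tolerance 0.1
```

In production the same signal would typically come from the monitoring layer (CloudWatch/GCP alarms) rather than in-process state, but the decision logic is the same.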
### 🏗️ Architecture Details & Design Decisions
Modular Architecture: Each layer is decoupled, enabling independent scaling and updates. This follows microservices principles adapted for ML systems.
**Key Trade-offs:**

1. **Model Registry (MLflow) vs. Git-based versioning**
   - Chosen: MLflow for metadata tracking, experiment management, and model lineage
   - Trade-off: Additional infrastructure overhead vs. comprehensive experiment tracking
   - Rationale: Research-heavy workflow requires detailed experiment comparison
2. **Containerization Strategy**
   - Chosen: Docker for consistency, Kubernetes for orchestration
   - Trade-off: Complexity vs. scalability and portability
   - Rationale: Multi-cloud deployments require container abstraction
3. **API Gateway Pattern**
   - Chosen: FastAPI for async performance, Flask for lightweight services
   - Trade-off: Framework diversity vs. operational simplicity
   - Rationale: Different endpoints have different latency requirements
4. **Monitoring & Observability**
   - Chosen: Custom metrics + CloudWatch/GCP Monitoring
   - Trade-off: Vendor lock-in vs. native integration benefits
   - Rationale: Cloud-native monitoring provides better integration with auto-scaling
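To make the registry trade-off concrete, here is a toy, in-memory sketch of the per-version metadata a model registry keeps that plain Git tagging does not. MLflow's actual API is not shown; the names `register`, `promote`, and the `s3://` URIs are illustrative assumptions:

```python
import time

class ToyModelRegistry:
    """Minimal stand-in for a model registry: versioned artifacts with
    params, metrics, and a lifecycle stage (staging/production/archived)."""

    def __init__(self):
        self.versions = {}  # model name -> list of version records

    def register(self, name, artifact_uri, params, metrics):
        """Record a new version with its training metadata; return its number."""
        records = self.versions.setdefault(name, [])
        records.append({
            "version": len(records) + 1,
            "artifact_uri": artifact_uri,
            "params": params,
            "metrics": metrics,
            "stage": "staging",
            "registered_at": time.time(),
        })
        return records[-1]["version"]

    def promote(self, name, version):
        """Move one version to production, archiving the previous one."""
        for rec in self.versions[name]:
            if rec["stage"] == "production":
                rec["stage"] = "archived"
        self.versions[name][version - 1]["stage"] = "production"

    def production_model(self, name):
        return next(r for r in self.versions[name] if r["stage"] == "production")

registry = ToyModelRegistry()
v1 = registry.register("classifier", "s3://models/v1", {"lr": 1e-3}, {"f1": 0.81})
v2 = registry.register("classifier", "s3://models/v2", {"lr": 3e-4}, {"f1": 0.86})
registry.promote("classifier", v2)
print(registry.production_model("classifier")["artifact_uri"])  # s3://models/v2
```

The lineage query at the end (which artifact is live, with which hyperparameters and metrics) is exactly what makes a registry worth its infrastructure overhead for experiment-heavy work.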
**Performance Optimizations:**

- Model Serving: Batch inference for throughput, async endpoints for latency
- Caching: Redis for frequently accessed models and embeddings
- Data Pipeline: Parallel processing with Dask/Ray for large-scale feature engineering
- Model Optimization: Quantization and pruning for edge deployment scenarios
```mermaid
graph TD
    A[Research Problem] --> B{Literature Review}
    B --> C[Experimental Design]
    C --> D[Implementation]
    D --> E[Evaluation]
    E --> F{Results}
    F -->|Success| G[Publication/Deployment]
    F -->|Iterate| C
    G --> H[Production System]
    style A fill:#BF91F3
    style D fill:#EE4C2C
    style E fill:#38BDAE
    style G fill:#70A5FD
```
| Domain | Focus | Technologies |
|---|---|---|
| Machine Unlearning | Graph-based algorithms, 25× speedup | PyTorch, GNNs, Transformers |
| Audio Deepfake Detection | Multilingual benchmark, 20 languages | Demucs, PyAnnote, Statistical Analysis |
| Multimodal Learning | Vision+Text, Vision+Audio | CLIP, BLIP, Audio-Visual Transformers |
| MLOps | Model deployment, monitoring | MLflow, Docker, Kubernetes, AWS |
### 📚 Currently Learning
Self-Supervised Learning • Contrastive Learning • Large Language Models • Generative AI • Deep Reinforcement Learning • Neural Architecture Search • Meta-Learning • 3D Computer Vision • Federated Learning • Robust AI • Multimodal Learning • AI Ethics • Time-Series Forecasting
✨ Feel free to reach out for any collaboration or AI-related discussions!

Building the future, one algorithm at a time 🚀



