A real-time weed and crop detection system using YOLOv8 and edge computing.
The AI model runs entirely on your PC — no cloud, no internet required during scanning.
Point your phone camera at a field and WeedScan instantly detects and classifies plants as weeds (orange boxes) or crops (green boxes) in real time. All inference happens on your PC over your local WiFi network.
| Live Detection | Analytics | Scan Report |
|---|---|---|
| Camera feed with bounding boxes | Charts, stats, insights | Full report with PDF export |
| Component | Minimum | Recommended |
|---|---|---|
| OS | Windows 10 | Windows 11 |
| CPU | Intel Core i3 / AMD Ryzen 3 | Intel Core i5 or better |
| RAM | 4 GB | 8 GB |
| Storage | 2 GB free | 5 GB free |
| Python | 3.9 – 3.14 | 3.11 |
| Node.js | 18+ | 20+ |
| Requirement | Details |
|---|---|
| OS | Android 8.0+ or iOS 14+ |
| Browser | Chrome 90+ (Android) / Safari 14+ (iOS) |
| Camera | Any rear camera |
| Network | Same WiFi as the PC |
| RAM | 2 GB minimum |
| Storage | No app install needed — runs in browser |
Note: The phone only needs a browser. No app installation required.
Camera access requires enabling the insecure origin flag in Chrome (see Setup below).
```
Weed detection system/
├── start.bat              ← Double-click to start everything
├── merge_and_train.py     ← Retrain model with new data
│
├── backend/
│   ├── main.py            ← FastAPI inference server
│   ├── best.onnx          ← Trained YOLOv8s model
│   └── scans.db           ← Local scan history (SQLite)
│
├── dataset/               ← Training dataset (crop & weed images)
│   ├── train/
│   └── valid/
│
└── weedscan-web/          ← React frontend
    ├── src/
    │   ├── App.jsx
    │   ├── api.js
    │   └── screens/
    │       ├── LiveDetection.jsx
    │       ├── Analytics.jsx
    │       ├── CaptureReport.jsx
    │       ├── History.jsx
    │       └── ScanReports.jsx
    └── package.json
```
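For orientation, here is a minimal, hypothetical sketch of the kind of inference endpoint `backend/main.py` implements with FastAPI and ONNX Runtime. The `/detect` route name, the preprocessing details, and the omission of box decoding are illustrative assumptions, not the project's actual code.

```python
# Hypothetical sketch of an ONNX inference endpoint (not the real main.py).
import io

import numpy as np
import onnxruntime as ort
from fastapi import FastAPI, File, UploadFile
from PIL import Image

app = FastAPI()
session = ort.InferenceSession("best.onnx", providers=["CPUExecutionProvider"])
INPUT_NAME = session.get_inputs()[0].name


def preprocess(image_bytes: bytes) -> np.ndarray:
    """Resize to 640x640, scale to [0, 1], and reorder to NCHW."""
    img = Image.open(io.BytesIO(image_bytes)).convert("RGB").resize((640, 640))
    arr = np.asarray(img, dtype=np.float32) / 255.0
    return arr.transpose(2, 0, 1)[np.newaxis, ...]  # shape (1, 3, 640, 640)


@app.post("/detect")
async def detect(file: UploadFile = File(...)):
    tensor = preprocess(await file.read())
    outputs = session.run(None, {INPUT_NAME: tensor})
    # Raw YOLOv8 output; box decoding and NMS are omitted in this sketch.
    return {"output_shape": list(outputs[0].shape)}
```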
Python packages:
```
pip install fastapi uvicorn onnxruntime pillow numpy python-multipart
```

Node.js packages:

```
cd weedscan-web
npm install
```

Simply double-click start.bat — it starts both the backend and frontend automatically.
Or start manually:
```
# Terminal 1 — Backend
cd backend
python -m uvicorn main:app --host 0.0.0.0 --port 8000 --reload

# Terminal 2 — Frontend
cd weedscan-web
npm run dev
```

The frontend is then available on the PC at http://localhost:5173.
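The contents of start.bat are not reproduced in this README. As a rough illustration only, the hypothetical Python launcher below shows what "start everything" amounts to: it spawns the same two commands as the terminals above.

```python
# Hypothetical launcher, equivalent in spirit to start.bat (illustrative only).
import subprocess

# Start the FastAPI backend (same command as Terminal 1 above).
backend = subprocess.Popen(
    ["python", "-m", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"],
    cwd="backend",
)

# Start the Vite dev server (same command as Terminal 2 above).
frontend = subprocess.Popen("npm run dev", cwd="weedscan-web", shell=True)

backend.wait()
frontend.wait()
```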
Connect your phone to the same WiFi as your PC, then open:
```
http://192.168.1.8:5173
```

Replace `192.168.1.8` with your PC's actual WiFi IP. Find it by running `ipconfig` in PowerShell and looking for the WiFi IPv4 address.
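If you prefer a script over `ipconfig`, the small snippet below (an optional convenience, not part of the project) prints the local IP the PC would use on the network:

```python
# Print the PC's LAN IP (alternative to running ipconfig).
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))  # UDP connect sends no packets; it just picks a route
print(s.getsockname()[0])   # the local address chosen for that route
s.close()
```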
Chrome blocks camera access on plain http:// addresses for security.
Do this once to enable it:
- Open Chrome on your phone
- Go to `chrome://flags/#unsafely-treat-insecure-origin-as-secure`
- In the text box, enter `http://192.168.1.8:5173`
- Set the dropdown to Enabled
- Tap Relaunch
- Open `http://192.168.1.8:5173` and tap Allow when asked for camera access
- Real-time camera feed with bounding boxes
- Orange boxes = weeds, Green boxes = crops
- Live FPS counter and inference time
- Animated scan line and corner bracket overlay
- Recent scans accessible without leaving the camera
- Time filters: Today / 7 Days / 30 Days
- Stat cards: total weeds, avg confidence, inference speed, total scans
- Bar chart: weeds vs crops per day
- Auto-generated insights based on scan history
- Freeze frame with bounding boxes drawn on snapshot
- Full metrics: weed count, crop count, density %, confidence %, inference time
- Export to PDF (opens browser print dialog → Save as PDF)
- Share report as text
- Save annotated image to device
- Full history of all past scans
- Search and filter by weed/crop presence
- Expandable cards with full snapshot and stats
- Delete individual scans with confirmation
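The Analytics stat cards and the scan history above are backed by the local scans.db SQLite file. As an illustration of the kind of aggregate query involved, the sketch below uses hypothetical column names (weed_count, confidence, created_at); the real schema may differ.

```python
# Illustrative aggregate query over scan history (column names are assumptions).
import sqlite3

conn = sqlite3.connect("backend/scans.db")
row = conn.execute(
    """
    SELECT COUNT(*)        AS total_scans,
           SUM(weed_count) AS total_weeds,
           AVG(confidence) AS avg_confidence
    FROM scans
    WHERE created_at >= datetime('now', '-7 days')
    """
).fetchone()
print(dict(zip(("total_scans", "total_weeds", "avg_confidence"), row)))
conn.close()
```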
| Property | Value |
|---|---|
| Architecture | YOLOv8s |
| Format | ONNX |
| Input size | 640 × 640 |
| Classes | crop, weed |
| Training images | ~1,300 labeled images |
| Confidence threshold | 0.25 |
| Inference time | ~120–160 ms/frame on CPU |
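As a sketch of how the 0.25 confidence threshold is applied, the snippet below filters a raw YOLOv8 ONNX output. It assumes the commonly exported shape of (1, 4 + num_classes, 8400), with centre-format boxes followed by per-class scores; the project's actual post-processing may differ, and NMS is omitted.

```python
# Hedged sketch: thresholding raw YOLOv8 ONNX output (assumed layout, no NMS).
import numpy as np

CLASSES = ["crop", "weed"]
CONF_THRESHOLD = 0.25


def filter_detections(output: np.ndarray):
    preds = output[0].T                   # (8400, 4 + num_classes)
    boxes, scores = preds[:, :4], preds[:, 4:]
    class_ids = scores.argmax(axis=1)     # best class per candidate box
    confidences = scores.max(axis=1)
    keep = confidences >= CONF_THRESHOLD  # drop low-confidence candidates
    return [
        {"label": CLASSES[c], "confidence": float(s), "box_cxcywh": b.tolist()}
        for b, c, s in zip(boxes[keep], class_ids[keep], confidences[keep])
    ]
```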
To improve accuracy with more data:
- Add new labeled datasets (YOLOv8 format) to the `dataset/` folder
- Run the training script: `python merge_and_train.py`
- The script automatically merges datasets, trains, exports to ONNX, and copies the model to `backend/best.onnx`
- Restart the backend — it picks up the new model automatically
For faster training, use Google Colab with a free T4 GPU (~15 min vs ~8 hours on CPU).
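merge_and_train.py wraps the standard Ultralytics workflow. A rough, illustrative sketch of the core steps is shown below; the data.yaml path, epoch count, and image size are assumptions, not the script's actual settings.

```python
# Illustrative sketch of the retraining flow that merge_and_train.py automates.
from ultralytics import YOLO

model = YOLO("yolov8s.pt")                                 # pretrained YOLOv8s weights
model.train(data="dataset/data.yaml", epochs=50, imgsz=640)  # paths/params assumed
model.export(format="onnx")                                # writes an .onnx file to copy to backend/
```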
| Layer | Technology |
|---|---|
| AI Model | YOLOv8s (Ultralytics) |
| Inference | ONNX Runtime |
| Backend | FastAPI + Uvicorn |
| Frontend | React + Vite |
| Charts | Recharts |
| Database | SQLite (via Python) |
| Networking | Local WiFi (no internet needed) |
Camera not working on phone
→ Follow the Chrome flags setup above. Camera requires the insecure origin flag on HTTP.
"Site can't be reached" on phone
→ Make sure phone and PC are on the same WiFi network. Check your PC's IP with ipconfig.
No detections showing
→ Ensure the backend is running (start.bat). Point camera at real plants in good lighting.
High inference time (>300ms)
→ Normal on low-end CPUs. Close other applications to free up CPU resources.
PDF not downloading
→ The PDF button opens a print dialog. Select "Save as PDF" as the printer destination.
For educational and research use.