Alok-28/WeedScan

🌿 WeedScan — AI-Powered Weed Detection System

A real-time weed and crop detection system using YOLOv8 and edge computing.
The AI model runs entirely on your PC — no cloud, no internet required during scanning.


What it does

Point your phone camera at a field and WeedScan instantly detects and classifies plants as weeds (orange boxes) or crops (green boxes) in real time. All inference happens on your PC over your local WiFi network.


Screenshots

  • Live Detection: camera feed with bounding boxes
  • Analytics: charts, stats, and insights
  • Scan Report: full report with PDF export

System Requirements

PC (runs the AI backend)

| Component | Minimum | Recommended |
|-----------|---------|-------------|
| OS | Windows 10 | Windows 11 |
| CPU | Intel Core i3 / AMD Ryzen 3 | Intel Core i5 or better |
| RAM | 4 GB | 8 GB |
| Storage | 2 GB free | 5 GB free |
| Python | 3.9 – 3.14 | 3.11 |
| Node.js | 18+ | 20+ |

Mobile Phone (accesses the app)

| Requirement | Details |
|-------------|---------|
| OS | Android 8.0+ or iOS 14+ |
| Browser | Chrome 90+ (Android) / Safari 14+ (iOS) |
| Camera | Any rear camera |
| Network | Same WiFi as the PC |
| RAM | 2 GB minimum |
| Storage | No app install needed (runs in browser) |

Note: The phone only needs a browser. No app installation required.
Camera access requires enabling the insecure origin flag in Chrome (see Setup below).


Project Structure

```
Weed detection system/
├── start.bat                 ← Double-click to start everything
├── merge_and_train.py        ← Retrain model with new data
│
├── backend/
│   ├── main.py               ← FastAPI inference server
│   ├── best.onnx             ← Trained YOLOv8s model
│   └── scans.db              ← Local scan history (SQLite)
│
├── dataset/                  ← Training dataset (crop & weed images)
│   ├── train/
│   └── valid/
│
└── weedscan-web/             ← React frontend
    ├── src/
    │   ├── App.jsx
    │   ├── api.js
    │   └── screens/
    │       ├── LiveDetection.jsx
    │       ├── Analytics.jsx
    │       ├── CaptureReport.jsx
    │       ├── History.jsx
    │       └── ScanReports.jsx
    └── package.json
```

Setup & Running

1. Install dependencies

Python packages:

```
pip install fastapi uvicorn onnxruntime pillow numpy python-multipart
```

Node.js packages:

```
cd weedscan-web
npm install
```

2. Start the app

Double-click `start.bat`; it launches both the backend and the frontend automatically.

Or start manually:

```
# Terminal 1 — Backend
cd backend
python -m uvicorn main:app --host 0.0.0.0 --port 8000 --reload

# Terminal 2 — Frontend
cd weedscan-web
npm run dev
```

3. Open on PC

```
http://localhost:5173
```

4. Open on Phone

Connect your phone to the same WiFi as your PC, then open:

```
http://192.168.1.8:5173
```

Replace `192.168.1.8` with your PC's actual WiFi IP.
Find it by running `ipconfig` in PowerShell and looking for the WiFi adapter's IPv4 address.
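If you'd rather not read through `ipconfig` output, a short Python snippet can report the same address. This is a generic sketch, not part of WeedScan; the `8.8.8.8` target is arbitrary, and connecting a UDP socket sends no actual traffic.

```python
# Print this machine's LAN IP by asking the OS which interface it would use
# to reach an outside address. UDP connect() sends no packets; it only binds
# the socket to the outgoing interface, whose address is the one we want.
import socket

def local_ip() -> str:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # no traffic is sent for a UDP connect
        return s.getsockname()[0]
    except OSError:
        # No route available (e.g. offline): fall back to hostname lookup.
        return socket.gethostbyname(socket.gethostname())
    finally:
        s.close()

if __name__ == "__main__":
    print(local_ip())  # e.g. 192.168.1.8
```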


Phone Camera Setup (Chrome on Android)

Chrome blocks camera access on plain http:// addresses for security.
Do this once to enable it:

  1. Open Chrome on your phone
  2. Go to: `chrome://flags/#unsafely-treat-insecure-origin-as-secure`
  3. In the text box, enter: `http://192.168.1.8:5173`
  4. Set the dropdown to Enabled
  5. Tap Relaunch
  6. Open http://192.168.1.8:5173 and tap Allow when asked for camera

Features

📷 Live Detection

  • Real-time camera feed with bounding boxes
  • Orange boxes = weeds, Green boxes = crops
  • Live FPS counter and inference time
  • Animated scan line and corner bracket overlay
  • Recent scans accessible without leaving the camera

📊 Analytics Dashboard

  • Time filters: Today / 7 Days / 30 Days
  • Stat cards: total weeds, avg confidence, inference speed, total scans
  • Bar chart: weeds vs crops per day
  • Auto-generated insights based on scan history

📸 Capture & Report

  • Freeze frame with bounding boxes drawn on snapshot
  • Full metrics: weed count, crop count, density %, confidence %, inference time
  • Export to PDF (opens browser print dialog → Save as PDF)
  • Share report as text
  • Save annotated image to device

📋 Scan Reports

  • Full history of all past scans
  • Search and filter by weed/crop presence
  • Expandable cards with full snapshot and stats
  • Delete individual scans with confirmation

Model Details

| Property | Value |
|----------|-------|
| Architecture | YOLOv8s |
| Format | ONNX |
| Input size | 640 × 640 |
| Classes | crop, weed |
| Training images | ~1,300 labeled images |
| Confidence threshold | 0.25 |
| Inference time | ~120–160 ms/frame on CPU |
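The input size and confidence threshold above imply the standard YOLO preprocessing pipeline. Here is a minimal sketch of that step, assuming the usual Ultralytics letterbox convention (gray padding, `[0, 1]` scaling, NCHW layout); the actual code in `backend/main.py` may differ in details.

```python
# Letterbox a camera frame to the model's 640 × 640 input: resize so the
# longer side fits, pad the rest with gray (114), scale pixels to [0, 1],
# and reorder HWC → NCHW with a batch dimension of 1.
import numpy as np
from PIL import Image

INPUT_SIZE = 640
CONF_THRESHOLD = 0.25  # detections below this confidence are discarded

def preprocess(img: Image.Image) -> np.ndarray:
    scale = INPUT_SIZE / max(img.width, img.height)
    resized = img.convert("RGB").resize(
        (round(img.width * scale), round(img.height * scale)))
    canvas = Image.new("RGB", (INPUT_SIZE, INPUT_SIZE), (114, 114, 114))
    canvas.paste(resized, ((INPUT_SIZE - resized.width) // 2,
                           (INPUT_SIZE - resized.height) // 2))
    x = np.asarray(canvas, dtype=np.float32) / 255.0   # HWC, [0, 1]
    return x.transpose(2, 0, 1)[None, ...]             # (1, 3, 640, 640)

if __name__ == "__main__":
    frame = Image.new("RGB", (1280, 720))  # stand-in for a camera frame
    print(preprocess(frame).shape)  # (1, 3, 640, 640)
```

The resulting tensor is what an ONNX Runtime `InferenceSession` for a YOLOv8 export expects as input.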

Retraining the Model

To improve accuracy with more data:

  1. Add new labeled datasets (YOLOv8 format) to the dataset/ folder
  2. Run the training script: `python merge_and_train.py`
  3. The script automatically merges datasets, trains, exports to ONNX, and copies the model to backend/best.onnx
  4. Restart the backend — it picks up the new model automatically

For faster training, use Google Colab with a free T4 GPU (~15 min vs ~8 hours on CPU).
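The merge step described above can be sketched roughly as follows. This assumes the standard YOLOv8 folder layout (`train/images`, `train/labels`, and the same under `valid/`) and is only an illustration; `merge_and_train.py` is the authoritative script and may organize the merge differently.

```python
# Copy every image/label file from each added dataset into the combined
# dataset tree, prefixing filenames with the source folder name so that
# merged datasets cannot overwrite each other's files.
import shutil
from pathlib import Path

def merge_datasets(sources: list[Path], dest: Path) -> int:
    copied = 0
    for src in sources:
        for split in ("train", "valid"):
            for kind in ("images", "labels"):
                for f in sorted((src / split / kind).glob("*")):
                    out_dir = dest / split / kind
                    out_dir.mkdir(parents=True, exist_ok=True)
                    shutil.copy(f, out_dir / f"{src.name}_{f.name}")
                    copied += 1
    return copied
```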


Tech Stack

| Layer | Technology |
|-------|------------|
| AI Model | YOLOv8s (Ultralytics) |
| Inference | ONNX Runtime |
| Backend | FastAPI + Uvicorn |
| Frontend | React + Vite |
| Charts | Recharts |
| Database | SQLite (via Python) |
| Networking | Local WiFi (no internet needed) |

Troubleshooting

Camera not working on phone
→ Follow the Chrome flags setup above. Camera requires the insecure origin flag on HTTP.

"Site can't be reached" on phone
→ Make sure phone and PC are on the same WiFi network. Check your PC's IP with ipconfig.

No detections showing
→ Ensure the backend is running (start.bat). Point camera at real plants in good lighting.

High inference time (>300ms)
→ Normal on low-end CPUs. Close other applications to free up CPU resources.

PDF not downloading
→ The PDF button opens a print dialog. Select "Save as PDF" as the printer destination.


License

For educational and research use.
