Liquor Expert

Liquor Expert is a containerized, full-stack AI application that acts as a professional liquor tasting assistant. It uses a locally hosted LLM (via Ollama), retrieval-augmented generation (RAG) with a vector database of liquor reviews, and a simple web-based chat interface.

The system retrieves relevant expert reviews and combines them with the model’s own knowledge to generate informed, expert-style answers about alcoholic beverages.


Dataset

This project uses the Wine, Beer, and Liquor Reviews dataset from the Datafiniti product database, which contains several thousand online reviews of beer, liquor, and wine products. The dataset provides text reviews and metadata for a variety of alcoholic beverages and serves as the basis for the vector database used for semantic retrieval in this app.


Architecture Overview

The project consists of three main services, orchestrated with Docker Compose:

1. Ollama (LLM + Embeddings)

  • Runs the Ollama server
  • Hosts:
    • gemma3:1b – language model for text generation
    • nomic-embed-text – embedding model for vector search

2. Backend (FastAPI)

  • Exposes a REST API (/generate)
  • Uses:
    • LangChain
    • ChromaDB for vector storage
    • Ollama embeddings for semantic search
  • Retrieves relevant liquor reviews and injects them into the prompt (see the sketch after this list)

3. Frontend (Nginx + Vanilla JS)

  • Simple chat-style UI
  • Communicates with the backend via /api/generate
  • Renders Markdown responses from the model
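
To make the flow concrete, here is a minimal, illustrative sketch of what the backend's /generate handler could look like. This is not the actual backend/main.py: the prompt wording, variable names, and the persist_directory path are assumptions; only the model names, the LangChain + ChromaDB stack, and the top-5 retrieval come from this README.

from fastapi import FastAPI
from pydantic import BaseModel
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.llms import Ollama
from langchain_community.vectorstores import Chroma

app = FastAPI()

# Reuse the persisted vector store and the two Ollama-hosted models.
# Inside Docker Compose, Ollama would be reached via its service name,
# e.g. base_url="http://ollama:11434" (assumed service name).
embeddings = OllamaEmbeddings(model="nomic-embed-text")
db = Chroma(persist_directory="vector_db", embedding_function=embeddings)  # path is an assumption
llm = Ollama(model="gemma3:1b")

class Query(BaseModel):
    question: str

@app.post("/generate")
def generate(query: Query):
    # Fetch the five most relevant reviews and inject them into the prompt.
    docs = db.similarity_search(query.question, k=5)
    context = "\n".join(doc.page_content for doc in docs)
    prompt = (
        "You are a professional liquor tasting assistant.\n"
        f"Relevant reviews:\n{context}\n\n"
        f"Question: {query.question}"
    )
    return {"response": llm.invoke(prompt)}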

Project Structure

liquor-expert/
├── README.md
├── docker-compose.yml
├── Dockerfile
├── backend/
│   ├── Dockerfile
│   ├── main.py
│   ├── vector_db.py
│   └── requirements.txt
└── frontend/
    ├── Dockerfile
    ├── nginx.conf
    ├── index.html
    ├── script.js
    └── style.css

Getting Started

Prerequisites

  • Docker
  • Docker Compose (v2 recommended)

1. Clone the repository

git clone https://github.com/b14ucky/liquor-expert.git
cd liquor-expert

2. Build and run the application

docker compose up --build

If you are using the legacy Compose v1 binary, run docker-compose up --build instead.

On first startup, Ollama will download the required models (gemma3:1b and nomic-embed-text). This may take a few minutes.


Usage

Once everything is running, open your browser and go to:

http://localhost

You’ll see a web-based chat interface.

How it works

  1. Enter a question related to alcoholic beverages
  2. The backend retrieves relevant liquor reviews from the vector database
  3. The LLM generates an expert-style response using both:
    • Retrieved reviews
    • Its own general knowledge

Vector Database (Liquor Reviews)

This project uses the Wine, Beer, and Liquor Reviews dataset from Datafiniti as the source of truth for review data.

Although the original dataset contains many columns, only a subset is used to build the vector database for semantic retrieval.

Columns Used from the Dataset

The following columns are extracted directly from the Datafiniti CSV files inside vector_db.py:

  • brand
  • name
  • reviews.text
  • reviews.rating
  • reviews.date

These fields are combined to form natural-language documents of the form:

{brand}, {name}: {review text}

The remaining dataset columns are intentionally ignored.
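
A short, hypothetical sketch of this extraction step (the real logic lives in backend/vector_db.py; the CSV filename is a placeholder, and carrying reviews.rating and reviews.date along as metadata is an assumption):

import pandas as pd

df = pd.read_csv("liquor_reviews.csv")  # placeholder filename for the Datafiniti CSV

# Keep only the columns listed above and drop rows without review text.
columns = ["brand", "name", "reviews.text", "reviews.rating", "reviews.date"]
df = df[columns].dropna(subset=["reviews.text"])

# Build "{brand}, {name}: {review text}" strings for embedding;
# rating and date are assumed to travel along as metadata.
texts = [
    f"{row['brand']}, {row['name']}: {row['reviews.text']}"
    for _, row in df.iterrows()
]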


Creating the Vector Database

After downloading and extracting the dataset CSV file, run:

python backend/vector_db.py

This script:

  1. Loads the Datafiniti review data
  2. Extracts the required columns
  3. Converts each review into a LangChain Document
  4. Generates embeddings using nomic-embed-text
  5. Stores the vectors persistently in ChromaDB

The vector database is stored locally at:

backend/vector_db/

Once created, the FastAPI backend uses this database to retrieve the top-5 most relevant reviews for each user query.
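
Sketched below (illustrative, not the actual vector_db.py) is what the indexing and retrieval steps might look like with LangChain and Chroma, continuing from the df built in the previous sketch:

from langchain_core.documents import Document
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import Chroma

# Wrap each review in a LangChain Document (df from the previous sketch).
docs = [
    Document(
        page_content=f"{row['brand']}, {row['name']}: {row['reviews.text']}",
        metadata={"rating": row["reviews.rating"], "date": row["reviews.date"]},
    )
    for _, row in df.iterrows()
]

# Embed with nomic-embed-text and persist the vectors to backend/vector_db/.
embeddings = OllamaEmbeddings(model="nomic-embed-text")
db = Chroma.from_documents(docs, embeddings, persist_directory="backend/vector_db")

# Later, the backend reloads this store and retrieves the top five matches:
results = db.similarity_search("smooth bourbon with vanilla notes", k=5)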


API Reference

POST /generate

Request body:

{
  "question": "Your liquor-related question"
}

Response:

{
  "response": "Expert-generated answer"
}
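
For example, assuming the default setup where Nginx proxies /api/* to the backend, the endpoint can be exercised from Python like this (the question text is just an example):

import requests

resp = requests.post(
    "http://localhost/api/generate",  # Nginx proxy route; use /generate if calling FastAPI directly
    json={"question": "What does a peaty Islay Scotch taste like?"},
)
print(resp.json()["response"])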

Technologies Used

  • Ollama
  • LangChain
  • ChromaDB
  • FastAPI
  • Nginx
  • Docker & Docker Compose
  • Vanilla JavaScript, HTML, and CSS

Notes

  • All services communicate over an internal Docker bridge network
  • The frontend proxies API requests through Nginx (/api/* → FastAPI)
  • CORS is fully enabled for development simplicity
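
For reference, "fully enabled" CORS in FastAPI typically looks like the following (illustrative; see backend/main.py for the actual configuration):

from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

# Wide-open CORS: acceptable for local development, not for production.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_methods=["*"],
    allow_headers=["*"],
)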

License

This project is licensed under the MIT License. See the LICENSE file for details.
