
CalypsoAI API Integration Examples

This repository contains reference implementations for integrating CalypsoAI into applications. It showcases both inline (scan before the LLM runs) and out-of-band (scan both user input and model output) patterns across Python scripts and Streamlit demos.

🚀 Purpose

Help developers understand how to use CalypsoAI's API to:

  • Secure prompts sent to, and responses returned by, large language models (LLMs)
  • Integrate real-time or post-processing moderation
  • Protect against prompt injection, PII leaks, and unsafe content

🧑‍💻 Who This Is For

These examples are aimed at general developers looking to:

  • Experiment with CalypsoAI API features
  • Integrate CalypsoAI moderation into GenAI applications
  • Understand inline vs out-of-band scanning

📂 Contents at a Glance

| Path | Technique | What it Demonstrates |
| --- | --- | --- |
| `examples/prompt_api_inline.py` | Inline | Call CalypsoAI PromptAPI, forward to a chosen provider only if cleared. |
| `examples/scans_api_out_of_band.py` | Out-of-band | Scan prompts with ScanAPI, optionally forward the safe ones to OpenAI. |
| `examples/chatbot_inline.py` | Inline (Streamlit) | One-click chatbot that routes prompts through PromptAPI before calling OpenAI. |
| `examples/chatbot_inline_multi_model.py` | Inline + multi-provider | Streamlit chatbot that lets you pick providers (e.g. GPT-4o, BioNeMo) and scans inline. |
| `examples/chatbot_out-of-band.py` | Out-of-band (Streamlit) | Scan both user prompts and OpenAI responses via ScanAPI. |
| `examples/chatbot_out-of-band_multi_model.py` | Out-of-band + multi-provider | Streamlit chatbot that scans input/output while switching between OpenAI and NVIDIA endpoints. |
| `examples/extract_log_data.py` | Data export | Utility script to pull historical scanner logs into CSV for offline analysis. |

🔧 Setup

1. Clone the Repo

git clone https://gitlab.com/ahealy-calypsoai/calypsoai-api-integration-examples.git
cd calypsoai-api-integration-examples

Prefer Conda? Create and activate a new environment with conda create -n calypsoai-examples python=3.11 and conda activate calypsoai-examples.

2. Install Dependencies

These examples are intentionally lightweight. Install the few packages we depend on (Streamlit is required only for the chatbot demos):

pip install -U requests streamlit openai python-dotenv pandas

3. Set API Keys

Most scripts expect CalypsoAI plus whichever downstream provider you plan to call:

export CALYPSO_API_KEY="your_calypso_api_key"          # PromptAPI + most Streamlit demos
export CALYPSO_API_TOKEN="$CALYPSO_API_KEY"            # Some ScanAPI samples look for _TOKEN
export OPENAI_API_KEY="your_openai_api_key"            # Needed for OpenAI-backed chatbots/scripts
export NVIDIA_API_KEY="your_nvidia_api_key"            # Needed only for the NVIDIA multi-model demo

Tip: drop the same values into a .env file and the scripts will auto-load them via python-dotenv.
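If you want a script to fail fast when a key is missing, a minimal check might look like the sketch below. The `require_env` helper is illustrative and not part of this repo; the actual scripts read these variables individually.

```python
import os

def require_env(*names: str) -> dict:
    """Return the requested environment variables, raising if any is unset.

    Illustrative helper (not part of this repo): the example scripts read
    these variables individually via os.getenv / python-dotenv.
    """
    values = {name: os.getenv(name) for name in names}
    missing = [name for name, value in values.items() if not value]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return values

# Example: the inline demos need both keys set.
# keys = require_env("CALYPSO_API_KEY", "OPENAI_API_KEY")
```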


▶️ Running the Examples

CLI Scripts

Each script lives under examples/. Run them with plain Python once your env vars are set.

# Call PromptAPI inline and print provider response if cleared
python examples/prompt_api_inline.py

# Call ScanAPI to vet prompts before forwarding to OpenAI
python examples/scans_api_out_of_band.py

# Export historical logs to CSV (see --help for options)
python examples/extract_log_data.py "17/10/25 00:00:00" "17/10/25 23:59:59" \
  --out prompt_logs_oct17.csv --max-records 100

Streamlit Chatbots

Launch any of the Streamlit demos from the repo root. Keep the terminal open while you test in the browser.

# Inline PromptAPI moderation with OpenAI
streamlit run examples/chatbot_inline.py

# Inline PromptAPI moderation with provider picker (OpenAI / BioNeMo)
streamlit run examples/chatbot_inline_multi_model.py

# Out-of-band ScanAPI moderation (prompt + response)
streamlit run examples/chatbot_out-of-band.py

# Out-of-band with OpenAI/NVIDIA selector (requires NVIDIA_API_KEY)
streamlit run examples/chatbot_out-of-band_multi_model.py

Each app walks through the pattern it demonstrates:

  • Inline chatbots call CalypsoAI before sending a prompt to the downstream LLM.
  • Out-of-band chatbots scan both user input and the model’s reply before displaying it.
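Stripped of provider details, the two control flows above differ only in where the scan sits. Here is a sketch with the scanner and LLM injected as plain callables; the CalypsoAI endpoints and the exact blocked-message wording are abstractions here, not the repo's actual code:

```python
from typing import Callable

Scan = Callable[[str], bool]   # returns True when the text is cleared
LLM = Callable[[str], str]     # downstream provider call

def inline_chat(prompt: str, scan: Scan, llm: LLM) -> str:
    """Inline: the prompt reaches the provider only if the scan clears it."""
    if not scan(prompt):
        return "Prompt blocked by CalypsoAI."
    return llm(prompt)

def out_of_band_chat(prompt: str, scan: Scan, llm: LLM) -> str:
    """Out-of-band: scan both the user's prompt and the model's reply."""
    if not scan(prompt):
        return "Prompt blocked by CalypsoAI."
    reply = llm(prompt)
    if not scan(reply):
        return "Response withheld: it failed the output scan."
    return reply
```

Swapping the fakes for real PromptAPI/ScanAPI calls and an OpenAI client gives you roughly what the chatbot demos do.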

Working with the Exporter

The exporter accepts date/time ranges in UTC (DD/MM/YY HH:MM:SS) and a handful of optional arguments:

python examples/extract_log_data.py \
  "01/10/25 00:00:00" \
  "01/10/25 23:59:59" \
  --max-records 50 \
  --only-user \
  --out logs_oct01_me.csv

Use smaller pulls while testing (--max-records) and remove the cap when you’re satisfied.
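Once you have a CSV, a quick standard-library pass can summarize it before you reach for pandas. The column names in the commented example are placeholders; check the header row of your actual export:

```python
import csv
from collections import Counter

def count_by_column(path: str, column: str) -> Counter:
    """Tally rows in an exported CSV by one column.

    `column` must match a header in your export; the exact header names
    vary, so inspect the file's first line before picking one.
    """
    with open(path, newline="") as f:
        return Counter(row[column] for row in csv.DictReader(f))

# Example (hypothetical column name):
# count_by_column("prompt_logs_oct17.csv", "verdict")
```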


🤔 Inline vs Out-of-Band Cheat Sheet

| Pattern | When to Use | Try It |
| --- | --- | --- |
| Inline moderation | You want CalypsoAI to decide if a prompt should reach the provider at all. | `prompt_api_inline.py`, `chatbot_inline.py`, `chatbot_inline_multi_model.py` |
| Out-of-band moderation | You want the downstream model’s answer but still need CalypsoAI to validate both sides. | `scans_api_out_of_band.py`, `chatbot_out-of-band.py`, `chatbot_out-of-band_multi_model.py` |

🛠 Troubleshooting

  • 401 / 403 responses: confirm the CalypsoAI token you’re using has access to the ScanAPI/PromptAPI routes and that the correct env var is set.
  • Streamlit can’t see your env vars: ensure you exported them in the same shell before running streamlit run …, or place them in a .env file.
  • NVIDIA demo errors: double-check NVIDIA_API_KEY and verify your account has access to the model listed in chatbot_out-of-band_multi_model.py.
  • SSL issues behind strict firewalls: set REQUESTS_CA_BUNDLE or use a corporate proxy that trusts Calypso endpoints.
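For the SSL case, pointing the requests library at your corporate CA bundle typically looks like this (the bundle path is an example; use wherever your IT team installs it):

```shell
# Tell requests to trust the corporate CA bundle (example path).
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/corp-ca-bundle.pem
streamlit run examples/chatbot_inline.py
```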

📄 License

Copyright F5, Inc. 2026. Licensed under the Apache License, Version 2.0. See LICENSE.

Have ideas for additional integrations? Open an issue or MR and we’ll add them to the gallery. Cheers! 🎉
