Fine-Tuning GPT-2, RoBERTa, and a PPO Architecture to Enhance the Reliability of a General-Purpose Chatbot
exploratory-data-analysis chatbot fine-tune actor-critic supervised-machine-learning adamax kl-divergence backbone-models ppo mean-square-error off-policy roberta accuracy-metrics gpt-2 adamw-optimizer reward-modeling llm-judge sparse-categorical-crossentropy margin-ranking-loss
Updated Apr 27, 2026 - Python
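The topic tags reference reward modeling with a margin ranking loss, the standard pairwise objective for training a reward model (here, presumably the RoBERTa backbone) on chosen/rejected response pairs before PPO. A minimal pure-Python sketch of that loss; the function name and example scores are illustrative, and a real implementation would use `torch.nn.MarginRankingLoss` on the reward model's scalar outputs:

```python
# Hedged sketch (not the repo's actual code): pairwise margin ranking loss
# for reward modeling, as named in the "margin-ranking-loss" topic tag.

def margin_ranking_loss(chosen, rejected, margin=1.0):
    """Mean over pairs of max(0, margin - (r_chosen - r_rejected)).

    The loss is zero when each chosen response outscores its rejected
    counterpart by at least `margin`; otherwise it grows linearly.
    """
    losses = [max(0.0, margin - (c - r)) for c, r in zip(chosen, rejected)]
    return sum(losses) / len(losses)

# Pair 1 already satisfies the margin (2.0 - 0.5 >= 1.0) -> 0.0;
# pair 2 falls short by 0.5 (1.5 - 1.0 = 0.5) -> 0.5; mean = 0.25.
print(margin_ranking_loss([2.0, 1.5], [0.5, 1.0]))  # -> 0.25
```

The trained reward model's scores would then supply the reward signal for the PPO actor-critic stage, typically regularized by the KL-divergence penalty the tags also mention.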