An AI-powered Neovim plugin that provides intelligent code explanations using various LLM providers.
- Code Explanations: Get AI-powered explanations for your code
- Multiple LLM Providers: Support for Groq, OpenAI, Ollama, and Anthropic
- Keyboard Shortcuts: Quick access to explain functionality
- Customizable: Configure models, providers, and behavior
- Neovim >= 0.7.0
- curl (for API requests)
- An LLM provider account or local Ollama setup
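You can verify these from a shell (a quick sketch; it only reports what is installed and does not fail if something is missing):

```shell
# Check the requirements above: curl and Neovim.
if command -v curl >/dev/null 2>&1; then
  curl_status="ok"
else
  curl_status="missing"
fi

if command -v nvim >/dev/null 2>&1; then
  # Neovim prints "NVIM v0.x.y" on the first line of --version.
  nvim_status=$(nvim --version | head -n 1)
else
  nvim_status="nvim not found"
fi

echo "curl: $curl_status"
echo "nvim: $nvim_status"
```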
Using lazy.nvim:

```lua
{
  "syedowais312/alton.nvim",
  config = function()
    require("alton").setup({
      -- Configuration options here
    })
  end,
}
```

NOTE: add this plugin spec to `~/.config/nvim/lua/plugins/alton.lua` (the usual location), or to wherever you keep your plugin configuration.
Groq:

```lua
{
  "syedowais312/alton.nvim",
  config = function()
    require("alton").setup({
      provider = "groq",
      groq = {
        api_key = "your_groq_api_key_here",
        model = "llama-3.1-8b-instant",
      },
    })
  end,
}
```

Ollama:

```lua
{
  "syedowais312/alton.nvim",
  config = function()
    require("alton").setup({
      provider = "ollama",
      ollama = {
        model = "qwen2.5-coder:1.5b", -- Your preferred model
        url = "http://localhost:11434/api/generate", -- Default Ollama URL
      },
    })
  end,
}
```

OpenAI:

```lua
{
  "syedowais312/alton.nvim",
  config = function()
    require("alton").setup({
      provider = "openai",
      openai = {
        api_key = "your_openai_api_key_here",
        model = "gpt-3.5-turbo",
      },
    })
  end,
}
```

Groq setup:

- Sign up at Groq
- Get your API key from the dashboard
- Set the `api_key` in your configuration
Ollama setup:

- Install Ollama: `curl -fsSL https://ollama.ai/install.sh | sh`
- Pull a model: `ollama pull qwen2.5-coder:1.5b`
- Start the Ollama service: `ollama serve`
- Configure the plugin with your model name
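To confirm the service is up before pointing the plugin at it, you can probe the default endpoint (a sketch; assumes Ollama's default port 11434, where `/api/tags` lists locally installed models):

```shell
# Check whether a local Ollama server is responding.
if curl -sf --max-time 3 http://localhost:11434/api/tags >/dev/null 2>&1; then
  ollama_status="running"
  ollama list   # confirm your model appears here
else
  ollama_status="not reachable - did you run 'ollama serve'?"
fi
echo "Ollama: $ollama_status"
```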
OpenAI setup:

- Sign up at OpenAI
- Get your API key from the dashboard
- Set the `api_key` in your configuration
Once configured, use the following keyboard shortcuts:
- `F2`: Explain the current line/selection
- `F3`: Custom explanation prompt
The plugin will:
- Analyze the current line or selected code
- Send it to your configured LLM provider
- Display the explanation in a floating window
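Under the hood this is a plain HTTP call; as a sketch, the payload sent to Ollama's `/api/generate` endpoint looks roughly like this (the endpoint and field names follow Ollama's REST API; the exact prompt wording the plugin uses is an assumption):

```shell
# Build the kind of JSON payload the plugin would send to Ollama.
# The prompt wording is illustrative - the plugin's actual prompt may differ.
code_snippet='local x = 1 + 2'
payload=$(cat <<EOF
{"model": "qwen2.5-coder:1.5b", "prompt": "Explain this code: ${code_snippet}", "stream": false}
EOF
)
echo "$payload"
# To send it manually (requires a running Ollama):
#   curl -s http://localhost:11434/api/generate -d "$payload"
```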
Full configuration with all options:

```lua
require("alton").setup({
  provider = "groq", -- "groq", "openai", "ollama", "anthropic"

  -- Provider-specific configurations
  groq = {
    api_key = "your_api_key",
    model = "llama-3.1-8b-instant",
  },
  ollama = {
    model = "codellama",
    url = "http://localhost:11434/api/generate",
  },
  openai = {
    api_key = "your_api_key",
    model = "gpt-3.5-turbo",
  },
})
```

- Groq: Requires API key, offers fast inference
- Ollama: Local models, no API key needed
- OpenAI: Requires API key, high-quality responses
- Anthropic: Requires API key, good for code analysis
- Ensure your configuration is correct
- Check that your API key is valid
- Verify network connectivity for cloud providers
- For Ollama, ensure the service is running
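A few shell probes can cover the connectivity checks above (a sketch: the cloud base URLs are assumptions used only as reachability targets; the Ollama URL is the plugin's documented default):

```shell
# Probe cloud provider hosts and the local Ollama service.
report=""
for url in https://api.groq.com https://api.openai.com; do
  # -o /dev/null: we only care whether a connection succeeds, not the body.
  if curl -s --max-time 5 -o /dev/null "$url"; then
    report="$report$url reachable; "
  else
    report="$report$url unreachable; "
  fi
done

if curl -s --max-time 3 -o /dev/null http://localhost:11434/api/tags 2>/dev/null; then
  report="${report}Ollama running"
else
  report="${report}Ollama not running"
fi
echo "$report"
```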
- Check your API key and model name
- Verify the model is available (for Ollama: `ollama list`)
- Check internet connection for cloud providers
- Try a different model
Add debug messages to see what's happening:
```lua
{
  "syedowais312/alton.nvim",
  config = function()
    vim.notify("Setting up alton", vim.log.levels.INFO)
    require("alton").setup({
      -- your config here
    })
  end,
}
```

Check messages with `:messages` in Neovim.
- Fork the repository
- Create a feature branch
- Make your changes
- Test thoroughly
- Submit a pull request
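Before opening a pull request, a minimal smoke test is to run the plugin's `setup()` in headless Neovim (a sketch; assumes the plugin is on your runtimepath, and skips quietly if `nvim` is not installed):

```shell
# Run setup in headless Neovim; a non-zero exit means setup() errored.
if command -v nvim >/dev/null 2>&1; then
  if nvim --headless "+lua require('alton').setup({ provider = 'ollama' })" +qa; then
    smoke_result="setup ok"
  else
    smoke_result="setup failed"
  fi
else
  smoke_result="nvim not installed - skipping"
fi
echo "$smoke_result"
```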
```sh
# Clone your repository
git clone https://github.com/syedowais312/alton.nvim.git
cd alton.nvim
```

Then point lazy.nvim at the local checkout in your Neovim config:

```lua
{
  dir = "/path/to/alton.nvim",
  config = function()
    require("alton").setup({
      -- your config
    })
  end,
}
```

MIT License
Created by Syed Owais
If you encounter issues:
- Check the troubleshooting section
- Verify your configuration
- Open an issue on GitHub with details about your setup