Providers Configuration
AiderDesk supports multiple Large Language Model (LLM) providers to power your AI coding assistant. You can configure these providers in the Settings → Providers tab. Each provider has specific configuration requirements, and most support environment variables for secure credential management.
Table of Contents
- Anthropic
- OpenAI
- Gemini
- Vertex AI
- Deepseek
- Groq
- Bedrock
- OpenAI Compatible
- Ollama
- LM Studio
- OpenRouter
- Requesty
Anthropic
Anthropic provides powerful AI models like Claude that excel at coding and reasoning tasks.
Configuration Parameters
- API Key: Your Anthropic API key for authentication
  - Environment variable: `ANTHROPIC_API_KEY`
  - Get your API key from Anthropic Console
Setup
- Go to Anthropic Console
- Create a new API key
- Enter the API key in the Settings → Providers → Anthropic section
- Or set the `ANTHROPIC_API_KEY` environment variable
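If you prefer the environment variable, a minimal shell sketch (the key value is a placeholder; the same `export` pattern applies to the other API-key providers below):

```bash
# Make the key available to AiderDesk before launching it.
# Replace the placeholder with a real key from the Anthropic Console.
export ANTHROPIC_API_KEY="sk-ant-your-key-here"
```

On Windows, set the variable via System Properties or `setx` instead of `export`.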
OpenAI
OpenAI provides advanced language models including GPT-4 series with enhanced reasoning capabilities.
Configuration Parameters
- API Key: Your OpenAI API key for authentication
  - Environment variable: `OPENAI_API_KEY`
  - Get your API key from OpenAI API Keys
- Reasoning Effort: Control the level of reasoning for supported models
  - Low: Minimal reasoning, faster responses
  - Medium: Balanced reasoning and speed (default)
  - High: Maximum reasoning, more thorough but slower
Setup
- Go to OpenAI API Keys
- Create a new API key
- Enter the API key in the Settings → Providers → OpenAI section
- Configure the Reasoning Effort based on your needs
- Or set the `OPENAI_API_KEY` environment variable
Gemini
Google's Gemini models offer versatile AI capabilities with advanced features like thinking budgets and search grounding.
Configuration Parameters
- API Key: Your Gemini API key for authentication
  - Environment variable: `GEMINI_API_KEY`
  - Get your API key from Google AI Studio
- Custom Base URL: Optional custom endpoint URL
  - Environment variable: `GEMINI_API_BASE_URL`
- Thinking Budget: Maximum tokens for internal reasoning (0-24576)
- Include Thoughts: Enable to see the model's internal reasoning process
- Use Search Grounding: Enable to allow the model to use Google Search for factual grounding
Setup
- Go to Google AI Studio
- Create a new API key
- Enter the API key in the Settings → Providers → Gemini section
- Configure optional parameters based on your needs
- Or set appropriate environment variables
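For the environment-variable route, a minimal sketch with placeholder values (the base URL is optional and only needed if you route requests through a proxy or custom endpoint):

```bash
export GEMINI_API_KEY="your-gemini-api-key"
# Optional: only set this when using a custom endpoint.
export GEMINI_API_BASE_URL="https://your-proxy.example.com"
```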
Vertex AI
Google Cloud's Vertex AI provides enterprise-grade AI models with advanced configuration options.
Configuration Parameters
- Project: Your Google Cloud project ID
- Location: The region/zone where your Vertex AI resources are located
- Google Cloud Credentials JSON: Service account credentials in JSON format
- Thinking Budget: Maximum tokens for internal reasoning (0-24576)
- Include Thoughts: Enable to see the model's internal reasoning process
Setup
- Create a Google Cloud project if you don't have one
- Enable the Vertex AI API
- Create a service account with Vertex AI permissions
- Download the service account credentials JSON
- Enter the project ID, location, and credentials in the Settings → Providers → Vertex AI section
- Configure thinking budget and thoughts inclusion as needed
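For orientation, the credentials file you download in step 4 follows Google Cloud's standard service-account key shape, roughly like this (all values are placeholders; paste the entire JSON into the credentials field, and make sure `project_id` matches the Project setting):

```json
{
  "type": "service_account",
  "project_id": "your-project-id",
  "private_key_id": "0123456789abcdef",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "your-service-account@your-project-id.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```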
Deepseek
Deepseek provides powerful AI models optimized for coding and technical tasks.
Configuration Parameters
- API Key: Your Deepseek API key for authentication
  - Environment variable: `DEEPSEEK_API_KEY`
  - Get your API key from Deepseek Platform
Setup
- Go to Deepseek Platform
- Create a new API key
- Enter the API key in the Settings → Providers → Deepseek section
- Or set the `DEEPSEEK_API_KEY` environment variable
Groq
Groq offers ultra-fast inference with specialized hardware acceleration.
Configuration Parameters
- API Key: Your Groq API key for authentication
  - Environment variable: `GROQ_API_KEY`
  - Get your API key from Groq Console
- Models: List of available models to use (comma-separated)
Setup
- Go to Groq Console
- Create a new API key
- Enter the API key in the Settings → Providers → Groq section
- Add the models you want to use (e.g., `llama3-70b-8192`, `mixtral-8x7b-32768`)
- Or set the `GROQ_API_KEY` environment variable
Bedrock
Amazon Bedrock provides access to foundation models from leading AI companies through AWS.
Configuration Parameters
- Region: AWS region where Bedrock is available
  - Environment variable: `AWS_REGION`
  - Default: `us-east-1`
- Access Key ID: Your AWS access key ID
  - Environment variable: `AWS_ACCESS_KEY_ID`
- Secret Access Key: Your AWS secret access key
  - Environment variable: `AWS_SECRET_ACCESS_KEY`
- Session Token: Optional temporary session token
  - Environment variable: `AWS_SESSION_TOKEN`
Setup
- Ensure you have an AWS account with appropriate permissions
- Enable Bedrock in your desired AWS region
- Create an IAM user with Bedrock access permissions
- Enter the AWS credentials in the Settings → Providers → Bedrock section
- Or set the appropriate AWS environment variables
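A minimal sketch of the equivalent environment-variable setup (all values are placeholders; the session token is only required for temporary credentials, e.g. from AWS STS or SSO):

```bash
export AWS_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="AKIA..."
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"
# Only needed for temporary credentials:
export AWS_SESSION_TOKEN="your-session-token"
```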
OpenAI Compatible
Configure any OpenAI-compatible API endpoint to use custom models or self-hosted solutions.
Configuration Parameters
- Base URL: The API endpoint URL
  - Environment variable: `OPENAI_API_BASE`
- API Key: Your API key for the compatible service
  - Environment variable: `OPENAI_API_KEY`
- Models: List of available models (comma-separated)
Setup for Agent Mode
- Obtain the base URL and API key from your OpenAI-compatible service provider
- Enter the base URL, API key, and available models in the Settings → Providers → OpenAI Compatible section
- Or set the `OPENAI_API_BASE` and `OPENAI_API_KEY` environment variables
- Use the `openai-compatible/` prefix in the Agent mode model selector
Setup for Aider Modes (Code, Ask, Architect, Context)
To use OpenAI Compatible providers in Aider modes, you need to configure environment variables:
- Set Environment Variables in Settings → Aider → Environment Variables:
  OPENAI_API_BASE=[your_provider_base_url]
  OPENAI_API_KEY=[your_api_key]
- Use Model Prefix: In the Aider model selector, use the `openai/` prefix, e.g. `openai/gpt-4` or `openai/claude-3-sonnet-20240229`
Important Notes
- Agent Mode: Use the `openai-compatible/` prefix and configure in the Providers section
- Aider Modes: Use the `openai/` prefix and configure environment variables in the Aider section
- API Compatibility: Aider treats all OpenAI-compatible providers as OpenAI, hence the `openai/` prefix in Aider modes
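Putting the two modes together, here is a sketch for a hypothetical self-hosted endpoint (the URL and model name are made up for illustration):

```bash
# In Settings → Aider → Environment Variables (used by Aider modes):
OPENAI_API_BASE=https://llm.internal.example.com/v1
OPENAI_API_KEY=sk-local-placeholder

# Model selector entries for the same model:
#   Agent mode:  openai-compatible/my-local-model
#   Aider modes: openai/my-local-model
```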
Ollama
Ollama allows you to run open-source models locally on your machine.
Configuration Parameters
- Base URL: Your Ollama server endpoint
  - Environment variable: `OLLAMA_API_BASE`
  - Default: `http://localhost:11434`
Setup
- Install and run Ollama on your local machine
- Ensure Ollama is running and accessible
- Enter the base URL in the Settings → Providers → Ollama section
- Or set the `OLLAMA_API_BASE` environment variable
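To verify the server is reachable before pointing AiderDesk at it, a quick sketch using Ollama's standard CLI and HTTP endpoint:

```bash
# The root endpoint responds with "Ollama is running" when the server is up.
curl http://localhost:11434

# List the models you have pulled locally.
ollama list

# Point AiderDesk at a non-default host or port if needed.
export OLLAMA_API_BASE="http://localhost:11434"
```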
LM Studio
LM Studio provides a user-friendly interface for running local language models.
Configuration Parameters
- Base URL: Your LM Studio server endpoint
  - Environment variable: `LMSTUDIO_API_BASE`
  - Default: `http://localhost:1234`
Setup
- Install and run LM Studio on your local machine
- Start a local server in LM Studio
- Enter the base URL in the Settings → Providers → LM Studio section
- Or set the `LMSTUDIO_API_BASE` environment variable
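LM Studio's local server speaks the OpenAI-compatible API, so you can sanity-check it before configuring AiderDesk (default port shown; adjust if you changed it in LM Studio):

```bash
# Should return a JSON list of the models currently loaded in LM Studio.
curl http://localhost:1234/v1/models

export LMSTUDIO_API_BASE="http://localhost:1234"
```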
OpenRouter
OpenRouter provides access to multiple models from various providers through a single API.
Configuration Parameters
- API Key: Your OpenRouter API key for authentication
  - Environment variable: `OPENROUTER_API_KEY`
  - Get your API key from OpenRouter Keys
- Models: List of models to use (auto-populated when API key is provided)
- Advanced Settings: Additional configuration options (see the sketch after this list):
  - Require Parameters: Enforce parameter requirements
  - Order: Model preference order
  - Only: Restrict to specific models
  - Ignore: Exclude specific models
  - Allow Fallbacks: Enable model fallback
  - Data Collection: Allow or deny data collection
  - Quantizations: Preferred quantization levels
  - Sort: Sort models by price or throughput
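These options correspond to OpenRouter's provider routing preferences. As an illustrative sketch of how the settings map onto that preferences object (example values only; note that in OpenRouter's own routing API the `order`, `only`, and `ignore` lists name upstream providers):

```json
{
  "provider": {
    "require_parameters": true,
    "order": ["Anthropic", "OpenAI"],
    "only": ["Anthropic", "OpenAI"],
    "ignore": ["DeepInfra"],
    "allow_fallbacks": true,
    "data_collection": "deny",
    "quantizations": ["fp16", "bf16"],
    "sort": "throughput"
  }
}
```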
Setup
- Go to OpenRouter Keys
- Create a new API key
- Enter the API key in the Settings → Providers → OpenRouter section
- Select your preferred models from the auto-populated list
- Configure advanced settings as needed
- Or set the `OPENROUTER_API_KEY` environment variable
Requesty
Requesty provides optimized model routing and caching for improved performance and cost efficiency.
Configuration Parameters
- API Key: Your Requesty API key for authentication
  - Environment variable: `REQUESTY_API_KEY`
  - Get your API key from Requesty API Keys
- Models: List of available models (auto-populated when API key is provided)
- Auto Cache: Enable automatic response caching for improved performance
- Reasoning Effort: Control the level of reasoning for supported models
  - None: No reasoning
  - Low: Minimal reasoning
  - Medium: Balanced reasoning
  - High: Enhanced reasoning
  - Max: Maximum reasoning
Setup for Agent Mode
- Go to Requesty API Keys
- Create a new API key
- Enter the API key in the Settings → Providers → Requesty section
- Select your preferred models from the auto-populated list
- Configure auto cache and reasoning effort as needed
- Or set the `REQUESTY_API_KEY` environment variable
- Use the `requesty/` prefix in the Agent mode model selector
Setup for Aider Modes (Code, Ask, Architect, Context)
To use Requesty models in Aider modes, you need to configure environment variables:
- Set Environment Variables in Settings → Aider → Environment Variables:
  OPENAI_API_BASE=https://router.requesty.ai/v1
  OPENAI_API_KEY=[your_requesty_api_key]
- Use Model Prefix: In the Aider model selector, use the `openai/` prefix, e.g. `openai/anthropic/claude-3-sonnet-20240229` or `openai/gpt-4-turbo`
Important Notes
- Agent Mode: Use the `requesty/` prefix and configure in the Providers section
- Aider Modes: Use the `openai/` prefix and configure environment variables in the Aider section
- API Compatibility: Requesty appears as OpenAI-compatible to Aider, hence the `openai/` prefix in Aider modes
Agent Mode vs Aider Mode Prefix Differences
| Provider | Agent Mode Prefix | Aider Mode Prefix | Notes |
|---|---|---|---|
| Requesty | `requesty/` | `openai/` | Requesty appears as OpenAI-compatible to Aider |
| OpenAI Compatible | `openai-compatible/` | `openai/` | Aider treats all compatible providers as OpenAI |
| All Others | `[provider_name]/` | `[provider_name]/` | Same prefix for both modes |
Important Notes
- Environment Variables: Aider modes require environment variables to be set in Settings → Aider → Environment Variables, not in the Providers section
- Model Selection: Always use the correct prefix based on the mode you're using
- API Compatibility: Requesty and OpenAI Compatible providers appear as OpenAI to Aider, hence the `openai/` prefix in Aider modes
- Configuration Location: Agent mode uses the Providers configuration, while Aider modes use environment variables in the Aider configuration section