Providers Configuration
AiderDesk supports multiple Large Language Model (LLM) providers to power your AI coding assistant. You can configure these providers in the Model Library (accessible via the top bar icon). Each provider has specific configuration requirements, and most support environment variables for secure credential management.
Table of Contents
- Anthropic
- Azure
- OpenAI
- Gemini
- Vertex AI
- Deepseek
- Groq
- Bedrock
- OpenAI Compatible
- Ollama
- LM Studio
- OpenRouter
- Requesty
Anthropic
Anthropic provides powerful AI models like Claude that excel at coding and reasoning tasks.
Configuration Parameters
- API Key: Your Anthropic API key for authentication
  - Environment variable: ANTHROPIC_API_KEY
  - Get your API key from the Anthropic Console
Setup
- Go to Anthropic Console
- Create a new API key
- Enter the API key in the Model Library Anthropic configuration
- Or set the ANTHROPIC_API_KEY environment variable
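If you want to confirm a key works before entering it, here is a minimal sketch in Python against the public Anthropic Messages API (the model name is only an example):

```python
import os
import requests

# Assumes ANTHROPIC_API_KEY is already set in your environment.
api_key = os.environ["ANTHROPIC_API_KEY"]

response = requests.post(
    "https://api.anthropic.com/v1/messages",
    headers={
        "x-api-key": api_key,
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    },
    json={
        "model": "claude-3-5-haiku-latest",  # example model name
        "max_tokens": 16,
        "messages": [{"role": "user", "content": "ping"}],
    },
    timeout=30,
)
response.raise_for_status()  # a 401 here means the key is invalid
print(response.json()["content"][0]["text"])
```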
Azure
Azure OpenAI provides enterprise-grade AI models with enhanced security, compliance, and regional deployment options.
Configuration Parameters
- API Key: Your Azure OpenAI API key for authentication
  - Environment variable: AZURE_API_KEY
  - Get your API key from the Azure Portal
- Resource Name: Your Azure OpenAI resource name
  - Environment variable: AZURE_RESOURCE_NAME
  - Found on your Azure OpenAI resource overview page
- API Version: The API version to use
  - Environment variable: AZURE_API_VERSION
  - Default: 2025-01-01-preview
Setup
- Go to Azure Portal and create an Azure OpenAI resource
- Navigate to your resource and find the Keys and Endpoint section
- Copy your API key and resource name
- Enter the API key, resource name, and optionally API version in the Model Library Azure configuration
- Or set the appropriate environment variables
Important Notes
- Custom Models Required: Azure models are not automatically discovered. You need to add custom models manually through the Model Library
- Resource Name Format: Use only the resource name (e.g., my-openai-resource), not the full endpoint URL (see the sketch below)
- Regional Deployment: Models are deployed to specific Azure regions; ensure your resource is in the desired region
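To make the resource name rule concrete, here is a small sketch of how the resource name, a deployment, and the API version combine into the request URL Azure OpenAI expects (the deployment name is a hypothetical example):

```python
# How the resource name, deployment, and API version combine into the
# request URL. Note that only the resource name is configured in
# AiderDesk; the rest of the URL is derived from it.
resource_name = "my-openai-resource"  # resource name only, no URL
deployment = "gpt-4o"                 # hypothetical deployment name
api_version = "2025-01-01-preview"    # the default API version above

url = (
    f"https://{resource_name}.openai.azure.com"
    f"/openai/deployments/{deployment}/chat/completions"
    f"?api-version={api_version}"
)
print(url)
```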
OpenAI
OpenAI provides advanced language models including GPT-4 series with enhanced reasoning capabilities.
Configuration Parameters
- API Key: Your OpenAI API key for authentication
  - Environment variable: OPENAI_API_KEY
  - Get your API key from the OpenAI API Keys page
- Reasoning Effort: Control the level of reasoning for supported models (see the sketch after Setup)
  - Low: Minimal reasoning, faster responses
  - Medium: Balanced reasoning and speed (default)
  - High: Maximum reasoning, more thorough but slower
Setup
- Go to OpenAI API Keys
- Create a new API key
- Enter the API key in the Model Library OpenAI configuration
- Configure the Reasoning Effort based on your needs
- Or set the OPENAI_API_KEY environment variable
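For reference, this is roughly what the Reasoning Effort setting corresponds to at the API level. A sketch assuming the official openai Python package and a reasoning-capable model (the model name is an example):

```python
import os
from openai import OpenAI  # pip install openai

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# reasoning_effort applies only to reasoning-capable models.
response = client.chat.completions.create(
    model="o3-mini",            # example reasoning model
    reasoning_effort="medium",  # "low" | "medium" | "high"
    messages=[{"role": "user", "content": "Explain binary search briefly."}],
)
print(response.choices[0].message.content)
```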
Gemini
Google's Gemini models offer versatile AI capabilities with advanced features like thinking budgets and search grounding.
Configuration Parameters
- API Key: Your Gemini API key for authentication
  - Environment variable: GEMINI_API_KEY
  - Get your API key from Google AI Studio
- Custom Base URL: Optional custom endpoint URL
  - Environment variable: GEMINI_API_BASE_URL
- Thinking Budget: Maximum tokens for internal reasoning (0-24576)
- Include Thoughts: Enable to see the model's internal reasoning process
- Use Search Grounding: Enable to allow the model to use Google Search for factual grounding
Setup
- Go to Google AI Studio
- Create a new API key
- Enter the API key in the Model Library Gemini configuration
- Configure optional parameters based on your needs
- Or set appropriate environment variables
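To verify a key outside AiderDesk, here is a minimal sketch that lists the models visible to your key; it assumes the default base URL is the public Generative Language endpoint:

```python
import os
import requests

api_key = os.environ["GEMINI_API_KEY"]
base = os.environ.get(
    "GEMINI_API_BASE_URL", "https://generativelanguage.googleapis.com"
)

# Lists the models your key can access.
response = requests.get(f"{base}/v1beta/models", params={"key": api_key}, timeout=30)
response.raise_for_status()
for model in response.json()["models"]:
    print(model["name"])
```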
Vertex AI
Google Cloud's Vertex AI provides enterprise-grade AI models with advanced configuration options.
Configuration Parameters
- Project: Your Google Cloud project ID
- Location: The region/zone where your Vertex AI resources are located
- Google Cloud Credentials JSON: Service account credentials in JSON format
- Thinking Budget: Maximum tokens for internal reasoning (0-24576)
- Include Thoughts: Enable to see the model's internal reasoning process
Setup
- Create a Google Cloud project if you don't have one
- Enable the Vertex AI API
- Create a service account with Vertex AI permissions
- Download the service account credentials JSON
- Enter the project ID, location, and credentials in the Model Library Vertex AI configuration
- Configure thinking budget and thoughts inclusion as needed
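A small sketch of how these pieces fit together, assuming the google-auth package; AiderDesk accepts the same JSON content in the Google Cloud Credentials JSON field:

```python
from google.oauth2 import service_account  # pip install google-auth

# Load the downloaded service account JSON and confirm it parses.
credentials = service_account.Credentials.from_service_account_file(
    "service-account.json",  # path to your downloaded credentials
    scopes=["https://www.googleapis.com/auth/cloud-platform"],
)
print(credentials.project_id)  # should match the Project field
```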
Deepseek
Deepseek provides powerful AI models optimized for coding and technical tasks.
Configuration Parameters
- API Key: Your Deepseek API key for authentication
  - Environment variable: DEEPSEEK_API_KEY
  - Get your API key from the Deepseek Platform
Setup
- Go to Deepseek Platform
- Create a new API key
- Enter the API key in the Model Library Deepseek configuration
- Or set the DEEPSEEK_API_KEY environment variable
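Deepseek's API is OpenAI-compatible, so a key can be sanity-checked with the standard openai package; a minimal sketch (the model name is an example):

```python
import os
from openai import OpenAI  # Deepseek's API is OpenAI-compatible

client = OpenAI(
    base_url="https://api.deepseek.com",
    api_key=os.environ["DEEPSEEK_API_KEY"],
)
response = client.chat.completions.create(
    model="deepseek-chat",  # example model name
    messages=[{"role": "user", "content": "ping"}],
)
print(response.choices[0].message.content)
```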
Groq
Groq offers ultra-fast inference with specialized hardware acceleration.
Configuration Parameters
- API Key: Your Groq API key for authentication
  - Environment variable: GROQ_API_KEY
  - Get your API key from the Groq Console
- Models: List of available models to use (comma-separated)
Setup
- Go to Groq Console
- Create a new API key
- Enter the API key in the Model Library Groq configuration
- Add the models you want to use (e.g., llama3-70b-8192, mixtral-8x7b-32768)
- Or set the GROQ_API_KEY environment variable
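To find valid names for the Models field, one option is to list them through Groq's OpenAI-compatible endpoint; a minimal sketch:

```python
import os
import requests

# Listing models confirms the key works and shows valid model names.
response = requests.get(
    "https://api.groq.com/openai/v1/models",
    headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
    timeout=30,
)
response.raise_for_status()
for model in response.json()["data"]:
    print(model["id"])  # candidate values for the Models field
```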
Bedrock
Amazon Bedrock provides access to foundation models from leading AI companies through AWS.
Configuration Parameters
- Region: AWS region where Bedrock is available
  - Environment variable: AWS_REGION
  - Default: us-east-1
- Access Key ID: Your AWS access key ID
  - Environment variable: AWS_ACCESS_KEY_ID
- Secret Access Key: Your AWS secret access key
  - Environment variable: AWS_SECRET_ACCESS_KEY
- Session Token: Optional temporary session token
  - Environment variable: AWS_SESSION_TOKEN
Setup
- Ensure you have an AWS account with appropriate permissions
- Enable Bedrock in your desired AWS region
- Create an IAM user with Bedrock access permissions
- Enter the AWS credentials in the Model Library Bedrock configuration
- Or set the appropriate AWS environment variables
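A quick sketch for confirming your credentials and region give you Bedrock access, assuming the boto3 package:

```python
import boto3  # pip install boto3

# boto3 reads AWS_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
# and AWS_SESSION_TOKEN from the environment automatically.
bedrock = boto3.client("bedrock", region_name="us-east-1")

# Lists the foundation models available to your account in this region.
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"])
```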
OpenAI Compatible
Configure any OpenAI-compatible API endpoint to use custom models or self-hosted solutions.
Configuration Parameters
- Base URL: The API endpoint URL
  - Environment variable: OPENAI_API_BASE
- API Key: Your API key for the compatible service
  - Environment variable: OPENAI_API_KEY
- Models: List of available models (comma-separated)
Setup
- Obtain the base URL and API key from your OpenAI-compatible service provider
- Enter the base URL and API key in the Model Library OpenAI Compatible configuration
- Or set the OPENAI_API_BASE and OPENAI_API_KEY environment variables
- Use the openai-compatible/ prefix in the model selector
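A minimal connectivity check against an OpenAI-compatible endpoint; note that most, but not all, servers expose a /models route:

```python
import os
import requests

base_url = os.environ["OPENAI_API_BASE"]  # e.g. https://my-host/v1
api_key = os.environ.get("OPENAI_API_KEY", "")

# Most OpenAI-compatible servers list their models at GET {base}/models.
response = requests.get(
    f"{base_url.rstrip('/')}/models",
    headers={"Authorization": f"Bearer {api_key}"},
    timeout=30,
)
response.raise_for_status()
for model in response.json()["data"]:
    print(model["id"])  # candidate values for the Models field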
Important Notes
- Unified Prefix: Both Agent and Aider modes use the same openai-compatible/ prefix
- Model Library: Use the Model Library for advanced configuration and custom model management
- API Compatibility: Configure all settings in the Model Library for a unified experience across all modes
Ollama
Ollama allows you to run open-source models locally on your machine.
Configuration Parameters
- Base URL: Your Ollama server endpoint
  - Environment variable: OLLAMA_API_BASE
  - Default: http://localhost:11434
Setup
- Install and run Ollama on your local machine
- Ensure Ollama is running and accessible
- Enter the base URL in the Model Library Ollama configuration
- Or set the OLLAMA_API_BASE environment variable
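To confirm the server is reachable before configuring it, a minimal sketch against Ollama's /api/tags route, which lists locally pulled models:

```python
import os
import requests

base_url = os.environ.get("OLLAMA_API_BASE", "http://localhost:11434")

# A successful response confirms Ollama is up and reachable.
response = requests.get(f"{base_url}/api/tags", timeout=10)
response.raise_for_status()
for model in response.json()["models"]:
    print(model["name"])
```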
LM Studio
LM Studio provides a user-friendly interface for running local language models.
Configuration Parameters
- Base URL: Your LM Studio server endpoint
  - Environment variable: LMSTUDIO_API_BASE
  - Default: http://localhost:1234
Setup
- Install and run LM Studio on your local machine
- Start a local server in LM Studio
- Enter the base URL in the Model Library LM Studio configuration
- Or set the LMSTUDIO_API_BASE environment variable
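The LM Studio local server speaks the OpenAI API, so a quick check is to list the currently loaded models; a minimal sketch:

```python
import os
import requests

base_url = os.environ.get("LMSTUDIO_API_BASE", "http://localhost:1234")

# LM Studio exposes an OpenAI-style /v1/models route.
response = requests.get(f"{base_url}/v1/models", timeout=10)
response.raise_for_status()
for model in response.json()["data"]:
    print(model["id"])
```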
OpenRouter
OpenRouter provides access to multiple models from various providers through a single API.
Configuration Parameters
- API Key: Your OpenRouter API key for authentication
  - Environment variable: OPENROUTER_API_KEY
  - Get your API key from the OpenRouter Keys page
- Models: List of models to use (auto-populated when API key is provided)
- Advanced Settings: Additional configuration options (the sketch after Setup shows where these appear in a raw API request)
  - Require Parameters: Enforce parameter requirements
  - Order: Model preference order
  - Only: Restrict to specific models
  - Ignore: Exclude specific models
  - Allow Fallbacks: Enable model fallback
  - Data Collection: Allow or deny data collection
  - Quantizations: Preferred quantization levels
  - Sort: Sort models by price or throughput
Setup
- Go to OpenRouter Keys
- Create a new API key
- Enter the API key in the Model Library OpenRouter configuration
- Select your preferred models from the auto-populated list
- Configure advanced settings as needed
- Or set the OPENROUTER_API_KEY environment variable
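As referenced above, this sketch shows roughly where the advanced settings map in a raw OpenRouter request body, via its per-request provider preferences (the model and provider names are examples):

```python
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "anthropic/claude-3.5-sonnet",  # example model
        "messages": [{"role": "user", "content": "ping"}],
        "provider": {
            "order": ["Anthropic"],    # preference order
            "allow_fallbacks": True,   # enable fallback
            "data_collection": "deny", # allow or deny data collection
            "sort": "price",           # or "throughput"
        },
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```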
Requesty
Requesty provides optimized model routing and caching for improved performance and cost efficiency.
Configuration Parameters
- API Key: Your Requesty API key for authentication
  - Environment variable: REQUESTY_API_KEY
  - Get your API key from the Requesty API Keys page
- Models: List of available models (auto-populated when API key is provided)
- Auto Cache: Enable automatic response caching for improved performance
- Reasoning Effort: Control the level of reasoning for supported models
  - None: No reasoning
  - Low: Minimal reasoning
  - Medium: Balanced reasoning
  - High: Enhanced reasoning
  - Max: Maximum reasoning
Setup
- Go to Requesty API Keys
- Create a new API key
- Enter the API key in the Model Library Requesty configuration
- Select your preferred models from the auto-populated list
- Configure auto cache and reasoning effort as needed
- Or set the REQUESTY_API_KEY environment variable
- Use the requesty/ prefix in the model selector
Important Notes
- Unified Prefix: Both Agent and Aider modes use the same requesty/ prefix
- Model Library: Use the Model Library for advanced configuration and custom model management
- API Compatibility: Configure all settings in the Model Library for a unified experience across all modes
Model Library Integration
The Model Library provides advanced provider and model management capabilities beyond basic provider configuration:
- Multiple Profiles: Create multiple profiles for the same provider (e.g., work and personal OpenAI accounts)
- Custom Models: Add custom models that aren't automatically discovered (e.g., Azure models)
- Cost Configuration: Set custom pricing and token limits for models
- Model Management: Hide irrelevant models, organize by provider profiles
- Advanced Configuration: Configure multiple OpenAI-compatible providers with different prefixes
For comprehensive provider and model management, see Model Library.
Unified Model Prefix System
AiderDesk now uses a unified model prefix system across all modes (Agent, Code, Ask, Architect, Context):
| Provider | Model Prefix |
|---|---|
| Anthropic | anthropic/ |
| OpenAI | openai/ |
| Azure | azure/ |
| Gemini | gemini/ |
| Vertex AI | vertex_ai/ |
| Deepseek | deepseek/ |
| Groq | groq/ |
| Bedrock | bedrock/ |
| OpenAI Compatible | openai-compatible/ |
| Ollama | ollama/ |
| LM Studio | lmstudio/ |
| OpenRouter | openrouter/ |
| Requesty | requesty/ |
Important Notes
- Unified Configuration: Configure all providers in the Model Library for consistent behavior across all modes
- Model Selection: Use the same model prefix regardless of the mode you're using
- Environment Variables: Environment variables are supported as fallbacks, but the primary configuration is through the Model Library
- Model Library: For advanced management of multiple profiles and custom models, use the Model Library