AI Nodes Overview
Use AI nodes to add intelligent processing to your workflows -- text generation, image analysis, speech processing, and more, all powered by your own API keys.
AI nodes bring artificial intelligence directly into your Buildorado workflows. Instead of building separate integrations with AI providers, you drag an AI node onto the canvas, connect your API key, and configure the model -- no code required. AI nodes run in the background after a form submission, processing data and passing results to downstream nodes just like any other action.
Buildorado supports eight AI node types spanning text, image, audio, and vector operations. Each node connects to one or more third-party AI providers using your own API keys (BYOK -- Bring Your Own Key), so you have full control over costs, rate limits, and data privacy.
The Eight AI Node Types
Buildorado provides the following AI nodes, each designed for a specific modality:
| Node | Description | Supported Providers |
|---|---|---|
| Agent | Autonomous AI agent with tools, memory, and multi-turn reasoning | OpenAI, Anthropic, Groq, Together AI, Fireworks AI, DeepSeek, xAI, OpenRouter, Mistral AI |
| Vision | Analyze and describe images using multimodal models | OpenAI, Anthropic |
| Image Generation | Generate images from text descriptions | OpenAI (DALL-E), Stability AI |
| Speech to Text | Transcribe audio files to text | OpenAI (Whisper), Deepgram |
| Text to Speech | Convert text to spoken audio | OpenAI, ElevenLabs |
| Image Edit | Edit existing images with AI instructions and masks | OpenAI (DALL-E 2), Stability AI (Stable Image Core) |
| OCR | Extract text from images, PDFs, and document scans | OpenAI, Anthropic |
| Embedding | Generate vector embeddings for semantic search and similarity | OpenAI |
How AI Nodes Work
AI nodes appear under the AI tab in the builder sidebar. They behave like other action nodes on the canvas, but instead of connecting to a SaaS integration, they call an AI provider's API.
Execution Model
All AI nodes use background execution. They do not run during the form-filling experience -- they execute after the user submits the form, as part of the workflow's server-side processing. This means:
- AI processing does not slow down the user's form experience.
- Results from AI nodes are available to downstream nodes in the workflow (emails, Slack messages, spreadsheet rows, etc.).
- If an AI node fails (rate limit, invalid key, network error), the workflow's error handling catches it.
Canvas Appearance
On the canvas, AI nodes render as system nodes with a solid border and a configuration status badge. The one exception is the Agent node, which renders with a distinct agent-node style to reflect its more complex, multi-turn behavior.
Adding an AI Node to Your Workflow
1. Open the builder sidebar and click the AI tab.
2. Drag the desired AI node type onto the canvas.
3. Connect it to the upstream node (a form node, branch node, or another action node) by drawing an edge from the source's output handle to the AI node's input handle.
4. Click the AI node to open its configuration panel.
5. Select a provider (e.g., OpenAI, Anthropic).
6. Select a model (e.g., GPT-4.1, Claude Sonnet 4.6).
7. Choose a saved credential or create a new one.
8. Configure the node-specific settings (prompt, image URL, temperature, etc.).
9. Connect the AI node's output to downstream nodes that consume the result.
Credential Management (BYOK)
Buildorado uses a Bring Your Own Key model for all AI nodes. You provide your own API keys from each provider, and Buildorado stores them securely.
Setting Up Credentials
1. Go to Settings in the top navigation.
2. Click Credentials.
3. Click Add Credential.
4. Select the provider (OpenAI, Anthropic, Stability AI, Deepgram, ElevenLabs, etc.).
5. Paste your API key.
6. Give the credential a descriptive name (e.g., "OpenAI Production Key").
7. Click Save.
Security
- All API keys are encrypted at rest using AES-256 encryption with AWS KMS-managed keys.
- Keys are never exposed in the browser after initial entry. The configuration panel shows only the credential name, not the key value.
- Keys are decrypted only at execution time on the server, used for the API call, and immediately discarded from memory.
- Credentials are scoped to your workspace. Team members with appropriate permissions can use saved credentials without seeing the raw key.
Using Credentials in AI Nodes
When configuring an AI node, the Credential dropdown lists all saved credentials that match the selected provider. If you select OpenAI as the provider, only OpenAI credentials appear. You can add a new credential directly from the node configuration panel without navigating to Settings.
Template Variables in AI Nodes
Most AI node text fields -- prompts, system messages, input text -- support template variables. These let you inject dynamic data from earlier nodes in the workflow.
For example, you can reference a form field value in an Agent's user prompt to process the user's submission dynamically. Template variables are inserted using the variable picker in the text field, which shows all available upstream outputs.
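As a rough sketch of what substitution looks like at execution time: the `{{...}}` placeholder syntax and the `render_template` helper below are assumptions for illustration only; in the builder you insert variables with the picker rather than typing them by hand.

```python
import re

def render_template(template: str, values: dict) -> str:
    """Replace {{variable}} placeholders with values from upstream nodes.
    Unknown placeholders are left intact rather than erased."""
    def substitute(match):
        key = match.group(1).strip()
        return str(values.get(key, match.group(0)))
    return re.sub(r"\{\{([^}]+)\}\}", substitute, template)

prompt = render_template(
    "Summarize this support request from {{form.name}}: {{form.message}}",
    {"form.name": "Ada", "form.message": "My export keeps timing out."},
)
print(prompt)
# Summarize this support request from Ada: My export keeps timing out.
```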
Cost Considerations
Because AI nodes call third-party APIs using your own keys, all usage costs are billed directly by the provider -- not by Buildorado. Keep the following in mind:
- Token-based pricing: Text models (GPT-4.1, Claude, etc.) charge per input and output token. Longer prompts and responses cost more.
- Per-call pricing: Image generation, speech-to-text, and text-to-speech typically charge per API call or per second of audio.
- Rate limits: Each provider enforces rate limits on your API key. High-traffic workflows may need higher-tier API plans.
- Model selection matters: Using GPT-4.1 Nano or Claude Haiku instead of full-size models can reduce costs by 10-50x for simpler tasks.
- Max tokens: Setting a reasonable max tokens limit prevents unexpectedly long (and expensive) responses.
Buildorado does not add any markup or surcharge on top of provider pricing. The API calls are made directly to the provider using your key.
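A quick back-of-the-envelope estimate can be computed from token counts. The per-million-token rates below are hypothetical placeholders; check your provider's pricing page for current numbers:

```python
def estimate_text_cost(input_tokens: int, output_tokens: int,
                       price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimate the cost of one text-model call.
    Prices are per million tokens and vary by model and provider."""
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# A workflow with a 500-token prompt and a 300-token response, at
# hypothetical rates of $2 / $8 per million input / output tokens:
per_call = estimate_text_cost(500, 300, 2.00, 8.00)
print(f"${per_call:.4f} per call, ${per_call * 1000:.2f} per 1,000 submissions")
# $0.0034 per call, $3.40 per 1,000 submissions
```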
Provider and Model Reference
OpenAI
OpenAI is the most broadly supported provider across AI node types. Models available:
| Model | Node Types | Notes |
|---|---|---|
| GPT-4.1 | Agent, Vision, OCR | Flagship model, best quality |
| GPT-4.1 Mini | Agent, Vision, OCR | Good balance of quality and cost |
| GPT-4.1 Nano | Agent | Fastest, lowest cost |
| GPT-4o | Vision, OCR | Previous generation, still capable |
| o3 | Agent | Reasoning model |
| o3 Mini | Agent | Smaller reasoning model |
| o4 Mini | Agent | Latest small reasoning model |
| DALL-E 3 | Image Generation | Best image quality, one image per call |
| DALL-E 2 | Image Generation, Image Edit | Supports editing and masks |
| Whisper (whisper-1) | Speech to Text | Industry-standard transcription |
| text-embedding-3-small | Embedding | 1536 dimensions, cost-effective |
| text-embedding-3-large | Embedding | 3072 dimensions, highest quality |
| text-embedding-ada-002 | Embedding | Legacy model |
| TTS-1 | Text to Speech | Standard quality voice |
| TTS-1 HD | Text to Speech | High-definition voice |
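To illustrate what downstream code can do with Embedding node output, here is a minimal cosine-similarity sketch. The vectors are toy 4-dimensional values; real text-embedding-3-small vectors have 1536 dimensions:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors: values near 1.0
    mean the texts point in the same semantic direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for two embedded texts:
doc = [0.1, 0.3, -0.2, 0.7]
query = [0.15, 0.28, -0.19, 0.65]
print(round(cosine_similarity(doc, query), 3))  # 0.997
```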
Anthropic
| Model | Node Types | Notes |
|---|---|---|
| Claude Opus 4.6 | Agent | Most capable Anthropic model |
| Claude Sonnet 4.6 | Agent, Vision, OCR | Great balance of speed and quality |
| Claude Sonnet 4.5 | Agent, Vision, OCR | Previous generation |
| Claude Haiku 4.5 | Agent, Vision, OCR | Fastest, lowest cost |
Stability AI
| Model | Node Types | Notes |
|---|---|---|
| SD3.5 Large Turbo | Image Generation | Fast, high quality |
| SD3.5 Large | Image Generation | Highest quality |
| SD3.5 Medium | Image Generation | Balanced |
| Stable Image Core | Image Generation, Image Edit | Cost-effective, supports editing |
Deepgram
| Model | Node Types | Notes |
|---|---|---|
| Nova-3 | Speech to Text | Latest, most accurate |
| Nova-3 Medical | Speech to Text | Optimized for medical terminology |
| Nova-2 | Speech to Text | Previous generation |
| Nova-2 General | Speech to Text | General-purpose |
ElevenLabs
| Model | Node Types | Notes |
|---|---|---|
| Eleven v3 | Text to Speech | Latest, highest quality |
| Multilingual v2 | Text to Speech | Multi-language support |
| Turbo v2.5 | Text to Speech | Low-latency, fast generation |
Other Providers (Agent Only)
The following providers are available for Agent nodes. They accept free-text model IDs, allowing access to any model the provider offers:
- Groq -- Ultra-fast inference for Llama and Mixtral models
- Together AI -- Open-source model hosting (Llama, DeepSeek, Qwen, etc.)
- Fireworks AI -- High-speed inference for open-source models
- DeepSeek -- Specialized reasoning and coding models (V3, R1)
- xAI -- Grok models
- OpenRouter -- Gateway to many AI providers with a single API key
- Mistral AI -- Mistral, Mixtral, and Codestral models
Error Handling
AI nodes can fail for several reasons: invalid API key, rate limiting, network timeouts, content policy violations, or provider outages. Buildorado handles these gracefully:
- Failed AI nodes report the error in the workflow execution log.
- You can connect an Error Handler node downstream to catch AI failures and route to a fallback path (e.g., send a notification, retry with a different model, or skip the AI step).
- Provider error messages are sanitized before logging to prevent accidental exposure of API keys in error output.
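For readers curious what a retry-on-rate-limit fallback looks like under the hood, this is the generic exponential-backoff pattern. Inside Buildorado workflows the Error Handler node covers this for you; `RateLimitError` below is a stand-in, not a provider SDK class:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a provider's HTTP 429 response."""

def call_with_backoff(call, max_attempts: int = 4, base_delay: float = 1.0):
    """Retry a rate-limited call with exponential backoff plus jitter.
    Re-raises the error once max_attempts is exhausted."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimitError:
            if attempt == max_attempts - 1:
                raise
            # 1s, 2s, 4s, ... plus a small random offset to avoid
            # synchronized retries from concurrent workflow runs
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```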
Best Practices
- Start with smaller models for prototyping (GPT-4.1 Nano, Claude Haiku), then upgrade to larger models once your prompts are finalized.
- Set max tokens on every agent node to avoid runaway costs.
- Use structured output (JSON mode) when downstream nodes need to parse the AI response programmatically.
- Test with preview mode before publishing. Preview executes the full workflow including AI nodes so you can verify prompts and outputs.
- Monitor your provider dashboard for usage spikes, especially if your workflow handles high submission volumes.
- Use template variables to make prompts dynamic. Hardcoded prompts produce the same output for every submission.
- Add error handlers downstream of AI nodes in production workflows to handle rate limits and outages gracefully.
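To make the structured-output advice concrete, here is a sketch of the request body an Agent node with JSON mode enabled might send to OpenAI's Chat Completions API. The payload is only built locally (never sent, so no API key is needed), and the example prompt and reply values are invented; the `response_format` and `max_tokens` fields follow the public OpenAI API:

```python
import json

payload = {
    "model": "gpt-4.1-mini",
    "messages": [
        {"role": "system", "content": "Reply with a JSON object containing "
                                      "'sentiment' and 'summary' keys."},
        {"role": "user", "content": "The onboarding flow was confusing."},
    ],
    "response_format": {"type": "json_object"},  # forces valid JSON output
    "max_tokens": 200,  # caps response length and cost
}

# Downstream nodes can then parse the model's reply deterministically:
simulated_reply = '{"sentiment": "negative", "summary": "Onboarding is confusing."}'
parsed = json.loads(simulated_reply)
print(parsed["sentiment"])  # negative
```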