AI Nodes Overview

Use AI nodes to add intelligent processing to your workflows -- text generation, image analysis, speech processing, and more, all powered by your own API keys.

AI nodes bring artificial intelligence directly into your Buildorado workflows. Instead of building separate integrations with AI providers, you drag an AI node onto the canvas, connect your API key, and configure the model -- no code required. AI nodes run in the background after a form submission, processing data and passing results to downstream nodes just like any other action.

Buildorado supports eight AI node types spanning text, image, audio, and vector operations. Each node connects to one or more third-party AI providers using your own API keys (BYOK -- Bring Your Own Key), so you have full control over costs, rate limits, and data privacy.

The Eight AI Node Types

Buildorado provides the following AI nodes, each designed for a specific modality:

| Node | Description | Supported Providers |
| --- | --- | --- |
| Agent | Autonomous AI agent with tools, memory, and multi-turn reasoning | OpenAI, Anthropic, Groq, Together AI, Fireworks AI, DeepSeek, xAI, OpenRouter, Mistral AI |
| Vision | Analyze and describe images using multimodal models | OpenAI, Anthropic |
| Image Generation | Generate images from text descriptions | OpenAI (DALL-E), Stability AI |
| Speech to Text | Transcribe audio files to text | OpenAI (Whisper), Deepgram |
| Text to Speech | Convert text to spoken audio | OpenAI, ElevenLabs |
| Image Edit | Edit existing images with AI instructions and masks | OpenAI (DALL-E 2), Stability AI (Stable Image Core) |
| OCR | Extract text from images, PDFs, and document scans | OpenAI, Anthropic |
| Embedding | Generate vector embeddings for semantic search and similarity | OpenAI |

How AI Nodes Work

AI nodes appear under the AI tab in the builder sidebar. They behave like other action nodes on the canvas, but instead of connecting to a SaaS integration, they call an AI provider's API.

Execution Model

All AI nodes use background execution. They do not run during the form-filling experience -- they execute after the user submits the form, as part of the workflow's server-side processing. This means:

  • AI processing does not slow down the user's form experience.
  • Results from AI nodes are available to downstream nodes in the workflow (emails, Slack messages, spreadsheet rows, etc.).
  • If an AI node fails (rate limit, invalid key, network error), the workflow's error handling catches it.
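
The execution model above can be sketched as a simple server-side runner. This is a conceptual illustration, not Buildorado's actual engine; the node structure and handler names are assumptions.

```python
# Minimal sketch of background workflow execution (hypothetical structure).
# Nodes run server-side after submission; each node's output is stored in a
# shared context so downstream nodes can consume it.

def run_workflow(nodes, submission):
    """Run nodes in order, passing accumulated outputs downstream."""
    context = {"form": submission}  # form data is the initial input
    for node in nodes:
        try:
            context[node["id"]] = node["run"](context)
        except Exception as exc:
            # A failed AI node is caught by the workflow's error handling
            context[node["id"]] = {"error": str(exc)}
    return context

# Usage: an "AI" node produces a summary, an "email" node consumes it
nodes = [
    {"id": "ai_1", "run": lambda ctx: {"summary": ctx["form"]["message"][:20]}},
    {"id": "email_1", "run": lambda ctx: {"body": ctx["ai_1"]["summary"]}},
]
result = run_workflow(nodes, {"message": "Hello from a form submission"})
```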

Canvas Appearance

On the canvas, AI nodes render as system nodes with a solid border and a configuration status badge. The one exception is the Agent node, which uses a distinct agent node appearance to reflect its more complex, multi-turn behavior.

Adding an AI Node to Your Workflow

  1. Open the builder sidebar and click the AI tab.
  2. Drag the desired AI node type onto the canvas.
  3. Connect it to the upstream node (a form node, branch node, or another action node) by drawing an edge from the source output handle to the AI node's input handle.
  4. Click the AI node to open its configuration panel.
  5. Select a provider (e.g., OpenAI, Anthropic).
  6. Select a model (e.g., GPT-4.1, Claude Sonnet 4.6).
  7. Choose a saved credential or create a new one.
  8. Configure the node-specific settings (prompt, image URL, temperature, etc.).
  9. Connect the AI node's output to downstream nodes that consume the result.
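
The configuration produced by these steps might look like the following. The field names and JSON shape here are illustrative assumptions, not Buildorado's actual schema.

```python
# Hypothetical shape of a configured AI node (field names are assumptions).
ai_node = {
    "type": "agent",
    "provider": "openai",
    "model": "gpt-4.1",
    "credential": "OpenAI Production Key",  # saved credential name, never the raw key
    "settings": {
        "system_prompt": "You are a helpful assistant.",
        "user_prompt": "Summarize: {{form.message}}",  # template variable
        "temperature": 0.2,
        "max_tokens": 500,
    },
}

# Edges connect the upstream form node to the AI node, and the AI node onward
edges = [
    {"from": "form_1", "to": "ai_1"},
    {"from": "ai_1", "to": "email_1"},
]
```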

Credential Management (BYOK)

Buildorado uses a Bring Your Own Key model for all AI nodes. You provide your own API keys from each provider, and Buildorado stores them securely.

Setting Up Credentials

  1. Go to Settings in the top navigation.
  2. Click Credentials.
  3. Click Add Credential.
  4. Select the provider (OpenAI, Anthropic, Stability AI, Deepgram, ElevenLabs, etc.).
  5. Paste your API key.
  6. Give the credential a descriptive name (e.g., "OpenAI Production Key").
  7. Click Save.

Security

  • All API keys are encrypted at rest using AES-256 encryption with AWS KMS-managed keys.
  • Keys are never exposed in the browser after initial entry. The configuration panel shows only the credential name, not the key value.
  • Keys are decrypted only at execution time on the server, used for the API call, and immediately discarded from memory.
  • Credentials are scoped to your workspace. Team members with appropriate permissions can use saved credentials without seeing the raw key.
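
The "never exposed after entry" rule is typically enforced by displaying only a masked hint alongside the credential name. A minimal sketch of such a masking helper (not Buildorado's implementation):

```python
def mask_api_key(key: str) -> str:
    """Show only the last four characters of a stored key for display."""
    if len(key) <= 4:
        return "*" * len(key)
    return "*" * (len(key) - 4) + key[-4:]

# Usage: the UI would display the credential name plus a masked hint
print(mask_api_key("sk-abc123XYZ"))  # -> ********3XYZ
```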

Using Credentials in AI Nodes

When configuring an AI node, the Credential dropdown lists all saved credentials that match the selected provider. If you select OpenAI as the provider, only OpenAI credentials appear. You can add a new credential directly from the node configuration panel without navigating to Settings.

Template Variables in AI Nodes

Most AI node text fields -- prompts, system messages, input text -- support template variables. These let you inject dynamic data from earlier nodes in the workflow.

For example, you can reference a form field value in an Agent's user prompt to process the user's submission dynamically. Template variables are inserted using the variable picker in the text field, which shows all available upstream outputs.
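
Conceptually, template substitution works like the sketch below. The `{{node.field}}` syntax is an assumption for illustration; in the builder you insert variables with the picker rather than typing them.

```python
import re

# Sketch of template-variable substitution (syntax is illustrative).
def render_template(template: str, context: dict) -> str:
    def replace(match):
        node_id, field = match.group(1), match.group(2)
        return str(context.get(node_id, {}).get(field, ""))
    return re.sub(r"\{\{(\w+)\.(\w+)\}\}", replace, template)

prompt = "Summarize this feedback: {{form.message}} (from {{form.email}})"
context = {"form": {"message": "Great product!", "email": "jo@example.com"}}
rendered = render_template(prompt, context)
# -> "Summarize this feedback: Great product! (from jo@example.com)"
```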

Cost Considerations

Because AI nodes call third-party APIs using your own keys, all usage costs are billed directly by the provider -- not by Buildorado. Keep the following in mind:

  • Token-based pricing: Text models (GPT-4.1, Claude, etc.) charge per input and output token. Longer prompts and responses cost more.
  • Per-call pricing: Image generation, speech-to-text, and text-to-speech typically charge per API call or per second of audio.
  • Rate limits: Each provider enforces rate limits on your API key. High-traffic workflows may need higher-tier API plans.
  • Model selection matters: Using GPT-4.1 Nano or Claude Haiku instead of full-size models can reduce costs by 10-50x for simpler tasks.
  • Max tokens: Setting a reasonable max tokens limit prevents unexpectedly long (and expensive) responses.

Buildorado does not add any markup or surcharge on top of provider pricing. The API calls are made directly to the provider using your key.
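
A back-of-the-envelope estimate helps before enabling a high-volume workflow. The rates below are illustrative placeholders, not real provider prices; check your provider's pricing page.

```python
# Rough token cost estimate. Rates are ILLUSTRATIVE placeholders only.
def estimate_cost(input_tokens, output_tokens, in_rate, out_rate):
    """Rates are USD per 1M tokens."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# 1,000 submissions x (500 input + 300 output tokens) at $2 / $8 per 1M tokens
cost = estimate_cost(1000 * 500, 1000 * 300, in_rate=2.0, out_rate=8.0)
print(f"${cost:.2f}")  # -> $3.40
```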

Provider and Model Reference

OpenAI

OpenAI is the most broadly supported provider across AI node types. Models available:

| Model | Node Types | Notes |
| --- | --- | --- |
| GPT-4.1 | Agent, Vision, OCR | Flagship model, best quality |
| GPT-4.1 Mini | Agent, Vision, OCR | Good balance of quality and cost |
| GPT-4.1 Nano | Agent | Fastest, lowest cost |
| GPT-4o | Vision, OCR | Previous generation, still capable |
| o3 | Agent | Reasoning model |
| o3 Mini | Agent | Smaller reasoning model |
| o4 Mini | Agent | Latest small reasoning model |
| DALL-E 3 | Image Generation | Best image quality, one image per call |
| DALL-E 2 | Image Generation, Image Edit | Supports editing and masks |
| Whisper v3 (whisper-1) | Speech to Text | Industry-standard transcription |
| text-embedding-3-small | Embedding | 1536 dimensions, cost-effective |
| text-embedding-3-large | Embedding | 3072 dimensions, highest quality |
| text-embedding-ada-002 | Embedding | Legacy model |
| TTS-1 | Text to Speech | Standard quality voice |
| TTS-1 HD | Text to Speech | High-definition voice |
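
Embedding outputs are typically compared with cosine similarity to rank documents against a query. A self-contained sketch using toy 3-dimensional vectors in place of the real 1536/3072-dimensional outputs:

```python
import math

# Semantic similarity over embedding vectors (toy dimensions for illustration).
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

query = [0.1, 0.9, 0.2]
docs = {
    "refund policy": [0.1, 0.8, 0.3],
    "office hours": [0.9, 0.1, 0.1],
}
best = max(docs, key=lambda name: cosine_similarity(query, docs[name]))
print(best)  # -> refund policy
```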

Anthropic

| Model | Node Types | Notes |
| --- | --- | --- |
| Claude Opus 4.6 | Agent | Most capable Anthropic model |
| Claude Sonnet 4.6 | Agent, Vision, OCR | Great balance of speed and quality |
| Claude Sonnet 4.5 | Agent, Vision, OCR | Previous generation |
| Claude Haiku 4.5 | Agent, Vision, OCR | Fastest, lowest cost |

Stability AI

| Model | Node Types | Notes |
| --- | --- | --- |
| SD3.5 Large Turbo | Image Generation | Fast, high quality |
| SD3.5 Large | Image Generation | Highest quality |
| SD3.5 Medium | Image Generation | Balanced |
| Stable Image Core | Image Generation, Image Edit | Cost-effective, supports editing |

Deepgram

| Model | Node Types | Notes |
| --- | --- | --- |
| Nova-3 | Speech to Text | Latest, most accurate |
| Nova-3 Medical | Speech to Text | Optimized for medical terminology |
| Nova-2 | Speech to Text | Previous generation |
| Nova-2 General | Speech to Text | General-purpose |

ElevenLabs

| Model | Node Types | Notes |
| --- | --- | --- |
| Eleven v3 | Text to Speech | Latest, highest quality |
| Multilingual v2 | Text to Speech | Multi-language support |
| Turbo v2.5 | Text to Speech | Low-latency, fast generation |

Other Providers (Agent Only)

The following providers are available for Agent nodes. They accept free-text model IDs, allowing access to any model the provider offers:

  • Groq -- Ultra-fast inference for Llama and Mixtral models
  • Together AI -- Open-source model hosting (Llama, DeepSeek, Qwen, etc.)
  • Fireworks AI -- High-speed inference for open-source models
  • DeepSeek -- Specialized reasoning and coding models (V3, R1)
  • xAI -- Grok models
  • OpenRouter -- Gateway to many AI providers with a single API key
  • Mistral AI -- Mistral, Mixtral, and Codestral models
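
Most of these providers expose OpenAI-compatible chat completion endpoints, which is why a free-text model ID is enough to address any of their models. A sketch of the shared request shape (the base URL and model ID below are illustrative placeholders; consult each provider's docs for real values):

```python
# One OpenAI-compatible request shape covers many providers; only the base
# URL, API key, and model ID change. Values below are illustrative.
def build_chat_request(base_url, model, prompt, max_tokens=500):
    return {
        "url": f"{base_url}/chat/completions",
        "payload": {
            "model": model,  # free-text model ID, e.g. a Llama or Mixtral variant
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_tokens,
        },
    }

req = build_chat_request(
    "https://api.example-provider.com/v1", "some-open-model", "Hello!"
)
```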

Error Handling

AI nodes can fail for several reasons: invalid API key, rate limiting, network timeouts, content policy violations, or provider outages. Buildorado handles these gracefully:

  • Failed AI nodes report the error in the workflow execution log.
  • You can connect an Error Handler node downstream to catch AI failures and route to a fallback path (e.g., send a notification, retry with a different model, or skip the AI step).
  • Provider error messages are sanitized before logging to prevent accidental exposure of API keys in error output.
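
A fallback path for rate limits usually amounts to retrying with exponential backoff before giving up. A minimal sketch of that pattern (not Buildorado's Error Handler internals), exercised here with a fake call that fails twice:

```python
import time

# Retry-with-backoff sketch for rate-limited AI calls.
def call_with_retries(call, attempts=3, base_delay=1.0):
    for attempt in range(attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the error to the workflow
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...

# Usage: a fake provider call that fails twice, then succeeds
state = {"calls": 0}
def flaky():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError("429 rate limited")
    return "ok"

result = call_with_retries(flaky, base_delay=0.01)
print(result)  # -> ok
```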

Best Practices

  • Start with smaller models for prototyping (GPT-4.1 Nano, Claude Haiku), then upgrade to larger models once your prompts are finalized.
  • Set max tokens on every agent node to avoid runaway costs.
  • Use structured output (JSON mode) when downstream nodes need to parse the AI response programmatically.
  • Test with preview mode before publishing. Preview executes the full workflow including AI nodes so you can verify prompts and outputs.
  • Monitor your provider dashboard for usage spikes, especially if your workflow handles high submission volumes.
  • Use template variables to make prompts dynamic. Hardcoded prompts produce the same output for every submission.
  • Add error handlers downstream of AI nodes in production workflows to handle rate limits and outages gracefully.
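
The structured-output practice above pairs naturally with defensive parsing downstream. A sketch of validating a JSON-mode response before later nodes consume it (the key names are hypothetical):

```python
import json

# Parse a JSON-mode AI response safely before downstream nodes use it.
def parse_ai_json(raw, required_keys=("sentiment", "summary")):
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return None  # route to an error handler / fallback path
    if not all(key in data for key in required_keys):
        return None  # model omitted a required field
    return data

raw_response = '{"sentiment": "positive", "summary": "Customer loves it."}'
parsed = parse_ai_json(raw_response)
print(parsed["sentiment"])  # -> positive
```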
