AI Agent
Build autonomous AI agents with tool use, memory, and multi-turn reasoning to process form submissions intelligently.
The AI Agent node is the most powerful AI node in Buildorado. It can reason over multiple steps, call tools, maintain conversation memory, and produce structured output. It is ideal for complex tasks like lead qualification, document analysis, multi-step data extraction, and dynamic content generation.
Agents work by receiving a prompt (typically incorporating form submission data via template variables), reasoning about the task, optionally calling tools to gather information or perform calculations, and returning a final response. The agent autonomously decides when to use tools and when it has enough information to respond.
Supported Providers and Models
The Agent node supports the broadest range of providers of any AI node:
| Provider | Models | Notes |
|---|---|---|
| OpenAI | GPT-4.1, GPT-4.1 Mini, GPT-4.1 Nano, o3, o3 Mini, o4 Mini | GPT-4.1 recommended for complex tasks |
| Anthropic | Claude Opus 4.6, Claude Sonnet 4.6, Claude Sonnet 4.5, Claude Haiku 4.5 | Opus for highest quality, Haiku for speed |
| Groq | Llama 3.3 70B, Llama 3.1 8B Instant, Mixtral 8x7B, Gemma 2 9B | Ultra-fast inference. Free-text model input. |
| Together AI | Llama 3.3 70B Turbo, Llama 3.1 8B Turbo, DeepSeek V3, Qwen 2.5 72B Turbo | Cost-effective open-source hosting. Free-text model input. |
| Fireworks AI | Llama 3.3 70B, Mixtral 8x22B, Qwen 2.5 72B | High-speed inference. Free-text model input. |
| DeepSeek | DeepSeek V3, DeepSeek R1 | Strong at structured reasoning. Free-text model input. |
| xAI | Grok 2, Grok 2 Mini | General-purpose. Free-text model input. |
| OpenRouter | Any model ID | Gateway to many providers. Free-text model input. |
| Mistral AI | Mistral Large, Mistral Small, Codestral, Mixtral 8x22B | European-hosted option. Free-text model input. |
Providers marked "Free-text model input" accept any model ID string, so you can use models beyond the suggested defaults.
Configuration
Open the Agent node's configuration panel by clicking the node on the canvas. The following settings are available:
Provider
Select the AI provider from the dropdown. Your choice determines which models are available in the Model selector. Changing the provider resets the model and credential selection.
Model
Choose the specific model to use. For OpenAI and Anthropic, a dropdown lists the available models. For other providers (Groq, Together AI, Fireworks AI, DeepSeek, xAI, OpenRouter, Mistral AI), a free-text input lets you type any model ID, with suggestions shown from common models.
Larger models produce higher-quality output but cost more and run slower. For most workflows, start with a mid-tier model (GPT-4.1 Mini or Claude Sonnet 4.6) and upgrade only if the output quality is insufficient.
Credential
Select a saved API key for the chosen provider. Only credentials matching the selected provider appear in the dropdown. If you have not saved a credential yet, click Add Credential to create one without leaving the configuration panel. See Credential Management for details on how keys are stored.
System Prompt
The system prompt provides persistent instructions that guide the agent's behavior across the entire conversation. Use it to define the agent's role, output format, tone, and constraints.
Example system prompt:
You are a lead qualification assistant for a B2B SaaS company.
Analyze the submitted form data and classify the lead as
"hot", "warm", or "cold" based on company size, budget, and
timeline. Provide a brief justification for your classification.
Always respond in valid JSON format.

The system prompt supports template variables, so you can inject dynamic context from upstream nodes.
User Prompt
The user prompt is the main message sent to the agent. This is where you typically include the form submission data that the agent should process.
Example user prompt using template variables:
Please analyze this lead submission:
Company: [Company Name variable]
Budget: [Budget variable]
Timeline: [Timeline variable]
Message: [Message variable]
Classify this lead and provide a recommendation.

Use the variable picker in the text field to insert references to form fields and outputs from upstream nodes.
Temperature
Controls the randomness of the agent's responses. The range depends on the provider:
- Anthropic: 0.0 to 1.0
- All other providers: 0.0 to 2.0
The default is 0.7.
| Value | Behavior | Best For |
|---|---|---|
| 0.0 | Deterministic, consistent output | Classification, data extraction, structured output |
| 0.3 - 0.5 | Balanced | General-purpose analysis, summaries |
| 0.7 - 1.0 | Creative, varied output | Content generation, brainstorming |
For most workflow automation tasks, a temperature between 0.0 and 0.3 is recommended to ensure consistent, reliable output.
Max Tokens
Sets the maximum length of the agent's response in tokens (roughly 4 characters per token). The default is 4096. The maximum allowed value is 128000. This prevents unexpectedly long responses and controls costs. If the agent's response is cut off, increase this value.
Common settings:
- 256 -- Short classifications or labels
- 512 -- Brief summaries or single-paragraph responses
- 1024 -- Detailed analysis or multi-paragraph responses
- 2048+ -- Long-form content generation
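The "roughly 4 characters per token" rule of thumb can be turned into a quick budget check. This is only an approximation; actual token counts vary by model and tokenizer:

```javascript
// Rough token estimate using the ~4-characters-per-token heuristic.
// Real tokenizers differ by model; use this only for ballpark budgeting.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}
```

If the estimate for your prompt plus the expected response approaches the model's context window, lower Max Tokens or shorten the prompt.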
Max Iterations
When the agent uses tools, it may need multiple back-and-forth cycles: call a tool, read the result, decide the next step, call another tool, and so on. Max Iterations caps the number of these cycles to prevent infinite loops or runaway costs.
- Default: 10 -- Sufficient for most tool-using tasks.
- 1 -- Effectively disables multi-turn tool use (the agent gets one tool call, then must respond).
- Range: 1 to 50.
If the agent has not produced a final response after reaching the max iterations, it returns whatever partial result it has.
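The cycle that Max Iterations bounds can be sketched as follows. This is an illustrative model of the loop, not Buildorado's actual implementation; `callModel` and `runTool` are hypothetical stand-ins for the provider call and tool execution, injected so the sketch stays self-contained:

```javascript
// Illustrative agent loop: each iteration is one model call, optionally
// followed by tool executions whose results feed the next iteration.
async function runAgent({ messages, callModel, runTool, maxIterations = 10 }) {
  for (let i = 1; i <= maxIterations; i++) {
    const reply = await callModel(messages);
    if (!reply.toolCalls || reply.toolCalls.length === 0) {
      // The model answered directly; this cycle is the last one.
      return { content: reply.content, iterations: i, maxIterationsReached: false };
    }
    // Record the tool calls and their results, then loop for another cycle.
    messages.push({ role: "assistant", toolCalls: reply.toolCalls });
    for (const call of reply.toolCalls) {
      const result = await runTool(call);
      messages.push({ role: "tool", toolCallId: call.id, content: result });
    }
  }
  // Iteration budget exhausted: return whatever partial state exists.
  return { content: null, iterations: maxIterations, maxIterationsReached: true };
}
```

Each pass through the loop is a separate provider API call, which is why lowering Max Iterations directly reduces both latency and cost.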
Memory
When enabled, the agent maintains conversation memory across executions. This is useful for workflows where the same agent processes multiple related submissions and needs to remember prior interactions.
Memory settings:
- Session Key -- A template variable (e.g., a user ID or session ID) that determines which memory is loaded. Each unique session key gets its own conversation history.
- Window (messages) -- The number of recent messages to load. Default: 20. Range: 1 to 100.
- Expiry (days) -- How many days to keep memory before it expires. Default: 30. Range: 1 to 365.
Memory constraints:
- 40-message cap -- During execution, the conversation history (including memory and current messages) is trimmed to the most recent 40 messages to stay within context window limits.
- 100KB per message -- Individual messages exceeding 100KB are truncated to prevent context window overflow.
- Orphan tool messages at the window boundary are automatically dropped to maintain a valid conversation structure.
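A sketch of how this kind of trimming can work, with assumed behavior inferred from the constraints above rather than the exact implementation (byte sizes are approximated by character counts here):

```javascript
const MESSAGE_CAP = 40;                // most recent messages kept per execution
const MAX_MESSAGE_CHARS = 100 * 1024;  // ~100KB, approximating bytes as chars

function trimHistory(messages) {
  // Keep only the most recent MESSAGE_CAP messages.
  let window = messages.slice(-MESSAGE_CAP);
  // Drop orphan tool results at the window boundary: a "tool" message whose
  // originating assistant tool call was trimmed away would be invalid.
  while (window.length > 0 && window[0].role === "tool") {
    window = window.slice(1);
  }
  // Truncate oversized message bodies.
  return window.map((m) =>
    typeof m.content === "string" && m.content.length > MAX_MESSAGE_CHARS
      ? { ...m, content: m.content.slice(0, MAX_MESSAGE_CHARS) }
      : m
  );
}
```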
For most single-submission workflows, memory is not needed. Enable it when the agent needs continuity across multiple workflow executions.
Response Format
Choose between two output formats:
- Text -- The agent returns a free-text response. Use this for summaries, descriptions, and human-readable output.
- Structured JSON -- The agent returns valid JSON matching a schema you define. When this mode is selected, you can define output fields with a name, type (string, number, boolean, object, array), and description. The agent is constrained to return JSON matching this schema.
When structured JSON is enabled and output fields are defined, each field becomes available as a separate template variable in downstream nodes (e.g., fieldName).
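For example, a schema with two hypothetical fields, classification (string) and reason (string), would constrain the agent to return something like:

```json
{
  "classification": "hot",
  "reason": "Enterprise company size with a confirmed budget and a 30-day timeline"
}
```

Downstream nodes can then reference classification and reason individually, in addition to the full content output.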
Tools
Tools extend the agent's capabilities beyond text generation. The agent decides when to call a tool based on the task requirements.
Docked Tools (Integration Tools)
Drag integration tools from the sidebar AI tab onto the agent node to give it access to external services. Each integration group gets a single shared credential. You can add or remove individual actions within each integration.
Built-in Tools
- Calculator -- Performs arithmetic calculations. The agent can evaluate mathematical expressions when processing numeric form data.
- Date/Time -- Returns current date and time information. Useful for time-sensitive classifications or deadline calculations.
Custom Tools
You can define custom HTTP or Code tools inline:
- HTTP Tool -- Makes HTTP requests to external APIs. Configure the method (GET, POST, PUT, PATCH, DELETE), URL, and optional timeout (5-300 seconds).
- Code Tool -- Runs custom JavaScript code. Write inline code that receives input from the agent and returns a result.
Each custom tool requires a name (snake_case), description (shown to the AI), and parameter definitions.
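As an illustration, a Code Tool named score_lead for a lead qualification workflow might look like the following. The input and return conventions shown are hypothetical; the tool editor defines the exact shape your code receives:

```javascript
// Hypothetical Code Tool "score_lead": the agent supplies the parameters
// defined for the tool, and the returned value is fed back to the agent.
function scoreLead({ companySize, budget, timelineDays }) {
  let score = 0;
  if (companySize >= 50) score += 2;   // larger companies score higher
  if (budget >= 10000) score += 2;     // meaningful budget
  if (timelineDays <= 30) score += 1;  // urgent timeline
  return { score, tier: score >= 4 ? "hot" : score >= 2 ? "warm" : "cold" };
}
```

Because the description is shown to the AI, write it as you would a prompt: state what the tool does and when the agent should call it.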
Output
The Agent node produces the following output, available to downstream nodes via template variables:
| Field | Type | Description |
|---|---|---|
| content | string | The agent's final text or JSON response |
| steps | array | Execution history with tool calls and results per step |
| iterations | number | Total number of reasoning iterations |
| maxIterationsReached | boolean | Whether the max iterations limit was hit |
| usage | object | Token usage: inputTokens, outputTokens, totalTokens |
| cost | object | Cost information: creditsUsed, usd |
| model | string | The model that was used |
| provider | string | The provider that was used |
When Structured JSON response format is used with output fields, the parsed JSON fields are also spread into the output as top-level keys (without overwriting the reserved keys listed above).
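An illustrative output payload is shown below. All values are examples, the shape of the steps entries is simplified, and the trailing classification and reason keys assume a Structured JSON schema that defines those fields:

```json
{
  "content": "{\"classification\": \"hot\", \"reason\": \"Large budget and a 2-week timeline\"}",
  "steps": [
    { "toolCalls": [{ "name": "calculator", "input": "50000 / 12" }], "toolResults": ["4166.67"] }
  ],
  "iterations": 2,
  "maxIterationsReached": false,
  "usage": { "inputTokens": 412, "outputTokens": 96, "totalTokens": 508 },
  "cost": { "creditsUsed": 3, "usd": 0.0021 },
  "model": "gpt-4.1-mini",
  "provider": "openai",
  "classification": "hot",
  "reason": "Large budget and a 2-week timeline"
}
```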
Examples
Lead Qualification
Configure the agent to classify form submissions:
- System prompt: "You are a lead scoring system. Classify leads as hot, warm, or cold. Return JSON."
- User prompt: Include company name, size, budget, and timeline from form fields.
- Temperature: 0.0 (consistent classification)
- Response format: Structured JSON
- Connect the output to a Branch node that routes based on the classification.
Customer Support Triage
Automatically categorize and prioritize support requests:
- System prompt: "Categorize support tickets by type (bug, feature request, question, billing) and priority (P1-P4)."
- User prompt: Include the customer's message and account details.
- Tools: Enable Calculator for SLA deadline calculations, Date/Time for urgency assessment.
- Route the output to different Slack channels based on category and priority.
Content Generation
Generate personalized follow-up emails:
- System prompt: "Write a professional follow-up email based on the form submission. Match the tone to the industry."
- User prompt: Include all form field values.
- Temperature: 0.5 (some creative variation)
- Max tokens: 1024
- Feed the output into a Send Email action node.
Document Summarization
Summarize uploaded documents:
- System prompt: "Summarize the following text in 3-5 bullet points. Focus on key findings and action items."
- User prompt: Include the OCR output from a preceding OCR node or text extracted from a file upload.
- Temperature: 0.1
- Send the summary to Google Sheets or Slack.
Best Practices
- Be specific in system prompts. Vague instructions like "analyze this data" produce inconsistent results. Specify the exact output format, fields, and criteria you expect.
- Use structured JSON output when the response feeds into conditional logic or data mapping. Free-text responses are harder to parse programmatically.
- Set temperature to 0.0 for classification, extraction, and routing tasks where consistency matters more than creativity.
- Limit max iterations to the minimum needed. Each iteration is an additional API call that costs tokens and adds latency.
- Test prompts in preview mode before publishing. Small changes to system prompts can significantly affect output quality.
- Add an error handler downstream to catch API failures, rate limits, or content policy violations.
- Use the right model for the job. GPT-4.1 Nano and Claude Haiku handle simple classification and extraction well at a fraction of the cost of larger models. Reserve GPT-4.1 and Claude Opus for tasks that genuinely need advanced reasoning.
Limitations
- Agent execution is subject to a 120-second timeout per AI provider call. If the agent has not completed within this window, the node fails and can be caught by an error handler.
- The agent cannot access the internet, browse websites, or call external APIs directly (unless you configure a custom HTTP tool). Use the HTTP Request node for external API calls and pass the results to the agent via template variables.
- Tool definitions must be valid JSON Schema. Malformed tool definitions cause the agent to ignore the tool or produce errors.
- Memory is per-agent-node, not shared across different agent nodes in the same workflow.