Text Embeddings

Generate vector embeddings for semantic search, similarity matching, and clustering using OpenAI embedding models.

The Embedding node converts text into a vector of numbers (an embedding) that captures the semantic meaning of the text. These vectors enable semantic search, similarity comparison, clustering, and classification based on meaning rather than exact keyword matching. Embeddings are the foundation of modern search and recommendation systems.

What Are Embeddings?

An embedding is a list of floating-point numbers (a vector) that represents a piece of text in a high-dimensional space. Texts with similar meanings produce vectors that are close together in this space, even if they use completely different words.

For example, the phrases "How do I reset my password?" and "I forgot my login credentials" would produce similar embedding vectors because they express similar intent, despite sharing no words.
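"Close together in this space" can be made concrete with cosine similarity. A minimal sketch using made-up four-dimensional vectors (real embeddings have 1536 or more dimensions); imagine the first two came from embedding the password-reset phrases above:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity: dot product divided by the product of magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors for illustration only -- not real model output.
reset_password = [0.12, 0.85, 0.33, 0.05]
forgot_login   = [0.10, 0.80, 0.40, 0.07]
weather_report = [0.90, 0.05, 0.02, 0.60]

print(cosine_similarity(reset_password, forgot_login))   # high: close in space
print(cosine_similarity(reset_password, weather_report)) # low: far apart
```

The two phrasings of the same intent score near 1.0, while the unrelated text scores far lower.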

Common applications:

  • Semantic search -- Find documents that match a query by meaning, not just keywords.
  • Similarity matching -- Compare form submissions to find duplicates or related entries.
  • Clustering -- Group similar responses together automatically.
  • Classification -- Categorize text by comparing its embedding to known category embeddings.
  • Recommendation -- Suggest similar content based on embedding proximity.

Supported Providers and Models

| Provider | Model | Default Dimensions | Notes |
| --- | --- | --- | --- |
| OpenAI | text-embedding-3-small | 1536 | Cost-effective, fast |
| OpenAI | text-embedding-3-large | 3072 | Highest quality, better for fine distinctions |
| OpenAI | text-embedding-ada-002 | 1536 | Legacy model |

Model Comparison

| Feature | text-embedding-3-small | text-embedding-3-large | text-embedding-ada-002 |
| --- | --- | --- | --- |
| Default dimensions | 1536 | 3072 | 1536 |
| Custom dimensions | Yes (lower) | Yes (lower) | No |
| Quality | Good | Best | Good |
| Speed | Faster | Slightly slower | Moderate |
| Cost | Lower | Higher | Moderate |
| Best for | General-purpose, high-volume | Precision-critical, nuanced similarity | Legacy compatibility |

Choose text-embedding-3-small for most use cases. It provides excellent quality at a lower cost and works well for search, similarity, and classification tasks.

Choose text-embedding-3-large when you need the highest possible accuracy for fine-grained distinctions, such as differentiating between very similar documents or building precision-critical search systems.

Choose text-embedding-ada-002 only for backward compatibility with existing vector databases using this model.

Configuration

Provider

Currently only OpenAI is available for embeddings.

Model

Select the embedding model:

  • text-embedding-3-small -- 1536-dimensional vectors by default. Recommended starting point.
  • text-embedding-3-large -- 3072-dimensional vectors by default. Use when accuracy is paramount.
  • text-embedding-ada-002 -- Legacy model. 1536-dimensional vectors.

Credential

Select a saved OpenAI API key. See Credential Management for setup instructions.

Text Input

The text to convert into an embedding vector. This field supports template variables, so you can embed form submission data, outputs from other nodes, or any dynamic text.

Example inputs:

[Customer feedback textarea variable]
[Product description variable] - [Category variable]

Input guidelines:

  • Shorter, focused text generally produces better embeddings than very long text.
  • If you need to embed a long document, consider splitting it into paragraphs and embedding each separately.
  • Remove irrelevant formatting, HTML tags, and boilerplate text before embedding.
  • The maximum input length depends on the model's context window. Both v3 models support up to 8191 tokens (roughly 6000-7000 words).
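The paragraph-splitting suggestion above can be sketched in a few lines. This is an illustrative helper, not part of the node; it packs whole paragraphs into chunks by character count, whereas a production version would count tokens:

```python
def chunk_text(text, max_chars=1000):
    """Split a long document into paragraph-based chunks for separate embedding.

    Paragraphs are kept whole; consecutive paragraphs are packed into one chunk
    until adding another would exceed max_chars.
    """
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = "First paragraph.\n\nSecond paragraph.\n\nThird paragraph."
print(chunk_text(doc, max_chars=35))
```

Each resulting chunk can then be passed through the Embedding node in a loop and stored as its own vector.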

Dimensions

Optionally override the default vector dimensions. Only the v3 models support this; the default is 1536 for text-embedding-3-small and 3072 for text-embedding-3-large. You can reduce the dimensions below the model's default to save storage space and speed up similarity calculations, at the cost of some accuracy.

| Setting | text-embedding-3-small | text-embedding-3-large |
| --- | --- | --- |
| Default | 1536 | 3072 |
| Minimum useful | ~256 | ~256 |
| Trade-off | Lower dimensions = faster search, less accurate | Same |

When to reduce dimensions:

  • You are storing thousands of embeddings and need to minimize storage costs.
  • Your similarity search needs to be extremely fast and slight accuracy loss is acceptable.
  • Your downstream vector database has dimension limits.

When to keep defaults:

  • You need the highest accuracy.
  • Storage and search speed are not constraints.
  • You are comparing embeddings within the same workflow rather than storing them.

If not specified, the model's default dimensions are used.
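For the v3 models, dimension reduction is documented to work by keeping the leading components of the vector and renormalizing to unit length. A client-side sketch of the same operation (`shorten_embedding` is an illustrative name, not part of the node), useful if stored vectors ever need to match a database with smaller dimension limits:

```python
import math

def shorten_embedding(vector, target_dims):
    """Reduce an embedding's dimensionality by truncating and renormalizing.

    Keeps the first target_dims components, then rescales to unit length
    so cosine similarity remains meaningful.
    """
    truncated = vector[:target_dims]
    norm = math.sqrt(sum(x * x for x in truncated))
    return [x / norm for x in truncated]

v = [0.5, 0.5, 0.5, 0.5]           # toy 4-dim "embedding"
short = shorten_embedding(v, 2)    # two components, unit length
print(short)
```

When possible, prefer setting the Dimensions field on the node itself so the model produces the shorter vector directly.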

Output

The Embedding node produces:

| Field | Type | Description |
| --- | --- | --- |
| embeddings | array | Array of embedding vectors (each vector is an array of floating-point numbers) |
| dimensions | number | The number of dimensions in each vector |
| totalTokens | number | The number of tokens processed |
| model | string | The model that was used |
| provider | string | The provider that was used |

The embedding vectors are available to downstream nodes via template variables.

Use Cases

Semantic Search for Support Tickets

Match incoming support requests to a knowledge base:

  • A customer submits a support form describing their issue.
  • The Embedding node converts the issue description into a vector.
  • An HTTP Request node sends the vector to a vector database (Pinecone, Weaviate, Qdrant) to find similar past tickets or knowledge base articles.
  • The top matches are included in a response email or passed to an Agent node for answer generation.

Duplicate Detection

Find duplicate or near-duplicate form submissions:

  • Each new submission's text fields are embedded.
  • The embedding is compared against previously stored embeddings via a vector database.
  • If a highly similar submission exists (cosine similarity above a threshold), the workflow flags it as a potential duplicate.
  • Duplicates are routed to a review queue instead of normal processing.
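The threshold check in this flow reduces to a plain comparison. A sketch with toy three-dimensional vectors and an assumed 0.9 threshold (calibrate for your own data):

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def is_duplicate(new_vec, stored_vecs, threshold=0.9):
    """Flag a submission as a potential duplicate if any previously stored
    embedding is more similar than the threshold."""
    return any(cosine_similarity(new_vec, v) >= threshold for v in stored_vecs)

# Toy stored embeddings from earlier submissions.
stored = [[0.9, 0.1, 0.2], [0.1, 0.9, 0.3]]

print(is_duplicate([0.88, 0.12, 0.21], stored))  # near the first stored vector
print(is_duplicate([0.30, 0.30, 0.90], stored))  # not close to either
```

In practice a vector database performs this nearest-neighbor search server-side; the workflow only compares the returned top score against the threshold.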

Content Recommendation

Suggest related content based on semantic similarity:

  • A user submits a topic or question through a form.
  • The Embedding node converts the topic into a vector.
  • The vector is compared against embeddings of existing content (articles, products, courses).
  • The most similar items are returned as recommendations.

Automated Classification

Categorize text without explicit rules:

  • Pre-compute embeddings for a set of category descriptions (e.g., "technical support request", "billing inquiry", "feature request").
  • When a new submission arrives, embed it and compare its vector to each category embedding.
  • The closest category becomes the classification.
  • This approach adapts to new categories without rewriting conditional logic.
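The steps above can be sketched as a nearest-category lookup. Toy vectors stand in for the pre-computed category embeddings; in practice each category description would be embedded once and cached:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def classify(submission_vec, category_vecs):
    """Return the category whose embedding is closest to the submission's."""
    return max(category_vecs,
               key=lambda name: cosine_similarity(submission_vec, category_vecs[name]))

# Toy pre-computed embeddings for the category descriptions.
categories = {
    "technical support request": [0.9, 0.1, 0.1],
    "billing inquiry":           [0.1, 0.9, 0.1],
    "feature request":           [0.1, 0.1, 0.9],
}

print(classify([0.8, 0.2, 0.15], categories))
```

Adding a new category means embedding one more description, with no change to the comparison logic.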

Survey Response Clustering

Group similar survey responses:

  • Each free-text survey response is embedded.
  • Embeddings are stored in a vector database.
  • Periodic batch analysis clusters similar responses to identify common themes.
  • Theme summaries are generated by an Agent node processing each cluster.

Feedback Deduplication

Consolidate similar feedback entries:

  • Customer feedback from multiple channels (forms, emails, chat) is embedded.
  • Similar feedback is grouped by embedding proximity.
  • An Agent node summarizes each group into a single consolidated feedback item.
  • The consolidated feedback is pushed to a product management tool.

Working with Vector Databases

Embeddings are most useful when stored in a vector database that supports similarity search. Common vector databases include:

| Database | Integration Method | Notes |
| --- | --- | --- |
| Pinecone | HTTP Request node | Managed vector database, easy to set up |
| Weaviate | HTTP Request node | Open-source, self-hosted or cloud |
| Qdrant | HTTP Request node | Open-source, high performance |
| Supabase pgvector | HTTP Request node | PostgreSQL extension, familiar SQL interface |
| ChromaDB | HTTP Request node | Lightweight, Python-focused |

To store and search embeddings, use the HTTP Request node to call the vector database's API:

  1. Store: After the Embedding node produces a vector, use an HTTP Request node to upsert the vector into the database along with metadata (submission ID, timestamp, category).
  2. Search: To find similar items, embed the search query and use an HTTP Request node to query the vector database for the nearest neighbors.
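The request body sent in step 1 might look like the following sketch. The field names (`id`, `values`, `metadata`) follow Pinecone-style conventions as an assumption; check your database's API reference, since each product defines its own schema:

```python
import json

def build_upsert_payload(vector, submission_id, metadata):
    """Build a generic upsert request body for a vector database.

    Field names are Pinecone-style assumptions; adapt to your database's
    actual schema.
    """
    return json.dumps({
        "vectors": [{
            "id": submission_id,
            "values": vector,
            "metadata": metadata,
        }]
    })

body = build_upsert_payload(
    vector=[0.1, 0.2, 0.3],
    submission_id="sub_42",
    metadata={"timestamp": "2024-01-01T00:00:00Z", "category": "support"},
)
print(body)
```

In the workflow, this JSON becomes the body of the HTTP Request node, with the vector supplied via a template variable from the Embedding node's output.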

Understanding Similarity Scores

When comparing embeddings, the most common metric is cosine similarity, which produces a score between -1 and 1:

| Score Range | Interpretation |
| --- | --- |
| 0.9 - 1.0 | Nearly identical meaning |
| 0.7 - 0.9 | Highly similar, same topic |
| 0.5 - 0.7 | Moderately related |
| 0.3 - 0.5 | Loosely related |
| Below 0.3 | Unrelated |

These thresholds are approximate and should be calibrated for your specific use case.

Best Practices

  • Use text-embedding-3-small unless you have a specific need for higher precision. It provides excellent quality at significantly lower cost.
  • Clean input text before embedding. Remove HTML tags, excessive whitespace, and boilerplate content that does not contribute to meaning.
  • Keep inputs focused. A single clear sentence or paragraph embeds better than a long, rambling document. If you need to embed a long document, split it into chunks.
  • Be consistent with models. Always compare embeddings generated by the same model. Vectors from text-embedding-3-small and text-embedding-3-large are not compatible for comparison.
  • Reduce dimensions only when storage or speed is a genuine constraint. The default dimensions provide the best accuracy.
  • Store metadata alongside embeddings. When inserting vectors into a database, include the original text, submission ID, timestamp, and any relevant labels for retrieval.
  • Calibrate similarity thresholds for your use case. Run a set of known similar and dissimilar pairs through the system and adjust thresholds based on the results.

Limitations

  • Only OpenAI models are supported for embeddings. Other providers are not available through this node.
  • The node embeds a single text input per execution. For batch embedding, use a Loop node.
  • Embeddings are not human-readable. They are arrays of numbers that only become useful through mathematical comparison (cosine similarity, dot product).
  • The node does not perform similarity search itself. You need a vector database or custom comparison logic in downstream nodes.
  • Maximum input length is 8191 tokens (roughly 6000-7000 words). Longer text is truncated.
  • Embedding models are not generative. They do not produce text, only numerical vectors.
  • Execution is subject to a 60-second timeout.
