Azure AI Search vector stores are now available in n8n
TL;DR
n8n now natively supports Azure AI Search as a verified vector store, enabling you to store, retrieve, and query embeddings directly in your automation workflows. Get semantic hybrid search and reranking for better relevance—out of the box.
This integration leverages the same Azure AI Search capabilities that power Microsoft Foundry IQ, with agentic retrieval features coming soon to n8n.
Figure 1: n8n Azure AI Search RAG workflow demo
Why This Matters
Retrieval Augmented Generation (RAG) isn't just about storing vectors—it's about surfacing the right context when your AI needs it.
Azure AI Search combines vector similarity, keyword search, and semantic reranking into a single query, delivering better answer quality than vector-only approaches. These are the same core search primitives that power Foundry IQ—Microsoft's knowledge layer for agents—and we're working to bring Foundry IQ's agentic retrieval capabilities to n8n in the future.
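To make that query model concrete, here is a minimal sketch of what a combined keyword + vector + semantic-reranking query looks like when issued directly against Azure AI Search with the azure-search-documents Python SDK. The index name, vector field name, and semantic configuration name are illustrative assumptions; inside n8n the node builds the equivalent query for you.

```python
# A minimal sketch (not the n8n node's internal code): one query that combines
# BM25 keyword matching, vector similarity, and semantic reranking.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

search_client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="n8n-docs",                      # assumed index name
    credential=AzureKeyCredential("<api-key>"),
)

query_text = "How do I rotate API keys?"
query_vector = [0.0] * 1536                     # replace with a real embedding of query_text

results = search_client.search(
    search_text=query_text,                     # keyword (BM25) leg of the hybrid query
    vector_queries=[
        VectorizedQuery(
            vector=query_vector,
            k_nearest_neighbors=10,
            fields="contentVector",             # assumed vector field name
        )
    ],
    query_type="semantic",                      # turn on semantic reranking
    semantic_configuration_name="default",      # assumed semantic configuration name
    top=5,
)

for doc in results:
    # the reranker score is populated when semantic reranking ran
    print(doc["id"], doc.get("@search.reranker_score"))
```

The keyword and vector result sets are fused by the service and the top candidates are reordered by the semantic reranker, which is what the node's semantic hybrid mode exposes.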
What's New in n8n
- Three query modes: Vector, hybrid, and semantic hybrid search
- Semantic reranking: Intelligent reordering built directly into the platform
- OData filters: Pre-filter results by metadata (e.g., metadata/category eq 'technology')
- Auto-index creation: Optimized HNSW configuration applied automatically (a sketch of what such an index looks like follows this list)
- Bring your existing indexes: Connect to your current Azure AI Search indexes—no need to rebuild from scratch
- Flexible operation: Use as a node, tool, or retriever in RAG chains
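For reference, a hand-built index along these lines might look like the sketch below, again using the azure-search-documents Python SDK. The field names, embedding dimensions, and HNSW defaults shown are assumptions for illustration, not the exact settings n8n applies.

```python
# Sketch of a vector index with an HNSW profile and a filterable metadata field,
# roughly matching the layout the integration can create for you (assumed, not exact).
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.indexes import SearchIndexClient
from azure.search.documents.indexes.models import (
    ComplexField,
    HnswAlgorithmConfiguration,
    SearchField,
    SearchFieldDataType,
    SearchIndex,
    SearchableField,
    SimpleField,
    VectorSearch,
    VectorSearchProfile,
)

index_client = SearchIndexClient(
    endpoint="https://<your-service>.search.windows.net",
    credential=AzureKeyCredential("<api-key>"),
)

index = SearchIndex(
    name="n8n-docs",  # assumed index name
    fields=[
        SimpleField(name="id", type=SearchFieldDataType.String, key=True),
        SearchableField(name="content", type=SearchFieldDataType.String),
        SearchField(
            name="contentVector",
            type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
            searchable=True,
            vector_search_dimensions=1536,             # match your embedding model
            vector_search_profile_name="hnsw-profile",
        ),
        # complex metadata field, so filters like metadata/category eq 'technology' work
        ComplexField(
            name="metadata",
            fields=[
                SimpleField(
                    name="category",
                    type=SearchFieldDataType.String,
                    filterable=True,
                )
            ],
        ),
    ],
    vector_search=VectorSearch(
        algorithms=[HnswAlgorithmConfiguration(name="hnsw")],
        profiles=[
            VectorSearchProfile(
                name="hnsw-profile",
                algorithm_configuration_name="hnsw",
            )
        ],
    ),
)

index_client.create_or_update_index(index)
```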
How It Works
- Create credentials – Add your Azure AI Search endpoint and API key in n8n
- Add the Vector Store node – Select Azure AI Search Vector Store from the node library
- Configure your index – Specify an index name (automatically created with optimized settings if it doesn't exist)
- Connect your embedding model – Integrate with OpenAI, Azure OpenAI, or other supported providers
- Choose your query mode – Select vector, hybrid, or semantic hybrid based on your use case
- Apply filters (optional) – Add OData metadata filters to refine results (see the sketch after these steps)
- Integrate into your workflow – Use as a retriever, tool, or storage node
- Deploy and test – Your documents are indexed and searchable with semantic understanding
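Outside of n8n, steps 4–6 roughly correspond to embedding the question and running a metadata-filtered vector query against the index. The sketch below assumes Azure OpenAI for embeddings and reuses the illustrative index and field names from above; deployment names and environment variables are placeholders.

```python
# Sketch: embed a question with Azure OpenAI, then run a vector query
# pre-filtered by an OData expression on the metadata field.
import os

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery
from openai import AzureOpenAI

embeddings_client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

question = "What changed in the latest release?"
embedding = embeddings_client.embeddings.create(
    model="text-embedding-3-small",   # your embedding deployment name (assumed)
    input=question,
).data[0].embedding

search_client = SearchClient(
    endpoint=os.environ["AZURE_SEARCH_ENDPOINT"],
    index_name="n8n-docs",            # assumed index name
    credential=AzureKeyCredential(os.environ["AZURE_SEARCH_API_KEY"]),
)

results = search_client.search(
    search_text=None,                                  # pure vector query
    vector_queries=[
        VectorizedQuery(
            vector=embedding,
            k_nearest_neighbors=5,
            fields="contentVector",                    # assumed vector field name
        )
    ],
    filter="metadata/category eq 'technology'",        # OData pre-filter from the post
    select=["id", "content"],
)

for doc in results:
    print(doc["id"], doc["content"][:80])
```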