Introducing Lakeflow Connect Free Tier, now available on Azure Databricks
We're excited to introduce the Lakeflow Connect Free Tier on Azure Databricks, so you can easily bring your enterprise data into your lakehouse and build analytics and AI applications faster. Modern applications require reliable access to operational data, especially for training analytics and AI agents, but connecting and gathering data across silos can be challenging. With this release, you can seamlessly ingest all of your enterprise data from SaaS and database sources to unlock data intelligence for your AI agents.

Ingest millions of records per day, per workspace, for free

The Lakeflow Connect Free Tier provides 100 DBUs per day, per workspace, which allows you to ingest approximately 100 million records* from many popular data sources**, including SaaS applications and databases.

Unlock your enterprise data for free with Lakeflow Connect

This offering provides all the benefits of Lakeflow Connect, eliminating the heavy lifting so your teams can focus on unlocking data insights instead of managing infrastructure. Over the past year, Databricks has continued rolling out fully managed connectors for popular data sources. The free tier supports popular SaaS applications (Salesforce, ServiceNow, Google Analytics, Workday, Microsoft Dynamics 365) and widely used databases (SQL Server, Oracle, Teradata, PostgreSQL, MySQL, Snowflake, Redshift, Synapse, and BigQuery).

Lakeflow Connect benefits include:

- Simple UI: Avoid complex setups and architectural overhead. These fully managed connectors provide a simple UI and API to democratize data access, and automated features help simplify pipeline maintenance with minimal overhead.
- Efficient ingestion: Increase efficiency and accelerate time to value. Optimized incremental reads and writes and data transformation improve the performance and reliability of your pipelines, reduce bottlenecks, and reduce the impact on source systems for scalability.
- Unified with the Databricks Platform: Create ingestion pipelines with governance from Unity Catalog, observability from Lakehouse Monitoring, and seamless orchestration with Lakeflow Jobs for analytics, AI, and BI.

Availability

The Lakeflow Connect Free Tier is available starting today on Azure Databricks. If you are at FabCon in Atlanta, join "Accelerating Data and AI with Azure Databricks" on Thursday, March 19th, 8:00–9:00 AM in room C302 to see how these capabilities come together to accelerate performance, simplify architecture, and maximize value on Azure.

Getting Started Resources

To learn more about the Lakeflow Connect Free Tier and Lakeflow Connect, review our pricing page and documentation. Get started ingesting your data for free today: sign up with an Azure free account.

- Get started with Azure Databricks for free
- Product tour: Databricks Lakeflow Connect for Salesforce: Powering Smarter Selling with AI and Analytics
- Product tour: Effortless ServiceNow Data Ingestion with Databricks Lakeflow Connect
- Product tour: Simplify Data Ingestion with Lakeflow Connect: From Google Analytics to AI
- On-demand video: Use Lakeflow Connect for Salesforce to predict customer churn
- On-demand video: Databricks Lakeflow Connect for Workday Reports: Connect, Ingest, and Analyze Workday Data Without Complexity
- On-demand video: Data Ingestion With Lakeflow Connect

\* Your actual ingestion capacity will vary based on specific workload characteristics, record sizes, and source types.
\*\* Excludes Zerobus Ingest, Auto Loader, and other self-managed connectors. Customers will continue to incur charges for underlying infrastructure consumption from the cloud vendor.

Near–Real-Time CDC to Delta Lake for BI and ML with Lakeflow on Azure Databricks
The Challenge: Too Many Tools, Not Enough Clarity

Modern data teams on Azure often stitch together separate orchestrators, custom streaming consumers, hand-rolled transformation notebooks, and third-party connectors, each with its own monitoring UI, credential system, and failure modes. The result is observability gaps, weeks of work per new data source, disconnected lineage, and governance bolted on as an afterthought. Lakeflow, Databricks' unified data engineering solution, solves this by consolidating ingestion, transformation, and orchestration natively inside Azure Databricks, governed end-to-end by Unity Catalog.

| Component | What It Does |
| --- | --- |
| Lakeflow Connect | Point-and-click connectors for databases (using CDC), SaaS apps, files, streaming, and Zerobus for direct telemetry |
| Lakeflow Spark Declarative Pipelines | Declarative ETL with AutoCDC, data quality enforcement, and automatic incremental processing |
| Lakeflow Jobs | Managed orchestration with 99.95% uptime, a visual task DAG, and repair-and-rerun |

Architecture

Step 1: Stream Application Telemetry with Zerobus Ingest

Zerobus Ingest, part of Lakeflow Connect, lets your application push events directly to a Delta table over gRPC, with no message bus and no Structured Streaming job. It offers sub-5-second latency, up to 100 MB/sec per connection, and data that is immediately queryable in Unity Catalog.

Prerequisites

- Azure Databricks workspace with Unity Catalog and serverless compute enabled
- A service principal with write access to the target table

Setup

First, create the target table in a SQL notebook:

```sql
CREATE CATALOG IF NOT EXISTS prod;
CREATE SCHEMA IF NOT EXISTS prod.bronze;

CREATE TABLE IF NOT EXISTS prod.bronze.telemetry_events (
  event_id STRING,
  user_id STRING,
  event_type STRING,
  session_id STRING,
  ts BIGINT,
  page STRING,
  duration_ms INT
);
```

1. Go to Settings → Identity and Access → Service Principals → Add service principal.
2. Open the service principal → Secrets tab → Generate secret. Save the Client ID and secret.
3. In a SQL notebook, grant access:

```sql
GRANT USE CATALOG ON CATALOG prod TO `<client-id>`;
GRANT USE SCHEMA ON SCHEMA prod.bronze TO `<client-id>`;
GRANT MODIFY, SELECT ON TABLE prod.bronze.telemetry_events TO `<client-id>`;
```

4. Derive your Zerobus endpoint from your workspace URL: `<workspace-id>.zerobus.<region>.azuredatabricks.net`. (The workspace ID is the number in your workspace URL, e.g. adb-**1234567890**.12.azuredatabricks.net.)
5. Install the SDK: `pip install databricks-zerobus-ingest-sdk`
6. In your application, open a stream and push records:

```python
from zerobus.sdk.sync import ZerobusSdk
from zerobus.sdk.shared import RecordType, StreamConfigurationOptions, TableProperties

sdk = ZerobusSdk("<workspace-id>.zerobus.<region>.azuredatabricks.net",
                 "https://<workspace-url>")

stream = sdk.create_stream(
    "<client-id>",
    "<client-secret>",
    TableProperties("prod.bronze.telemetry_events"),
    StreamConfigurationOptions(record_type=RecordType.JSON)
)

stream.ingest_record({"event_id": "e1", "user_id": "u42",
                      "event_type": "page_view", "ts": 1700000000000})
stream.close()
```

7. Verify in Catalog → prod → bronze → telemetry_events → Sample Data.

Step 2: Ingest from On-Premises SQL Server via CDC

Lakeflow Connect reads SQL Server's transaction log incrementally: no full table scans, no custom extraction software. Connectivity to your on-premises server is over Azure ExpressRoute.
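This log-based approach can be pictured as cursor-driven batches: each sync reads only the entries recorded after the last position it ingested. A pure-Python toy of the idea (the list-based log and the function are illustrative only; Lakeflow Connect works against SQL Server's real transaction log):

```python
def read_incremental(log, cursor):
    """Return changes recorded after `cursor`, plus the new cursor position.

    `log` is an ordered list of (lsn, operation, row) tuples standing in
    for the transaction log; `cursor` is the last LSN already ingested.
    """
    changes = [entry for entry in log if entry[0] > cursor]
    new_cursor = changes[-1][0] if changes else cursor
    return changes, new_cursor

log = [
    (1, "insert", {"order_id": 1, "status": "placed"}),
    (2, "update", {"order_id": 1, "status": "shipped"}),
    (3, "insert", {"order_id": 2, "status": "placed"}),
]

first_batch, cursor = read_incremental(log, 0)        # initial sync reads everything
second_batch, cursor = read_incremental(log, cursor)  # nothing new: empty batch
```

Because the cursor advances with each batch, re-running the sync never rescans rows it has already seen, which is why there are no full table scans on the source.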
Prerequisites

- SQL Server reachable from Databricks over ExpressRoute (TCP port 1433)
- CDC enabled on the source database and tables (see setup below)
- A SQL login with CDC read permissions on the source database
- Databricks: CREATE CONNECTION privilege on the metastore; USE CATALOG and CREATE TABLE on the destination catalog

Setup

Enable CDC on SQL Server:

```sql
USE YourDatabase;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
  @source_schema = N'dbo',
  @source_name = N'orders',
  @role_name = NULL;

EXEC sys.sp_cdc_enable_table
  @source_schema = N'dbo',
  @source_name = N'customers',
  @role_name = NULL;
```

Configure the connector in Databricks:

1. Click Data Ingestion in the sidebar (or + New → Add Data).
2. Select SQL Server from the connector list.
3. Ingestion Gateway page: enter a gateway name, select a staging catalog/schema, and click Next.
4. Ingestion Pipeline page: name the pipeline, then click Create connection. Host: your on-premises IP (e.g. 10.0.1.50) · Port: 1433 · Database: YourDatabase. Enter credentials, click Create, then Create pipeline and continue.
5. Source page: expand the database tree and check dbo.orders and dbo.customers; optionally enable History tracking (SCD Type 2) per table. Set the destination table names to orders_raw and customers_raw respectively.
6. Destination page: set catalog prod and schema bronze, then click Save and continue.
7. Settings page: set a sync schedule (e.g. every 5 minutes) and click Save and run pipeline.

Step 3: Transform with Spark Declarative Pipelines

The Lakeflow Pipelines Editor is an IDE built for developing pipelines with Lakeflow Spark Declarative Pipelines (SDP), and it lets you define Bronze → Silver → Gold in SQL. SDP then handles incremental execution, schema evolution, and lineage automatically.

Prerequisites

- Bronze tables populated (from Steps 1 and 2)
- CREATE TABLE and USE SCHEMA privileges on prod.silver and prod.gold

Setup

1. In the sidebar, click Jobs & Pipelines → ETL pipeline → Start with an empty file → SQL.
2. Rename the pipeline (click the name at top) to lakeflow-demo-pipeline.
3. Paste the following SQL:

```sql
-- Silver: latest order state (SCD Type 1)
CREATE OR REFRESH STREAMING TABLE prod.silver.orders;

APPLY CHANGES INTO prod.silver.orders
FROM STREAM(prod.bronze.orders_raw)
KEYS (order_id)
SEQUENCE BY updated_at
STORED AS SCD TYPE 1;

-- Silver: full customer history (SCD Type 2)
CREATE OR REFRESH STREAMING TABLE prod.silver.customers;

APPLY CHANGES INTO prod.silver.customers
FROM STREAM(prod.bronze.customers_raw)
KEYS (customer_id)
SEQUENCE BY updated_at
STORED AS SCD TYPE 2;

-- Silver: telemetry with data quality check
CREATE OR REFRESH STREAMING TABLE prod.silver.telemetry_events (
  CONSTRAINT valid_event_type
    EXPECT (event_type IN ('page_view', 'add_to_cart', 'purchase'))
    ON VIOLATION DROP ROW
)
AS SELECT * FROM STREAM(prod.bronze.telemetry_events);

-- Gold: materialized view joining all three Silver tables
CREATE OR REFRESH MATERIALIZED VIEW prod.gold.customer_activity AS
SELECT
  o.order_id,
  o.customer_id,
  c.customer_name,
  c.email,
  o.order_amount,
  o.order_status,
  COUNT(e.event_id) AS total_events,
  SUM(CASE WHEN e.event_type = 'purchase' THEN 1 ELSE 0 END) AS purchase_events
FROM prod.silver.orders o
LEFT JOIN prod.silver.customers c
  ON o.customer_id = c.customer_id
LEFT JOIN prod.silver.telemetry_events e
  ON CAST(o.customer_id AS STRING) = e.user_id  -- user_id in telemetry is a string
GROUP BY o.order_id, o.customer_id, c.customer_name, c.email, o.order_amount, o.order_status;
```

4. Click Settings (gear icon) → set Pipeline mode: Continuous → Target catalog: prod → Save.
5. Click Start. The editor switches to the live Graph view.

Step 4: Govern with Unity Catalog

All tables from Steps 1–3 are automatically registered in Unity Catalog, Databricks' built-in governance and security offering, with full lineage. No manual registration is needed.
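It's worth pinning down what the APPLY CHANGES flows in Step 3 do differently for SCD Type 1 versus Type 2. A pure-Python sketch of the two behaviors, using in-memory stand-ins rather than the actual SDP engine (all names here are illustrative):

```python
def apply_scd1(table, change, key):
    # SCD Type 1: upsert; the new version simply overwrites the old row.
    table[change[key]] = dict(change)

def apply_scd2(history, change, key, seq):
    # SCD Type 2: close out the currently open row for this key, then
    # append the new version with an open-ended validity interval.
    for row in history:
        if row[key] == change[key] and row["__end"] is None:
            row["__end"] = change[seq]
    history.append({**change, "__start": change[seq], "__end": None})

# Orders keep only the latest state (Type 1)...
orders = {}
apply_scd1(orders, {"order_id": 1, "status": "placed", "updated_at": 100}, "order_id")
apply_scd1(orders, {"order_id": 1, "status": "shipped", "updated_at": 200}, "order_id")

# ...while customers retain every historical version (Type 2).
customers = []
apply_scd2(customers, {"customer_id": 7, "email": "a@contoso.com", "updated_at": 100},
           "customer_id", "updated_at")
apply_scd2(customers, {"customer_id": 7, "email": "b@contoso.com", "updated_at": 200},
           "customer_id", "updated_at")
```

After both changes arrive, `orders` holds one row per key (latest wins by `updated_at`), while `customers` holds two versions of customer 7, the first closed out at sequence 200. SDP implements this with managed, incremental processing; the sketch only shows the semantics.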
View lineage

1. Go to Catalog → prod → gold → customer_activity.
2. Click the Lineage tab → See Lineage Graph.
3. Click the expand icon on each upstream node to reveal the full chain: Bronze sources → Silver → Gold.

Set permissions

```sql
-- Grant analysts read access to the Gold layer only
GRANT SELECT ON TABLE prod.gold.customer_activity TO `analysts@contoso.com`;

-- Mask PII for non-privileged users
CREATE FUNCTION prod.security.mask_email(email STRING)
RETURNS STRING
RETURN CASE
  WHEN is_account_group_member('data-engineers') THEN email
  ELSE CONCAT(LEFT(email, 2), '***@***.com')
END;

ALTER TABLE prod.silver.customers
  ALTER COLUMN email SET MASK prod.security.mask_email;
```

Step 5: Orchestrate and Monitor with Lakeflow Jobs

Wire the Lakeflow Connect pipeline and the SDP pipeline into a single job with dependencies, scheduling, and alerting, all from the UI with Lakeflow Jobs.

Prerequisites

- Pipelines from Steps 2 and 3 saved in the workspace

Setup

1. Go to Jobs & Pipelines → Create → Job.
2. Task 1: click the Pipeline tile → name it ingest_sql_server_cdc → select your Lakeflow Connect pipeline → Create task.
3. Task 2: click + Add task → Pipeline → name it transform_bronze_to_gold → select lakeflow-demo-pipeline → set Depends on: ingest_sql_server_cdc → Create task.
4. In the Job details panel on the right: click Add schedule → set a frequency → add an email notification on failure → Save.
5. Click Run now to trigger a run, then click the run ID to open the Run detail view.

For health monitoring across all jobs, query the system tables from any notebook or SQL warehouse:

```sql
SELECT
  job_name,
  result_state,
  DATEDIFF(second, start_time, end_time) AS duration_sec
FROM system.lakeflow.job_run_timeline
WHERE start_time >= CURRENT_TIMESTAMP - INTERVAL 24 HOURS
ORDER BY start_time DESC;
```

Step 6: Visualize with AI/BI Dashboards and Genie

AI/BI Dashboards help you create AI-powered, low-code dashboards.
1. Click + New → Dashboard.
2. Click Add a visualization, connect to prod.gold.customer_activity, and build charts.
3. Click Publish. Viewers automatically see data under their own Unity Catalog permissions.

Genie lets you interact with your data using natural language.

1. In the sidebar, click Genie → New.
2. On Choose data sources, select prod.gold.customer_activity → Create.
3. Add context in the Instructions box (e.g., table relationships, business definitions).
4. Switch to the Chat tab and ask a question: "Which customers have the highest total events and what were their order amounts?"
5. Genie generates and executes SQL, returning a result table. Click View SQL to inspect the query.

Everything in One Platform

| Capability | Lakeflow | Previously Required |
| --- | --- | --- |
| Telemetry ingestion | Zerobus Ingest | Message bus + custom consumer |
| Database CDC | Lakeflow Connect | Custom scripts or 3rd-party tools |
| Transformation + AutoCDC | Spark Declarative Pipelines | Hand-rolled MERGE logic |
| Data quality | SDP Expectations | Separate validation tooling |
| Orchestration | Lakeflow Jobs | External schedulers (Airflow, etc.) |
| Governance | Unity Catalog | Disconnected ACLs and lineage |
| Monitoring | Job UI + System Tables | Separate APM tools |
| BI + NL Query | AI/BI Dashboards + Genie | External BI tools |

Customers seeing results on Azure Databricks:

- Ahold Delhaize: 4.5x faster deployment and 50% cost reduction running 1,000+ ingestion jobs daily
- Porsche Holding: 85% faster ingestion pipeline development vs. a custom-built solution

Next Steps

- Lakeflow product page
- Lakeflow Connect documentation
- Live demos on Demo Center
- Get started with Azure Databricks

Azure Databricks Lakebase is now generally available
Modern applications are real-time, intelligent, and increasingly powered by AI agents that need fast, reliable access to operational data, without sacrificing governance, scale, or simplicity. To solve for this, Azure Databricks Lakebase introduces a serverless Postgres database architecture that separates compute from storage and integrates natively with the Databricks Data Intelligence Platform on Azure. Lakebase is now generally available in Azure Databricks, enabling you and your team to start building and validating real-time and AI-driven applications directly on your lakehouse foundation.

Why Azure Databricks Lakebase?

Lakebase was created to support modern workloads and reduce silos. By decoupling compute from storage, Lakebase treats infrastructure as an on-demand service, scaling automatically with workload needs and scaling to zero when idle. Key capabilities include:

- Serverless Postgres for Production Workloads: Lakebase delivers a managed Postgres experience with predictable performance and built-in reliability features suitable for production applications, while abstracting away infrastructure management.
- Instant Branching and Point-in-Time Recovery: Teams can create zero-copy branches of production data in seconds for testing, debugging, or experimentation, and restore databases to precise points in time to recover from errors or incidents.
- Unified Governance with Unity Catalog: Operational data in Lakebase can be governed using the same Unity Catalog policies that secure analytics and AI workloads, enabling consistent access control, auditing, and compliance across the platform.
- Built for AI and Real-Time Applications: Lakebase is designed to support AI-native patterns such as real-time feature serving, agent memory, and low-latency application state, while keeping data directly connected to the lakehouse for analytics and learning workflows.
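The agent-memory pattern above needs nothing beyond standard SQL through a Postgres interface. A sketch using Python's DB-API, with sqlite3 standing in for a Postgres driver so it runs anywhere (against Lakebase you would instead use a Postgres driver such as psycopg and your instance's connection string; the table name and schema are assumptions for illustration):

```python
import sqlite3

# sqlite3 is a stand-in here; swap in a Postgres connection for Lakebase.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE agent_memory (
        session_id TEXT,
        turn       INTEGER,
        role       TEXT,
        content    TEXT
    )
""")

# The agent appends each conversation turn as durable state...
conn.execute("INSERT INTO agent_memory VALUES "
             "('s1', 1, 'user', 'What were sales last week?')")
conn.execute("INSERT INTO agent_memory VALUES "
             "('s1', 2, 'assistant', 'Sales rose 4% week over week.')")

# ...and replays the session, in order, to rebuild its context on the next call.
history = conn.execute(
    "SELECT role, content FROM agent_memory WHERE session_id = ? ORDER BY turn",
    ("s1",),
).fetchall()
```

Because the interface is plain Postgres, existing drivers, ORMs, and tools work unchanged; the lakehouse integration and governance come from where the table lives, not from a new API.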
Lakebase allows applications to operate directly on governed, lake-backed data, reducing the complexity of pipeline synchronization and duplicated storage. On Azure Databricks, this unlocks new scenarios such as:

- Real-time applications built on lakehouse data
- AI agents with persistent, governed memory
- Faster release cycles with safe, isolated database branches
- Simplified architectures with fewer moving parts

All while using familiar Postgres interfaces and tools.

Get Started with Azure Databricks Lakebase

Lakebase is integrated into the Azure Databricks experience and can be provisioned directly within Azure Databricks workspaces. For Azure Databricks customers building intelligent, real-time applications, it offers a new foundation, one designed for the pace and complexity of modern data-driven systems. We're excited to see what you build. Get started today!

Serverless Workspaces are generally available in Azure Databricks
Recently, we announced Serverless Workspaces in public preview. Today, we are excited to share that Serverless Workspaces are generally available in Azure Databricks.

Azure Databricks now offers two workspace models: Serverless and Classic. With Serverless, Azure Databricks operates and maintains the entire environment on your behalf. You still configure governance elements like Unity Catalog, identity federation, and workspace-level policies, but the heavy lifting of infrastructure setup disappears. As a result, teams can start building immediately instead of waiting on networking or compute provisioning.

Classic workspaces take the opposite approach. When creating a Classic workspace, you must design and deploy the full networking layout, determine how compute should be managed, establish storage patterns, and configure all inbound and outbound connectivity. These decisions are critical and have benefits in heavily regulated or security-sensitive industries, but they can create overhead for teams who simply want to start working with data.

A Serverless Workspace eliminates that overhead entirely. Once created, it's ready for use: no virtual network design, no storage configuration, and no cluster management. Serverless workspaces use serverless compute and default storage, and Unity Catalog is automatically provisioned so the same governance model applies.

Key capabilities and considerations of Serverless Workspaces

Storage: Each Serverless workspace includes fully managed object storage called default storage. You can build managed catalogs, volumes, and tables without supplying your own storage accounts or credentials. Features like multi-key projection and restricted object-store access ensure that only authorized users can work with the data. Note that classic compute cannot interact with data assets in default storage.
If you already have an Azure Blob Storage account (likely with hierarchical namespace enabled), or if your organization requires using your own storage for security or compliance reasons, you can also create a connection between your storage account and the Serverless Workspace.

Compute: Workloads run on automatically provisioned serverless compute, with no need to build or maintain clusters. Azure Databricks handles scaling and resource optimization so users can focus purely on data and analytics.

Network: There's no requirement to deploy NAT gateways, firewalls, or Private Link endpoints. Instead, you define serverless egress rules and serverless Private Link controls that apply uniformly to all workloads in the workspace.

Unity Catalog access: Governed data remains accessible from the new workspace with existing permissions intact. Your data estate stays consistent and secure.

Choosing between Serverless and Classic

Azure Databricks supports both workspace types so organizations can select what best matches their needs.

Use Serverless when rapid workspace creation, minimal configuration, and reduced operational overhead are priorities. It's the fastest path to a fully governed environment.

Use Classic when you require a custom VNet design, specific network topologies, finer-grained security controls, or features that are not yet available in the serverless model. Some organizations also simply prefer to manage Azure resources directly, making Classic workspaces a suitable option.

Note that Azure Databricks Serverless Workspaces are only available in regions that support serverless compute. To learn more, create your Serverless Workspace and get started today!

General Availability: Automatic Identity Management (AIM) for Entra ID on Azure Databricks
In February, we announced Automatic Identity Management in public preview, and we loved hearing your overwhelmingly positive feedback. Prior to the public preview, you either had to set up an Entra Enterprise Application or involve an Azure Databricks account admin to import the appropriate groups. This required manual steps, whether adding or removing users with organizational changes, maintaining scripts, or configuring additional Entra or SCIM settings. Identity management was thus cumbersome and carried management overhead.

Today, we are excited to announce that Automatic Identity Management (AIM) for Entra ID on Azure Databricks is generally available. No manual user setup is needed, and you can instantly add users to your workspace(s). Users, groups, and service principals from Microsoft Entra ID are automatically available within Azure Databricks, including support for nested groups and dashboards. This native integration is one of the many reasons Databricks runs best on Azure. Here are some additional ways AIM can benefit you and your organization:

Seamlessly share dashboards

You can share AI/BI dashboards with any user, service principal, or group in Microsoft Entra ID immediately, as these users are automatically added to the Azure Databricks account upon login. Members of Microsoft Entra ID who do not have access to the workspace are granted access to a view-only copy of a dashboard published with embedded credentials. This enables you to share dashboards with users outside your organization, too. To learn more, see share a dashboard.

Updated defaults for new accounts

All new Azure Databricks accounts have AIM enabled, with no opt-in or additional configuration required. For existing accounts, you can enable AIM with a single click in the Account Admin Console. Soon, we will also make this the default for existing accounts.
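The nested-group support mentioned above amounts to resolving a group tree to its effective set of users. A toy sketch of that flattening (the dict-based directory and the `group:` prefix are an illustrative convention, not the Entra ID data model or API):

```python
def flatten_members(group, directory):
    """Resolve a nested group definition to the flat set of user names.

    `directory` maps a group name to its members; entries prefixed with
    'group:' denote sub-groups that are resolved recursively.
    """
    users = set()
    for member in directory[group]:
        if member.startswith("group:"):
            users |= flatten_members(member[len("group:"):], directory)
        else:
            users.add(member)
    return users

directory = {
    "data-platform": ["alice@contoso.com", "group:data-engineers"],
    "data-engineers": ["bob@contoso.com", "carol@contoso.com"],
}
```

With AIM, this resolution happens automatically on the platform side: granting `data-platform` access effectively grants it to every transitively nested member.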
Automation at scale via APIs

You can also register users, groups, or service principals in Microsoft Entra ID via APIs. Being able to do this programmatically enables the enterprise scale most of our customers need. You can also enable automation via scripts that leverage these APIs.

Read the Databricks blog here and get started via the documentation today!

Closing the loop: Interactive write-back from Power BI to Azure Databricks
This is a collaborative post from Microsoft and Databricks. We thank Toussaint Webb, Product Manager at Databricks, for his contributions.

We're excited to announce that the Azure Databricks connector for Power Platform is now generally available. With this integration, organizations can seamlessly build Power Apps, Power Automate flows, and Copilot Studio agents with secure, governed data and no data duplication. A key capability unlocked by this connector is the ability to write data back from Power BI to Azure Databricks.

Many organizations want to not only analyze data but also act on insights quickly and efficiently. Power BI users, in particular, have been seeking a straightforward way to "close the loop" by writing data back from Power BI into Azure Databricks. That capability is now here: real-time updates and streamlined operational workflows with the new Azure Databricks connector for Power Platform. With this connector, users can read from and write to Azure Databricks data warehouses in real time, all from within familiar interfaces, with no custom connectors, no data duplication, and no loss of governance.

How It Works: Write-backs from Power BI through Power Apps

Enabling write-backs from Power BI to Azure Databricks is seamless. Follow these steps:

1. Open Power Apps and create a connection to Azure Databricks (documentation).
2. In Power BI (Desktop or the service), add a Power Apps visual to your report (the purple Power Apps icon).
3. Add data to connect to your Power App via the visualization pane.
4. Create a new Power App directly from the Power BI interface, or choose an existing app to embed.
5. Start writing records to Azure Databricks!

With this integration, users can make real-time updates directly within Power BI using the embedded Power App, instantly writing changes back to Azure Databricks.
Think of all the workflows this can unlock, such as warehouse managers monitoring performance and flagging issues on the spot, or store owners reviewing and adjusting inventory levels as needed. The seamless connection between Azure Databricks, Power Apps, and Power BI lets you close the loop on critical processes by uniting reporting and action in one place.

Try It Out: Get started with the Azure Databricks Power Platform Connector

The Power Platform connector is now generally available for all Azure Databricks customers. Explore more in the deep-dive blog here, and to get started, check out our technical documentation. Coming soon, we will add the ability to execute existing Azure Databricks Jobs via Power Automate. If your organization is looking for an even more customizable end-to-end solution, check out Databricks Apps in Azure Databricks! No extra services or licenses required.

Announcing the Azure Databricks connector in Power Platform
We are ecstatic to announce the public preview of the Azure Databricks connector for Power Platform. This native connector is built specifically for Power Apps, Power Automate, and Copilot Studio within Power Platform and enables a seamless, single-click connection. With this connector, your organization can build data-driven, intelligent conversational experiences that leverage the full power of your data within Azure Databricks without any additional custom configuration or scripting. It's all fully built in!

The Azure Databricks connector in Power Platform enables you to:

- Maintain governance: All access controls you set up for your data in Azure Databricks are maintained in Power Platform
- Prevent data copies: Read and write your data without data duplication
- Secure your connection: Connect Azure Databricks to Power Platform using Microsoft Entra user-based OAuth or service principals
- Get real-time updates: Read and write data and see updates in Azure Databricks in near real time
- Build agents with context: Build agents with Azure Databricks as grounding knowledge, with all the context of your data

Instead of spending time copying or moving data and building custom connections that require additional manual maintenance, you can now seamlessly connect and focus on what matters, getting rich insights from your data, without worrying about security or governance. Let's see how this connector can be beneficial across Power Apps, Power Automate, and Copilot Studio:

Azure Databricks connector for Power Apps: You can seamlessly connect to Azure Databricks from Power Apps to enable read/write access to your data directly within canvas apps, enabling your organization to build data-driven experiences in real time. For example, our retail customers are using this connector to visualize different placements of items within a store and how they impact revenue.
Azure Databricks connector for Power Automate: You can execute SQL commands against your data within Azure Databricks with the rich context of your business use case. For example, one of our global retail customers is using automated workflows to track safety incidents, which plays a crucial role in keeping employees safe.

Azure Databricks as a knowledge source in Copilot Studio: You can add Azure Databricks as a primary knowledge source for your agents, enabling them to understand, reason over, and respond to user prompts based on data from Azure Databricks.

To get started, all you need to do in Power Apps or Power Automate is add a new connection. That's how simple it is! Check out our demo here and get started using our documentation today! This connector is available in all public cloud regions. You can also learn more about customer use cases in this blog, and review the connector reference here.
Announcing the availability of Azure Databricks connector in Azure AI Foundry
At Microsoft, the Databricks Data Intelligence Platform is available as a fully managed, native, first-party data and AI solution called Azure Databricks. This makes Azure the optimal cloud for running Databricks workloads. Because of our unique partnership, we can bring you seamless integrations that leverage the power of the entire Microsoft ecosystem to do more with your data. Azure AI Foundry is an integrated platform for developers and IT administrators to design, customize, and manage AI applications and agents.

Today we are excited to announce the public preview of the Azure Databricks connector in Azure AI Foundry. With this launch you can build enterprise-grade AI agents that reason over real-time Azure Databricks data while being governed by Unity Catalog. These agents are also enriched by the responsible AI capabilities of Azure AI Foundry. Here are a few ways this can benefit you and your organization:

- Native integration: Connect to Azure Databricks AI/BI Genie from Azure AI Foundry
- Contextual answers: Genie agents provide answers grounded in your unique data
- Supports various LLMs: Secure, authenticated data access
- Streamlined process: Real-time data insights within GenAI apps
- Seamless integration: Simplifies AI agent management with data governance
- Multi-agent workflows: Leverages Azure AI agents and Genie Spaces for faster insights
- Enhanced collaboration: Boosts productivity between business and technical users

To further democratize the use of data for those in your organization who aren't directly interacting with Azure Databricks, you can take it one step further with Microsoft Teams and AI/BI Genie. AI/BI Genie enables you to get deep insights from your data using natural language, without needing to access Azure Databricks.
Here is an example of what an agent built in AI Foundry, using data from Azure Databricks and made available in Microsoft Teams, looks like.

We'd love to hear your feedback as you use the Azure Databricks connector in AI Foundry. Try it out today. To help you get started, we've put together some samples here. Read more on the Databricks blog, too.