azure
Secure Medallion Architecture Pattern on Azure Databricks (Part I)
This article presents a security-first pattern for Azure Databricks: a Medallion Architecture where Bronze, Silver, and Gold each run as their own Lakeflow Job and cluster, orchestrated by a parent job. Run-as identities are Microsoft Entra service principals; storage access is governed via Unity Catalog External Locations backed by the Access Connector's managed identity. Least privilege is enforced with cluster policies and UC grants. Prefer managed tables to unlock Predictive Optimization, automatic liquid clustering, and automatic statistics. Secrets live in Azure Key Vault and are read at runtime. Monitor reliability and cost with system tables and the Jobs UI. Part II covers more low-level concepts and CI/CD.
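As a rough sketch of the orchestration idea summarized above (not the article's exact implementation), the snippet below uses the Databricks Python SDK to create a parent job that chains separate Bronze, Silver, and Gold jobs and runs as a service principal. The job IDs and the service principal application ID are hypothetical placeholders.

```python
# Minimal sketch: a parent job that triggers existing Bronze/Silver/Gold jobs in order
# and runs as a Microsoft Entra service principal. IDs below are placeholders.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # assumes authentication is already configured

BRONZE_JOB_ID, SILVER_JOB_ID, GOLD_JOB_ID = 101, 102, 103  # hypothetical job IDs

parent = w.jobs.create(
    name="medallion-parent",
    run_as=jobs.JobRunAs(service_principal_name="00000000-0000-0000-0000-000000000000"),
    tasks=[
        jobs.Task(task_key="bronze", run_job_task=jobs.RunJobTask(job_id=BRONZE_JOB_ID)),
        jobs.Task(task_key="silver", run_job_task=jobs.RunJobTask(job_id=SILVER_JOB_ID),
                  depends_on=[jobs.TaskDependency(task_key="bronze")]),
        jobs.Task(task_key="gold", run_job_task=jobs.RunJobTask(job_id=GOLD_JOB_ID),
                  depends_on=[jobs.TaskDependency(task_key="silver")]),
    ],
)
print(f"Created parent job {parent.job_id}")
```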
Approaches to Integrating Azure Databricks with Microsoft Fabric: The Better Together Story!
Azure Databricks and Microsoft Fabric can be combined to create a unified and scalable analytics ecosystem. This document outlines eight distinct integration approaches, each accompanied by step-by-step implementation guidance and key design considerations. These methods are not prescriptive—your cloud architecture team can choose the integration strategy that best aligns with your organization’s governance model, workload requirements and platform preferences. Whether you prioritize centralized orchestration, direct data access, or seamless reporting, the flexibility of these options allows you to tailor the solution to your specific needs.
Announcing the availability of Azure Databricks connector in Azure AI Foundry
At Microsoft, the Databricks Data Intelligence Platform is available as a fully managed, native, first-party data and AI solution called Azure Databricks. This makes Azure the optimal cloud for running Databricks workloads. Because of our unique partnership, we can bring you seamless integrations that leverage the power of the entire Microsoft ecosystem to do more with your data.
Azure AI Foundry is an integrated platform for developers and IT administrators to design, customize, and manage AI applications and agents. Today we are excited to announce the public preview of the Azure Databricks connector in Azure AI Foundry. With this launch you can build enterprise-grade AI agents that reason over real-time Azure Databricks data while being governed by Unity Catalog. These agents will also be enriched by the responsible AI capabilities of Azure AI Foundry.
Here are a few ways this can benefit you and your organization:
- Native Integration: Connect to Azure Databricks AI/BI Genie from Azure AI Foundry
- Contextual Answers: Genie agents provide answers grounded in your unique data
- Supports Various LLMs: Secure, authenticated data access
- Streamlined Process: Real-time data insights within GenAI apps
- Seamless Integration: Simplifies AI agent management with data governance
- Multi-Agent Workflows: Leverages Azure AI agents and Genie Spaces for faster insights
- Enhanced Collaboration: Boosts productivity between business and technical users
To further democratize the use of data for those in your organization who aren't directly interacting with Azure Databricks, you can take it one step further with Microsoft Teams and AI/BI Genie. AI/BI Genie enables you to get deep insights from your data using natural language without needing to access Azure Databricks. Here you see an example of what an agent built in AI Foundry using data from Azure Databricks looks like when made available in Microsoft Teams.
We'd love to hear your feedback as you use the Azure Databricks connector in AI Foundry. Try it out today – to help you get started, we’ve put together some samples here. Read more on the Databricks blog, too.
Azure Databricks Cost Optimization: A Practical Guide
Co-authored by Sanjeev Nair and Rafia Aqil.
This guide walks through a proven approach to Databricks cost optimization, structured in three phases: Discovery, Cluster/Data/Code Best Practices, and Team Alignment & Next Steps.

Phase 1: Discovery
Assessing Your Current State
The following questions are designed to guide your initial assessment and help you identify areas for improvement. Documenting answers to each will provide a baseline for optimization and inform the next phases of your cost management strategy.

Environment & Organization
- What is the current scale of your Databricks environment? How many workspaces do you have, and how are they organized (e.g., by environment type, region, use case)? How many clusters are deployed? How many users are active?
- What are the primary use cases for Databricks in your organization (data engineering, data science, machine learning, business intelligence)?

Cluster Management
- How are clusters currently managed (manual configuration, automated scripts, Databricks REST API, cluster policies)?
- What is the average cluster uptime (hours per day, days per week)?
- What is the average cluster utilization rate (CPU usage, memory usage)?

Cost Optimization
- What is the current monthly spend on Databricks (total cost, breakdown by workspace, breakdown by cluster)?
- What cost management tools are currently in use (Azure Cost Management, third-party tools)?
- Are there any existing cost optimization strategies in place (reserved instances, spot instances, cluster auto-scaling)?

Data Management
- What is the current data storage strategy (data lake, data warehouse, hybrid)?
- What is the average data ingestion rate (GB per day, number of files)?
- What is the average data processing time (ETL jobs, machine learning models)?
- What types of data formats are used in your environment (Delta Lake, Parquet, JSON, CSV, other formats relevant to your workloads)?

Performance Monitoring
- What performance monitoring tools are currently in use (Databricks Ganglia, Azure Monitor, third-party tools)?
- What are the key performance metrics tracked (job execution time, cluster performance, data processing speed)?

Future Planning
- Are there any planned expansions or changes to the Databricks environment (new use cases, increased data volume, additional users)?
- What are the long-term goals for Databricks cost optimization (reducing overall spend, improving resource utilization and cost attribution, enhancing performance)?

Understanding Databricks Cost Structure
Total Cost = Cloud Cost + DBU Cost
- Cloud Cost: compute (VMs, networking, IP addresses), storage (ADLS, MLflow artifacts), other services (firewalls), and cluster type (serverless compute vs. classic compute).
- DBU Cost: workload size, cluster/warehouse size, Photon acceleration, compute runtime, workspace tier, SKU type (Jobs, Delta Live Tables, All-Purpose Clusters, Serverless), model serving, queries per second, and model execution time.
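To make the cost formula above concrete, here is a small back-of-the-envelope estimate. Every rate and sizing value below is a hypothetical placeholder; substitute the VM prices and DBU rates for your actual SKUs and region.

```python
# Rough monthly estimate for Total Cost = Cloud Cost + DBU Cost.
# All numbers are illustrative assumptions, not published prices.
nodes = 4                  # driver + workers
hours_per_day = 10
days_per_month = 22

vm_price_per_hour = 0.50   # $/hour per VM (cloud cost component)
dbus_per_node_hour = 1.5   # DBUs emitted per node-hour for this VM size
dbu_rate = 0.30            # $/DBU for the chosen SKU (e.g., Jobs Compute)

node_hours = nodes * hours_per_day * days_per_month
cloud_cost = node_hours * vm_price_per_hour
dbu_cost = node_hours * dbus_per_node_hour * dbu_rate

print(f"Estimated monthly cost: ${cloud_cost + dbu_cost:,.2f} "
      f"(cloud ${cloud_cost:,.2f} + DBU ${dbu_cost:,.2f})")
```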
Diagnose Cost and Issues
Effectively diagnosing cost and performance issues in Databricks requires a structured approach. Use the following steps and metrics to gain visibility into your environment and uncover actionable insights.

1. Identify Costly Workloads
- Account Console Usage Reports: Review usage reports to identify usage breakdowns by product, SKU name, and custom tags.
- Usage Breakdown by Product and SKU: Helps you understand which services and compute types (clusters, SQL warehouses, serverless options) are consuming the most resources.
- Custom Tags for Attribution: Tags allow you to attribute costs to teams, projects, or departments, making it easier to identify high-cost areas.
- Workflow and Job Analysis: By correlating usage data with workflows and jobs, you can pinpoint long-running or resource-heavy workloads that drive costs.
- Focus on Long-Running Workloads: Examine workloads with extended runtimes or high resource utilization.
Key question: Which pipelines or workloads are driving the majority of your costs?
Now that you’ve identified long-running workloads, review these key areas:

2. Review Cluster Metrics
- CPU Utilization: Track guest, iowait, idle, irq, nice, softirq, steal, system, and user times to understand how compute resources are being used.
- Memory Utilization: Monitor used, free, buffer, and cached memory to identify over- or under-utilization.
Key question: Is your cluster over- or under-utilized? Are resources being wasted or stretched too thin?

3. Review SQL Warehouse Metrics
- Live Statistics: Monitor warehouse status, running/queued queries, and current cluster count.
- Time Scale Filter: Analyze query and cluster activity over different time frames (8 hours, 24 hours, 7 days, 14 days).
- Peak Query Count Chart: Identify periods of high concurrency.
- Completed Query Count Chart: Track throughput and query success/failure rates.
- Running Clusters Chart: Observe cluster allocation and recycling events.
- Query History Table: Filter and analyze queries by user, duration, status, and statement type.
Key question: Is your SQL warehouse over- or under-utilized? Are resources being wasted or stretched too thin?

4. Review the Spark UI
- Stages Tab: Look for skewed data, high input/output, and shuffle times. Uneven task durations may indicate data skew or inefficient data handling.
- Jobs Timeline: Identify long-running jobs or stages that consume excessive resources.
- Stage Analysis: Determine if stages are I/O bound or suffering from data skew/spill.
- Executor Metrics: Monitor memory usage, CPU utilization, and disk I/O. Frequent garbage collection or high memory usage may signal the need for better resource allocation.

4.1. Spark UI: Storage and Jobs Tabs
- Storage Level: Check if data is stored in memory, on disk, or both.
- Size: Assess the size of cached data.
- Job Analysis: Investigate jobs that dominate the timeline or have unusually long durations. Look for gaps caused by complex execution plans, non-Spark code, driver overload, or cluster malfunction.

4.2. Spark UI: Executor Tab
- Storage Memory: Compare used vs. available memory.
- Task Time (Garbage Collection): Review long tasks and garbage collection times.
- Shuffle Read/Write: Measure data transferred between stages.

5. Additional Diagnostic Methods
- System Tables in Unity Catalog: Query system tables for cost attribution and resource usage trends (an example cost observability query follows below).
- Tagging Analysis: Use tags to identify which teams or projects consume the most resources.
- Dashboards & Alerts: Set up cost dashboards and budget alerts for proactive monitoring.
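As a starting point for the system-table approach above, the query below aggregates the last 30 days of DBU consumption by workspace and SKU from the system.billing.usage table (shown here through PySpark, though it can be run directly in a SQL editor). It assumes the billing system schema is enabled and that you have SELECT access to it.

```python
# Minimal sketch: rank workspaces and SKUs by DBU consumption over the last 30 days.
# Assumes a Databricks notebook or job where `spark` is predefined.
usage_by_sku = spark.sql("""
    SELECT
        workspace_id,
        sku_name,
        SUM(usage_quantity) AS dbus_consumed
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 30)
    GROUP BY workspace_id, sku_name
    ORDER BY dbus_consumed DESC
""")
usage_by_sku.show(20, truncate=False)
```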
Phase 2: Cluster/Code/Data Best Practices Alignment
Cluster UI Configuration and Cost Attribution
Effectively configuring clusters and workloads in Databricks is essential for balancing performance, scalability, and cost. Tuning settings and features, when used strategically, can help organizations maximize resource efficiency and minimize unnecessary spending.

Key Configuration Strategies (an example cluster definition combining several of these settings follows the list):
1. Reduce Idle Time: Clusters continue to incur costs even when not actively processing workloads; avoid paying for unused resources.
2. Enable Auto-Terminate: Set clusters to automatically shut down after a period of inactivity. This simple setting can significantly reduce wasted spending.
3. Enable Autoscaling: Workloads fluctuate in size and complexity. Autoscaling allows clusters to dynamically adjust the number of nodes based on demand: scale up for heavy jobs and scale down for lighter loads, ensuring you only pay for what you use. It significantly enhances cost efficiency and overall performance. For serverless and streaming, using Delta Live Tables with autoscaling is recommended; this approach leads to better resource management and reliability.
4. Use Spot Instances: For batch processing and non-critical workloads, spot instances offer substantial cost savings. Spot instances are typically much cheaper than standard VMs; however, they are not recommended for jobs requiring constant uptime due to potential interruptions. Considerations: Azure Spot VMs are intended for non-critical, fault-tolerant tasks. They can be evicted without notice, risking production stability, and no SLA guarantees mean potential downtime for critical applications. Using Spot VMs could lead to reliability issues in production environments.
5. Leverage the Photon Engine: Photon is Databricks’ high-performance, vectorized query engine. It can dramatically reduce runtime for compute-intensive tasks, improving both speed and cost efficiency.
6. Keep Runtimes Up to Date: Using the latest Databricks runtime ensures optimal performance and security. Regular updates include performance enhancements, bug fixes, and new features.
7. Apply Cluster Policies: Cluster policies help standardize configurations and enforce cost controls across teams. Policies can restrict certain settings, enforce tagging, and ensure clusters are created with cost-effective defaults.
8. Optimize Storage: Storage type impacts both performance and cost. Switching from HDDs to SSDs provides faster caching and shuffle operations, which can improve job efficiency and reduce runtime.
9. Tag Clusters for Cost Attribution: Tagging clusters enables granular tracking and reporting. Use tags to attribute costs to specific teams, projects, or environments, supporting better budgeting and chargeback processes.
10. Select the Right Cluster Type: Different workloads require different cluster types; see the table below for serverless vs. classic compute.

Feature | Classic Compute | Serverless Compute
Control | Full control over config & network | Minimal control, fully managed by Databricks
Startup Time | Slower (unless pre-warmed) | Instant
Cost Model | Hourly, supports reservations | Pay-per-use, elastic scaling
Security | VNet injection, private endpoints | NCC-based private connectivity
Best For | Heavy ETL, ML, compliance workloads | Interactive queries, unpredictable demand

- Job Clusters: Ideal for scheduled jobs and Delta Live Tables.
- All-Purpose Clusters: Suited for ad-hoc analysis and collaborative work.
- Single-Node Clusters: Efficient for simple exploratory data analysis or pure Python tasks.
- Serverless Compute: Scalable, managed workloads with automatic resource management.

11. Monitor and Adjust Regularly: Review cluster metrics and query history on an ongoing basis. Use built-in dashboards to monitor usage, identify bottlenecks, and adjust cluster size or configuration as needed.
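To tie several of these settings together, here is a sketch of a cost-conscious cluster definition in the shape of the JSON payload accepted by the Databricks Clusters API. The node type, runtime version, tag values, and limits are illustrative assumptions, not recommendations; a cluster policy can pin or restrict most of these fields.

```python
# Minimal sketch of a cluster definition combining auto-termination, autoscaling,
# Azure spot instances, Photon, and cost-attribution tags. Values are placeholders.
cluster_spec = {
    "cluster_name": "etl-nightly",
    "spark_version": "15.4.x-scala2.12",            # keep runtimes up to date
    "runtime_engine": "PHOTON",                      # enable the Photon engine
    "node_type_id": "Standard_E8ds_v5",
    "autotermination_minutes": 30,                   # shut down idle clusters
    "autoscale": {"min_workers": 2, "max_workers": 8},
    "azure_attributes": {
        "availability": "SPOT_WITH_FALLBACK_AZURE",  # spot VMs for non-critical batch work
        "first_on_demand": 1,                        # keep the driver on on-demand capacity
    },
    "custom_tags": {"team": "data-eng", "cost_center": "1234"},  # cost attribution
}

# This dictionary matches the JSON body of the Clusters API create call; it could be
# submitted via the REST API, Terraform, or the Databricks SDK/CLI.
import json
print(json.dumps(cluster_spec, indent=2))
```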
Code Best Practices
- Avoid Reprocessing Large Tables: Use a CDC (Change Data Capture) architecture with Delta Live Tables (DLT) to process only new or changed data, minimizing unnecessary computation.
- Ensure Code Parallelizes Well: Write Spark code that leverages parallel processing. Avoid loops, deeply nested structures, and inefficient user-defined functions (UDFs) that can hinder scalability.
- Reduce Memory Consumption: Tweak Spark configurations to minimize memory overhead. Clean out legacy or unnecessary settings that may have carried over from previous Spark versions.
- Prefer SQL Over Complex Python: Use SQL (a declarative language) for Spark jobs whenever possible. SQL queries are typically more efficient and easier to optimize than complex Python logic.
- Modularize Notebooks: Use %run to split large notebooks into smaller, reusable modules. This improves maintainability.
- Use LIMIT in Exploratory Queries: When exploring data, always use the LIMIT clause to avoid scanning large datasets unnecessarily.
- Monitor Job Performance: Regularly review the Spark UI to detect inefficiencies such as high shuffle, input, or output. Review the below table for optimization opportunities: Spark stage high I/O - Azure Databricks | Microsoft Learn

Databricks Code Performance Enhancements & Data Engineering Best Practices
By enabling the below features and applying best practices, you can significantly lower costs, accelerate job execution, and build Databricks pipelines that are both scalable and highly reliable. For more guidance review: Comprehensive Guide to Optimize Data Workloads | Databricks. A short example applying several of these settings follows the table.

Feature / Technique | Purpose / Benefit | How to Use / Enable / Key Notes
Disk Caching | Accelerates repeated reads of Parquet files | Set spark.databricks.io.cache.enabled = true
Dynamic File Pruning (DFP) | Skips irrelevant data files during queries, improves query performance | Enabled by default in Databricks
Low Shuffle Merge | Reduces data rewriting during MERGE operations, less need to recalculate ZORDER | Use a Databricks runtime with the feature enabled
Adaptive Query Execution (AQE) | Dynamically optimizes query plans based on runtime statistics | Available in Spark 3.0+, enabled by default
Deletion Vectors | Efficient row removal/change without rewriting the entire Parquet file | Enable in workspace settings, use with Delta Lake
Materialized Views | Faster BI queries, reduced compute for frequently accessed data | Create in Databricks SQL
Optimize | Compacts Delta Lake files, improves query performance | Run regularly, combine with ZORDER on high-cardinality columns
ZORDER | Physically sorts/co-locates data by chosen columns for faster queries | Use with OPTIMIZE, select columns frequently used in filters/joins
Auto Optimize | Automatically compacts small files during writes | Enable the optimizeWrite and autoCompact table properties
Liquid Clustering | Simplifies data layout, replaces partitioning/ZORDER, flexible clustering keys | Recommended for new Delta tables, enables easy redefinition of clustering keys
File Size Tuning | Achieve optimal file size for performance and cost | Set the delta.targetFileSize table property
Broadcast Hash Join | Optimizes joins by broadcasting smaller tables | Adjust spark.sql.autoBroadcastJoinThreshold and spark.databricks.adaptive.autoBroadcastJoinThreshold
Shuffle Hash Join | Faster join alternative to sort-merge join | Prefer over sort-merge join when broadcasting isn’t possible; the Photon engine can help
Cost-Based Optimizer (CBO) | Improves query plans for complex joins | Enabled by default, collect column/table statistics with ANALYZE TABLE
Data Spilling & Skew | Handles uneven data distribution and excessive shuffle | Use AQE, set spark.sql.shuffle.partitions=auto, optimize partitioning
Data Explosion Management | Controls partition sizes after transformations (e.g., explode, join) | Adjust spark.sql.files.maxPartitionBytes, use repartition() after reads
Delta Merge | Efficient upserts and CDC (Change Data Capture) | Use the MERGE operation in Delta Lake, combine with a CDC architecture
Data Purging (Vacuum) | Removes stale data files, maintains storage efficiency | Run VACUUM regularly based on transaction frequency
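As a quick illustration of a few rows from the table, the snippet below enables disk caching and AQE-managed shuffle partitions for the current session and runs routine Delta maintenance on one table. The table and column names are hypothetical placeholders, and the right OPTIMIZE/VACUUM cadence depends on your own layout and retention requirements.

```python
# Minimal sketch combining several tuning knobs from the table above.
# Assumes a Databricks notebook or job where `spark` is predefined.

# Session-level settings
spark.conf.set("spark.databricks.io.cache.enabled", "true")  # disk caching for repeated reads
spark.conf.set("spark.sql.shuffle.partitions", "auto")       # let AQE choose shuffle partitions

table = "main.sales.transactions"  # placeholder table name

# Compact small files and co-locate data on a commonly filtered column
spark.sql(f"OPTIMIZE {table} ZORDER BY (customer_id)")

# Refresh statistics so the cost-based optimizer has up-to-date information
spark.sql(f"ANALYZE TABLE {table} COMPUTE STATISTICS FOR ALL COLUMNS")

# Remove files no longer referenced by the table (subject to the retention period)
spark.sql(f"VACUUM {table}")
```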
Phase 3: Team Alignment and Next Steps
Implementing Cost Observability and Taking Action
Effective cost management in Databricks goes beyond configuration and code—it requires robust observability, granular tracking, and proactive measures. The sections below outline how your teams can achieve this using system tables, tagging, dashboards, and actionable scripts.

Cost Observability with System Tables
Databricks Unity Catalog provides system tables that store operational data for your account. These tables enable historical cost observability and empower FinOps teams to analyze spend independently.
- System Tables Location: Found inside Unity Catalog under the “system” schema.
- Key Benefits: Structured data for querying, historical analysis, and cost attribution.
- Action: Assign permissions to FinOps teams so they can access and analyze dedicated cost tables.

Enable Tags for Granular Tracking
Tagging is a powerful feature for tracking, reporting, and budgeting at a granular level.
- Classic Compute: Manually add key/value pairs when creating clusters, jobs, SQL warehouses, or Model Serving endpoints. Use cluster policies to enforce custom tags.
- Serverless Compute: Create budget policies and assign permissions to teams or members for serverless workloads.
- Action: Tag all compute resources to enable detailed cost attribution and reporting (an example tag-attribution query follows below).

Track Costs with Dashboards and Alerts
Databricks offers prebuilt dashboards and queries for cost forecasting and usage analysis.
- Dashboards: Visualize spend, usage trends, and forecast future costs.
- Prebuilt Queries: Use top queries with system tables to answer meaningful cost questions.
- Budget Alerts: Set up alerts in the Account Console (Usage > Budget) to receive notifications when spend approaches defined thresholds.
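Building on the tagging guidance above, this query attributes the last 30 days of DBU usage to teams via a custom tag on the compute resources. The tag key "team" is an assumption; replace it with whatever tag convention your cluster or budget policies enforce. Untagged usage is grouped separately so gaps in tagging coverage stay visible.

```python
# Minimal sketch: attribute DBU usage to a "team" custom tag over the last 30 days.
# Assumes a Databricks notebook where `spark` and `display` are predefined.
cost_by_team = spark.sql("""
    SELECT
        COALESCE(custom_tags['team'], 'untagged') AS team,
        sku_name,
        SUM(usage_quantity) AS dbus_consumed
    FROM system.billing.usage
    WHERE usage_date >= date_sub(current_date(), 30)
    GROUP BY COALESCE(custom_tags['team'], 'untagged'), sku_name
    ORDER BY dbus_consumed DESC
""")
display(cost_by_team)
```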
Build a Culture of Efficiency
To go beyond technical fixes and build a culture of efficiency, focus on the following strategic actions:
- Collaborate with Internal Engineers: Spend time with engineering teams to understand workload patterns and optimization opportunities.
- Peer Reviews and Code Audits: Conduct regular code review sessions and peer reviews to ensure best practices are followed for Spark jobs, data pipelines, and cluster configurations.
- Create Internal Best Practice Documentation: Develop clear guidelines for writing optimized code, managing data, and maintaining clusters. Make these resources easily accessible to all teams.
- Implement Observability Dashboards: Use Databricks’ built-in features to create dashboards that track spend, monitor resource utilization, and highlight anomalies.
- Set Alerts and Budgets: Configure alerts for long-running workloads and establish budgets using prebuilt Databricks capabilities to prevent cost overruns.

Azure Reservations and Azure Savings Plans
When optimizing Databricks costs on Azure, it’s important to understand the two main commitment-based savings options: Azure Reservations and Azure Savings Plans. Both can help you reduce compute costs, but they differ in flexibility and how savings are applied.
Which should you choose?
- Reservations are ideal if you have stable, predictable Databricks workloads and want maximum savings.
- Savings Plans are better if you expect your compute needs to change, or if you want a simpler, more flexible way to save across multiple services.
Pro tip: You can combine both options—use Reservations for your baseline, always-on Databricks clusters, and Savings Plans for bursty, variable, or new workloads.

Summary Table: Action Steps
It’s critical to monitor costs continuously and align your teams with established best practices, while scheduling regular code review sessions to ensure efficiency and consistency.

Area | Best Practice / Action
System Tables | Use for historical cost analysis and attribution
Tagging | Apply to all compute resources for granular tracking
Dashboards | Visualize spend, usage, and forecasts
Alerts | Set budget alerts for proactive cost management
Scripts/Queries | Build custom analysis tools for deep insights
Cluster/Data/Code Review & Align | Regularly review best practices, share findings, and align teams on optimization
Save on your Usage | Consider Azure Reservations and Azure Savings Plans
Transforming Text to Video: Harnessing the Power of Azure Open AI and Cognitive Services with Python

Introduction
In today's digital age, video content has become a powerful medium for communication and storytelling. Whether it's for marketing, education, or entertainment purposes, videos can captivate and engage audiences in ways that traditional text-based content often cannot. However, creating compelling videos from scratch can be a time-consuming and resource-intensive process.
Fortunately, with the advancements in artificial intelligence and the availability of cloud-based services like Azure Open AI and Cognitive Services, it is now possible to automate and streamline the process of converting text into videos. These cutting-edge technologies provide developers and content creators with powerful tools and APIs that leverage natural language processing and computer vision to transform plain text into visually appealing and professional-looking videos.
This document serves as a comprehensive guide and a starting point for developers who are eager to explore the exciting realm of Azure Open AI and Cognitive Services for text-to-video conversion. While this guide presents a basic implementation, its purpose is to inspire and motivate developers to delve deeper into the possibilities offered by these powerful technologies. Whether you are a developer looking to integrate text-to-video functionality into your applications or a content creator seeking to automate the video production process, this guide will provide you with the insights and resources you need to get started. So let's dive in and discover the exciting world of text-to-video conversion using Azure Open AI and Cognitive Services!

Prerequisite
The next sections show the architecture and its implementation in Python. If you are new to these technologies, don't worry; please go through these prerequisite links to get started:
- Azure Open AI: Get started with Azure OpenAI Service - Training | Microsoft Learn
- Azure Cognitive Services:
  - Key Phrase Extraction: What is key phrase extraction in Azure Cognitive Service for Language? - Azure Cognitive Services | Microsoft Learn
  - Speech to Text: Speech to text overview - Speech service - Azure Cognitive Services | Microsoft Learn
- Python coding: You will find multiple courses on the internet; you can refer to Learn Python - Free Interactive Python Tutorial

Architecture
Figure 1: Architecture
The following architecture outlines a generic flow for converting text content into video files. The steps are explained below:
- Initially, an application (implemented in Python, but applicable to any programming language) accepts textual content as input from the user.
- The application utilizes the Azure Open AI Python SDK to invoke the summarization functionality, which generates a summarized text. This summarization is stored in memory for further use.
- The summarized content serves as input for Azure Cognitive Services, specifically for generating key phrases and an audio file.
- The key phrases are extracted and stored in the application's memory for later use.
- Simultaneously, the audio file is stored and persisted on the compute server. Alternatively, it can be stored in any preferred persistent storage solution.
- The key phrases are then used as input for the Azure Open AI API, generating meaningful DALL·E prompts. These DALL·E prompts are stored in memory for subsequent utilization.
- The DALL·E prompts serve as input for another Azure Open AI API call, generating images that will be used in the final video.
- The generated images are stored on the compute server or any chosen persistent storage medium.
- To create the final video, a custom Python application is employed. This application combines the previously generated audio and images, resulting in the creation of the video file.
- The final video is initially stored on the compute server but can be subsequently pushed to any desired storage layer for further consumption.
By following this process, textual content can be effectively transformed into a video file, providing enhanced accessibility and visual representation of the original information.

Text Summarization through Azure Open AI
The code provided is an example of text summarization using OpenAI's GPT-3.5 language model. Here's a breakdown of the code.

Importing the necessary libraries and setting up the OpenAI API:

    import os
    import openai

    openai.api_type = "azure"
    openai.api_base = "https://<Your_Resource_Name>.openai.azure.com/"
    openai.api_version = "2022-12-01"
    openai.api_key = "<Your API Key>"

This section imports the required libraries and sets up the OpenAI API credentials. You would need to replace <Your_Resource_Name> with your actual resource name and <Your API Key> with your OpenAI API key.

Setting the number of sentences for the summary:

    num_of_sentences = 5

This line defines the number of sentences that the summary should consist of. You can change this value according to your requirements.

Obtaining user input:

    content = input("Please enter the content: ")

This line prompts the user to enter the content they want to summarize and stores it in the content variable.

Creating the prompt for summarization:

    prompt = 'Provide a summary of the text below that captures its main idea in ' + str(num_of_sentences) + ' sentences. \n' + content

This line constructs the prompt by combining the predefined sentence with the user's input content.

Generating the summary using OpenAI's Completion API:

    response_summ = openai.Completion.create(
        engine="text-davinci",
        prompt=prompt,
        temperature=0.3,
        max_tokens=250,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        best_of=1,
        stop=None)

This code sends a request to the OpenAI API for generating the summary. It uses the openai.Completion.create() method with the following parameters:
- engine: Specifies the language model (deployment) to use. Here, it uses the "text-davinci" model, which is a powerful and versatile model.
- prompt: The prompt for the model to generate a summary based on.
- temperature: Controls the randomness of the generated output. Lower values (e.g., 0.3) make the output more focused and deterministic.
- max_tokens: Specifies the maximum number of tokens the response can have. Tokens are chunks of text, and this value limits the length of the generated summary.
- top_p: Controls the diversity of the output. A higher value (e.g., 1) allows more diverse responses by considering a larger set of possibilities.
- frequency_penalty and presence_penalty: These parameters control the preference of the model for repeating or including certain phrases. Here, they are set to 0, indicating no preference.
- best_of: Specifies the number of independent tries the model will make and return the best result.
- stop: Specifies a string or list of strings at which to stop the generated summary.

Printing the generated summary:

    print(response_summ.choices[0].text)

This line prints the generated summary by accessing the text property of the first choice in the response. The summary will be displayed in the console.
Key Phrase Extraction using Azure Cognitive Service
The code provided demonstrates key phrase extraction using Microsoft Azure's Text Analytics service. Here's an explanation of the code.

Setting up the required credentials and endpoint:

    key = "<Your_cognitive_service_key>"
    endpoint = "https://<Your_cognitive_service>.cognitiveservices.azure.com/"

These lines define the cognitive service key and endpoint for the Text Analytics service. You need to replace <Your_cognitive_service_key> with your actual cognitive service key and <Your_cognitive_service> with the name of your cognitive service.

Importing the necessary libraries:

    from azure.ai.textanalytics import TextAnalyticsClient
    from azure.core.credentials import AzureKeyCredential

These lines import the required libraries from the Azure SDK.

Authenticating the client:

    def authenticate_client():
        ta_credential = AzureKeyCredential(key)
        text_analytics_client = TextAnalyticsClient(
            endpoint=endpoint,
            credential=ta_credential)
        return text_analytics_client

    client = authenticate_client()

This code defines the authenticate_client() function that creates an instance of the TextAnalyticsClient using the provided key and endpoint. The client variable stores the authenticated client.

Defining the key phrase extraction example:

    def key_phrase_extraction_example(client):
        try:
            phrase_list, phrases = [], ''
            documents = [response_summ.choices[0].text]
            response_kp = client.extract_key_phrases(documents=documents)[0]
            if not response_kp.is_error:
                print("\tKey Phrases:")
                for phrase in response_kp.key_phrases:
                    print("\t\t", phrase)
                    phrase_list.append(phrase)
                    phrases = phrases + "\n" + phrase
            else:
                print(response_kp.id, response_kp.error)
        except Exception as err:
            print("Encountered exception. {}".format(err))
        return phrase_list, phrases

This code defines the key_phrase_extraction_example() function that takes the authenticated client as input. It performs key phrase extraction on a given document (in this case, response_summ.choices[0].text) using the client.extract_key_phrases() method. The extracted phrases are stored in the phrase_list and phrases variables. If there is an error, it is printed.

Executing the key phrase extraction example:

    phrase_list, phrases = key_phrase_extraction_example(client)

This line calls the key_phrase_extraction_example() function with the authenticated client as an argument. The extracted key phrases are stored in the phrase_list and phrases variables, which can be used for further processing or display.
Overall, the code sets up the Azure Text Analytics client, authenticates it, and demonstrates key phrase extraction on a given text using the client.

Create DALL·E Prompts for Image Generation using Azure Open AI
The code provided focuses on generating images based on the extracted phrases. Here's an explanation of the code.

Creating a prompt for image generation:

    prompt = ''' Provide an image idea for each phrase: ''' + phrases

This line creates a prompt by concatenating the phrase list obtained from the key phrase extraction with a predefined text. The prompt serves as input for the image generation model.

Generating image ideas using OpenAI's text completion API:

    response_phrase = openai.Completion.create(
        engine="text-davinci",
        prompt=prompt,
        temperature=0.3,
        max_tokens=3000,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        best_of=1,
        stop=None)

This code uses OpenAI's text completion API to generate image ideas based on the provided prompt. The generated ideas are stored in response_phrase.choices[0].text.
Extracting image phrases from the generated response:

    image_phrases = response_phrase.choices[0].text.split("\n")[1:]

This code splits the generated response by newlines and stores the resulting lines (image phrases) in the image_phrases variable.

Processing image phrases:

    im_ph = []
    for image_phrase in image_phrases:
        #print(image_phrase)
        if(len(image_phrase) > 0):
            im_ph.append(image_phrase.split(":")[1])

This code processes each image phrase by splitting it based on the colon (":") character and appending the second part to the im_ph list. This step is done to extract the actual image idea from each phrase.

Setting up the necessary variables:

    import requests
    import time
    import os

    api_base = 'https://<Your_Resource_Name>.openai.azure.com/'
    api_key = "<Your_API_KEY>"
    api_version = '2022-08-03-preview'

    url = "{}dalle/text-to-image?api-version={}".format(api_base, api_version)
    headers = {
        "api-key": api_key,
        "Content-Type": "application/json"
    }

These lines define the API base URL, API key, API version, and the endpoint URL for the DALL-E model's text-to-image generation.

Generating images using DALL-E:

    images = []
    for phrase in im_ph:
        body = {
            "caption": phrase,
            "resolution": "1024x1024"
        }
        submission = requests.post(url, headers=headers, json=body)
        print(submission)
        operation_location = submission.headers['Operation-Location']
        retry_after = submission.headers['Retry-after']
        status = ""
        #while (status != "Succeeded"):
        time.sleep(int(retry_after))
        response = requests.get(operation_location, headers=headers)
        status = response.json()['status']
        print(status)
        if status == "Succeeded":
            image_url = response.json()['result']['contentUrl']
            images.append(image_url)

This code performs the image generation using the DALL-E model. It sends a POST request to the DALL-E text-to-image endpoint with each image phrase as the caption and the desired resolution. The API response contains the location of the operation and the estimated time to wait. The code then waits for the specified duration and retrieves the response using a GET request. If the status of the operation is "Succeeded," the generated image URL is extracted and added to the images list.

Downloading the generated images:

    import urllib.request

    counter = 0
    image_list = []
    for url in images:
        counter += 1
        filename = "file" + str(counter) + ".jpg"
        urllib.request.urlretrieve(url, filename)
        image_list.append(filename)
    print("Downloading done.....")

This code downloads the generated images by iterating over the list of image URLs. Each image is downloaded using urllib.request.urlretrieve and saved with a unique filename. The filenames are stored in the image_list list. This code integrates the DALL-E model with the provided phrases to generate corresponding images.

Create Audio File using Azure Speech Service
The code provided demonstrates how to create audio files for the text summarization output using the Azure Cognitive Services Speech SDK. Here's an explanation of the code.

Importing the package:

    import azure.cognitiveservices.speech as speechsdk

To use the Azure Cognitive Services Speech SDK, you need to import the speechsdk module from the azure.cognitiveservices.speech package.

Setting up the necessary variables:

    speech_key, service_region = "<Your speech key>", "<location>"

These variables represent your Azure Cognitive Services Speech API key and the region where your service is hosted.
Creating the SpeechConfig object:

    speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)

The speech_config object is created using the Speech API key and service region. It provides the necessary configuration for the speech synthesizer.

Defining the text_to_speech function:

    def text_to_speech(text, filename):
        audio_config = speechsdk.AudioConfig(filename=filename)
        speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=audio_config)
        result = speech_synthesizer.speak_text_async(text).get()
        if result.reason == speechsdk.ResultReason.SynthesizingAudioCompleted:
            print(f"Audio saved to {filename}")
        else:
            print(f"Error: {result.error_details}")

This function takes the text and filename as input. It creates an audio_config object with the specified filename to store the synthesized audio. Then, a speech_synthesizer object is created using the provided speech configuration and audio configuration. The speak_text_async method is called to synthesize the input text into audio. The result is then checked, and if the audio synthesis is completed successfully, it prints a success message along with the filename. Otherwise, it prints an error message with the details of the error.

Generating audio for the text summarization output:

    text = response_summ.choices[0].text
    filename = "audio.mp4"
    text_to_speech(text, filename)

This code retrieves the text from the response_summ object, which contains the summarized text. It then specifies the filename for the audio file. The text_to_speech function is called with the text and filename to generate the audio file.
In summary, this code uses the Azure Cognitive Services Speech SDK to convert the summarized text into audio by utilizing the speech synthesis capabilities provided by the Azure Speech service.

Stitch the Audio File and the Images to Create the Video
The code provided is for creating a video by combining a sequence of images with an audio file. Here's a breakdown of the code:

    # Stitch the audio file and the images together
    from moviepy.editor import *

    print("Creating the video.....")

    def create_video(images, audio, output):
        clips = [ImageClip(m).resize(height=1024).set_duration(2) for m in images]
        concat_clip = concatenate_videoclips(clips, method="compose")
        audio_clip = AudioFileClip(audio)
        final_clip = concat_clip.set_audio(audio_clip)
        final_clip.write_videofile(output, fps=24)

    images = image_list
    audio = filename
    output = "video.mp4"
    create_video(images, audio, output)
    print("Video created.....")

The from moviepy.editor import * statement imports the necessary functions and classes from the MoviePy library, which is used for video editing and manipulation. The create_video function is defined to generate the final video. It takes three parameters: images, audio, and output.
Inside the create_video function, a list comprehension is used to create a sequence of video clips (clips) from the provided images list. Each image is converted to a video clip using ImageClip(m), where m is the path to the image file. The resize function is used to set the height of each clip to 1024 pixels, and set_duration sets the duration of each clip to 2 seconds.
The concatenate_videoclips function is used to concatenate the video clips in clips into a single clip (concat_clip). The method="compose" argument specifies that the clips should be composited together. The AudioFileClip class is used to load the audio file (audio) and create an audio clip (audio_clip).
The audio clip is then set on the concatenated video clip using set_audio, creating the final clip (final_clip). The write_videofile function is called on final_clip to save the video to the specified output file (output). The fps=24 argument sets the frame rate of the video to 24 frames per second.
The images, audio, and output variables are assigned the appropriate values (image_list, filename, and "video.mp4", respectively). The create_video function is called with the provided arguments to generate the video. Finally, a message is printed indicating that the video creation process is complete.
Note: This code assumes that you have the MoviePy library installed. If not, you can install it using pip install moviepy.

Conclusion
In conclusion, the code provided offers a good starting point for a text summarization and multimedia generation solution. It combines various technologies and APIs to perform text summarization, key phrase extraction, image generation, audio synthesis, and video creation.
The text summarization process involves inputting text content and using OpenAI's language model to generate a summary capturing the main idea. The summary is then used for key phrase extraction using Azure Cognitive Services. These key phrases serve as prompts for generating image ideas using OpenAI's image generation capabilities.
Once the image phrases are obtained, they are used to request images from the DALL-E model. The images are downloaded and stored locally for further use. Additionally, the summarized text is converted into audio using Azure Cognitive Services' text-to-speech functionality, and the audio file is saved.
Finally, the images and audio are stitched together using the MoviePy library to create a video. The images are resized, and a composited video clip is generated by concatenating the image clips. The audio file is added to the video clip, resulting in the final video.
It's important to note that this solution is not perfect and may require further customization and fine-tuning based on specific requirements. Additionally, it relies on external services and APIs, which may have limitations or dependencies. However, it provides a solid foundation for implementing a text summarization and multimedia generation pipeline.
By leveraging the power of natural language processing, image generation, audio synthesis, and video editing, this solution demonstrates the potential to automate the creation of engaging multimedia content from text. Further enhancements and integrations can be explored to improve the accuracy and quality of the generated summaries, images, audio, and videos.
Azure Stream Analytics releases slew of improvements at Ignite 2022: Output to Delta Lake and more!
Today we are excited to announce numerous new capabilities that unlock new stream processing patterns that work with your modern lakehouses. We are announcing native support for Delta Lake output, general availability of the no-code editor, an improved development and troubleshooting experience, and much more!