# Rethinking Background Workloads with Azure Functions on Azure Container Apps
## Objective

Azure Container Apps provides a flexible platform for running background workloads, supporting multiple execution models to address different workload needs. Two commonly used models are:

- **Azure Functions on Azure Container Apps** – overview of Azure Functions
- **Azure Container Apps Jobs** – overview of Container Apps Jobs

Both are first-class capabilities on the same platform and are designed for different types of background processing. This blog explores:

- Use cases where Azure Functions on Azure Container Apps is best suited
- Use cases where Container Apps Jobs provide advantages

## Use Cases Where Azure Functions on Azure Container Apps Is Best Suited

Azure Functions on Azure Container Apps is particularly well suited to event-driven and workflow-oriented background workloads, where work is initiated by external signals and coordination is a core concern. The following use cases illustrate scenarios where the Functions programming model aligns naturally with the workload, allowing teams to focus on business logic while the platform handles triggering, scaling, and coordination.

### Event-Driven Data Ingestion Pipelines

A fit for ingestion pipelines where data arrives asynchronously and unpredictably.

Example: A retail company processes inventory updates from hundreds of suppliers. Files land in Blob Storage overnight, varying widely in size and arrival time. In this scenario:

- Each file is processed independently as it arrives
- Execution is driven by actual data arrival, not schedules
- Parallelism and retries are handled by the platform

```python
@app.blob_trigger(arg_name="blob", path="inventory-uploads/{name}",
                  connection="StorageConnection")
async def process_inventory(blob: func.InputStream):
    data = blob.read()
    # Transform and load to database
    await transform_and_load(data, blob.name)
```

### Multi-Step, Event-Driven Processing Workflows

Functions work well for workloads that involve multiple dependent steps, where each step can fail independently and must be retried or resumed safely.
Example: An order processing workflow that includes validation, inventory checks, payment capture, and fulfillment notifications. Using Durable Functions:

- Workflow state is persisted automatically
- Each step can be retried independently
- Execution resumes from the point of failure rather than restarting

Durable Functions on Container Apps solves this declaratively:

```python
@app.orchestration_trigger(context_name="context")
def order_workflow(context: df.DurableOrchestrationContext):
    order = context.get_input()
    # Each step is independently retryable with built-in checkpointing
    validated = yield context.call_activity("validate_order", order)
    inventory = yield context.call_activity("check_inventory", validated)
    payment = yield context.call_activity("capture_payment", inventory)
    yield context.call_activity("notify_fulfillment", payment)
    return {"status": "completed", "order_id": order["id"]}
```

### Scheduled, Recurring Background Tasks

A fit for time-based background work that runs on a predictable cadence and is closely tied to application logic.

Example: Daily financial summaries, weekly aggregations, or month-end reconciliation reports. Timer-triggered Functions allow:

- Schedules to be defined in code
- Logic to be versioned alongside application code
- Execution to run in the same Container Apps environment as other services

```python
@app.timer_trigger(schedule="0 0 6 * * *", arg_name="timer")
async def daily_financial_summary(timer: func.TimerRequest):
    if timer.past_due:
        logging.warning("Timer is running late!")
    await generate_summary(date.today() - timedelta(days=1))
    await send_to_stakeholders()
```

### Long-Running, Parallelizable Workloads

A fit for long-running workloads that can be decomposed into smaller units of work and coordinated as a workflow.

Example: A large data migration processing millions of records.
With Durable Functions:

- Work is split into independent batches
- Batches execute in parallel across multiple instances
- Progress is checkpointed automatically
- Failures are isolated to individual batches

```python
@app.orchestration_trigger(context_name="context")
def migration_orchestrator(context: df.DurableOrchestrationContext):
    batches = yield context.call_activity("get_migration_batches")
    # Process all batches in parallel across multiple instances
    tasks = [context.call_activity("migrate_batch", b) for b in batches]
    results = yield context.task_all(tasks)
    yield context.call_activity("generate_report", results)
```

## Use Cases Where Container Apps Jobs Are a Best Fit

Azure Container Apps Jobs are well suited to workloads that require explicit execution control or full ownership of the runtime and lifecycle. Common examples include:

### Batch Processing Using Existing Container Images

Teams often have existing containerized batch workloads such as data processors, ETL tools, or analytics jobs that are already packaged and validated. When refactoring these workloads into the Functions programming model is not desirable, Container Apps Jobs allow them to run unchanged while integrating into the Container Apps environment.

### Large-Scale Data Migrations and One-Time Operations

Jobs are a natural fit for one-time or infrequently run migrations, such as schema upgrades, backfills, or bulk data transformations. These workloads are typically:

- Explicitly triggered
- Closely monitored
- Designed to run to completion under controlled conditions

The ability to manage execution, retries, and shutdown behavior directly is often important in these scenarios.

### Custom Runtime or Specialized Dependency Workloads

Some workloads rely on:

- Specialized runtimes
- Native system libraries
- Third-party tools or binaries

When these requirements fall outside the supported Functions runtimes, Container Apps Jobs provide the flexibility to define the runtime environment exactly as needed.
### Externally Orchestrated or Manually Triggered Workloads

In some architectures, execution is coordinated by an external system such as:

- A CI/CD pipeline
- An operations workflow
- A custom scheduler or control plane

Container Apps Jobs integrate well into these models, where execution is initiated explicitly rather than driven by platform-managed triggers.

### Long-Running, Single-Instance Processing

For workloads that are intentionally designed to run as a single execution unit, without fan-out, trigger-based scaling, or workflow orchestration, Jobs provide a straightforward execution model. This includes tasks where parallelism, retries, and state handling are implemented directly within the application.

## Making the Choice

| Consideration | Azure Functions on Azure Container Apps | Azure Container Apps Jobs |
|---|---|---|
| Trigger model | Event-driven (files, messages, timers, HTTP, events) | Explicit execution (manual, scheduled, or externally triggered) |
| Scaling behavior | Automatic scaling based on trigger volume / queue depth | Fixed or explicitly defined parallelism |
| Programming model | Functions programming model with triggers, bindings, Durable Functions | General container execution model |
| State management | Built-in state, retries, and checkpointing via Durable Functions | Custom state management required |
| Workflow orchestration | Native support using Durable Functions | Must be implemented manually |
| Boilerplate required | Minimal (no polling, retry, or coordination code) | Higher (polling, retries, lifecycle handling) |
| Runtime flexibility | Limited to supported Functions runtimes | Full control over runtime and dependencies |

## Getting Started with Functions on Azure Container Apps

If you're already running on Container Apps, adding Functions is straightforward: your Functions run alongside your existing apps, sharing the same networking, observability, and scaling infrastructure.
Check out the documentation for details: Getting Started on Functions on Azure Container Apps.

```shell
# Create a Functions app in your existing Container Apps environment
az functionapp create \
  --name my-batch-processor \
  --resource-group my-resource-group \
  --storage-account mystorageaccount \
  --environment my-container-apps-env \
  --workload-profile-name "Consumption" \
  --runtime python \
  --functions-version 4
```

## Getting Started with Container Apps Jobs

If you already have an Azure Container Apps environment, you can create a job using the Azure CLI. Check out the documentation for details: Jobs in Azure Container Apps.

```shell
az containerapp job create \
  --name my-job \
  --resource-group my-resource-group \
  --environment my-container-apps-env \
  --trigger-type Manual \
  --image mcr.microsoft.com/k8se/quickstart-jobs:latest \
  --cpu 0.25 \
  --memory 0.5Gi
```

## Quick Links

- Azure Functions on Azure Container Apps overview
- Create your Azure Functions app through custom containers on Azure Container Apps
- Run event-driven and batch workloads with Azure Functions on Azure Container Apps

# Best Practice: Using Self-Signed Certificates with Java on Azure Functions (Linux)
If you are developing Java applications on Azure Functions (Linux dedicated plan) and need to connect to services secured by self-signed certificates, you have likely encountered the dreaded SSL handshake error:

```
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
```

By default, the Java Virtual Machine (JVM) only trusts certificates signed by well-known Certificate Authorities (CAs). To fix this, you need to tell your Java Function App to trust your specific self-signed certificate. While there are several ways to achieve this, this guide outlines the best practice: manually adding the certificate to a custom Java keystore located in persistent storage.

## Why This Approach?

In Azure App Service and Azure Functions (Linux), the file system is generally ephemeral, meaning changes to system folders (like /usr/lib/jvm) are lost upon restart. However, the /home directory is persistent. By creating a custom truststore in /home and pointing the JVM to it, your configuration remains intact across restarts, scaling operations, and platform updates.

## Step-by-Step Solution

### 1. Prepare the Custom Keystore

First, create a new base keystore by copying the default system cacerts (which contains the standard public CAs) to persistent storage.

Connect to your Function App via SSH using the Kudu site (https://<your-app-name>.scm.azurewebsites.net/webssh/host), then run the following command to copy the truststore. (Note: the source path may vary depending on your Java version. You can confirm your exact JVM path by running `echo $JAVA_HOME` in the console. For example, if it returns /usr/lib/jvm/msft-17-x64, use that path below.)

```shell
cp /usr/lib/jvm/msft-17-x64/lib/security/cacerts /home/site/wwwroot/my-truststore.jks
```

### 2. Import the Self-Signed Certificate

Upload your root certificate (e.g., self-signed.badssl.com.cer) to the site (you can use drag-and-drop in Kudu or FTP).
Then import it into your new custom keystore. Run the following command (ensure keytool is in your PATH, or navigate to the JVM's bin folder):

```shell
./keytool -import -alias my-self-signed-cert \
  -file /home/self-signed.badssl.com.cer \
  -keystore /home/site/wwwroot/my-truststore.jks \
  -storepass changeit -noprompt
```

### 3. Verify the Import

It is always good practice to verify that the certificate was actually added:

```shell
./keytool -list -v \
  -keystore /home/site/wwwroot/my-truststore.jks \
  -storepass changeit -alias my-self-signed-cert
```

If successful, you will see the certificate details printed in the console.

### 4. Configure the Application Setting

Finally, tell the JVM to use the new truststore instead of the default system one. Go to the Azure Portal > Configuration > Application Settings and add (or update) the JAVA_OPTS setting:

- Name: `JAVA_OPTS`
- Value: `-Djavax.net.ssl.trustStore=/home/site/wwwroot/my-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit`

Save the settings. This restarts your Function App, and the JVM will load your custom truststore at startup.

## Important Considerations

### File Location & Deployment

In the example above, we placed the keystore in /home/site/wwwroot/. Warning: depending on your deployment method (e.g., specific ZipDeploy configurations or "Run From Package"), the contents of /wwwroot might be wiped or overwritten during a new code deployment. If you are concerned about your deployment process overwriting the .jks file, save it in any other folder under /home, for example /home/my-certs/, and update the JAVA_OPTS path accordingly.

### Maintenance

This is a manual solution. If your self-signed certificate expires, you do not need to recreate the whole keystore: simply run the `./keytool -import` command again to update the certificate in the existing .jks file. Maintaining the validity of the self-signed certificate is your responsibility.

### Azure Key Vault Note

You might wonder, "Can I use Azure Key Vault?"
Azure Key Vault is excellent for private keys, but it generally supports importing .pfx or .pem formats for privately signed certificates. Since public .cer certificates are not secrets (they are public, after all), the method above is often the most direct way to handle them for Java trust validation.

## Alternative Workarounds

If you prefer not to manage a custom keystore file in the persistent /home directory, here are two alternative approaches. Both require modifying your application code.

### 1. Load the Azure-Managed Certificate via Code

You can upload your .cer public certificate directly to the TLS/SSL settings (Public Key Certificates) blade in the Azure Portal. After uploading, you must add the Application Setting WEBSITE_LOAD_CERTIFICATES with the value * (or the specific certificate thumbprint). Azure acts as the OS loader and places the certificate file at /var/ssl/certs/<thumbprint>.der.

Important distinction — App Service vs. Function App. There is a difference in how the "Blessed Images" (the default platform images) handle these certificates at startup:

- Azure App Service (Linux): In many scenarios, the platform's startup scripts automatically import these certificates into the JVM keystore.
- Azure Functions (Linux): The Function App runtime does not automatically import these certificates into the JVM keystore during startup.

If you SSH into the Function App and run openssl or curl, the connection might succeed because those OS-level tools check the /var/ssl/certs folder. However, your Java application will still throw the handshake error shown above, because the JVM only looks at its own cacerts truststore, which is effectively empty of your custom certs. Since the certificate is present on disk, you must write Java code to explicitly load this specific file into an SSLContext.

Reference: Use TLS/SSL Certificates in App Code - Azure App Service | Microsoft Learn
### 2. Build the JKS Locally and Load It via Code

Instead of creating the keystore on the server (the "best practice" method above), you can create my-truststore.jks on your local developer machine, include it inside your application (e.g., in src/main/resources), and deploy it as part of your JAR/WAR. You then write code to load this JKS file from the classpath or file system to initialize your SSL connection.

Reference: Configure Security for Tomcat, JBoss, or Java SE Apps - Azure App Service | Microsoft Learn

# Building MCP Apps with Azure Functions MCP Extension
Today, we are thrilled to announce the release of MCP App support in the Azure Functions MCP (Model Context Protocol) extension! You can now build MCP Apps using the Functions MCP extension in Python, TypeScript, and .NET.

## What Are MCP Apps

Until now, MCP has primarily been a way for AI agents to "talk" to data and tools. A tool would take an input, perform a task, and return a text response. While powerful, text has limits. For example, it's easier to see a chart than to read a long list of data points. It's also more convenient and accurate to provide complex inputs via a form than through a series of text responses.

MCP Apps address these limits by allowing MCP servers to return interactive HTML interfaces that render directly in the conversation. The following scenarios show how the UI capabilities of MCP Apps improve the user experience of MCP tools in ways that text can't:

- Data exploration: A sales analytics tool returns an interactive dashboard. Users filter by region, drill down into specific accounts, and export reports without leaving the conversation.
- Configuration wizards: A deployment tool presents a form with dependent fields. Selecting "production" reveals additional security options; selecting "staging" shows different defaults.
- Real-time monitoring: A server health tool shows live metrics that update as systems change. No need to re-run the tool to see current status.

## Building MCP Apps with Azure Functions MCP Extension

Azure Functions is an ideal platform for hosting remote MCP servers because of its built-in authentication, event-driven scaling from 0 to N, and serverless billing. This ensures your agentic tools are secure, cost-effective, and ready to handle any load.

## How It Works: Connecting Tools to Resources

Building an MCP App involves two main components:

- Tools: Executable functions that allow an LLM to interact with external systems (e.g., querying a database or sending an email).
- Resources: Read-only data entities (e.g., log files, API docs, or database schemas) that provide the LLM with information without triggering side effects.

You connect the tools to resources via the tools' metadata.

### 1. The Tool with UI Metadata

The following snippet defines an MCP tool called GetWeather using the McpToolTrigger, with associated metadata attached via McpMetadata. The metadata declares that the tool has an associated UI, telling AI clients that when this tool is invoked, there's a specific visual component available to display the results.

Example (Python):

```python
TOOL_METADATA = '{"ui": {"resourceUri": "ui://weather/index.html"}}'

@app.mcp_tool(metadata=TOOL_METADATA)
@app.mcp_tool_property(arg_name="location",
                       description="City name to check weather for (e.g., Seattle, New York, Miami)")
def get_weather(location: str) -> str:
    result = weather_service.get_current_weather(location)
    return json.dumps(result)
```

Example (C#):

```csharp
private const string ToolMetadata = """
    {
      "ui": { "resourceUri": "ui://weather/index.html" }
    }
    """;

[Function(nameof(GetWeather))]
public async Task<object> GetWeather(
    [McpToolTrigger(nameof(GetWeather), "Returns current weather for a location via Open-Meteo.")]
    [McpMetadata(ToolMetadata)]
    ToolInvocationContext context,
    [McpToolProperty("location", "City name to check weather for (e.g., Seattle, New York, Miami)")]
    string location)
{
    var result = await _weatherService.GetCurrentWeatherAsync(location);
    return result;
}
```

### 2. The Resource Serving the UI

The following snippet defines an MCP resource called GetWeatherWidget, which serves the bundled HTML at the matching URI. The MimeType is set to text/html;profile=mcp-app. Note that the resource URI (ui://weather/index.html) matches the one specified in the tool metadata above.
Example (Python):

```python
RESOURCE_METADATA = '{"ui": {"prefersBorder": true}}'
WEATHER_WIDGET_URI = "ui://weather/index.html"
WEATHER_WIDGET_NAME = "Weather Widget"
WEATHER_WIDGET_DESCRIPTION = "Interactive weather display for MCP Apps"
WEATHER_WIDGET_MIME_TYPE = "text/html;profile=mcp-app"

@app.mcp_resource_trigger(
    arg_name="context",
    uri=WEATHER_WIDGET_URI,
    resource_name=WEATHER_WIDGET_NAME,
    description=WEATHER_WIDGET_DESCRIPTION,
    mime_type=WEATHER_WIDGET_MIME_TYPE,
    metadata=RESOURCE_METADATA
)
def get_weather_widget(context) -> str:
    # Get the path to the widget HTML file
    current_dir = Path(__file__).parent
    file_path = current_dir / "app" / "dist" / "index.html"
    return file_path.read_text(encoding="utf-8")
```

Example (C#):

```csharp
// Optional UI metadata
private const string ResourceMetadata = """
    {
      "ui": { "prefersBorder": true }
    }
    """;

[Function(nameof(GetWeatherWidget))]
public string GetWeatherWidget(
    [McpResourceTrigger(
        "ui://weather/index.html",
        "Weather Widget",
        MimeType = "text/html;profile=mcp-app",
        Description = "Interactive weather display for MCP Apps")]
    [McpMetadata(ResourceMetadata)]
    ResourceInvocationContext context)
{
    var file = Path.Combine(AppContext.BaseDirectory, "app", "dist", "index.html");
    return File.ReadAllText(file);
}
```

See the quickstarts in the Getting Started section for full sample code.

### 3. Putting It All Together

1. User asks: "What's the weather in Seattle?"
2. The agent calls the GetWeather tool.
3. The tool returns weather data (as a normal tool result).
4. The tool also includes ui.resourceUri metadata (ui://weather/index.html) telling the client an interactive UI is available.
5. The client fetches the UI resource from ui://weather/index.html and loads it in a sandboxed iframe.
6. The client passes the tool result to the UI app.
7. The user sees an interactive weather widget instead of plain text.

## Get Started

You can start building today using our samples.
Each sample demonstrates how to define tools that trigger interactive UI components:

- Python quickstart
- TypeScript quickstart
- .NET quickstart

## Documentation

- Learn more about the Azure Functions MCP extension.
- Learn more about MCP Apps.

## Next Step: Authentication

The samples above secure the MCP Apps using access keys. Learn how to secure the apps using Microsoft Entra and the built-in MCP auth feature.
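Under the hood, the tool-to-resource link is a plain URI match: the client reads `ui.resourceUri` from the tool's metadata and looks up a registered resource with that exact URI and the MCP App MIME type. The sketch below makes that contract explicit; it is illustrative only — `ui_resource_for_tool`, `tool_ui_is_resolvable`, and the registry dict are hypothetical helpers, not part of the extension's API:

```python
import json

# Hypothetical registry mirroring the weather sample: the tool metadata
# advertises a UI resource URI, and a resource with that exact URI must
# be registered with the MCP App MIME type.
TOOL_METADATA = '{"ui": {"resourceUri": "ui://weather/index.html"}}'
REGISTERED_RESOURCES = {"ui://weather/index.html": "text/html;profile=mcp-app"}

def ui_resource_for_tool(tool_metadata):
    """Return the UI resource URI a tool advertises, if any."""
    meta = json.loads(tool_metadata)
    return meta.get("ui", {}).get("resourceUri")

def tool_ui_is_resolvable(tool_metadata, resources):
    """The client can only render the widget when the advertised URI
    resolves to a registered resource served as HTML."""
    uri = ui_resource_for_tool(tool_metadata)
    return uri is not None and resources.get(uri, "").startswith("text/html")

print(tool_ui_is_resolvable(TOOL_METADATA, REGISTERED_RESOURCES))  # → True
```

If the URIs drift apart (for example, the tool metadata says ui://weather/widget.html but the resource trigger registers ui://weather/index.html), the tool still returns data but the client silently falls back to plain text — which is why keeping the URI in one shared constant, as the Python sample does, is a good habit.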
# How to Troubleshoot Azure Functions Not Visible in Azure Portal

## Overview

Azure Functions is a powerful serverless compute service that enables you to run event-driven code without managing infrastructure. When you deploy functions to Azure, you expect to see them listed in the Azure Portal under your Function App. However, there are situations where your functions do not appear in the portal even though they were successfully deployed. This can be frustrating, especially when your functions are running and processing requests correctly but are invisible in the portal UI.

In this blog, we explore the common causes of functions not appearing in the Azure Portal and provide step-by-step solutions to troubleshoot and resolve the issue.

## Understanding How Functions Appear in the Portal

Before diving into troubleshooting, it's important to understand how the Azure Portal discovers and displays your functions.

### Function Visibility Process

When you open a Function App in the Azure Portal, the following process occurs:

1. Host status check: The portal queries your Function App's host status endpoint (/admin/host/status)
2. Function enumeration: The portal requests a list of functions from the Functions runtime
3. Metadata retrieval: For each function, the portal retrieves metadata including trigger type, bindings, and configuration
4. UI rendering: The portal displays the functions in the Functions blade

If any step in this process fails, your functions may not appear in the portal.
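The four-step flow above can be sketched as a short pipeline. This is an illustrative model, not the portal's actual code; `discover_functions` and its arguments are hypothetical, but it shows why a failure at any stage leaves the function list empty:

```python
def discover_functions(host_state, enumerated, metadata):
    """Sketch of the portal's discovery flow: each stage must succeed
    before the next runs, so any failing step hides the functions."""
    if host_state != "Running":          # 1. host status check fails → nothing shown
        return []
    visible = []
    for name in enumerated:              # 2. function enumeration
        if name in metadata:             # 3. metadata retrieval per function
            visible.append(name)
    return visible                       # 4. names the Functions blade renders

# A healthy host with complete metadata shows the function...
print(discover_functions("Running", ["HttpTrigger1"],
                         {"HttpTrigger1": {"trigger": "http"}}))  # → ['HttpTrigger1']
# ...while a failed host hides everything, even if the code deployed fine.
print(discover_functions("Error", ["HttpTrigger1"],
                         {"HttpTrigger1": {"trigger": "http"}}))  # → []
```

Notice that a function can also disappear individually (step 3) while its siblings remain visible — the pattern behind cause #3 below, where one malformed function.json hides only that function.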
### Key Files for Function Discovery

| File | Purpose | Location |
|---|---|---|
| host.json | Host configuration | Root of function app |
| function.json | Function metadata (script languages) | Each function folder |
| *.dll or compiled code | Function implementation | bin folder or function folder |
| extensions.json | Extension bindings | bin folder |

### Visibility Issue Categories

| Category | Common Causes |
|---|---|
| Deployment | Failed deployment, missing files, package issues |
| Function configuration | Invalid function.json, binding errors, disabled functions |
| Host/runtime | Host startup failure, runtime errors, worker issues |
| Storage | AzureWebJobsStorage issues, connectivity |
| Portal/sync | Sync triggers failure, cache issues, ARM API |
| Networking | VNET, private endpoints, firewall blocking |

## Common Causes and Solutions

### 1. Function App Host Is Not Running

Symptoms:

- No functions visible in the portal
- "Function host is not running" error message
- Host status shows "Error" or no response

Why this happens: The Functions host must be running for the portal to discover functions. If the host fails to start, functions won't be visible.

How to verify: Check the host status using the following URL:

```
https://<your-function-app>.azurewebsites.net/admin/host/status?code=<master-key>
```

Expected healthy response:

```json
{ "state": "Running" }
```

Solution:

1. Navigate to your Function App in the Azure Portal
2. Go to Diagnose and solve problems
3. Search for "Function App Down or reporting" or "Function app startup issue"
4. Review the diagnostics for startup errors

Common fixes:

- Check Application Settings for missing or incorrect values
- Verify the AzureWebJobsStorage connection string is valid
- Ensure FUNCTIONS_EXTENSION_VERSION is set correctly (e.g., ~4)
- Check for missing extension bundles in host.json
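Interpreting the status endpoint boils down to parsing its JSON body and checking the state field. The helper below is a hypothetical sketch for illustration (the endpoint and the "Running" state come from the docs above; `host_is_healthy` is not an Azure API):

```python
import json

def host_is_healthy(status_body):
    """Interpret a /admin/host/status response body: only a JSON payload
    with state == "Running" means the portal can enumerate functions."""
    try:
        return json.loads(status_body).get("state") == "Running"
    except json.JSONDecodeError:
        # Empty or non-JSON response usually means the host failed to start
        return False

print(host_is_healthy('{"state": "Running"}'))  # → True
print(host_is_healthy('{"state": "Error"}'))    # → False
print(host_is_healthy(''))                      # → False
```

An empty or HTML error response is just as telling as `"state": "Error"` — both mean the host never came up, so start with the host-level fixes above before investigating individual functions.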
### 2. Deployment Issues

Symptoms:

- Functions visible locally but not in the portal after deployment
- Only some functions appear
- Old versions of functions showing

Why this happens: Deployment problems can result in incomplete or corrupted deployments where function files are missing or incorrectly placed.

How to verify: The verification method depends on your hosting plan.

For Windows plans (Consumption, Premium, Dedicated), use Kudu:

1. Navigate to Development Tools → Advanced Tools (Kudu)
2. Go to Debug console → CMD or PowerShell
3. Navigate to site/wwwroot
4. Verify your function folders and files exist

Kudu is not available for Linux Consumption and Flex Consumption plans; use alternatives such as SSH or the Azure CLI instead (see Deployment technologies in Azure Functions).

For compiled languages (C#, F#), verify:

```
site/wwwroot/
├── host.json
├── bin/
│   ├── <YourAssembly>.dll
│   └── extensions.json
└── function1/
    └── function.json
```

Similarly, for script languages (JavaScript, Python, PowerShell), verify the corresponding folder structure for your language (Python, PowerShell, and Node.js layouts differ).

Solution:

1. Redeploy your function app using your preferred method: Visual Studio / VS Code, Azure CLI, GitHub Actions / Azure DevOps, or Kudu ZIP deploy
2. Clear the deployment cache
3. Restart your function app through the Portal or CLI/PowerShell:

```shell
# Using Azure CLI
az functionapp restart --name <app-name> --resource-group <resource-group>
```

### 3. Invalid or Missing function.json

Symptoms:

- Specific functions not appearing
- Some functions visible, others missing
- Function appears but shows the wrong trigger type

Why this happens: Each function requires a valid function.json file (generated at build time for compiled languages, or manually created for script languages). If this file is missing, malformed, or contains errors, the function won't be discovered.
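Because a malformed function.json silently hides a function, it can help to lint the file before deploying. The validator below is an illustrative sketch (a hypothetical helper, not a tool shipped with Azure Functions) covering the most common binding mistakes:

```python
import json

VALID_DIRECTIONS = {"in", "out", "inout"}

def validate_function_json(text):
    """Return a list of problems found in a function.json document;
    an empty list means the common checks passed."""
    try:
        doc = json.loads(text)
    except json.JSONDecodeError as e:
        return [f"syntax error: {e}"]
    errors = []
    for i, binding in enumerate(doc.get("bindings", [])):
        if "type" not in binding:
            errors.append(f"binding {i}: missing 'type'")
        if binding.get("direction") not in VALID_DIRECTIONS:
            errors.append(f"binding {i}: direction must be 'in' or 'out'")
    return errors

good = '{"bindings": [{"type": "httpTrigger", "direction": "in", "name": "req"}]}'
bad = '{"bindings": [{"name": "req", "direction": "input"}]}'
print(validate_function_json(good))       # → []
print(len(validate_function_json(bad)))   # → 2
```

Running a check like this in CI catches the "missing type" and "invalid direction" errors from the table below before they ever reach the portal.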
Example of a valid function.json for an HTTP trigger:

```json
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```

Common function.json errors:

| Error | Example | Fix |
|---|---|---|
| Missing type | {"bindings": [{"name": "req"}]} | Add "type": "httpTrigger" |
| Invalid direction | "direction": "input" | Use "in" or "out" |
| Syntax error | Missing comma or bracket | Validate JSON syntax |
| Wrong binding name | Mismatched parameter names | Match code parameter names |

Solution:

1. Check the function folder in Kudu for function.json
2. Validate the JSON syntax using a JSON validator
3. For compiled functions, ensure your project builds successfully
4. Check build output for warnings about function metadata

### 4. V2 Programming Model Issues (Python/Node.js)

Symptoms:

- Using the Python v2 or Node.js v4 programming model
- Functions defined in code but not visible in the portal
- No function.json files in function folders

Why this happens: The v2 programming model for Python and the v4 model for Node.js define functions using decorators/code instead of function.json files. The portal requires the host to be running to discover these functions dynamically.

Python v2 example:

```python
import azure.functions as func
import logging

app = func.FunctionApp()

@app.function_name(name="HttpTrigger1")
@app.route(route="hello", auth_level=func.AuthLevel.FUNCTION)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    return func.HttpResponse("Hello!")
```

Node.js v4 example:

```javascript
const { app } = require('@azure/functions');

app.http('HttpTrigger1', {
    methods: ['GET', 'POST'],
    authLevel: 'function',
    handler: async (request, context) => {
        context.log('HTTP trigger function processed a request.');
        return { body: 'Hello!' };
    }
});
```

Solution:

1. Verify the host is running (see Solution #1)
2. Check your entry point configuration
3. Check Application Insights for host startup errors related to function registration
4. Check the folder structure (see the Python folder structure guidance)

### 5. Extension Bundle or Dependencies Missing

Symptoms:

- Functions not appearing after adding new trigger types
- Host fails to start with extension-related errors
- Works locally but not in Azure

Why this happens: Azure Functions uses extension bundles to provide trigger and binding implementations. If the bundle is missing or incorrectly configured, functions using those triggers won't work.

How to verify: Check your host.json for the extension bundle configuration:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
```

Solution:

1. Ensure the extension bundle is configured in host.json
2. Use a compatible version range (for Functions v4: [4.*, 5.0.0))
3. For compiled C# apps using explicit extensions, ensure all NuGet packages are installed
4. Check for extension installation errors and fix them

### 6. Sync Trigger Issues

Symptoms:

- Functions deployed successfully
- Host is running
- Portal still shows no functions or an outdated function list

Why this happens: The Azure Portal caches function metadata. Sometimes this cache becomes stale, or the sync process between the function host and the portal fails.

Solution:

Force a sync from the portal:

1. Navigate to your Function App
2. Click the Refresh button in the Functions blade
3. If that doesn't work, go to Overview → Restart

Or trigger a sync via the REST API:

```shell
az rest --method post --url https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Web/sites/<APP_NAME>/syncfunctiontriggers?api-version=2016-08-01
```
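The ARM endpoint in that az rest command follows a fixed pattern, so it can be assembled from your own identifiers. The helper below is a small illustrative sketch (`sync_triggers_url` is hypothetical; the URL shape and api-version come from the command above):

```python
def sync_triggers_url(subscription_id, resource_group, app_name,
                      api_version="2016-08-01"):
    """Build the ARM endpoint used to force a function trigger sync.
    The three identifiers are placeholders for your own values."""
    return (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Web/sites/{app_name}"
        f"/syncfunctiontriggers?api-version={api_version}"
    )

print(sync_triggers_url("00000000-0000-0000-0000-000000000000",
                        "my-resource-group", "my-batch-processor"))
```

Pass the result to `az rest --method post --url ...` (or any authenticated ARM client); a 200/204 response means the portal's trigger metadata was refreshed.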
### 7. Storage Account Connectivity Issues

Symptoms:

- Functions not visible
- Host shows errors related to storage
- "Unable to get function keys" error

Why this happens: Azure Functions requires access to the storage account specified in AzureWebJobsStorage for:

- Storing function keys and secrets
- Coordinating distributed triggers
- Maintaining internal state

If the function app cannot connect to storage, the host may fail to start properly.

How to verify: Check the Application Settings (see Storage considerations for Azure Functions):

- AzureWebJobsStorage – must be a valid connection string
- WEBSITE_CONTENTAZUREFILECONNECTIONSTRING – for Consumption/Premium plans
- WEBSITE_CONTENTSHARE – file share name

Solution:

1. Verify the storage account exists and is accessible
2. Check for firewall rules on the storage account: go to Storage Account → Networking and ensure the Function App has access (public endpoint or VNet integration)
3. Regenerate connection strings if storage keys were rotated: get the new connection string from the storage account and update AzureWebJobsStorage in the Function App settings
4. For VNet-integrated apps, ensure service endpoints or private endpoints are configured and DNS resolution works for storage endpoints

See Storage considerations for Azure Functions for more details.

### 8. WEBSITE_RUN_FROM_PACKAGE Issues

Symptoms:

- Functions not visible after deployment
- Functions were visible before but disappeared
- "No functions found" in the portal
- Read-only file system errors in logs

Why this happens: When WEBSITE_RUN_FROM_PACKAGE is configured, Azure Functions runs directly from a deployment package (ZIP file) instead of extracting files to wwwroot. If the package is inaccessible, corrupted, or misconfigured, the host cannot load your functions.

Understanding WEBSITE_RUN_FROM_PACKAGE values:

| Value | Behavior |
|---|---|
| 1 | The function app runs from a local package file deployed to c:\home\data\SitePackages (Windows) or /home/data/SitePackages (Linux) |
<URL>: Sets a URL that is the remote location of the specific package file you want to run. Required for function apps running on Linux in a Consumption plan.
Not set: Traditional deployment (files extracted to wwwroot).

How to Verify:
- Check the app setting value. If using a URL, verify package accessibility.
- Check that the package exists (when the value is 1): go to Kudu → Debug Console, navigate to d:\home\data\SitePackages, and verify a .zip file exists and packagename.txt points to it.
- Verify package contents: download the package, extract it, and verify host.json and the function files are present at the root level (not in a subfolder).

Common Issues:
- Expired SAS token (package URL returns 403): generate a new SAS with a longer expiry
- Package URL not accessible (package URL returns 404): verify the blob exists and the URL is correct
- Wrong package structure (files in a subfolder): ensure files are at the ZIP root, not in a nested folder
- Corrupted package (host startup errors): re-deploy with a fresh package
- Storage firewall blocking (timeout errors): allow the Function App access to storage

9. Configuration Filtering Functions

Symptoms:
- Only some functions visible
- Specific functions always missing
- Functions worked before a configuration change

Why This Happens: Azure Functions provides configuration options to filter which functions are loaded. If these are misconfigured, functions may be excluded.

Configuration Options to Check: the functions array in host.json:

{ "functions": ["Function1", "Function2"] }

Solution: Remove the functions array from host.json (or ensure all desired functions are listed).

10.
Networking Configuration Issues Symptoms: Functions not visible in portal but app responds to requests "Unable to reach your function app" error in portal Portal timeout when loading functions Functions visible intermittently Host status endpoint not reachable from portal Why This Happens: When your Function App has networking restrictions configured (VNet integration, private endpoints, access restrictions), the Azure Portal may not be able to communicate with your function app to discover and display functions. The portal needs to reach your app's admin endpoints to enumerate functions. Common Networking Configurations That Cause Issues: Configuration Impact Portal Behavior Private Endpoint only (no public access) Portal can't reach admin APIs "Unable to reach function app" Access Restrictions (IP filtering) Portal IPs blocked Timeout loading functions VNet Integration with forced tunneling Outbound calls fail Host can't start properly Storage account behind firewall Host can't access keys/state Host startup failures NSG blocking outbound traffic Can't reach Azure services Various failures Important Note: When your Function App is fully private (no public access), you won't be able to see functions in the Azure Portal from outside your network. This is expected behavior. Using Diagnose and Solve Problems The Azure Portal provides built-in diagnostics to help troubleshoot function visibility issues. How to Access: Navigate to your Function App in the Azure Portal Select Diagnose and solve problems from the left menu Search for relevant detectors: Function App Down or Reporting Errors SyncTrigger Issues Deployment Networking Quick Troubleshooting Checklist Use this checklist to quickly diagnose functions not appearing in the portal: Host Status: Is the host running? Check /admin/host/status Files Present: Are function files deployed? Check via Kudu function.json Valid: Is the JSON syntax correct? 
Run From Package: If using WEBSITE_RUN_FROM_PACKAGE, is package accessible and configured right? Extension Bundle: Is extensionBundle properly configured in host.json? Storage Connection: Is AzureWebJobsStorage valid and reachable? No Filters: Is functions array in host.json filtering? V2 Model: For Python/Node.js v2, is host running to register? Sync Triggered: Has the portal synced with the host? Networking: Can the portal reach the app? Check access restrictions/private endpoints Verifying Functions via REST API If you cannot see functions in the portal but believe they're deployed, you can verify directly: Functions API List All Functions: curl "https://<app>.azurewebsites.net/admin/functions?code=<master-key>" Or directly from here with auth: List Functions Check Specific Function: curl "https://<app>.azurewebsites.net/admin/functions/<function-name>?code=<master-key>" Get Host Status: curl "https://<app>.azurewebsites.net/admin/host/status?code=<master-key>" If these APIs return your functions but the portal doesn't show them, the issue is likely a portal caching/sync problem (see Solution #6). Conclusion Functions not appearing in the Azure Portal can be caused by various issues, from deployment problems to configuration filtering. By following the troubleshooting steps outlined in this article, you should be able to identify and resolve the issue. 
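The admin API calls above can also be scripted. A minimal Python sketch, assuming the `/admin/functions` response is a JSON array of function objects with a `name` field (the app name and master key are placeholders you must supply):

```python
import json
import urllib.request


def admin_url(app: str, path: str, master_key: str) -> str:
    """Build a Functions admin endpoint URL such as 'functions' or 'host/status'."""
    return f"https://{app}.azurewebsites.net/admin/{path}?code={master_key}"


def list_functions(app: str, master_key: str) -> list:
    """Return the names of all functions the host has registered."""
    with urllib.request.urlopen(admin_url(app, "functions", master_key)) as resp:
        return [f["name"] for f in json.load(resp)]
```

If `list_functions` returns names that the portal doesn't show, the issue is most likely the portal caching/sync problem described in Solution #6.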
Key Takeaways:
- Always verify the host is running first
- Check that function files are correctly deployed
- Validate function.json and host.json configurations
- Ensure storage connectivity is working
- Use the built-in diagnostics in the Azure Portal
- Force a sync if functions are deployed but not visible

If you continue to experience issues after following these steps, consider opening a support ticket with Microsoft Azure Support, providing:
- Function App name and resource group
- Steps to reproduce the issue
- Any error messages observed
- Recent deployment or configuration changes

References
- Azure Functions host.json reference
- Azure Functions deployment technologies
- Troubleshoot Azure Functions
- Python V2 programming model
- Node.js V4 programming model
- Azure Functions diagnostics
- Azure Functions networking options

Have questions or feedback? Leave a comment below.

Industry-Wide Certificate Changes Impacting Azure App Service Certificates
Executive Summary

In early 2026, industry-wide changes mandated by browser root programs and the CA/B Forum will affect both how TLS certificates are issued and how long they remain valid. The CA/B Forum is an industry body of certificate authorities and browser vendors that establishes standards for securing websites and online communications through SSL/TLS certificates. Azure App Service is aligning with these standards for both App Service Managed Certificates (ASMC, free, DigiCert-issued) and App Service Certificates (ASC, paid, GoDaddy-issued). Most customers will experience no disruption. Action is required only if you pin certificates or use them for client authentication (mTLS).

Update: February 17, 2026. We've published new Microsoft Learn documentation, Industry-wide certificate changes impacting Azure App Service, which provides more detailed guidance on these compliance-driven changes. The documentation also includes additional information not previously covered in this blog, such as updates to domain validation reuse, along with an expanding FAQ section. The Microsoft Learn documentation now represents the most complete and up-to-date overview of these changes. Going forward, any new details or clarifications will be published there, and we recommend bookmarking the documentation for the latest guidance.

Who Should Read This?
- App Service administrators
- Security and compliance teams
- Anyone responsible for certificate management or application security

Quick Reference: What's Changing & What To Do
- New certificate chain: both ASMC and ASC move to a new chain (no action unless pinned). Required action: remove certificate pinning.
- Client auth EKU: no longer supported on ASMC or ASC (no action unless the certificate is used for mTLS). Required action: transition away from mTLS based on these certificates.
- Validity: ASMC is unchanged (already compliant); ASC will issue two overlapping certificates covering the full year. Required action: none (automated).

If you do not pin certificates or use them for mTLS, no action is required.
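Certificate pinning typically means comparing a certificate's fingerprint against a hard-coded allow-list, and that comparison is exactly what breaks when the issuing chain changes. A minimal Python sketch of the pattern to look for in your own code; the pinned value below is an illustrative placeholder, not a real thumbprint:

```python
import hashlib

# Illustrative pinned fingerprint (placeholder, not a real thumbprint).
# Any hard-coded values like this will stop matching once certificates
# are issued from the new chain.
PINNED_SHA256 = {
    "aaaaaaaa" * 8: "old intermediate (example only)",
}


def fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()


def is_pinned(der_bytes: bytes) -> bool:
    """True if the certificate matches a hard-coded pin -- the fragile pattern."""
    return fingerprint(der_bytes) in PINNED_SHA256
```

If you find logic like this applied to App Service certificates or their intermediates, remove it before the key change dates rather than re-pinning to the new chain.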
Timeline of Key Dates Date Change Action Required Mid-Jan 2026 and after ASMC migrates to new chain ASMC stops supporting client auth EKU Remove certificate pinning if used Transition to alternative authentication if the certificate is used for mTLS Mar 2026 and after ASC validity shortened ASC migrates to new chain ASC stops supporting client auth EKU Remove certificate pinning if used Transition to alternative authentication if the certificate is used for mTLS Actions Checklist For All Users Review your use of App Service certificates. If you do not pin these certificates and do not use them for mTLS, no action is required. If You Pin Certificates (ASMC or ASC) Remove all certificate or chain pinning before their respective key change dates to avoid service disruption. See Best Practices: Certificate Pinning. If You Use Certificates for Client Authentication (mTLS) Switch to an alternative authentication method before their respective key change dates to avoid service disruption, as client authentication EKU will no longer be supported for these certificates. See Sunsetting the client authentication EKU from DigiCert public TLS certificates. See Set Up TLS Mutual Authentication - Azure App Service Details & Rationale Why Are These Changes Happening? These updates are required by major browser programs (e.g., Chrome) and apply to all public CAs. They are designed to enhance security and compliance across the industry. Azure App Service is automating updates to minimize customer impact. What’s Changing? New Certificate Chain Certificates will be issued from a new chain to maintain browser trust. Impact: Remove any certificate pinning to avoid disruption. Removal of Client Authentication EKU Newly issued certificates will not support client authentication EKU. This change aligns with Google Chrome’s root program requirements to enhance security. Impact: If you use these certificates for mTLS, transition to an alternate authentication method. 
Shortening of Certificate Validity

Certificate validity is now limited to a maximum of 200 days.
Impact: ASMC is already compliant; ASC will automatically issue two overlapping certificates to cover one year. No billing impact.

Frequently Asked Questions (FAQs)

Will I lose coverage due to shorter validity? No. For App Service Certificate, App Service will issue two certificates to span the full year you purchased.

Is this unique to DigiCert and GoDaddy? No. This is an industry-wide change.

Do these changes impact certificates from other CAs? Yes. This is an industry-wide change, so we recommend reaching out to your certificates' CA for more information.

Do I need to act today? If you do not pin or use these certificates for mTLS, no action is required.

Glossary
- ASMC: App Service Managed Certificate (free, DigiCert-issued)
- ASC: App Service Certificate (paid, GoDaddy-issued)
- EKU: Extended Key Usage
- mTLS: Mutual TLS (client certificate authentication)
- CA/B Forum: Certification Authority/Browser Forum

Additional Resources
- Changes to the Managed TLS Feature
- Set Up TLS Mutual Authentication
- Azure App Service Best Practices – Certificate pinning
- DigiCert Root and Intermediate CA Certificate Updates 2023
- Sunsetting the client authentication EKU from DigiCert public TLS certificates

Feedback & Support

If you have questions or need help, please visit our official support channels or the Microsoft Q&A, where our team and the community can assist you.

How Azure SRE Agent Can Investigate Resources in a Private Network
⚠️ Important Note on Network Communication: In this pattern, Azure SRE Agent communicates over the public network to reach the Azure Function proxy. The proxy endpoint is secured with Easy Auth (Microsoft Entra ID) and only authenticated callers can invoke it. We are also actively working on enabling SRE Agent to be injected directly into private networks, which will eliminate the need for a public proxy altogether. Stay tuned for updates on private network injection support. TL;DR When you configure Azure Monitor Private Link Scope (AMPLS) with publicNetworkAccessForQuery: Disabled , all public queries to your Log Analytics Workspace are blocked. To enable Azure SRE Agent to query these protected workspaces, deploy Azure Functions inside your VNet as a secure query proxy. What We Built: This sample deploys to a single subscription with two resource groups ( rg-originations-* and rg-workload-* ). The same pattern works identically across subscriptions. Simply deploy each resource group to a different subscription. Why Public Queries Get Blocked Many organizations secure their Log Analytics Workspaces using Azure Monitor Private Link Scope (AMPLS) with Private Endpoints. This is a best practice for compliance and data security, but it means all public queries are blocked. Resource Type Can Live in VNet? How to Access Privately Virtual Machine Yes Direct (it has a NIC) Container App Yes VNet integration Azure SQL No Private Endpoint Storage Account No Private Endpoint Log Analytics Workspace No AMPLS + Private Endpoint When you configure publicNetworkAccessForQuery: Disabled on the workspace and queryAccessMode: PrivateOnly on the AMPLS, any query that does not come through a Private Endpoint is rejected. This includes queries from Azure SRE Agent, which runs as a cloud service outside your VNet. The Architecture Two resource groups: rg-originations-ampls-demo (LAW + AMPLS, query access disabled) and rg-workload-ampls-demo (VNet + Private Endpoint + Function). 
This pattern works across subscriptions with cross-subscription RBAC for the Function's Managed Identity.

The Problem: Blocked Queries

When you configure AMPLS with private-only query access, any attempt to query from outside the VNet fails:

InsufficientAccessError: The query was blocked due to private link configuration. Access is denied because this request was not made through a private endpoint.

This is the expected behavior. But it means Azure SRE Agent (which runs as a cloud service, not inside your VNet) cannot directly query these workspaces.

The Solution: Azure Functions as Query Proxy

Deploy Azure Functions inside the workload VNet. This serverless proxy:
- Runs inside the VNet: VNet-integrated with vnetRouteAllEnabled: true
- Uses Managed Identity: authenticates to LAW via Azure RBAC
- Exposes HTTPS endpoints: SRE Agent calls them as custom HTTP tools
- Proxies queries: transforms API calls into KQL queries
- Scales serverlessly: you pay only when queries are executed

Why This Pattern Works

Data ingestion and query access use different network paths:
- Log ingestion (AMA → Private Endpoint → LAW): private, works
- External query (public internet → LAW): public, blocked
- VNet query (VNet → Private Endpoint → LAW): private, works
- SRE Agent query (HTTPS → Function → PE → LAW): hybrid, works

The Azure Function acts as a bridge between two networks: (1) the public side, with an HTTPS endpoint for SRE Agent, and (2) the private side, with VNet integration routing all outbound traffic through the Private Endpoint. This is why the pattern works: the Function "translates" public API calls into private network queries.

Setting Up the Architecture

Step 1: Configure the Originations LAW

az monitor log-analytics workspace create --resource-group originations-rg --workspace-name originations-law --location eastus
az monitor log-analytics workspace update --resource-group originations-rg --workspace-name originations-law --set properties.publicNetworkAccessForQuery=Disabled

Step 2: Create the Azure Monitor Private Link Scope

az monitor private-link-scope create --name originations-ampls --resource-group originations-rg
# ... (see full sample on GitHub)

Step 3: Create the Private Endpoint in the Workload Resource Group

az network private-endpoint create --name pe-ampls --resource-group rg-workload-ampls-demo --vnet-name vnet-workload-ampls-demo --subnet endpoints --private-connection-resource-id "/subscriptions/.../resourceGroups/rg-originations-ampls-demo/providers/Microsoft.Insights/privateLinkScopes/ampls-originations-ampls-demo" --group-id azuremonitor --connection-name ampls-connection

Step 4: Deploy the Azure Function

az functionapp plan create --name plan-law-query --resource-group workload-rg --location eastus --sku EP1 --is-linux true
az functionapp create --name func-law-query --resource-group workload-rg --plan plan-law-query --runtime python --runtime-version 3.11 --functions-version 4 --assign-identity '[system]'
# ... (see full sample on GitHub)

Step 5: Configure Easy Auth (Microsoft Entra ID) on the Function App

Instead of function keys, we secure the Azure Function with Easy Auth (Microsoft Entra ID authentication). This eliminates the need to manage secrets. The SRE Agent authenticates using its Managed Identity.

5a. Set Function Auth Level to Anonymous

Since Easy Auth handles authentication at the platform level, set authLevel to anonymous in each function.json:

{
  "scriptFile": "__init__.py",
  "bindings": [
    {"authLevel": "anonymous", "type": "httpTrigger", "direction": "in", "name": "req", "methods": ["get", "post"]},
    {"type": "http", "direction": "out", "name": "$return"}
  ]
}

5b.
Enable Easy Auth via Azure Portal Navigate to your Function App in the Azure Portal Go to Settings → Authentication Click Add identity provider and select Microsoft Configure: Create new app registration, Current tenant, Federated identity credential, Require authentication, HTTP 401 for unauthenticated Add the SRE Agent's Managed Identity Client ID under Allowed client applications Note the Application (client) ID for PythonTool configuration Finding the SRE Agent Managed Identity Client ID Option 1: Azure Portal: Navigate to your SRE Agent → Settings → Identity → copy the Client ID under System assigned or User assigned. Option 2: Azure CLI: az containerapp show --name <YOUR-SRE-AGENT-NAME> --resource-group <YOUR-SRE-AGENT-RG> --query "identity.userAssignedIdentities" -o json 5c. Deploy SRE Agent Tools (PythonTools with Easy Auth) Critical: PythonTools must use def main(**kwargs) . Each tool acquires a Bearer Token from the SRE Agent's Managed Identity via IDENTITY_ENDPOINT and calls the Azure Function endpoints. See sample repository for full subagent definition and tool implementations. How Easy Auth Token Acquisition Works PythonTool reads IDENTITY_ENDPOINT and IDENTITY_HEADER environment variables (set automatically by the SRE Agent runtime) PythonTool calls the identity endpoint with resource=api://<app-id> to get a Bearer Token PythonTool includes the token in the Authorization: Bearer <token> header Easy Auth validates the token against the App Registration Function App executes the query using its Managed Identity No secrets required: Unlike function keys, Easy Auth uses Managed Identity tokens that are automatically rotated and never stored in code or configuration. The Investigation Flow Step Actor Action 1 You "There are errors on my workload VMs. Investigate." 
2 SRE Agent Calls Azure Function's query_logs endpoint 3 Azure Function Queries LAW via Private Endpoint 4 Log Analytics Returns results (allowed, since request came from PE) 5 Azure Function Returns JSON response to SRE Agent 6 SRE Agent Analyzes logs, identifies root cause, responds Security Considerations Concern How It's Secured Log Analytics Public query access disabled, Private Link only Private Endpoint In isolated subnet with NSG rules Azure Function Managed Identity for LAW access (no secrets) API Authentication Easy Auth (Microsoft Entra ID) with Bearer Token, no secrets to manage VNet Routing vnetRouteAllEnabled: true for all traffic Audit Trail All invocations logged in Application Insights Try It Yourself git clone https://github.com/BandaruDheeraj/private-law-query-sample cd private-law-query-sample azd up # See Step 5 for Easy Auth configuration ./inject-failure.ps1 This creates: rg-originations-{env} (LAW + AMPLS) and rg-workload-{env} (VNet + PE + Functions + VMs) Key Takeaways AMPLS blocks public queries by design: When you configure Private Link with private-only query access, all external queries are rejected. This is the expected security behavior. Azure Functions provide a serverless query proxy: VNet-integrated Functions with Managed Identity can query private Log Analytics on behalf of SRE Agent. Resource groups simulate cross-subscription: This sample uses two resource groups; the same pattern works across subscriptions. Easy Auth eliminates secret management: Using Microsoft Entra ID authentication instead of function keys means no secrets to rotate or store. Security is maintained end-to-end: The workspace remains fully private; only the trusted Function can query it. The SRE Agent authenticates with its Managed Identity. 
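The token-acquisition steps described above can be sketched in Python. The endpoint shape follows the documented App Service/Container Apps managed-identity protocol (IDENTITY_ENDPOINT plus the X-IDENTITY-HEADER header); the app ID in the trailing comment is a placeholder from your own Easy Auth app registration:

```python
import json
import os
import urllib.parse
import urllib.request


def build_token_request(endpoint: str, identity_header: str, resource: str):
    """Build the managed-identity token request (App Service/Container Apps protocol)."""
    query = urllib.parse.urlencode({"resource": resource, "api-version": "2019-08-01"})
    url = f"{endpoint}?{query}"
    headers = {"X-IDENTITY-HEADER": identity_header}
    return url, headers


def get_bearer_token(resource: str) -> str:
    """Fetch a token from the identity endpoint injected into the SRE Agent runtime."""
    url, headers = build_token_request(
        os.environ["IDENTITY_ENDPOINT"], os.environ["IDENTITY_HEADER"], resource
    )
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]


# The token is then sent as "Authorization: Bearer <token>", with
# resource = "api://<app-id>" (<app-id> from the Easy Auth app registration).
```

Easy Auth on the Function App then validates this token against the app registration, so no function keys or other secrets are involved.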
Resources
- Sample Repository: github.com/BandaruDheeraj/private-law-query-sample
- Azure Monitor Private Link: docs.microsoft.com/azure/azure-monitor/logs/private-link-security
- Azure Functions VNet Integration: docs.microsoft.com/azure/azure-functions/functions-networking-options
- AMPLS Design Guidance: docs.microsoft.com/azure/azure-monitor/logs/private-link-design
- Managed Identity for Azure Functions: docs.microsoft.com/azure/app-service/overview-managed-identity
- Azure Developer CLI (azd): learn.microsoft.com/azure/developer/azure-developer-cli

Using Claude Opus 4.6 in Github Copilot
The model selection in GitHub Copilot got richer with the addition of Claude Opus 4.6. The model's capability, along with the addition of agents, makes it a powerful combination for building complex code that would otherwise take many hours or days.

Claude Opus 4.6 has stronger coding skills than the previous models. It also plans more carefully, performs more reliably in larger codebases, and has better code-review and debugging skills, catching its own mistakes. In my current experiment, I used it multiple times to review its own code, and while it took time (understandably) to get familiar with the code base, after that initial evaluation effort the suggestions for fixes and improvements were spot on and often even better than a human reviewer's (mine, in this case). Opus 4.6 can also run agentic tasks for longer.

Following the release of the model, Anthropic published a paper on using Opus 4.6 to build a C compiler with a team of parallel Claudes. The compiler was built from scratch by 16 agents, producing a Rust-based C compiler capable of compiling the Linux kernel. This is an interesting paper (shared in Resources).

Using Claude Opus 4.6 in Agentic Mode

In less than an hour, I built a document analyzer to analyze content, extract insights, build knowledge graphs, and summarize elements. The code was built using Claude Opus 4.6 along with Claude Agents in Visual Studio Code. The initial prompt built the code, and in the next hour, after a few more interactions, unit tests were added and the UI worked as expected, specifically for rendering the graphs. In the second phase, I converted the capabilities into agents with tools and skills, making the codebase agentic. All this was done in Visual Studio Code using GitHub Copilot. Adding the complexity of agentic execution was staggered across phases, but the coding agent may well have built it right in the first instance with detailed specifications and instructions.
The Agent could also fix UI requirements and problems in graph rendering from the snapshot shared in the chat window. That along with the logging was sufficient to quickly get to an application which worked as expected. The final graph rendering used mermaid diagrams in javascript while the backend was in python. Knowledge Graph rendering using mermaid What are Agents? Agents perform complete coding tasks end-to-end. They understand your project, make changes across multiple files, run commands, and adapt based on the results. An agent runs in the local, background, cloud, or third-party mode. An agent takes a high-level task and it breaks the task down into steps. It executes those steps with tools and self-corrects on errors. Multiple agent sessions can run in parallel, each focused on a different task. On creating a new agent session, the previous session remains active and can be accessed between tasks via the agent sessions list. The Chat window in Visual Studio Code allows for changing the model and also the Agent Mode. The Agent mode can be local for Local Agents or run in the background or on Cloud. Additionally, Third Party Agents are also available for coding. In the snapshot below, the Claude Agent (Third Party Agent) is used. In this project Azure GPT 4.1 was used in the code to perform the document analysis but this can be changed to any model of choice. I also used the ‘Ask before edits” mode to track the command runs. Alternatively, the other option was to let the Agent run autonomously. Visual Studio Code - Models and Agent Mode The local Agentic mode was also a good option and I used it a few times specifically as it is not constrained by network connectivity. But when the local compute does not suffice, the cloud mode is the next best option. Background agents are CLI-based agents, such as Copilot CLI running in the background on your local machine. 
They operate autonomously in the editor, and background agents use Git worktrees to work in an environment isolated from your main workspace, preventing conflicts with your active work.

How to get the model?

The model is accessible to GitHub Copilot Pro/Pro+, Business, and Enterprise users. Opus 4.6 operates more reliably in large codebases, offering improved code-review and debugging skills. The Fast mode for Claude Opus 4.6, rolled out in research preview, provides a high-speed option with output token delivery speeds up to 2.5 times faster, while maintaining capabilities comparable to Opus 4.6.

Resources
- https://www.anthropic.com/news/claude-opus-4-6
- https://www.anthropic.com/engineering/building-c-compiler
- https://github.blog/changelog/2026-02-05-claude-opus-4-6-is-now-generally-available-for-github-copilot
- https://code.visualstudio.com/docs/copilot/agents/overview

Building with Azure OpenAI Sora: A Complete Guide to AI Video Generation
In this comprehensive guide, we'll explore how to integrate both Sora 1 and Sora 2 models from Azure OpenAI Service into a production web application. We'll cover API integration, request body parameters, cost analysis, limitations, and the key differences between using Azure AI Foundry endpoints versus OpenAI's native API. Table of Contents Introduction to Sora Models Azure AI Foundry vs. OpenAI API Structure API Integration: Request Body Parameters Video Generation Modes Cost Analysis per Generation Technical Limitations & Constraints Resolution & Duration Support Implementation Best Practices Introduction to Sora Models Sora is OpenAI's groundbreaking text-to-video model that generates realistic videos from natural language descriptions. Azure AI Foundry provides access to two versions: Sora 1: The original model focused primarily on text-to-video generation with extensive resolution options (480p to 1080p) and flexible duration (1-20 seconds) Sora 2: The enhanced version with native audio generation, multiple generation modes (text-to-video, image-to-video, video-to-video remix), but more constrained resolution options (720p only in public preview) Azure AI Foundry vs. OpenAI API Structure Key Architectural Differences Sora 1 uses Azure's traditional deployment-based API structure: Endpoint Pattern: https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/... 
Parameters: Uses Azure-specific naming like n_seconds, n_variants, separate width/height fields Job Management: Uses /jobs/{id} for status polling Content Download: Uses /video/generations/{generation_id}/content/video Sora 2 adapts OpenAI's v1 API format while still being hosted on Azure: Endpoint Pattern: https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/videos Parameters: Uses OpenAI-style naming like seconds (string), size (combined dimension string like "1280x720") Job Management: Uses /videos/{video_id} for status polling Content Download: Uses /videos/{video_id}/content Why This Matters? This architectural difference requires conditional request formatting in your code: const isSora2 = deployment.toLowerCase().includes('sora-2'); if (isSora2) { requestBody = { model: deployment, prompt, size: `${width}x${height}`, // Combined format seconds: duration.toString(), // String type }; } else { requestBody = { model: deployment, prompt, height, // Separate dimensions width, n_seconds: duration.toString(), // Azure naming n_variants: variants, }; } API Integration: Request Body Parameters Sora 1 API Parameters Standard Text-to-Video Request: { "model": "sora-1", "prompt": "Wide shot of a child flying a red kite in a grassy park, golden hour sunlight, camera slowly pans upward.", "height": "720", "width": "1280", "n_seconds": "12", "n_variants": "2" } Parameter Details: model (String, Required): Your Azure deployment name prompt (String, Required): Natural language description of the video (max 32000 chars) height (String, Required): Video height in pixels width (String, Required): Video width in pixels n_seconds (String, Required): Duration (1-20 seconds) n_variants (String, Optional): Number of variations to generate (1-4, constrained by resolution) Sora 2 API Parameters Text-to-Video Request: { "model": "sora-2", "prompt": "A serene mountain landscape with cascading waterfalls, cinematic drone shot", "size": "1280x720", "seconds": "12" 
} Image-to-Video Request (uses FormData): const formData = new FormData(); formData.append('model', 'sora-2'); formData.append('prompt', 'Animate this image with gentle wind movement'); formData.append('size', '1280x720'); formData.append('seconds', '8'); formData.append('input_reference', imageFile); // JPEG/PNG/WebP Video-to-Video Remix Request: Endpoint: POST .../videos/{video_id}/remix Body: Only { "prompt": "your new description" } The original video's structure, motion, and framing are reused while applying the new prompt Parameter Details: model (String, Optional): Your deployment name prompt (String, Required): Video description size (String, Optional): Either "720x1280" or "1280x720" (defaults to "720x1280") seconds (String, Optional): "4", "8", or "12" (defaults to "4") input_reference (File, Optional): Reference image for image-to-video mode remix_video_id (String, URL parameter): ID of video to remix Video Generation Modes 1. Text-to-Video (Both Models) The foundational mode where you provide a text prompt describing the desired video. Implementation: const response = await fetch(endpoint, { method: 'POST', headers: { 'Content-Type': 'application/json', 'api-key': apiKey, }, body: JSON.stringify({ model: deployment, prompt: "A train journey through mountains with dramatic lighting", size: "1280x720", seconds: "12", }), }); Best Practices: Include shot type (wide, close-up, aerial) Describe subject, action, and environment Specify lighting conditions (golden hour, dramatic, soft) Add camera movement if desired (pans, tilts, tracking shots) 2. Image-to-Video (Sora 2 Only) Generate a video anchored to or starting from a reference image. 
**Key Requirements:**

- Supported formats: JPEG, PNG, WebP
- Image dimensions must exactly match the selected video resolution
- Our implementation automatically resizes uploaded images to match

**Implementation Detail:**

```typescript
// Resize image to match video dimensions
const targetWidth = parseInt(width);
const targetHeight = parseInt(height);
const resizedImage = await resizeImage(inputReference, targetWidth, targetHeight);

// Send as multipart/form-data
formData.append('input_reference', resizedImage);
```

### 3. Video-to-Video Remix (Sora 2 Only)

Create variations of existing videos while preserving their structure and motion.

**Use Cases:**

- Change weather conditions in the same scene
- Modify time of day while keeping camera movement
- Swap subjects while maintaining composition
- Adjust artistic style or color grading

**Endpoint Structure:**

```
POST {base_url}/videos/{original_video_id}/remix?api-version=2024-08-01-preview
```

**Implementation:**

```typescript
let requestEndpoint = endpoint;
if (isSora2 && remixVideoId) {
  const [baseUrl, queryParams] = endpoint.split('?');
  const root = baseUrl.replace(/\/videos$/, '');
  requestEndpoint = `${root}/videos/${remixVideoId}/remix${queryParams ? '?' + queryParams : ''}`;
}
```

## Cost Analysis per Generation

### Sora 1 Pricing Model

**Base Rate:** ~$0.05 per second per variant at 720p

**Resolution Scaling:** cost scales linearly with pixel count

**Formula:**

```typescript
const basePrice = 0.05;
const basePixels = 1280 * 720; // Reference resolution
const currentPixels = width * height;
const resolutionMultiplier = currentPixels / basePixels;
const totalCost = basePrice * duration * variants * resolutionMultiplier;
```

**Examples:**

- 720p (1280×720), 12 seconds, 1 variant: $0.60
- 1080p (1920×1080), 12 seconds, 1 variant: $1.35
- 720p, 12 seconds, 2 variants: $1.20

### Sora 2 Pricing Model

**Flat Rate:** $0.10 per second per variant (no resolution scaling in public preview)

**Formula:**

```typescript
const totalCost = 0.10 * duration * variants;
```

**Examples:**

- 720p (1280×720), 4 seconds: $0.40
- 720p (1280×720), 12 seconds: $1.20
- 720p (720×1280), 8 seconds: $0.80

Note: since Sora 2 currently only supports 720p in public preview, resolution doesn't affect cost; only duration matters.

### Cost Comparison

| Scenario | Sora 1 (720p) | Sora 2 (720p) | Winner |
| --- | --- | --- | --- |
| 4s video | $0.20 | $0.40 | Sora 1 |
| 12s video | $0.60 | $1.20 | Sora 1 |
| 12s + audio | N/A (no audio) | $1.20 | Sora 2 (unique) |
| Image-to-video | N/A | $0.40-$1.20 | Sora 2 (unique) |

**Recommendation:** use Sora 1 for cost-effective silent videos at various resolutions. Use Sora 2 when you need audio, image/video inputs, or remix capabilities.
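The two pricing formulas can be folded into a single helper whose signature matches the `calculateCost` call used in the cost-tracking snippet later in this post. This is a sketch, not code from the API or the production app: the function body is my own, derived directly from the rates quoted above.

```typescript
// Sketch of a combined cost estimator for both models.
// Rates come from the formulas above; the implementation itself is assumed.
function calculateCost(
  width: number,
  height: number,
  duration: number,
  variants: number,
  soraVersion: 'sora-1' | 'sora-2',
): number {
  if (soraVersion === 'sora-2') {
    // Flat $0.10 per second per variant; resolution ignored in public preview
    return 0.10 * duration * variants;
  }
  // Sora 1: ~$0.05/s per variant at 720p, scaled linearly by pixel count
  const basePrice = 0.05;
  const basePixels = 1280 * 720; // 720p reference resolution
  const resolutionMultiplier = (width * height) / basePixels;
  return basePrice * duration * variants * resolutionMultiplier;
}
```

Running this against the examples above reproduces the quoted figures, which makes it a convenient single source of truth for both pre-generation estimates and post-completion tracking.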
## Technical Limitations & Constraints

### Sora 1 Limitations

**Resolution Options:**

- 9 supported resolutions from 480×480 to 1920×1080
- Includes square, portrait, and landscape formats
- Full list: 480×480, 480×854, 854×480, 720×720, 720×1280, 1280×720, 1080×1080, 1080×1920, 1920×1080

**Duration:**

- Flexible: 1 to 20 seconds
- Any integer value within range

**Variants** (depends on resolution):

- 1080p: variants disabled (`n_variants` must be 1)
- 720p: max 2 variants
- Other resolutions: max 4 variants

**Concurrent Jobs:** maximum 2 jobs running simultaneously

**Job Expiration:** videos expire 24 hours after generation

**Audio:** no audio generation (silent videos only)

### Sora 2 Limitations

**Resolution Options (Public Preview):**

- Only 2 options: 720×1280 (portrait) or 1280×720 (landscape)
- No square formats
- No 1080p support in the current preview

**Duration:**

- Fixed options only: 4, 8, or 12 seconds
- No custom durations
- Defaults to 4 seconds if not specified

**Variants:**

- Not prominently supported in the current API documentation
- Focus is on single high-quality generations with audio

**Concurrent Jobs:** maximum 2 jobs (same as Sora 1)

**Job Expiration:** 24 hours (same as Sora 1)

**Audio:** native audio generation included (dialogue, sound effects, ambience)

### Shared Constraints

**Concurrent Processing:** both models enforce a limit of 2 concurrent video jobs per Azure resource. You must wait for one job to complete before starting a third.

**Job Lifecycle:** `queued → preprocessing → processing/running → completed`

**Download Window:** videos are available for 24 hours after completion. After expiration, you must regenerate the video.
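Because the two models accept different resolutions, durations, and variant counts, it pays to validate a request client-side before submitting a job and burning one of your two concurrent slots. The sketch below encodes the constraints listed above; the function name, signature, and error strings are my own, not part of the Azure API.

```typescript
// Sketch of a client-side pre-flight check for the limits listed above.
// Returns an error message, or null when the request looks valid.
function validateRequest(
  soraVersion: 'sora-1' | 'sora-2',
  size: string,            // e.g. "1280x720"
  seconds: number,
  variants: number = 1,
): string | null {
  if (soraVersion === 'sora-2') {
    if (size !== '720x1280' && size !== '1280x720') {
      return 'Sora 2 public preview only supports 720x1280 or 1280x720';
    }
    if (![4, 8, 12].includes(seconds)) {
      return 'Sora 2 duration must be 4, 8, or 12 seconds';
    }
    return null;
  }
  // Sora 1
  const supported = ['480x480', '480x854', '854x480', '720x720',
    '720x1280', '1280x720', '1080x1080', '1080x1920', '1920x1080'];
  if (!supported.includes(size)) {
    return `Unsupported Sora 1 resolution: ${size}`;
  }
  if (!Number.isInteger(seconds) || seconds < 1 || seconds > 20) {
    return 'Sora 1 duration must be an integer between 1 and 20 seconds';
  }
  // Variant cap: 1 at 1080p, 2 at 720p, 4 elsewhere
  const maxVariants = size.includes('1080') || size.includes('1920') ? 1
    : size === '720x1280' || size === '1280x720' ? 2
    : 4;
  if (variants > maxVariants) {
    return `Max ${maxVariants} variant(s) at ${size}`;
  }
  return null;
}
```

Failing fast like this gives the user an actionable message immediately instead of a rejected job minutes later.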
**Generation Time:**

- Typical: 1-5 minutes depending on resolution, duration, and API load
- Can occasionally take longer during high demand

## Resolution & Duration Support Matrix

### Sora 1 Support Matrix

| Resolution | Aspect Ratio | Max Variants | Duration Range | Use Case |
| --- | --- | --- | --- | --- |
| 480×480 | Square | 4 | 1-20s | Social thumbnails |
| 480×854 | Portrait | 4 | 1-20s | Mobile stories |
| 854×480 | Landscape | 4 | 1-20s | Quick previews |
| 720×720 | Square | 4 | 1-20s | Instagram posts |
| 720×1280 | Portrait | 2 | 1-20s | TikTok/Reels |
| 1280×720 | Landscape | 2 | 1-20s | YouTube shorts |
| 1080×1080 | Square | 1 | 1-20s | Premium social |
| 1080×1920 | Portrait | 1 | 1-20s | Premium vertical |
| 1920×1080 | Landscape | 1 | 1-20s | Full HD content |

### Sora 2 Support Matrix

| Resolution | Aspect Ratio | Duration Options | Audio | Generation Modes |
| --- | --- | --- | --- | --- |
| 720×1280 | Portrait | 4s, 8s, 12s | ✅ Yes | Text, Image, Video Remix |
| 1280×720 | Landscape | 4s, 8s, 12s | ✅ Yes | Text, Image, Video Remix |

Note: Sora 2's limited resolution options in public preview are expected to expand in future releases.

## Implementation Best Practices

### 1. Job Status Polling Strategy

Implement adaptive backoff to avoid overwhelming the API:

```typescript
const maxAttempts = 180; // Upper bound on polling attempts
let attempts = 0;
const baseDelayMs = 3000; // Start with 3 seconds

while (attempts < maxAttempts) {
  const response = await fetch(statusUrl, {
    headers: { 'api-key': apiKey },
  });

  if (response.status === 404) {
    // Job not ready yet, wait longer
    const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
    await new Promise(r => setTimeout(r, delayMs));
    attempts++;
    continue;
  }

  const job = await response.json();

  // Check completion (different status values for Sora 1 vs 2)
  const isCompleted = isSora2
    ? job.status === 'completed'
    : job.status === 'succeeded';
  if (isCompleted) break;

  // Adaptive backoff
  const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
  await new Promise(r => setTimeout(r, delayMs));
  attempts++;
}
```

### 2. Handling Different Response Structures

**Sora 1 Video Download:**

```typescript
const generations = Array.isArray(job.generations) ? job.generations : [];
const genId = generations[0]?.id;
const videoUrl = `${root}/${genId}/content/video`;
```

**Sora 2 Video Download:**

```typescript
const videoUrl = `${root}/videos/${jobId}/content`;
```

### 3. Error Handling

```typescript
try {
  const response = await fetch(endpoint, fetchOptions);
  if (!response.ok) {
    const error = await response.text();
    throw new Error(`Video generation failed: ${error}`);
  }
  // ... handle successful response
} catch (error) {
  console.error('[VideoGen] Error:', error);
  // Implement retry logic or user notification
}
```

### 4. Image Preprocessing for Image-to-Video

Always resize images to match the target video resolution:

```typescript
async function resizeImage(file: File, targetWidth: number, targetHeight: number): Promise<File> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    if (!ctx) {
      reject(new Error('Canvas 2D context unavailable'));
      return;
    }

    img.onload = () => {
      canvas.width = targetWidth;
      canvas.height = targetHeight;
      ctx.drawImage(img, 0, 0, targetWidth, targetHeight);
      canvas.toBlob((blob) => {
        if (blob) {
          const resizedFile = new File([blob], file.name, { type: file.type });
          resolve(resizedFile);
        } else {
          reject(new Error('Failed to create resized image blob'));
        }
      }, file.type);
    };
    img.onerror = () => reject(new Error('Failed to load image'));
    img.src = URL.createObjectURL(file);
  });
}
```

### 5. Cost Tracking

Implement cost estimation before generation and tracking after:

```typescript
// Pre-generation estimate
const estimatedCost = calculateCost(width, height, duration, variants, soraVersion);

// Save generation record
await saveGenerationRecord({
  prompt,
  soraModel: soraVersion,
  duration: parseInt(duration),
  resolution: `${width}x${height}`,
  variants: parseInt(variants),
  generationMode: mode,
  estimatedCost,
  status: 'queued',
  jobId: job.id,
});

// Update after completion
await updateGenerationStatus(jobId, 'completed', { videoId: finalVideoId });
```

### 6. Progressive User Feedback

Provide detailed status updates during the generation process:

```typescript
const statusMessages: Record<string, string> = {
  'preprocessing': 'Preprocessing your request...',
  'running': 'Generating video...',
  'processing': 'Processing video...',
  'queued': 'Job queued...',
  'in_progress': 'Generating video...',
};
onProgress?.(statusMessages[job.status] || `Status: ${job.status}`);
```

## Conclusion

Building with Azure OpenAI's Sora models requires understanding the nuanced differences between Sora 1 and Sora 2, both in API structure and in capabilities.

Key takeaways:

- **Choose the right model:** Sora 1 for resolution flexibility and cost-effectiveness; Sora 2 for audio, image inputs, and remix capabilities
- **Handle API differences:** implement conditional logic for parameter formatting and status polling based on model version
- **Respect limitations:** plan around concurrent job limits, resolution constraints, and 24-hour expiration windows
- **Optimize costs:** calculate estimates upfront and track actual usage for better budget management
- **Provide great UX:** implement adaptive polling, progressive status updates, and clear error messages

The future of AI video generation is exciting, and Azure AI Foundry provides production-ready access to these powerful models. As Sora 2 matures and limitations are lifted (especially resolution options), we'll see even more creative applications emerge.

**Resources:**

- Azure AI Foundry Sora Documentation
- OpenAI Sora API Reference
- Azure OpenAI Service Pricing

*This blog post is based on real-world implementation experience building LemonGrab, my AI video generation platform that integrates both Sora 1 and Sora 2 through Azure AI Foundry. The code examples are extracted from production usage.*