# Introducing Skills in Azure API Center
## The problem

Modern applications depend on more than APIs. A single AI workflow might call an LLM, invoke an MCP tool, integrate a third-party service, and reference a business capability spanning dozens of endpoints. Without a central inventory, these assets become impossible to discover, easy to duplicate, and painful to govern.

Azure API Center — part of the Azure API Management platform — already catalogs models, agents, and MCP servers alongside traditional APIs. Skills extend that foundation to cover reusable AI capabilities.

## What is a Skill?

As AI agents become more capable, organizations need a way to define and govern what those agents can actually do. Skills are the answer.

A Skill in Azure API Center is a reusable, registered capability that AI agents can discover and consume to extend their functionality. Each skill is backed by source code — typically hosted in a Git repository — and describes what it does, what APIs or MCP servers it can access, and who owns it. Think of skills as the building blocks of AI agent behavior, promoted into a governed inventory alongside your APIs, MCP servers, models, and agents.

**Example:** A "Code Review Skill" performs automated code reviews using static analysis. It is registered in API Center with a Source URL pointing to its GitHub repo, allowed to access your code analysis API, and discoverable by any AI agent in your organization.

## How Skills work in API Center

Skills can be added to your inventory in two ways: registered manually through the Azure portal, or synchronized automatically from a connected Git repository. Both approaches end up in the same governed catalog, discoverable through the API Center portal.

### Option 1: Register a Skill manually

Use the Azure portal to register a skill directly. Navigate to **Inventory > Assets** in your API center, select **+ Register an asset > Skill**, and fill in the registration form.

*Figure 2: Register a skill form in the Azure portal.*

The form captures everything needed to make a skill discoverable and governable:

| Field | Description |
| --- | --- |
| Title | Display name for the skill (e.g., Code Review Skill). |
| Identification | Auto-generated URL slug based on the title. Editable. |
| Summary | One-line description of what the skill does. |
| Description | Full detail on capabilities, use cases, and expected behavior. |
| Lifecycle stage | Current state: Design, Preview, Production, or Deprecated. |
| Source URL | Git repository URL for the skill source code. |
| Allowed tools | The APIs or MCP servers from your inventory this skill is permitted to access. Enforces governance at the capability level. |
| License | Licensing terms: MIT, Apache 2.0, Proprietary, etc. |
| Contact information | Owner or support contact for the skill. |

**Governance note:** The Allowed tools field is key for AI governance. It explicitly defines which APIs and MCP servers a skill can invoke — preventing uncontrolled access and making security review straightforward.

### Option 2: Sync Skills from a Git repository

For teams managing skills in source control, API Center can integrate directly with a Git repository and synchronize skill information automatically. This is the recommended approach for teams practicing GitOps or managing many skills at scale.

*Figure 3: Integrating a Git repository to sync skills automatically into API Center.*
When you configure a Git integration, API Center:

- Creates an Environment representing the repository as a source of skills
- Scans for files matching the configured pattern (default: `**/skill.md`)
- Syncs matching skills into your inventory and keeps them current as the repo changes

For private repositories, a Personal Access Token (PAT) stored in Azure Key Vault is used for authentication. API Center's managed identity retrieves the PAT securely — no credentials are stored in the service itself.

**Tip:** Use the **Automatically configure managed identity and assign permissions** option when setting up the integration if you haven't pre-configured a managed identity. API Center handles the Key Vault permissions automatically.

## Discovering Skills in your catalog

Once registered — manually or via Git — skills appear on the **Inventory > Assets** page alongside all other asset types. Linked skills (synced from Git) are visually identified with a link icon, so teams can see at a glance which skills are source-controlled.

From the API Center portal, developers and other stakeholders can browse the full skill catalog, filter by lifecycle stage or type, and view detailed information about each skill — including its source URL, allowed tools, and contact information.

*Figure 4: Skills catalog in the API Center portal, showing registered skills and their details.*

**Developer experience:** The API Center portal gives teams a self-service way to discover approved skills without needing to ask around or search GitHub. The catalog becomes the authoritative source of what's available and what's allowed.

## Why this matters for AI development teams

Skills close a critical gap in AI governance. As organizations deploy AI agents, they need to know — and control — what those agents can do. Without a governed skill registry, capability discovery is ad hoc, reuse is low, and security review is difficult.

By bringing skills into Azure API Center alongside APIs, MCP servers, models, and agents, teams get:

- A single inventory for all the assets AI agents depend on
- Explicit governance over which resources each skill can access via Allowed tools
- Automated, source-controlled skill registration via Git integration
- Discoverability for developers and AI systems through the API Center portal
- Consistent lifecycle management — Design through Production to Deprecated

API Center, as part of the Azure API Management platform and the broader AI Gateway vision, is evolving into the system of record for AI-ready development. Skills are the latest step in that direction.

## Available now

Skills are available today in Azure API Center (preview). To register your first skill:

1. Sign in to the Azure portal and navigate to your API Center instance
2. In the sidebar, select **Inventory > Assets**
3. Select **+ Register an asset > Skill**
4. Fill in the registration form and select **Create**

→ Register and discover skills in Azure API Center (docs)
→ Set up your API Center portal
→ Explore the Azure API Management platform
# Building the agentic future together at JDConf 2026

JDConf 2026 is just weeks away, and I’m excited to welcome Java developers, architects, and engineering leaders from around the world for two days of learning and connection. Now in its sixth year, JDConf has become a place where the Java community compares notes on real-world production experience: patterns, tooling, and hard-earned lessons you can take back to your team, while we keep moving the Java systems that run businesses and services forward in the AI era.

This year’s program lines up with a shift many of us are seeing first-hand: delivery is getting more intelligent, more automated, and more tightly coupled to the systems and data we already own. Agentic approaches are moving from demos to backlog items, and that raises practical questions: what’s the right architecture, where do you draw trust boundaries, how do you keep secrets safe, and how do you ship without trading reliability for novelty?

JDConf is for and by the people who build and manage the mission-critical apps powering organizations worldwide. Across three regional livestreams, you’ll hear from open source and enterprise practitioners who are making the same tradeoffs you are—velocity vs. safety, modernization vs. continuity, experimentation vs. operational excellence. Expect sessions that go beyond “what” and get into “how”: design choices, integration patterns, migration steps, and the guardrails that make AI features safe to run in production.

You’ll find several practical themes for shipping Java in the AI era: connecting agents to enterprise systems with clear governance; frameworks and runtimes adapting to AI-native workloads; and how testing and delivery pipelines evolve as automation gets more capable. To make this more concrete, a sampling of sessions includes Secrets of Agentic Memory Management (patterns for short- and long-term memory and safe retrieval), Modernizing a Java App with GitHub Copilot (end-to-end upgrade and migration with AI-powered technologies), and Docker Sandboxes for AI Agents (guardrails for running agent workflows without risking your filesystem or secrets). The goal is to help you adopt what’s new while hardening your long-lived codebases.

JDConf is built for community learning—free to attend, accessible worldwide, and designed for an interactive live experience in three time zones. You’ll not only get 23 practitioner-led sessions with production-ready guidance but also free on-demand access after the event to re-watch with your whole team. Pro tip: join live and get more value by discussing practical implications and ideas with your peers in the chat. This is where the “how” details and tradeoffs become clearer.

## JDConf 2026 Keynote: Building the Agentic Future Together

Rod Johnson, Embabel | Bruno Borges, Microsoft | Ayan Gupta, Microsoft

The JDConf 2026 keynote features Rod Johnson, creator of the Spring Framework and founder of Embabel, joined by Bruno Borges and Ayan Gupta to explore where the Java ecosystem is headed in the agentic era. Expect a practitioner-level discussion on how frameworks like Spring continue to evolve, how MCP is changing the way agents interact with enterprise systems, and what Java developers should be paying attention to right now.

## Register. Attend. Earn.

Register for JDConf 2026 to earn Microsoft Rewards points, which you can use for gift cards, sweepstakes entries, and more. Earn 1,000 points simply by signing up.
When you register for any regional JDConf 2026 event with your Microsoft account, you'll automatically receive these points. Get 5,000 additional points for attending live (limited to the first 300 attendees per stream). On the day of your regional event, check in through the Reactor page or your email confirmation link to qualify.

*Disclaimer: Points are added to your Microsoft account within 60 days after the event. Must register with a Microsoft account email. Up to 10,000 developers eligible. Points will be applied upon registration and attendance and will not be counted multiple times for registering or attending at different events. Terms | Privacy*

## JDConf 2026 Regional Live Streams

**Americas – April 8, 8:30 AM – 12:30 PM PDT (UTC-7).** Bruno Borges hosts the Americas stream, discussing practical agentic Java topics like memory management, multi-agent system design, LLM integration, modernization with AI, and dependency security. Experts from Redis, IBM, Hammerspace, HeroDevs, AI Collective, Tekskills, and Microsoft share their insights. Register for Americas →

**Asia-Pacific – April 9, 10:00 AM – 2:00 PM SGT (UTC+8).** Brian Benz and Ayan Gupta co-host the APAC stream, highlighting Java frameworks and practices for agentic delivery. Topics include Spring AI, multi-agent orchestration, spec-driven development, scalable DevOps, and legacy modernization, with speakers from Broadcom, Alibaba, CERN, MHP (A Porsche Company), and Microsoft. Register for Asia-Pacific →

**Europe, Middle East and Africa – April 9, 9:00 AM – 12:30 PM GMT (UTC+0).** The EMEA stream, hosted by Sandra Ahlgrimm, will address the implementation of agentic Java in production environments. Topics include self-improving systems utilizing Spring AI, Docker sandboxes for agent workflow management, Retrieval-Augmented Generation (RAG) pipelines, modernization initiatives from a national tax authority, and AI-driven CI/CD enhancements. Presentations will feature experts from Broadcom, Docker, Elastic, Azul Systems, IBM, Team Rockstars IT, and Microsoft. Register for EMEA →

## Make It Interactive: Join Live

Come prepared with an actual challenge you’re facing, whether you’re modernizing a legacy application, connecting agents to internal APIs, or refining CI/CD processes. Test your strategies by participating in live chats and Q&As with presenters and fellow professionals. If you’re attending with your team, schedule a debrief after the live stream to discuss how to quickly use key takeaways and insights in your pilots and projects.

## Learning Resources

- Java and AI for Beginners Video Series: Practical, episode-based walkthroughs on MCP, GenAI integration, and building AI-powered apps from scratch.
- Modernize Java Apps Guide: Step-by-step guide using GitHub Copilot agent mode for legacy Java project upgrades, automated fixes, and cloud-ready migrations.
- AI Agents for Java Webinar: Embedding AI Agent capabilities into Java applications using Microsoft Foundry, from project setup to production deployment.
- Java Practitioner’s Guide: Learning plan for deploying, managing, and optimizing Java applications on Azure using modern cloud-native approaches.

## Register Now

JDConf 2026 is a free global event for Java teams. Join live to ask questions, connect, and gain practical patterns. All 23 sessions will be available on-demand. Register now to earn Microsoft Rewards points for attending. Register at JDConf.com
# Take Control of Every Message: Partial Failure Handling for Service Bus Triggers in Azure Functions

## The Problem: All-or-Nothing Batch Processing in Azure Service Bus

Azure Service Bus is one of the most widely used messaging services for building event-driven applications on Azure. When you use Azure Functions with a Service Bus trigger in batch mode, your function receives multiple messages at once for efficient, high-throughput processing. But what happens when one message in the batch fails?

Your function receives a batch of 50 Service Bus messages. 49 process perfectly. 1 fails. What happens? In the default model, the entire batch fails. All 50 messages go back on the queue and get reprocessed, including the 49 that already succeeded. This leads to:

- **Duplicate processing** — messages that were already handled successfully get processed again
- **Wasted compute** — you pay for re-executing work that already completed
- **Infinite retry loops** — if that one "poison" message keeps failing, it blocks the entire batch indefinitely
- **Idempotency burden** — your downstream systems must handle duplicates gracefully, adding complexity to every consumer

This is the classic all-or-nothing batch failure problem. Azure Functions solves it with per-message settlement.

## The Solution: Per-Message Settlement for Azure Service Bus

Azure Functions gives you direct control over how each individual message is settled in real time, as you process it. Instead of treating the batch as all-or-nothing, you settle each message independently based on its processing outcome. With Service Bus message settlement actions in Azure Functions, you can:

| Action | What It Does |
| --- | --- |
| Complete | Remove the message from the queue (successfully processed) |
| Abandon | Release the lock so the message returns to the queue for retry, optionally modifying application properties |
| Dead-letter | Move the message to the dead-letter queue (poison message handling) |
| Defer | Keep the message in the queue but make it only retrievable by sequence number |

This means in a batch of 50 messages, you can:

- Complete 47 that processed successfully
- Abandon 2 that hit a transient error (with updated retry metadata)
- Dead-letter 1 that is malformed and will never succeed

All in a single function invocation. No reprocessing of successful messages. No building failure response objects. No all-or-nothing.

## Why This Matters

### 1. Eliminates Duplicate Processing

When you complete messages individually, successfully processed messages are immediately removed from the queue. There's no chance of them being redelivered, even if other messages in the same batch fail.

### 2. Enables Granular Error Handling

Different failures deserve different treatments. A malformed message should be dead-lettered immediately. A message that failed due to a transient database timeout should be abandoned for retry. A message that requires manual intervention should be deferred. Per-message settlement gives you this granularity.

### 3. Implements Exponential Backoff Without External Infrastructure

By combining abandon with modified application properties, you can track retry counts per message and implement exponential backoff patterns directly in your function code, with no additional queues or Durable Functions required. (A minimal sketch of this routing pattern appears after this section.)

### 4. Reduces Cost

You stop paying for redundant re-execution of already-successful work. In high-throughput systems processing millions of messages, this can be a material cost reduction.

### 5. Simplifies Idempotency Requirements

When successful messages are never redelivered, your downstream systems don't need to guard against duplicates as aggressively. This reduces architectural complexity and potential for bugs.
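To make the retry-routing idea in point 3 concrete, here is a minimal, hypothetical Python sketch; it is not taken from the article's linked samples. It uses the Python settlement API shown later in this post (`ServiceBusMessageActions` with `complete` and `dead_letter`), and it assumes the actions object also exposes `abandon`, mirroring the `abandon()` action listed for the TypeScript extension. Rather than tracking retry metadata in application properties, it leans on the Service Bus SDK's standard `delivery_count` property as a built-in retry counter, so this shows the simpler retry-budget variant of the pattern; full exponential backoff would additionally stamp retry metadata onto the message on abandon, as the article describes.

```python
import json
import logging
from typing import List

import azure.functions as func
import azurefunctions.extensions.bindings.servicebus as servicebus

app = func.FunctionApp()

MAX_DELIVERIES = 5  # hypothetical threshold; tune for your workload

@app.service_bus_queue_trigger(arg_name="messages",
                               queue_name="orders-queue",
                               connection="SERVICEBUS_CONNECTION",
                               auto_complete_messages=False,
                               cardinality="many")
def process_with_retry_budget(messages: List[servicebus.ServiceBusReceivedMessage],
                              message_actions: servicebus.ServiceBusMessageActions):
    for message in messages:
        try:
            order = json.loads(message.body)
            process_order(order)                  # your business logic
            message_actions.complete(message)     # success: remove from queue
        except ValueError:
            # A malformed payload will never succeed: dead-letter immediately.
            message_actions.dead_letter(message)
        except Exception as e:
            # Transient failure. Service Bus increments delivery_count on each
            # redelivery, so it acts as a per-message retry counter.
            if message.delivery_count >= MAX_DELIVERIES:
                logging.error(f"Giving up on {message.message_id}: {e}")
                message_actions.dead_letter(message)
            else:
                logging.warning(f"Retrying {message.message_id} "
                                f"(delivery {message.delivery_count}): {e}")
                message_actions.abandon(message)  # back to the queue for retry

def process_order(order):
    logging.info(f"Processing order: {order['id']}")
```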
## Before: One Message = One Function Invocation

Before batch support, there was no cardinality option; Azure Functions processed each Service Bus message as a separate function invocation. If your queue had 50 messages, the runtime spun up 50 individual executions.

### Single-Message Processing (The Old Way)

```typescript
import { app, InvocationContext } from '@azure/functions';

async function handleOrderMessage(
  message: unknown, // ← One message at a time, no batch
  context: InvocationContext
): Promise<void> {
  try {
    const order = message as Order;
    await processOrder(order); // your business logic
  } catch (error) {
    context.error('Failed to process message:', error);
    // Message auto-completes by default.
    throw error;
  }
}

app.serviceBusQueue('processOrder', {
  connection: 'ServiceBusConnection',
  queueName: 'orders-queue',
  handler: handleOrderMessage,
});
```

What this cost you:

| 50 messages on the queue | Old (single-message) | New (batch + settlement) |
| --- | --- | --- |
| Function invocations | 50 separate invocations | 1 invocation |
| Connection overhead | 50 separate DB/API connections | 1 connection, reused across batch |
| Compute cost | 50× invocation overhead | 1× invocation overhead |
| Settlement control | Binary: throw or don't | 4 actions per message |

Every message paid the full price of a function invocation: startup, connection setup, teardown. At scale (millions of messages/day), this was a significant cost and latency penalty. And when a message failed, your only option was to throw (retry the whole message) or swallow the error (lose it silently).

## Code Examples

Let's see how this looks across all three major Azure Functions language stacks.

### Node.js (TypeScript with @azure/functions-extensions-servicebus)

```typescript
import '@azure/functions-extensions-servicebus';
import { app, InvocationContext } from '@azure/functions';
import { ServiceBusMessageContext, messageBodyAsJson } from '@azure/functions-extensions-servicebus';

interface Order {
  id: string;
  product: string;
  amount: number;
}

export async function processOrderBatch(
  sbContext: ServiceBusMessageContext,
  context: InvocationContext
): Promise<void> {
  const { messages, actions } = sbContext;

  for (const message of messages) {
    try {
      const order = messageBodyAsJson<Order>(message);
      await processOrder(order);
      await actions.complete(message); // ✅ Done
    } catch (error) {
      context.error(`Failed ${message.messageId}:`, error);
      await actions.deadletter(message); // ☠️ Poison
    }
  }
}

app.serviceBusQueue('processOrderBatch', {
  connection: 'ServiceBusConnection',
  queueName: 'orders-queue',
  sdkBinding: true,
  autoCompleteMessages: false,
  cardinality: 'many',
  handler: processOrderBatch,
});
```

Key points:

- Enable `sdkBinding: true` and `autoCompleteMessages: false` to gain manual settlement control
- `ServiceBusMessageContext` provides both the `messages` array and `actions` object
- Settlement actions: `complete()`, `abandon()`, `deadletter()`, `defer()`
- Application properties can be passed to `abandon()` for retry tracking
- Built-in helpers like `messageBodyAsJson<T>()` handle Buffer-to-object parsing

Full sample: serviceBusSampleWithComplete

### Python (V2 Programming Model)

```python
import json
import logging
from typing import List

import azure.functions as func
import azurefunctions.extensions.bindings.servicebus as servicebus

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.service_bus_queue_trigger(arg_name="messages",
                               queue_name="orders-queue",
                               connection="SERVICEBUS_CONNECTION",
                               auto_complete_messages=False,
                               cardinality="many")
def process_order_batch(messages: List[servicebus.ServiceBusReceivedMessage],
                        message_actions: servicebus.ServiceBusMessageActions):
    for message in messages:
        try:
            order = json.loads(message.body)
            process_order(order)
            message_actions.complete(message)  # ✅ Done
        except Exception as e:
            logging.error(f"Failed {message.message_id}: {e}")
            message_actions.dead_letter(message)  # ☠️ Poison

def process_order(order):
    logging.info(f"Processing order: {order['id']}")
```

Key points:

- Uses `azurefunctions.extensions.bindings.servicebus` for SDK-type bindings with `ServiceBusReceivedMessage`
- Supports both queue and topic triggers with `cardinality="many"` for batch processing
- Each message exposes SDK properties like `body`, `enqueued_time_utc`, `lock_token`, `message_id`, and `sequence_number`

Full sample: servicebus_samples_settlement

### .NET (C# Isolated Worker)

```csharp
using Azure.Messaging.ServiceBus;
using Microsoft.Azure.Functions.Worker;
using Microsoft.Extensions.Logging;

public class ServiceBusBatchProcessor(ILogger<ServiceBusBatchProcessor> logger)
{
    [Function(nameof(ProcessOrderBatch))]
    public async Task ProcessOrderBatch(
        [ServiceBusTrigger("orders-queue", Connection = "ServiceBusConnection")]
        ServiceBusReceivedMessage[] messages,
        ServiceBusMessageActions messageActions)
    {
        foreach (var message in messages)
        {
            try
            {
                var order = message.Body.ToObjectFromJson<Order>();
                await ProcessOrder(order);
                await messageActions.CompleteMessageAsync(message); // ✅ Done
            }
            catch (Exception ex)
            {
                logger.LogError(ex, "Failed {MessageId}", message.MessageId);
                await messageActions.DeadLetterMessageAsync(message); // ☠️ Poison
            }
        }
    }

    private Task ProcessOrder(Order order) => Task.CompletedTask;
}

public record Order(string Id, string Product, decimal Amount);
```

Key points:

- Inject `ServiceBusMessageActions` directly alongside the message array
- Each message is individually settled with `CompleteMessageAsync`, `DeadLetterMessageAsync`, or `AbandonMessageAsync`
- Application properties can be modified on abandon to track retry metadata

Full sample: ServiceBusReceivedMessageFunctions.cs
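The samples above settle with complete and dead-letter only. Defer, the fourth action in the settlement table, deserves a quick illustration: it parks a message so normal receives skip it, and it can later be retrieved only by sequence number. The following Python sketch is hypothetical rather than taken from the linked samples; it assumes the Python actions object exposes `defer()`, mirroring the `defer()` action listed for the TypeScript extension, and the `record_for_review` helper is a stand-in for wherever you persist sequence numbers for a later, out-of-band retrieval.

```python
import json
import logging
from typing import List

import azure.functions as func
import azurefunctions.extensions.bindings.servicebus as servicebus

app = func.FunctionApp()

def record_for_review(sequence_number: int, reason: str) -> None:
    # Hypothetical helper: persist the sequence number (e.g., to a database)
    # so an operator or follow-up job can fetch the deferred message later.
    logging.info(f"Deferred message {sequence_number}: {reason}")

@app.service_bus_queue_trigger(arg_name="messages",
                               queue_name="orders-queue",
                               connection="SERVICEBUS_CONNECTION",
                               auto_complete_messages=False,
                               cardinality="many")
def triage_orders(messages: List[servicebus.ServiceBusReceivedMessage],
                  message_actions: servicebus.ServiceBusMessageActions):
    for message in messages:
        order = json.loads(message.body)
        if order.get("requires_manual_approval"):
            # Park the message: it stays in the queue but is only retrievable
            # by sequence number, so the rest of the batch is not blocked.
            record_for_review(message.sequence_number, "manual approval")
            message_actions.defer(message)
        else:
            message_actions.complete(message)
```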
# Help wanted: Refresh articles in Azure Architecture Center (AAC)

I’m the Project Manager for architecture review boards (ARBs) in the Azure Architecture Center (AAC). We’re looking for subject matter experts to help us improve the freshness of the AAC, Cloud Adoption Framework (CAF), and Well-Architected Framework (WAF) repos. This opportunity is currently limited to Microsoft employees only.

As an ARB member, your main focus is to review, update, and maintain content to meet quarterly freshness targets. Your involvement directly impacts the quality, relevance, and direction of Azure Patterns & Practices content across AAC, CAF, and WAF. The content in these repos reaches almost 900,000 unique readers per month, so your time investment has a big, global impact. The expected commitment is 4-6 hours per month, including attendance at weekly or bi-weekly sync meetings.

Become an ARB member to gain:

- Increased visibility and credibility as a subject‑matter expert by contributing to Microsoft‑authored guidance used by customers and partners worldwide.
- Broader internal reach and networking without changing roles or teams.
- Attribution on Microsoft Learn articles that you own.
- Opportunity to take on expanded roles over time (for example, owning a set of articles, mentoring contributors, or helping shape ARB direction).

We’re recruiting new members across several ARBs. Our highest needs are in the Web ARB, Containers ARB, and Data & Analytics ARB:

- The Web ARB focuses on modern web application architecture on Azure—App Service and PaaS web apps, APIs and API Management, ingress and networking (Application Gateway, Front Door, DNS), security and identity, and designing for reliability, scalability, and disaster recovery.
- The Containers ARB focuses on containerized and Kubernetes‑based architectures—AKS design and operations, networking and ingress, security and identity, scalability, and reliability for production container platforms.
- The Data & Analytics ARB focuses on data platform and analytics architectures—data ingestion and integration, analytics and reporting, streaming and real‑time scenarios, data security and governance, and designing scalable, reliable data solutions on Azure.

We’re also looking for people to take ownership of other articles across AAC, CAF, and WAF. These articles span many areas, including application and solution architectures, containers and compute, networking and security, governance and observability, data and integration, and reliability and operational best practices. You don’t need to know everything—deep expertise in one or two areas and an interest in keeping Azure architecture guidance accurate and current is what matters most.

Please reply to this post if you’re interested in becoming an ARB member, and I’ll follow up with next steps. If you prefer, you can email me at v-jodimartis@microsoft.com. Thanks! 🙂
# A Practical Path Forward for Heroku Customers with Azure

On February 6, 2026, Heroku announced it is moving to a sustaining engineering model focused on stability, security, reliability, and ongoing support. Many customers are now reassessing how their application platforms will support today’s workloads and future innovation. Microsoft is committed to helping customers migrate and modernize applications from platforms like Heroku to Azure.
# Rethinking Background Workloads with Azure Functions on Azure Container Apps

## Objective

Azure Container Apps provides a flexible platform for running background workloads, supporting multiple execution models to address different workload needs. Two commonly used models are:

- Azure Functions on Azure Container Apps – overview of Azure Functions
- Azure Container Apps Jobs – overview of Container Apps Jobs

Both are first‑class capabilities on the same platform and are designed for different types of background processing. This blog explores:

- Use cases where Azure Functions on Azure Container Apps are best suited
- Use cases where Container Apps Jobs provide advantages

## Use Cases Where Azure Functions on Azure Container Apps Are Suited

Azure Functions on Azure Container Apps are particularly well suited for event‑driven and workflow‑oriented background workloads, where work is initiated by external signals and coordination is a core concern. The following use cases illustrate scenarios where the Functions programming model aligns naturally with the workload, allowing teams to focus on business logic while the platform handles triggering, scaling, and coordination.

### Event‑Driven Data Ingestion Pipelines

For ingestion pipelines where data arrives asynchronously and unpredictably.

Example: A retail company processes inventory updates from hundreds of suppliers. Files land in Blob Storage overnight, varying widely in size and arrival time. In this scenario:

- Each file is processed independently as it arrives
- Execution is driven by actual data arrival, not schedules
- Parallelism and retries are handled by the platform

```python
@app.blob_trigger(arg_name="blob", path="inventory-uploads/{name}",
                  connection="StorageConnection")
async def process_inventory(blob: func.InputStream):
    data = blob.read()
    # Transform and load to database
    await transform_and_load(data, blob.name)
```

### Multi‑Step, Event‑Driven Processing Workflows

Functions work well for workloads that involve multiple dependent steps, where each step can fail independently and must be retried or resumed safely.

Example: An order processing workflow that includes validation, inventory checks, payment capture, and fulfilment notifications. With Durable Functions:

- Workflow state is persisted automatically
- Each step can be retried independently
- Execution resumes from the point of failure rather than restarting

Durable Functions on Container Apps solves this declaratively:

```python
@app.orchestration_trigger(context_name="context")
def order_workflow(context: df.DurableOrchestrationContext):
    order = context.get_input()
    # Each step is independently retryable with built-in checkpointing
    validated = yield context.call_activity("validate_order", order)
    inventory = yield context.call_activity("check_inventory", validated)
    payment = yield context.call_activity("capture_payment", inventory)
    yield context.call_activity("notify_fulfillment", payment)
    return {"status": "completed", "order_id": order["id"]}
```

### Scheduled, Recurring Background Tasks

For time‑based background work that runs on a predictable cadence and is closely tied to application logic.

Example: Daily financial summaries, weekly aggregations, or month‑end reconciliation reports.
Timer‑triggered Functions allow:

- Schedules to be defined in code
- Logic to be versioned alongside application code
- Execution to run in the same Container Apps environment as other services

```python
@app.timer_trigger(schedule="0 0 6 * * *", arg_name="timer")
async def daily_financial_summary(timer: func.TimerRequest):
    if timer.past_due:
        logging.warning("Timer is running late!")
    await generate_summary(date.today() - timedelta(days=1))
    await send_to_stakeholders()
```

### Long‑Running, Parallelizable Workloads

Scenarios that require long‑running workloads to be decomposed into smaller units of work and coordinated as a workflow.

Example: A large data migration processing millions of records. With Durable Functions:

- Work is split into independent batches
- Batches execute in parallel across multiple instances
- Progress is checkpointed automatically
- Failures are isolated to individual batches

```python
@app.orchestration_trigger(context_name="context")
def migration_orchestrator(context: df.DurableOrchestrationContext):
    batches = yield context.call_activity("get_migration_batches")
    # Process all batches in parallel across multiple instances
    tasks = [context.call_activity("migrate_batch", b) for b in batches]
    results = yield context.task_all(tasks)
    yield context.call_activity("generate_report", results)
```

## Use Cases Where Container Apps Jobs Are a Best Fit

Azure Container Apps Jobs are well suited for workloads that require explicit execution control or full ownership of the runtime and lifecycle. Common examples include:

### Batch Processing Using Existing Container Images

Teams often have existing containerized batch workloads such as data processors, ETL tools, or analytics jobs that are already packaged and validated. When refactoring these workloads into a Functions programming model is not desirable, Container Apps Jobs allow them to run unchanged while integrating into the Container Apps environment.

### Large-Scale Data Migrations and One-Time Operations

Jobs are a natural fit for one‑time or infrequently run migrations, such as schema upgrades, backfills, or bulk data transformations. These workloads are typically:

- Explicitly triggered
- Closely monitored
- Designed to run to completion under controlled conditions

The ability to manage execution, retries, and shutdown behavior directly is often important in these scenarios.

### Custom Runtime or Specialized Dependency Workloads

Some workloads rely on:

- Specialized runtimes
- Native system libraries
- Third‑party tools or binaries

When these requirements fall outside the supported Functions runtimes, Container Apps Jobs provide the flexibility to define the runtime environment exactly as needed.

### Externally Orchestrated or Manually Triggered Workloads

In some architectures, execution is coordinated by an external system such as:

- A CI/CD pipeline
- An operations workflow
- A custom scheduler or control plane

Container Apps Jobs integrate well into these models, where execution is initiated explicitly rather than driven by platform‑managed triggers.

### Long-Running, Single-Instance Processing

For workloads that are intentionally designed to run as a single execution unit without fan‑out, trigger‑based scaling, or workflow orchestration, Jobs provide a straightforward execution model. This includes tasks where parallelism, retries, and state handling are implemented directly within the application, as sketched below.
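To illustrate the run-to-completion model, here is a minimal, hypothetical Python entrypoint for a manually triggered job. Nothing here comes from an Azure SDK: the `fetch_pending_records` and `process_record` names are stand-ins for your own data source and business logic, and the in-process retry loop is one possible shape of "retries implemented directly within the application". The container simply runs the script and exits, and the job execution is recorded as succeeded or failed based on the exit code.

```python
import logging
import sys
import time

MAX_ATTEMPTS = 3  # hypothetical per-record retry budget

def fetch_pending_records():
    # Stand-in for your data source (database query, file listing, etc.)
    return [{"id": i} for i in range(100)]

def process_record(record):
    # Stand-in for your business logic
    logging.info("Processing record %s", record["id"])

def main() -> int:
    failures = 0
    for record in fetch_pending_records():
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                process_record(record)
                break
            except Exception:
                logging.exception("Attempt %d failed for %s", attempt, record["id"])
                time.sleep(2 ** attempt)  # simple in-process backoff
        else:
            failures += 1
    # A non-zero exit code marks this job execution as failed.
    return 1 if failures else 0

if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    sys.exit(main())
```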
## Making the Choice

| Consideration | Azure Functions on Azure Container Apps | Azure Container Apps Jobs |
| --- | --- | --- |
| Trigger model | Event‑driven (files, messages, timers, HTTP, events) | Explicit execution (manual, scheduled, or externally triggered) |
| Scaling behavior | Automatic scaling based on trigger volume / queue depth | Fixed or explicitly defined parallelism |
| Programming model | Functions programming model with triggers, bindings, Durable Functions | General container execution model |
| State management | Built‑in state, retries, and checkpointing via Durable Functions | Custom state management required |
| Workflow orchestration | Native support using Durable Functions | Must be implemented manually |
| Boilerplate required | Minimal (no polling, retry, or coordination code) | Higher (polling, retries, lifecycle handling) |
| Runtime flexibility | Limited to supported Functions runtimes | Full control over runtime and dependencies |

## Getting Started with Functions on Azure Container Apps

If you’re already running on Container Apps, adding Functions is straightforward: your Functions run alongside your existing apps, sharing the same networking, observability, and scaling infrastructure. Check out the documentation for details: Getting Started on Functions on Azure Container Apps.

```bash
# Create a Functions app in your existing Container Apps environment
az functionapp create \
  --name my-batch-processor \
  --storage-account mystorageaccount \
  --environment my-container-apps-env \
  --workload-profile-name "Consumption" \
  --runtime python \
  --functions-version 4
```

## Getting Started with Container Apps Jobs on Azure Container Apps

If you already have an Azure Container Apps environment, you can create a job using the Azure CLI. Check out the documentation for details: Jobs in Azure Container Apps.

```bash
az containerapp job create \
  --name my-job \
  --resource-group my-resource-group \
  --environment my-container-apps-env \
  --trigger-type Manual \
  --image mcr.microsoft.com/k8se/quickstart-jobs:latest \
  --cpu 0.25 \
  --memory 0.5Gi
```

## Quick Links

- Azure Functions on Azure Container Apps overview
- Create your Azure Functions app through custom containers on Azure Container Apps
- Run event-driven and batch workloads with Azure Functions on Azure Container Apps
# Best Practice: Using Self-Signed Certificates with Java on Azure Functions (Linux)

If you are developing Java applications on Azure Functions (Linux dedicated plan) and need to connect to services secured by self-signed certificates, you have likely encountered the dreaded SSL handshake error:

```
PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
```

By default, the Java Virtual Machine (JVM) only trusts certificates signed by well-known Certificate Authorities (CAs). To fix this, you need to tell your Java Function App to trust your specific self-signed certificate. While there are several ways to achieve this, this guide outlines the best practice: manually adding the certificate to a custom Java keystore located in persistent storage.

## Why this approach?

In Azure App Service and Azure Functions (Linux), the file system is generally ephemeral, meaning changes to system folders (like /usr/lib/jvm) are lost upon restart. However, the /home directory is persistent. By creating a custom truststore in /home and pointing the JVM to it, your configuration remains intact across restarts, scaling operations, and platform updates.

## Step-by-Step Solution

### 1. Prepare the Custom Keystore

First, we need to create a new base keystore. We will copy the default system cacerts (which contains standard public CAs) to our persistent storage.

Connect to your Function App via SSH using the Kudu site (https://&lt;your-app-name&gt;.scm.azurewebsites.net/webssh/host) and run the following command to copy the truststore. (Note: The source path may vary depending on your Java version. You can confirm your exact JVM path by running `echo $JAVA_HOME` in the console. For example, if it returns /usr/lib/jvm/msft-17-x64, use that path below.)

```bash
cp /usr/lib/jvm/msft-17-x64/lib/security/cacerts /home/site/wwwroot/my-truststore.jks
```

### 2. Import the Self-Signed Certificate

Upload your root certificate (e.g., self-signed.badssl.com.cer) to the site (you can use drag-and-drop in Kudu or FTP). Then, import it into your new custom keystore. Run the following command (ensure keytool is in your PATH or navigate to the bin folder):

```bash
./keytool -import -alias my-self-signed-cert \
  -file /home/self-signed.badssl.com.cer \
  -keystore /home/site/wwwroot/my-truststore.jks \
  -storepass changeit -noprompt
```

### 3. Verify the Import

It is always good practice to verify that the certificate was actually added. Run:

```bash
./keytool -list -v \
  -keystore /home/site/wwwroot/my-truststore.jks \
  -storepass changeit -alias my-self-signed-cert
```

If successful, you will see the certificate details printed in the console.

### 4. Configure the Application Setting

Finally, we need to tell the JVM to use our new truststore instead of the default system one. Go to the Azure Portal > Configuration > Application Settings and add (or update) the JAVA_OPTS setting:

- Name: `JAVA_OPTS`
- Value: `-Djavax.net.ssl.trustStore=/home/site/wwwroot/my-truststore.jks -Djavax.net.ssl.trustStorePassword=changeit`

Save the settings. This will restart your Function App, and the JVM will now load your custom truststore at startup.

## Important Considerations

### File Location & Deployment

In the example above, we placed the keystore in /home/site/wwwroot/. Warning: Depending on your deployment method (e.g., specific ZipDeploy configurations or "Run From Package"), the contents of /wwwroot might be wiped or overwritten during a new code deployment. If you are concerned about your deployment process overwriting the .jks file, you can save it in any other folder under /home, for example, /home/my-certs/.
Just update the JAVA_OPTS path accordingly.

### Maintenance

This is a manual solution. If your self-signed certificate expires:

- You do not need to recreate the whole keystore. Simply run the `./keytool -import` command again to update the certificate in the existing .jks file.
- Maintaining the validity of the self-signed certificate is your responsibility.

### Azure Key Vault Note

You might wonder, "Can I use Azure Key Vault?" Azure Key Vault is excellent for private keys, but it generally supports importing .pfx or .pem formats for privately signed certificates. Since public .cer certificates are not secrets (they are public, after all), the method above is often the most direct way to handle them for Java trust validation.

## Alternative Workarounds

If you prefer not to manage a custom keystore file in the persistent /home directory, here are two alternative approaches. Both of these require modifying your application code.

### 1. Load the Azure-Managed Certificate via Code

You can upload your .cer public certificate directly to the TLS/SSL settings (Public Key Certificates) blade in the Azure Portal. After uploading, you must add the Application Setting WEBSITE_LOAD_CERTIFICATES with the value * (or the specific certificate thumbprint). Azure acts as the OS loader: it places the certificate file at /var/ssl/certs/&lt;thumbprint&gt;.der.

**Important Distinction: App Service vs. Function App**

There is a difference in how the "Blessed Images" (the default platform images) handle these certificates at startup:

- Azure App Service (Linux): In many scenarios, the platform's startup scripts automatically import these certificates into the JVM keystore.
- Azure Functions (Linux): The Function App runtime does not automatically import these certificates into the JVM keystore during startup.

If you SSH into the Function App and run openssl or curl, the connection might succeed because those OS-level tools check the /var/ssl/certs folder. However, your Java application will throw the handshake error shown above, because the JVM only looks at its own cacerts truststore, which is effectively empty of your custom certs. Since the certificate is present on the disk, you must write Java code to explicitly load this specific file into an SSLContext.

Reference: Use TLS/SSL Certificates in App Code - Azure App Service | Microsoft Learn

### 2. Build the JKS Locally and Load It via Code

Instead of creating the keystore on the server (the "Best Practice" method), you can create the my-truststore.jks on your local developer machine, include it inside your application (e.g., in src/main/resources), and deploy it as part of your JAR/WAR. You then write code to load this JKS file from the classpath or file system to initialize your SSL connection.

Reference: Configure Security for Tomcat, JBoss, or Java SE Apps - Azure App Service | Microsoft Learn
# Building MCP Apps with Azure Functions MCP Extension

Today, we are thrilled to announce the release of MCP App support in the Azure Functions MCP (Model Context Protocol) extension! You can now build MCP Apps using the Functions MCP extension in Python, TypeScript, and .NET.

## What are MCP Apps?

Until now, MCP has primarily been a way for AI agents to “talk” to data and tools. A tool would take an input, perform a task, and return a text response. While powerful, text has limits. For example, it’s easier to see a chart than to read a long list of data points. It’s also more convenient and accurate to provide complex inputs via a form than a series of text responses.

MCP Apps address these limits by allowing MCP servers to return interactive HTML interfaces that render directly in the conversation. The following scenarios shed light on how the UI capabilities of MCP Apps improve the user experience of MCP tools in ways that text can’t:

- **Data exploration:** A sales analytics tool returns an interactive dashboard. Users filter by region, drill down into specific accounts, and export reports without leaving the conversation.
- **Configuration wizards:** A deployment tool presents a form with dependent fields. Selecting “production” reveals additional security options; selecting “staging” shows different defaults.
- **Real-time monitoring:** A server health tool shows live metrics that update as systems change. No need to re-run the tool to see current status.

## Building MCP Apps with Azure Functions MCP Extension

Azure Functions is the ideal platform for hosting remote MCP servers because of its built-in authentication, event-driven scaling from 0 to N, and serverless billing. This ensures your agentic tools are secure, cost-effective, and ready to handle any load.

## How It Works: Connecting Tools to Resources

Building an MCP App involves two main components:

- **Tools:** Executable functions that allow an LLM to interact with external systems (e.g., querying a database or sending an email).
- **Resources:** Read-only data entities (e.g., log files, API docs, or database schemas) that provide the LLM with information without triggering side effects.

You connect the tools to resources via the tools’ metadata.

### 1. The Tool with UI Metadata

The following code snippet defines an MCP tool called GetWeather using the McpToolTrigger and associated metadata using McpMetadata. The McpMetadata declares that the tool has an associated UI, telling AI clients that when this tool is invoked, there’s a specific visual component available to display the results.

Example (Python):

```python
TOOL_METADATA = '{"ui": {"resourceUri": "ui://weather/index.html"}}'

@app.mcp_tool(metadata=TOOL_METADATA)
@app.mcp_tool_property(arg_name="location",
                       description="City name to check weather for (e.g., Seattle, New York, Miami)")
def get_weather(location: str) -> str:
    result = weather_service.get_current_weather(location)
    return json.dumps(result)
```

Example (C#):

```csharp
private const string ToolMetadata = """
    {
      "ui": {
        "resourceUri": "ui://weather/index.html"
      }
    }
    """;

[Function(nameof(GetWeather))]
public async Task<object> GetWeather(
    [McpToolTrigger(nameof(GetWeather), "Returns current weather for a location via Open-Meteo.")]
    [McpMetadata(ToolMetadata)]
    ToolInvocationContext context,
    [McpToolProperty("location", "City name to check weather for (e.g., Seattle, New York, Miami)")]
    string location)
{
    var result = await _weatherService.GetCurrentWeatherAsync(location);
    return result;
}
```
### 2. The Resource Serving the UI

The following snippet defines an MCP resource called GetWeatherWidget, which serves the bundled HTML at the matching URI. The MimeType is set to text/html;profile=mcp-app. Note that the resource URI (ui://weather/index.html) is the same as the one specified in ToolMetadata above.

Example (Python):

```python
RESOURCE_METADATA = '{"ui": {"prefersBorder": true}}'
WEATHER_WIDGET_URI = "ui://weather/index.html"
WEATHER_WIDGET_NAME = "Weather Widget"
WEATHER_WIDGET_DESCRIPTION = "Interactive weather display for MCP Apps"
WEATHER_WIDGET_MIME_TYPE = "text/html;profile=mcp-app"

@app.mcp_resource_trigger(
    arg_name="context",
    uri=WEATHER_WIDGET_URI,
    resource_name=WEATHER_WIDGET_NAME,
    description=WEATHER_WIDGET_DESCRIPTION,
    mime_type=WEATHER_WIDGET_MIME_TYPE,
    metadata=RESOURCE_METADATA
)
def get_weather_widget(context) -> str:
    # Get the path to the widget HTML file
    current_dir = Path(__file__).parent
    file_path = current_dir / "app" / "dist" / "index.html"
    return file_path.read_text(encoding="utf-8")
```

Example (C#):

```csharp
// Optional UI metadata
private const string ResourceMetadata = """
    {
      "ui": {
        "prefersBorder": true
      }
    }
    """;

[Function(nameof(GetWeatherWidget))]
public string GetWeatherWidget(
    [McpResourceTrigger(
        "ui://weather/index.html",
        "Weather Widget",
        MimeType = "text/html;profile=mcp-app",
        Description = "Interactive weather display for MCP Apps")]
    [McpMetadata(ResourceMetadata)]
    ResourceInvocationContext context)
{
    var file = Path.Combine(AppContext.BaseDirectory, "app", "dist", "index.html");
    return File.ReadAllText(file);
}
```

See the quickstarts in the Get Started section for full sample code.

### 3. Putting It All Together

1. User asks: “What’s the weather in Seattle?”
2. The agent calls the GetWeather tool.
3. The tool returns weather data (as a normal tool result).
4. The tool also includes ui.resourceUri metadata (ui://weather/index.html) telling the client an interactive UI is available.
5. The client fetches the UI resource from ui://weather/index.html and loads it in a sandboxed iframe.
6. The client passes the tool result to the UI app.
7. The user sees an interactive weather widget instead of plain text.

## Get Started

You can start building today using our samples. Each sample demonstrates how to define tools that trigger interactive UI components:

- Python quickstart
- TypeScript quickstart
- .NET quickstart

## Documentation

- Learn more about the Azure Functions MCP extension.
- Learn more about MCP Apps.

## Next Step: Authentication

The samples above secure the MCP Apps using access keys. Learn how to secure the apps using Microsoft Entra and the built-in MCP auth feature.
# How to Troubleshoot Azure Functions Not Visible in Azure Portal

## Overview

Azure Functions is a powerful serverless compute service that enables you to run event-driven code without managing infrastructure. When you deploy functions to Azure, you expect to see them listed in the Azure Portal under your Function App. However, there are situations where your functions may not appear in the portal, even though they were successfully deployed.

This issue can be frustrating, especially when your functions are actually running and processing requests correctly, but you cannot see them in the portal UI. In this blog, we will explore the common causes of functions not appearing in the Azure Portal and provide step-by-step solutions to help you troubleshoot and resolve this issue.

## Understanding How Functions Appear in the Portal

Before diving into troubleshooting, it's important to understand how the Azure Portal discovers and displays your functions.

### Function Visibility Process

When you open a Function App in the Azure Portal, the following process occurs:

1. **Host Status Check:** The portal queries your Function App's host status endpoint (/admin/host/status)
2. **Function Enumeration:** The portal requests a list of functions from the Functions runtime
3. **Metadata Retrieval:** For each function, the portal retrieves metadata including trigger type, bindings, and configuration
4. **UI Rendering:** The portal displays the functions in the Functions blade

If any step in this process fails, your functions may not appear in the portal.

### Key Files for Function Discovery

| File | Purpose | Location |
| --- | --- | --- |
| host.json | Host configuration | Root of function app |
| function.json | Function metadata (script languages) | Each function folder |
| *.dll or compiled code | Function implementation | bin folder or function folder |
| extensions.json | Extension bindings | bin folder |

### Visibility Issue Categories

| Category | Common Causes |
| --- | --- |
| Deployment | Failed deployment, missing files, package issues |
| Function Configuration | Invalid function.json, binding errors, disabled functions |
| Host/Runtime | Host startup failure, runtime errors, worker issues |
| Storage | AzureWebJobsStorage issues, connectivity |
| Portal/Sync | Sync triggers failure, cache issues, ARM API |
| Networking | VNET, private endpoints, firewall blocking |

## Common Causes and Solutions

### 1. Function App Host Is Not Running

**Symptoms:**

- No functions visible in the portal
- "Function host is not running" error message
- Host status shows "Error" or no response

**Why This Happens:** The Functions host must be running for the portal to discover functions. If the host fails to start, functions won't be visible.

**How to Verify:** You can check the host status using the following URL:

```
https://<your-function-app>.azurewebsites.net/admin/host/status?code=<master-key>
```

Expected healthy response:

```json
{
  "state": "Running"
}
```

**Solution:**

1. Navigate to your Function App in the Azure Portal
2. Go to Diagnose and solve problems
3. Search for "Function App Down or reporting" or "Function app startup issue"
4. Review the diagnostics for startup errors

Common fixes:

- Check Application Settings for missing or incorrect values
- Verify the AzureWebJobsStorage connection string is valid
- Ensure FUNCTIONS_EXTENSION_VERSION is set correctly (e.g., ~4)
- Check for missing extension bundles in host.json

### 2. Deployment Issues

**Symptoms:**

- Functions visible locally but not in portal after deployment
- Only some functions appear
- Old versions of functions showing

**Why This Happens:** Deployment problems can result in incomplete or corrupted deployments where function files are missing or incorrectly placed.
**How to Verify:** The verification method depends on your hosting plan.

For Windows plans (Consumption, Premium, Dedicated), use Kudu:

1. Navigate to Development Tools → Advanced Tools (Kudu)
2. Go to Debug console → CMD or PowerShell
3. Navigate to site/wwwroot
4. Verify your function folders and files exist

Kudu is not available for Linux Consumption and Flex Consumption plans; use alternatives such as SSH or the Azure CLI instead (refer to Deployment technologies in Azure Functions).

For compiled languages (C#, F#), verify:

```
site/wwwroot/
├── host.json
├── bin/
│   ├── <YourAssembly>.dll
│   └── extensions.json
└── function1/
    └── function.json
```

Similarly, for script languages (JavaScript, Python, PowerShell), verify the corresponding layout (see the Python, PowerShell, and Node.js folder structure references).

**Solution:**

1. Redeploy your function app using your preferred method:
   - Visual Studio / VS Code
   - Azure CLI
   - GitHub Actions / Azure DevOps
   - Kudu ZIP deploy
2. Clear the deployment cache by restarting your function app through the Portal or CLI/PowerShell:

```bash
# Using Azure CLI
az functionapp restart --name <app-name> --resource-group <resource-group>
```

### 3. Invalid or Missing function.json

**Symptoms:**

- Specific functions not appearing
- Some functions visible, others missing
- Function appears but shows wrong trigger type

**Why This Happens:** Each function requires a valid function.json file (generated at build time for compiled languages, or manually created for script languages). If this file is missing, malformed, or contains errors, the function won't be discovered.

Example of a valid function.json for an HTTP trigger:

```json
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```

Common function.json errors:

| Error | Example | Fix |
| --- | --- | --- |
| Missing type | {"bindings": [{"name": "req"}]} | Add "type": "httpTrigger" |
| Invalid direction | "direction": "input" | Use "in" or "out" |
| Syntax error | Missing comma or bracket | Validate JSON syntax |
| Wrong binding name | Mismatched parameter names | Match code parameter names |

**Solution:**

1. Check the function folder in Kudu for function.json
2. Validate the JSON syntax using a JSON validator
3. For compiled functions, ensure your project builds successfully
4. Check build output for warnings about function metadata

### 4. V2 Programming Model Issues (Python/Node.js)

**Symptoms:**

- Using the Python v2 or Node.js v4 programming model
- Functions defined in code but not visible in portal
- No function.json files in function folders

**Why This Happens:** The v2 programming model for Python and the v4 model for Node.js define functions using decorators/code instead of function.json files. The portal requires the host to be running to discover these functions dynamically.

Python v2 example:

```python
import azure.functions as func
import logging

app = func.FunctionApp()

@app.function_name(name="HttpTrigger1")
@app.route(route="hello", auth_level=func.AuthLevel.FUNCTION)
def hello(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')
    return func.HttpResponse("Hello!")
```

Node.js v4 example:

```javascript
const { app } = require('@azure/functions');

app.http('HttpTrigger1', {
    methods: ['GET', 'POST'],
    authLevel: 'function',
    handler: async (request, context) => {
        context.log('HTTP trigger function processed a request.');
        return { body: 'Hello!' };
    }
});
```

**Solution:**

1. Verify the host is running (see Solution #1)
2. Check your entry point configuration
3. Check Application Insights for host startup errors related to function registration
4. Check the folder structure (see the Python folder structure reference)

### 5. Extension Bundle or Dependencies Missing

**Symptoms:**

- Functions not appearing after adding new trigger types
- Host fails to start with extension-related errors
- Works locally but not in Azure

**Why This Happens:** Azure Functions uses extension bundles to provide trigger and binding implementations. If the bundle is missing or incorrectly configured, functions using those triggers won't work.

**How to Verify:** Check your host.json for the extension bundle configuration:

```json
{
  "version": "2.0",
  "extensionBundle": {
    "id": "Microsoft.Azure.Functions.ExtensionBundle",
    "version": "[4.*, 5.0.0)"
  }
}
```

**Solution:**

1. Ensure the extension bundle is configured in host.json
2. Use a compatible version range (for Functions v4: [4.*, 5.0.0))
3. For compiled C# apps using explicit extensions, ensure all NuGet packages are installed
4. Check for extension installation errors and fix them

### 6. Sync Trigger Issues

**Symptoms:**

- Functions deployed successfully
- Host is running
- Portal still shows no functions or an outdated function list

**Why This Happens:** The Azure Portal caches function metadata. Sometimes this cache becomes stale, or the sync process between the function host and the portal fails.

**Solution:**

Force a sync from the portal:

1. Navigate to your Function App
2. Click the Refresh button in the Functions blade
3. If that doesn't work, go to Overview → Restart

Trigger a sync via REST API:

```bash
az rest --method post --url "https://management.azure.com/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>/providers/Microsoft.Web/sites/<APP_NAME>/syncfunctiontriggers?api-version=2016-08-01"
```

### 7. Storage Account Connectivity Issues

**Symptoms:**

- Functions not visible
- Host shows errors related to storage
- "Unable to get function keys" error

**Why This Happens:** Azure Functions requires access to the storage account specified in AzureWebJobsStorage for:

- Storing function keys and secrets
- Coordinating distributed triggers
- Maintaining internal state

If the function app cannot connect to storage, the host may fail to start properly.

**How to Verify:** Check the Application Settings:

- AzureWebJobsStorage: must be a valid connection string
- WEBSITE_CONTENTAZUREFILECONNECTIONSTRING: for Consumption/Premium plans
- WEBSITE_CONTENTSHARE: file share name

**Solution:**

1. Verify the storage account exists and is accessible
2. Check for firewall rules on the storage account: go to Storage Account → Networking and ensure the Function App has access (public endpoint or VNet integration)
3. Regenerate connection strings if storage keys were rotated: get the new connection string from the storage account and update AzureWebJobsStorage in the Function App settings
4. For VNet-integrated apps, ensure service endpoints or private endpoints are configured and DNS resolution works for storage endpoints

Check for more details: Storage considerations for Azure Functions.

### 8. WEBSITE_RUN_FROM_PACKAGE Issues

**Symptoms:**

- Functions not visible after deployment
- Functions were visible before but disappeared
- "No functions found" in the portal
- Read-only file system errors in logs

**Why This Happens:** When WEBSITE_RUN_FROM_PACKAGE is configured, Azure Functions runs directly from a deployment package (ZIP file) instead of extracting files to wwwroot. If the package is inaccessible, corrupted, or misconfigured, the host cannot load your functions.
**Understanding WEBSITE_RUN_FROM_PACKAGE values:**

| Value | Behavior |
| --- | --- |
| 1 | The function app runs from a local package file deployed to c:\home\data\SitePackages (Windows) or /home/data/SitePackages (Linux). |
| &lt;URL&gt; | Sets a URL that is the remote location of the specific package file you want to run. Required for function apps running on Linux in a Consumption plan. |
| Not set | Traditional deployment (files extracted to wwwroot) |

**How to Verify:**

1. Check the app setting value. If using a URL, verify the package is accessible.
2. Check whether the package exists (when the value is 1):
   - Go to Kudu → Debug Console
   - Navigate to d:\home\data\SitePackages
   - Verify a .zip file exists and packagename.txt points to it
3. Verify the package contents:
   - Download the package
   - Extract it and verify host.json and function files are present at the root level (not in a subfolder)

Common issues:

| Issue | Symptom | Solution |
| --- | --- | --- |
| Expired SAS token | Package URL returns 403 | Generate a new SAS with longer expiry |
| Package URL not accessible | Package URL returns 404 | Verify the blob exists and the URL is correct |
| Wrong package structure | Files in subfolder | Ensure files are at ZIP root, not in a nested folder |
| Corrupted package | Host startup errors | Re-deploy with a fresh package |
| Storage firewall blocking | Timeout errors | Allow Function App access to storage |

### 9. Configuration Filtering Functions

**Symptoms:**

- Only some functions visible
- Specific functions always missing
- Functions worked before a configuration change

**Why This Happens:** Azure Functions provides configuration options to filter which functions are loaded. If these are misconfigured, functions may be excluded.

Configuration option to check: the host.json `functions` array:

```json
{
  "functions": ["Function1", "Function2"]
}
```

**Solution:** Remove the `functions` array from host.json (or ensure all desired functions are listed).

### 10. Networking Configuration Issues

**Symptoms:**

- Functions not visible in portal but the app responds to requests
- "Unable to reach your function app" error in portal
- Portal timeout when loading functions
- Functions visible intermittently
- Host status endpoint not reachable from portal

**Why This Happens:** When your Function App has networking restrictions configured (VNet integration, private endpoints, access restrictions), the Azure Portal may not be able to communicate with your function app to discover and display functions. The portal needs to reach your app's admin endpoints to enumerate functions.

Common networking configurations that cause issues:

| Configuration | Impact | Portal Behavior |
| --- | --- | --- |
| Private endpoint only (no public access) | Portal can't reach admin APIs | "Unable to reach function app" |
| Access restrictions (IP filtering) | Portal IPs blocked | Timeout loading functions |
| VNet integration with forced tunneling | Outbound calls fail | Host can't start properly |
| Storage account behind firewall | Host can't access keys/state | Host startup failures |
| NSG blocking outbound traffic | Can't reach Azure services | Various failures |

**Important Note:** When your Function App is fully private (no public access), you won't be able to see functions in the Azure Portal from outside your network. This is expected behavior. The probe sketch below shows one way to tell a network block apart from a host failure.
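When you suspect cause #10, it helps to distinguish "the host is down" from "the portal (or you) simply can't reach the app". Here is a minimal, hypothetical Python probe to run from a machine with network line of sight to the app (for fully private apps, a jump box inside the VNet): a timeout or connection error suggests a network block, a 401 means the endpoint is reachable but the key is wrong, and a 200 with state Running means the host is healthy and the problem is more likely portal reachability or sync. The URL shape and key placement mirror the admin endpoints shown in this article; only the Python standard library is used.

```python
import json
import sys
import urllib.error
import urllib.request

def probe_host(app_name: str, master_key: str, timeout: float = 10.0) -> None:
    url = (f"https://{app_name}.azurewebsites.net"
           f"/admin/host/status?code={master_key}")
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            status = json.load(resp)
            print(f"Reachable. Host state: {status.get('state')}")
    except urllib.error.HTTPError as e:
        if e.code == 401:
            print("Reachable, but unauthorized: check the master key.")
        else:
            print(f"Reachable, but host returned HTTP {e.code}: "
                  "review host startup causes (1, 5, 7, 8).")
    except urllib.error.URLError as e:
        # Timeouts, DNS failures, and connection refusals usually mean a
        # network block (cause 10): access restrictions, private endpoints,
        # or firewalls between you and the app.
        print(f"Not reachable: {e.reason}. Suspect networking (cause 10).")

if __name__ == "__main__":
    probe_host(sys.argv[1], sys.argv[2])
```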
## Using Diagnose and Solve Problems

The Azure Portal provides built-in diagnostics to help troubleshoot function visibility issues.

**How to Access:**

1. Navigate to your Function App in the Azure Portal
2. Select Diagnose and solve problems from the left menu
3. Search for relevant detectors:
   - Function App Down or Reporting Errors
   - SyncTrigger Issues
   - Deployment
   - Networking

## Quick Troubleshooting Checklist

Use this checklist to quickly diagnose functions not appearing in the portal:

- **Host Status:** Is the host running? Check /admin/host/status
- **Files Present:** Are function files deployed? Check via Kudu
- **function.json Valid:** Is the JSON syntax correct?
- **Run From Package:** If using WEBSITE_RUN_FROM_PACKAGE, is the package accessible and configured correctly?
- **Extension Bundle:** Is extensionBundle properly configured in host.json?
- **Storage Connection:** Is AzureWebJobsStorage valid and reachable?
- **No Filters:** Is the functions array in host.json filtering functions out?
- **V2 Model:** For Python/Node.js v2, is the host running to register functions?
- **Sync Triggered:** Has the portal synced with the host?
- **Networking:** Can the portal reach the app? Check access restrictions/private endpoints

## Verifying Functions via REST API

If you cannot see functions in the portal but believe they're deployed, you can verify directly.

List all functions:

```bash
curl "https://<app>.azurewebsites.net/admin/functions?code=<master-key>"
```

Check a specific function:

```bash
curl "https://<app>.azurewebsites.net/admin/functions/<function-name>?code=<master-key>"
```

Get host status:

```bash
curl "https://<app>.azurewebsites.net/admin/host/status?code=<master-key>"
```

If these APIs return your functions but the portal doesn't show them, the issue is likely a portal caching/sync problem (see Solution #6).

## Conclusion

Functions not appearing in the Azure Portal can be caused by various issues, from deployment problems to configuration filtering. By following the troubleshooting steps outlined in this article, you should be able to identify and resolve the issue.

Key takeaways:

- Always verify the host is running first
- Check that function files are correctly deployed
- Validate function.json and host.json configurations
- Ensure storage connectivity is working
- Use the built-in diagnostics in the Azure Portal
- Force a sync if functions are deployed but not visible

If you continue to experience issues after following these steps, consider opening a support ticket with Microsoft Azure Support, providing:

- Function App name and resource group
- Steps to reproduce the issue
- Any error messages observed
- Recent deployment or configuration changes

## References

- Azure Functions host.json reference
- Azure Functions deployment technologies
- Troubleshoot Azure Functions
- Python V2 programming model
- Node.js V4 programming model
- Azure Functions diagnostics
- Azure Functions networking options

Have questions or feedback? Leave a comment below.