Cloud Services
On-device AI and security: What really matters for the enterprise
AI is evolving, and so is the way businesses run it. Traditionally, most AI workloads have been processed in the cloud. When a user gives an AI tool a prompt, that input is sent over the internet to remote servers, where the model processes it and sends back a result. This model supports large-scale services like Microsoft 365 Copilot, which integrates AI into apps like Word, Excel, and Teams.

Now, a new capability is emerging alongside cloud-based AI. AI can also run directly on a PC—no internet connection or remote server required. This is known as on-device processing. It means the data and the model stay on the device itself, and the work is done locally. Modern CPUs and GPUs are beginning to support this kind of processing. But neural processing units (NPUs), now included in enterprise-grade PCs such as Microsoft Surface Copilot+ PCs, are specifically designed to run AI workloads efficiently. NPUs perform the types of operations AI needs at high speed while using less power. That makes them ideal for features that need to work instantly, run continuously in the background, or operate without an internet connection.

A flexible approach to AI deployment

NPUs can enable power-efficient on-device processing, fast response times with small models, consistent functionality in offline scenarios, and more control over how data is processed and stored. For organizations, this adds flexibility in choosing how and where to run AI—whether to support real-time interactions at the edge or to meet specific data governance requirements. At the same time, cloud-based AI remains essential to how organizations deliver intelligent services across teams and workflows. Microsoft 365 Copilot, for example, is powered by cloud infrastructure and integrates deeply across productivity applications using enterprise-grade identity, access, and content protections.

Both models serve different but complementary needs. On-device AI adds new options for responsiveness and control. Cloud-based AI enables broad integration and centralized scale. Together, they give businesses the flexibility to align AI processing with the demands of the use case, whether for fast local inference or connected collaboration. For business and IT leaders, the question is not which model is better but how to use each effectively within a secure architecture. That starts with understanding where data flows, how it is protected, and what matters most at the endpoint.

Understanding AI data flow and its security impact

AI systems rely on several types of input, such as user prompts, system context, and business content. When AI runs in the cloud, data is transmitted to remote servers for processing. When it runs on the device, processing happens locally. Both approaches have implications for security. With cloud AI, protection depends on the strength of the vendor's infrastructure, encryption standards, and access controls. Security follows a shared responsibility model: the cloud provider secures the platform, while the enterprise defines its policies for data access, classification, and compliance.

Microsoft's approach to data security and privacy in cloud AI services

Although the purpose of this blog post is to talk about on-device AI and security, it's worth a brief detour to touch on how Microsoft approaches data governance across its cloud-based AI services.
Ultimately, the goal is for employees to be able to use whatever tools work best for the task at hand, and they may not differentiate between local and cloud AI services. That means having a trusted provider for both is important for long-term AI value and security in the organization. Microsoft's generative AI solutions, including Azure OpenAI Service and Copilot services and capabilities, do not use your organization's data to train foundation models without your permission. The Azure OpenAI Service is operated by Microsoft as an Azure service; Microsoft hosts the OpenAI models in Microsoft's Azure environment, and the Service does not interact with any services operated by OpenAI (e.g., ChatGPT or the OpenAI API). Microsoft 365 Copilot and other AI tools operate within a secured boundary, pulling from organization-specific content sources like OneDrive and Microsoft Graph while respecting existing access permissions. For more resources on data privacy and security in Microsoft cloud AI services, check out Microsoft Learn.

Local AI security depends on a trusted endpoint

When AI runs on the device, the data stays closer to its source. This reduces reliance on network connectivity and can help limit exposure in scenarios where data residency or confidentiality is a concern. But it also means the device must be secured at every level. Running AI on the device does not inherently make it more or less secure; it shifts the security perimeter. Now the integrity of the endpoint matters even more. Surface Copilot+ PCs are built with this in mind. As secured-core PCs, they integrate hardware-based protections that help guard against firmware, OS-level, and identity-based threats:

- TPM 2.0 and Microsoft Pluton security processors provide hardware-based protection for sensitive data
- A hardware-based root of trust verifies system integrity from boot-up
- Microsoft-developed firmware can reduce exposure to third-party supply chain risks and helps address emerging threats rapidly via Windows Update
- Windows Hello and Enhanced Sign-in Security (ESS) offer strong authentication at the hardware level

These protections and others work together to create a dependable foundation for local AI workloads. When AI runs on a device like this, the same enterprise-grade security stack that protects the OS and applications also applies to AI processing.

Why application design is part of the security equation

Protecting the device is foundational—but it's not the whole story. As organizations begin to adopt generative AI tools that run locally, the security conversation must also expand to include how those tools are designed, governed, and managed. The value of AI increases dramatically when it can work with rich, contextual data. But that same access introduces new risks if not handled properly. Local AI tools must be built with clear boundaries around what data they can access, how that access is granted, and how users and IT teams can control it. This includes opt-in mechanisms, permission models, and visibility into what's being stored and why.

Microsoft Recall (preview) on Copilot+ PCs is a case study in how thoughtful application design can make local AI both powerful and privacy conscious. It captures snapshots of the desktop embedded with contextual information, enabling employees to find almost anything that has appeared on their screen by describing it in their own words.
This functionality is only possible because Recall has access to a wide range of on-device data—but that access is carefully managed. Recall runs entirely on the device. It is turned off by default—even when enabled by IT—and requires biometric sign-in with Windows Hello Enhanced Sign-in Security to activate. Snapshots are encrypted and stored locally, protected by secured-core PC features and the Microsoft Pluton security processor. These safeguards help ensure that sensitive data stays protected, even as AI becomes more deeply embedded in everyday workflows.

IT admins can manage Recall through Microsoft Intune, with policies to enable or disable the feature, control snapshot retention, and apply content filters (a hedged policy sketch appears at the end of this post, just before the closing section). Even when Recall is enabled, it remains optional for employees, who can pause snapshot saving, filter specific apps or websites, and delete snapshots at any time.

This layered approach—secure hardware, secure OS, and secure app design—reflects Microsoft's broader strategy for responsible local AI and aligns with the overall Surface security approach. It helps organizations maintain governance and compliance while giving users confidence that they are in control of their data and that the tools are designed to support them, not surveil them. This balance is essential to building trust in AI-powered workflows and ensuring that innovation doesn't come at the expense of privacy or transparency. For more information, check out the related blog post.

Choosing the right AI model for the use case

Local AI processing complements cloud AI, offering additional options for how and where workloads run. Each approach supports different needs and use cases. What matters is selecting the right model for the task while maintaining consistent security and governance across the entire environment.

- On-device AI is especially useful in scenarios where organizations need to reduce data movement or ensure AI works reliably in disconnected environments
- In regulated industries such as finance, legal, or government, local processing can help support compliance with strict data-handling requirements
- In the field, mobile workers can use AI features such as document analysis or image recognition without relying on a stable connection
- For custom enterprise models, on-device execution through Windows AI Foundry Local lets developers embed AI in apps while maintaining control over how data is used and stored

These use cases reflect a broader trend. Businesses want more flexibility in how they deploy and manage AI. On-device processing makes that possible without requiring a tradeoff in security or integration.

Security fundamentals matter most

Microsoft takes a holistic view of AI security across cloud services, on-device platforms, and everything in between. Whether your AI runs in Azure or on a Surface device, the same principles apply: protect identity, encrypt data, enforce access controls, and ensure transparency. This approach builds on the enterprise-grade protections already established across Microsoft's technology stack. From the Secure Development Lifecycle to Zero Trust access policies, Microsoft applies rigorous standards to every layer of AI deployment. For business leaders, AI security extends familiar principles—identity, access, data protection—into new AI-powered workflows, with clear visibility and control over how data is handled across cloud and device environments.
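As promised above, here is a rough sketch of what the Intune side of Recall management can look like: a custom device configuration profile targeting the Policy CSP. To the best of my knowledge, Recall's snapshot saving is governed by the WindowsAI area of the Policy CSP, but treat the exact OMA-URI, policy name, and values below as assumptions to verify against current Microsoft documentation before use.

OMA-URI: ./Device/Vendor/MSFT/Policy/Config/WindowsAI/DisableAIDataAnalysis   (assumed policy path)
Data type: Integer
Value: 1 blocks saving of Recall snapshots on targeted devices; 0 leaves the choice to users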
Securing AI starts with the right foundations

AI is expanding from cloud-only services to include new, capable endpoints. This shift gives businesses more ways to match the processing model to the use case without compromising security. Surface Copilot+ PCs support this flexibility by delivering local AI performance on a security-forward, enterprise-ready platform. When paired with Microsoft 365 and Azure services, they offer a cohesive ecosystem that respects data boundaries and aligns with organizational policies.

AI security is not about choosing between cloud or device. It is about enabling a flexible, secure ecosystem where AI can run where it delivers the most value—on the endpoint, in the cloud, or across both. This adaptability unlocks new ways to work, automate, and innovate, without increasing risk. Surface Copilot+ PCs are part of that broader strategy, helping organizations deploy AI with confidence and control—at scale, at speed, and at the edge of what's next.
Boosting Performance with the Latest Generations of Virtual Machines in Azure

Microsoft Azure recently announced the availability of the new generation of VMs (v6)—including the Dl/Dv6 (general purpose) and El/Ev6 (memory-optimized) series. These VMs are powered by the latest Intel Xeon processors and are engineered to deliver:

- Up to 30% higher per-core performance compared to previous generations.
- Greater scalability, with options of up to 128 vCPUs (Dv6) and 192 vCPUs (Ev6).
- Significant enhancements in CPU cache (up to 5× larger), memory bandwidth, and NVMe-enabled storage.
- Improved security with features like Intel® Total Memory Encryption (TME) and enhanced networking via the new Microsoft Azure Network Adapter (MANA).

Evaluated Virtual Machines and Geekbench Results

The table below summarizes the configuration and Geekbench results for the two VMs we tested. VM1 represents a previous-generation machine with more memory, while VM2 is from the new Dlsv6 series, showing superior performance despite having half the RAM.

- VM1: D16s v5 (16 vCPUs, 64 GB RAM)
- VM2: D16ls v6 (16 vCPUs, 32 GB RAM)

Key Observations:

- Single-Core Performance: VM2 scores 2013 compared to VM1's 1570, a 28.2% improvement. This demonstrates that, even with half the memory, the new Dlsv6 series provides significantly better performance per core.
- Multi-Core Performance: VM2 achieves a multi-core score of 12,566 versus 9,454 for VM1, a 32.9% increase in performance.
- Enhanced Throughput in Specific Workloads:
  - File Compression: 1909 MB/s (VM2) vs. 1654 MB/s (VM1) – a 15.4% improvement.
  - Object Detection: 2851 images/s (VM2) vs. 1592 images/s (VM1) – a remarkable 79.2% improvement.
  - Ray Tracing: 1798 Kpixels/s (VM2) vs. 1512 Kpixels/s (VM1) – an 18.9% boost.

These results reflect the significant advancements enabled by the new generation of Intel processors.

Evolution of Hardware in Azure: From Ice Lake-SP to Emerald Rapids

Technical Specifications of the Processors Evaluated

Understanding the dramatic performance improvements begins with a look at the processor specifications:

VM1 – Intel Xeon Platinum 8370C (Ice Lake-SP)
- Architecture: Ice Lake-SP
- Base Frequency: 2.79 GHz
- Max Frequency: 3.5 GHz
- L3 Cache: 48 MB
- Supported Instructions: AVX-512, VNNI, DL Boost

VM2 – Intel Xeon Platinum 8573C (Emerald Rapids)
- Architecture: Emerald Rapids
- Base Frequency: 2.3 GHz
- Max Frequency: 4.2 GHz
- L3 Cache: 260 MB
- Supported Instructions: AVX-512, AMX, VNNI, DL Boost

Impact on Performance

- Cache Size Increase: The jump from 48 MB to 260 MB of L3 cache is a key factor. A larger cache reduces dependency on RAM accesses, thereby lowering latency and significantly boosting performance in memory-intensive workloads such as AI, big data, and scientific simulations.
- Enhanced Frequency Dynamics: While the base frequency of the Emerald Rapids processor is slightly lower, its higher maximum frequency (4.2 GHz vs. 3.5 GHz) means that performance-critical tasks can benefit from this burst capability under load.
- Advanced Instruction Support: The introduction of AMX (Advanced Matrix Extensions) in Emerald Rapids, along with robust AVX-512 support, optimizes the execution of complex mathematical and AI workloads.
- Efficiency Gains: These processors also offer improved energy efficiency, reducing the energy consumed per compute unit. This efficiency translates into lower operational costs and a more sustainable cloud environment.
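If you want to reproduce a comparison like this yourself, the sketch below shows roughly how the newer of the two test machines could be provisioned with the Azure CLI. The resource and VM names are placeholders for illustration, the Ubuntu2204 image alias assumes a recent CLI version, and size availability varies by region and subscription, so verify with az vm list-skus first.

# Create a resource group in the same region used for our tests
az group create --name rg-bench --location germanywestcentral

# Provision the v6 test machine (size name as used in this post)
az vm create \
  --resource-group rg-bench \
  --name vm2-d16lsv6 \
  --image Ubuntu2204 \
  --size Standard_D16ls_v6 \
  --admin-username azureuser \
  --generate-ssh-keys

# SSH in and run Geekbench 6 following its Linux install instructions,
# then repeat with --size Standard_D16s_v5 for the baseline machine.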
Beyond Our Tests: Overview of the New v6 Series

While our tests focused on the Dlsv6 series, Azure's new v6 generation includes several families designed for different workloads:

1. Dlsv6 and Dldsv6-series
- Segment: General purpose with NVMe local storage (where applicable)
- vCPUs Range: 2 – 128
- Memory: 4 – 256 GiB
- Local Disk: Up to 7,040 GiB (Dldsv6)
- Highlights: 5× increased CPU cache (up to 300 MB) and higher network bandwidth (up to 54 Gbps)

2. Dsv6 and Ddsv6-series
- Segment: General purpose
- vCPUs Range: 2 – 128
- Memory: Up to 512 GiB
- Local Disk: Up to 7,040 GiB in Ddsv6
- Highlights: Up to 30% improved performance over the previous Dv5 generation and Azure Boost for enhanced IOPS and network performance

3. Esv6 and Edsv6-series
- Segment: Memory-optimized
- vCPUs Range: 2 – 192* (with larger sizes available in Q2)
- Memory: Up to 1.8 TiB (1,832 GiB)
- Local Disk: Up to 10,560 GiB in Edsv6
- Highlights: Ideal for in-memory analytics, relational databases, and enterprise applications requiring vast amounts of RAM

*Note: Sizes with higher vCPUs and memory (e.g., E128/E192) will be generally available in Q2 of this year.

Key Innovations in the v6 Generation

- Increased CPU Cache: Up to 5× more cache (from 60 MB to 300 MB) dramatically improves data access speeds.
- NVMe for Storage: Enhanced local and remote storage performance, with up to 3× more IOPS locally and the capability to reach 400K IOPS remotely via Azure Boost.
- Azure Boost: Delivers higher throughput (up to 12 GB/s remote disk throughput) and improved network bandwidth (up to 200 Gbps for larger sizes).
- Microsoft Azure Network Adapter (MANA): Provides improved network stability and performance for both Windows and Linux environments.
- Intel® Total Memory Encryption (TME): Enhances data security by encrypting system memory.
- Scalability: Options ranging from 128 vCPUs/512 GiB RAM in the Dv6 family to 192 vCPUs/1.8 TiB RAM in the Ev6 family.
- Performance Gains: Benchmarks and internal tests (such as SPEC CPU Integer) indicate improvements of 15%–30% across various workloads, including web applications, databases, analytics, and generative AI tasks.

My personal perspective and point of view

The new Azure v6 VMs mark a significant advancement in cloud computing performance, scalability, and security. Our Geekbench tests clearly show that the Dlsv6 series—powered by the latest Intel Xeon Platinum 8573C (Emerald Rapids)—delivers roughly 30% better performance than a previous-generation machine with twice the memory. Coupled with the hardware evolution from Ice Lake-SP to Emerald Rapids—which brings a dramatic increase in cache size, improved frequency dynamics, and advanced instruction support—the new v6 generation sets a new standard for high-performance workloads. Whether you're running critical enterprise applications, data-intensive analytics, or next-generation AI models, the enhanced capabilities of these VMs offer significant benefits in performance, efficiency, and cost-effectiveness.

References and Further Reading:
- Microsoft's official announcement: Azure v6 VMs
- Internal tests performed with Geekbench 6.4.0 (AVX2) in the Germany West Central Azure region.
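If you are exploring any of these families, here is a small sketch for checking which v6 sizes are actually offered in your region; the JMESPath filter is illustrative, so adjust the location and pattern to your needs:

az vm list-sizes \
  --location germanywestcentral \
  --query "[?contains(name, '_v6')].{Size:name, vCPUs:numberOfCores, MemoryMB:memoryInMb}" \
  --output table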
Backup vaults Vs Recovery Service Vault

Hello Team,

Microsoft has introduced multiple vault types, each serving different backup and disaster recovery needs. Below is a high-level differentiation:

Recovery Services Vault (RSV)
- Supports Azure Backup (VMs, SQL, SAP HANA, Files) and Azure Site Recovery (disaster recovery).
- Offers backup policies, recovery points, replication, and failover management.

Backup Vault
- A newer, streamlined vault designed for Azure Backup only.
- Supports Backup Short-Term Retention (Instant Restore) and Cross-region Restore.
- Primarily used with Azure Policy & Backup Center for better management at scale.

Microsoft Continuity Center (MCC)
- A centralized disaster recovery hub in Azure.
- Integrates Azure Site Recovery (ASR) and backup services into a single pane of glass.
- Allows for failover, backup monitoring, and business continuity planning.

Is there any documentation that goes a little deeper into the topics above?
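For anyone who wants to see the first two vault types side by side, here is a minimal Azure CLI sketch. The Recovery Services vault command is part of the core CLI, while Backup vaults live under the dataprotection command group (an extension); the names are placeholders, and the exact --storage-settings shape is my assumption to verify against current documentation.

# Classic Recovery Services vault
az backup vault create \
  --name rsv-demo \
  --resource-group rg-vaults \
  --location westeurope

# Newer Backup vault (requires the dataprotection extension)
az extension add --name dataprotection
az dataprotection backup-vault create \
  --vault-name bv-demo \
  --resource-group rg-vaults \
  --location westeurope \
  --storage-settings datastore-type="VaultStore" type="LocallyRedundant"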
Move Azure resources from Tenant x to Tenant Y

Hi community,

I have one subscription (Subscription-A) in Azure under Tenant-A with a few resources: Staging/UAT VMs, a storage account with about 3 TB of data overall, a key vault, and a web app. Now I have a new subscription (Subscription-B) in Azure under Tenant-B, which is provided by a CSP.

- Tenant A (Subscription-A) is pay-as-you-go.
- Tenant B (Subscription-B) is under the CSP pay-as-you-go model.

What is the process/procedure to move/migrate the resources from Tenant A (Subscription-A) to Tenant B (Subscription-B)?
Manage serverless APIs with Apache APISIX

Serverless computing enables developers to build applications faster by eliminating the need for them to manage infrastructure. With serverless APIs in the cloud, the cloud service provider automatically provisions, scales, and manages the infrastructure required to run the code. This article shows a simple example of how to manage Java-based serverless APIs built with https://azure.microsoft.com/en-in/products/functions/. It uses the https://apisix.apache.org/docs/apisix/plugins/azure-functions/ plugin to integrate https://apisix.apache.org/ with Azure serverless functions: the plugin invokes the HTTP-trigger functions and returns the response from https://azure.microsoft.com/en-us. Apache APISIX also offers https://apisix.apache.org/docs/apisix/plugins/serverless/ that can be used with other serverless solutions like https://aws.amazon.com/lambda/.

Learning objectives

You will learn the following throughout the article:

- What serverless APIs are.
- The role of an API gateway in managing complex serverless API traffic.
- How to set up Apache APISIX Gateway.
- How to build serverless APIs with Azure Functions.
- How to expose serverless APIs as upstream services.
- How to secure serverless APIs with APISIX authentication plugins.
- How to apply rate-limiting policies.

Before we get started with the practical side of the tutorial, let's go through some concepts.

What are serverless APIs?

https://www.koombea.com/blog/serverless-apis/ are the same as traditional APIs, except they utilize a serverless backend. For businesses and developers, serverless computing means they no longer have to worry about server maintenance or scaling server resources to meet user demands. Serverless APIs also avoid scaling bottlenecks because server resources are created on demand every time a request is made, and they can reduce latency because they are hosted on an origin server. Last but not least, serverless computing is far more cost-efficient than traditional alternatives such as building an entire https://www.ideamotive.co/blog/serverless-vs-microservices-architecture#:~:text=Serverless%20vs%20Microservices%20%E2%80%93%20Main%20Differences,can%20host%20microservices%20on%20serverless..

Serverless APIs using Azure Functions

An https://learn.microsoft.com/en-us/azure/azure-functions/functions-overview is a simple way of running small pieces of code in the cloud. You don't have to worry about the infrastructure required to host that code. You can write the function in C#, Java, JavaScript, PowerShell, Python, or any of the languages listed in the https://learn.microsoft.com/en-us/azure/azure-functions/supported-languages. With Azure Functions, you can rapidly build HTTP APIs for your web apps without the headache of web frameworks. Azure Functions is serverless, so you're only charged when an HTTP endpoint is called; when the endpoints aren't being used, you aren't being charged. These two things combined make serverless platforms like Azure Functions an ideal choice for APIs that experience unexpected spikes in traffic.

API gateways for serverless API traffic management

An https://apisix.apache.org/docs/apisix/terminology/api-gateway/ is a fundamental part of a serverless API architecture because it is responsible for the connection between a defined API and the function handling requests to that API. There are many benefits to an API gateway in serverless-based API architectures.
In addition to an API gateway's primary edge functionalities, such as authentication, rate throttling, observability, and caching, it is capable of invoking serverless APIs, subscribing to events and processing them using callbacks, and forwarding authentication requests to external authorization services with completely custom serverless function logic.

Manage serverless APIs with Apache APISIX demo

With enough theoretical knowledge in mind, we can now jump into a practical session. We use an example project repo https://github.com/Boburmirzo/apisix-manage-serverless-apis hosted on GitHub, where you can find the source code and sample curl commands we use in this tutorial. For our mini-project, we'll work with two simple Azure functions written in Java that simulate our serverless APIs for the https://github.com/Boburmirzo/apisix-manage-serverless-apis/tree/main/upstream/src/main/java/com/function/products and https://github.com/Boburmirzo/apisix-manage-serverless-apis/tree/main/upstream/src/main/java/com/function/reviews services.

Prerequisites

- Familiarity with fundamental API concepts.
- A working knowledge of Azure Functions; for example, https://learn.microsoft.com/en-us/training/modules/build-api-azure-functions/1-introduction shows how to build an HTTP API using the Azure Functions extension for Visual Studio Code.
- https://www.docker.com/products/docker-desktop/
- https://azure.microsoft.com/en-us/free/
- https://docs.microsoft.com/cli/azure/install-azure-cli
- https://aka.ms/azure-jdks, at least version 8
- https://maven.apache.org/
- https://www.npmjs.com/package/azure-functions-core-tools
- https://code.visualstudio.com/download
- https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions

Set up the project

First, clone the project repo from GitHub:

git clone https://github.com/Boburmirzo/apisix-manage-serverless-apis.git

Open the project folder in your favorite code editor. This tutorial leverages https://code.visualstudio.com/.

Run Apache APISIX

To run Apache APISIX locally, open a new terminal window and run the docker compose command from the root folder of the project:

docker compose up -d

The above command runs Apache APISIX together with etcd in Docker. For example, if Docker Desktop is installed on your machine, you can see the running containers there. We installed APISIX in our local environment for this demo, but you can also deploy it to Azure and run it on https://learn.microsoft.com/en-us/azure/container-instances/. See https://dev.to/apisix/run-apache-apisix-on-microsoft-azure-container-instance-1gdk.

Run Azure Functions

Then, navigate to the /upstream folder and build and start the functions:

mvn clean install
mvn azure-functions:run

The two functions will start in a terminal window, and you can request both serverless APIs in your browser.

Deploy Azure Functions

Next, we deploy the function code to an Azure Function App:

mvn azure-functions:deploy

Or you can simply follow this tutorial on https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-cli-java?tabs=bash%2Cazure-cli%2Cbrowser#deploy-the-function-project-to-azure

Note that the function app name is randomly generated based on your artifactId, appended with a randomly generated number. In the tutorial commands, the function app name serverless-apis is used.
Just to make sure our functions work, we can test an invocation directly by requesting their URLs in the browser:

https://serverless-apis.azurewebsites.net/api/products
https://serverless-apis.azurewebsites.net/api/reviews

Exposing serverless APIs in APISIX

Once the setup is complete, we will expose the serverless Azure function APIs as upstream services in APISIX. To do so, we need to create a new https://apisix.apache.org/docs/apisix/terminology/route/ with the azure-functions plugin enabled for both the products and reviews serverless backend APIs. If the azure-functions plugin is enabled on a route, APISIX listens for requests on that route's path and then invokes the remote Azure Function code with the parameters from the request.

Create a route for Products

To create a route for the Products function, run the following command:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "name": "Create a route with Azure function plugin",
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/products",
      "ssl_verify": false
    }
  },
  "uri": "/products"
}'

Note that we set the ssl_verify attribute of the azure-functions plugin to false to disable SSL verification for demo purposes only. You can enable it to perform more secure requests from APISIX to Azure Functions. Learn about the other https://apisix.apache.org/docs/apisix/plugins/azure-functions/#attributes.

Test with a curl request

We can use curl to send a request to check that APISIX listens on the path correctly and forwards the request to the upstream service successfully:

curl -i -XGET http://127.0.0.1:9080/products

HTTP/1.1 200 OK

[
  { "id": 1, "name": "Product1", "description": "Description1" },
  { "id": 2, "name": "Product2", "description": "Description2" }
]

Great! We got a response from the actual serverless API on Azure Functions. Next, we will make a similar configuration for the Reviews function.

Create a route for Reviews and test it

Create the second route with the azure-functions plugin enabled:

curl http://127.0.0.1:9180/apisix/admin/routes/2 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/reviews",
      "ssl_verify": false
    }
  },
  "uri": "/reviews"
}'

Test the serverless API response:

curl -i -XGET http://127.0.0.1:9080/reviews

In this section, we introduced new routes and added the azure-functions plugin for our serverless APIs so that APISIX can invoke the remote Azure functions and manage the traffic. In the following sections, we will learn how to authenticate API consumers and apply runtime policies like rate limiting.

Secure serverless APIs with APISIX authentication plugins

Up to now, our serverless APIs have been public and accessible by unauthorized users. In this section, we will enable authentication to disallow unauthorized requests to the serverless APIs. Apache APISIX can verify the identity associated with API requests through credential and token validation, and it can determine which traffic is authorized to pass through the API to backend services. You can check all available https://apisix.apache.org/docs/apisix/plugins/key-auth/. Let's create a new consumer for our serverless APIs and add the https://apisix.apache.org/docs/apisix/plugins/basic-auth/ plugin to the existing routes so that only allowed users can access them.
Create a new consumer for serverless APIs

The command below creates a new consumer with its credentials (a username and password):

curl http://127.0.0.1:9180/apisix/admin/consumers -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "username": "consumer1",
  "plugins": {
    "basic-auth": {
      "username": "username1",
      "password": "password1"
    }
  }
}'

Add the basic-auth plugin to the existing Products and Reviews routes

Now we configure the basic-auth plugin on both routes so that APISIX checks the request header against the API consumer credentials each time the APIs are called:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "name": "Create a route with Azure function plugin",
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/products",
      "ssl_verify": false
    },
    "basic-auth": {}
  },
  "uri": "/products"
}'

curl http://127.0.0.1:9180/apisix/admin/routes/2 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/reviews",
      "ssl_verify": false
    },
    "basic-auth": {}
  },
  "uri": "/reviews"
}'

Test the basic-auth plugin

If we now request the serverless APIs without user credentials in the header, we get an unauthorized error:

curl -i http://127.0.0.1:9080/products

HTTP/1.1 401 Unauthorized
{"message":"Missing authorization in request"}

The result is as we expected. If we provide the correct user credentials in the request and access the same endpoint, it works as intended:

curl -i -u username1:password1 http://127.0.0.1:9080/products

HTTP/1.1 200 OK

We have validated the identity of clients attempting to request the serverless APIs by using the basic-auth plugin with the help of Apache APISIX.

Apply rate-limiting policies for serverless APIs

In this section, we will protect the serverless APIs from abuse by applying a throttling policy. In Apache APISIX Gateway, we can apply rate limiting to restrict the number of incoming calls.

Apply and test the rate-limit policy

With the existing route configuration for the Products function, we can apply a rate-limit policy with the https://apisix.apache.org/docs/apisix/plugins/limit-count/ plugin to protect our API from abnormal usage. We will limit the number of API calls to 2 per 60 seconds per API consumer. To enable the limit-count plugin for the existing Products route, we add the plugin to the plugins attribute of our JSON route configuration:

curl http://127.0.0.1:9180/apisix/admin/routes/1 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1' -X PUT -d '
{
  "name": "Create a route with Azure function plugin",
  "plugins": {
    "azure-functions": {
      "function_uri": "https://serverless-apis.azurewebsites.net/api/products",
      "ssl_verify": false
    },
    "basic-auth": {},
    "limit-count": {
      "count": 2,
      "time_window": 60,
      "rejected_code": 403,
      "rejected_msg": "Requests are too frequent, please try again later."
    }
  },
  "uri": "/products"
}'

Apache APISIX will handle the first two requests as usual.
However, a third request in the same period will return an HTTP 403 Forbidden response with our custom error message:

HTTP/1.1 403 Forbidden
{"error_msg":"Requests are too frequent, please try again later."}

Next steps

In this article, we learned step by step how to create Java-based serverless APIs with Azure Functions and how to manage them with Apache APISIX Gateway throughout their full lifecycle, from exposing serverless APIs as upstream services in APISIX to properly securing them and applying rate limiting to restrict the number of requests. This opens the door to other use cases for integrating an API gateway with serverless APIs. You can explore other capabilities of APISIX Gateway by chaining various https://apisix.apache.org/plugins/ to transform requests, https://dev.to/apisix/apis-observability-with-apache-apisix-plugins-1bnm, measure the performance and usage of your serverless APIs, https://apisix.apache.org/blog/2022/12/14/web-caching-server/, and further evolve them by https://blog.frankel.ch/evolve-apis/#:~:text=Joe%0AHello%20Joe-,Version%20the%20API,-Evolving%20an%20API, all of which help you reduce development time, increase scalability, and save costs.

Apache APISIX is a fully open-source API gateway solution. If you require more advanced API management features for serverless APIs, you can use https://api7.ai/apisix-vs-api7 or https://api7.ai/cloud, which are powered by APISIX.

Related resources

- https://azure.microsoft.com/en-in/products/functions/
- https://learn.microsoft.com/en-us/training/modules/build-api-azure-functions/
- https://learn.microsoft.com/en-us/azure/azure-functions/create-first-function-vs-code-java
- https://dev.to/apisix/run-apache-apisix-on-microsoft-azure-container-instance-1gdk

Recommended content

- https://iambobur.com/2022/11/22/how-to-choose-the-right-api-gateway/
- https://api7.ai/blog/why-is-apache-apisix-the-best-api-gateway
- https://api7.ai/blog/api-gateway-policies

Community

- https://apisix.apache.org/docs/general/join/
- https://twitter.com/ApacheAPISIX
- https://join.slack.com/t/the-asf/shared_invite/zt-vlfbf7ch-HkbNHiU_uDlcH_RvaHv9gQ
How is pricing calculated for data stored from a Dataverse table using Azure Synapse Link?

Hi folks,

I am storing stale data from a Dataverse table to Azure Data Lake Storage using Azure Synapse Link, which stores data day by day for the whole year. For example, each day it transfers 10-50 records to the data lake, so my file size in Azure Data Lake Storage grows day by day.

Here I want to know: how is the pricing calculated for storing and updating the file in Azure Data Lake? Will it also cost me for the write operations that execute each day?
Capturing image of VM

What is the difference between the generalized state and the specialized state when capturing a VM image? Is sysprep really needed before capturing an image, irrespective of which OS state we capture later? And how can we keep using the existing VM from which we captured the image, since we know it becomes unusable? Please, I need answers for this.
Updates in Azure Firewall

Hello Folks,

Today I will discuss various features that have been updated in Azure Firewall, specifically the ones I have used in my work. Obviously, in such a dynamic space everything changes constantly, but here I am referring to the updates I have come across in recent times. So let's start.

1) IDPS signature lookup - Perhaps this is the most interesting feature I have found in Azure Firewall, and I have used it in my projects and labs. You can go to the IDPS option in Azure Firewall, look up individual signatures, and set their mode to Alert or Deny. If you find a false positive where your request is blocked by a faulty signature, you can use the signature ID and set its IDPS mode to Off.

2) TLS certificate auto-generation - The second feature I have worked with is the TLS certificate generator. For non-production environments you can use this mechanism, which automatically creates a managed identity, a key vault, and a self-signed CA certificate.

3) Web categories lookup - Web categories is a filtering feature that allows administrators to allow or deny web traffic based on categories, such as gambling, social media, and more. Microsoft added tools that help manage these web categories: Category Check and Mis-Categorization Request.

4) IDPS private IP ranges - In Azure Firewall Premium IDPS, private IP address ranges are used to identify whether traffic is inbound or outbound. By default, only ranges defined by the Internet Assigned Numbers Authority (IANA) in RFC 1918 are considered private IP addresses. To modify your private IP addresses, you can now easily edit, remove, or add ranges as needed.
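As a rough illustration of point 4, private IP ranges can also be managed outside the portal. The sketch below uses the Azure CLI; to my knowledge the azure-firewall extension exposes a --private-ranges parameter on az network firewall update, but treat the exact flag name and syntax as an assumption to verify, and note that the firewall and resource group names are placeholders.

# Replace the default IANA RFC 1918 ranges with a custom set (names are placeholders)
az extension add --name azure-firewall
az network firewall update \
  --name fw-demo \
  --resource-group rg-fw \
  --private-ranges 10.0.0.0/8 172.16.0.0/12 192.168.0.0/16 100.64.0.0/10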
With the PowerShell collect details about all Azure VM's in a subscription!

Hi Microsoft Azure Friends,

I used the PowerShell ISE for this configuration. But you are also very welcome to use Visual Studio Code, just as you wish. Please start with the following steps to begin the deployment (the hashtags are comments):

#The first two lines have nothing to do with the configuration, but make some space below in the blue part of the ISE
Set-Location C:\Temp
Clear-Host

#So that you can carry out the configuration, you need the necessary cmdlets; these are contained in the module Az (the higher-level module made up of a number of submodules)
Install-Module -Name Az -Force -AllowClobber -Verbose

#Log into Azure
Connect-AzAccount

#Select the correct subscription
Get-AzContext
Get-AzSubscription
Get-AzSubscription -SubscriptionName "your subscription name" | Select-AzSubscription

#Provide the subscription Id where the VMs reside
$subscriptionId = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

#Provide the name of the csv file to be exported
$reportName = "myReport.csv"

#If you didn't select the subscription in the step above, you can do so now (or just skip it)
Select-AzSubscription $subscriptionId

#Some variables
$report = @()
$vms = Get-AzVM
$publicIps = Get-AzPublicIpAddress
$nics = Get-AzNetworkInterface | ?{ $_.VirtualMachine -NE $null}

#Now start the loop: for each NIC, find its VM and collect the network and OS details
foreach ($nic in $nics) {
    $info = "" | Select VmName, ResourceGroupName, Region, VirtualNetwork, Subnet, PrivateIpAddress, OsType, PublicIPAddress
    $vm = $vms | ? -Property Id -eq $nic.VirtualMachine.id
    foreach ($publicIp in $publicIps) {
        if ($nic.IpConfigurations.id -eq $publicIp.ipconfiguration.Id) {
            $info.PublicIPAddress = $publicIp.ipaddress
        }
    }
    $info.OsType = $vm.StorageProfile.OsDisk.OsType
    $info.VMName = $vm.Name
    $info.ResourceGroupName = $vm.ResourceGroupName
    $info.Region = $vm.Location
    $info.VirtualNetwork = $nic.IpConfigurations.subnet.Id.Split("/")[-3]
    $info.Subnet = $nic.IpConfigurations.subnet.Id.Split("/")[-1]
    $info.PrivateIpAddress = $nic.IpConfigurations.PrivateIpAddress
    $report += $info
}

#Now let's look at the result
$report | ft VmName, ResourceGroupName, Region, VirtualNetwork, Subnet, PrivateIpAddress, OsType, PublicIPAddress

#We save the file in our home folder
$report | Export-CSV "$home/$reportName"

Now you have used PowerShell to create a report with the details about the VMs in a subscription! Congratulations!

I hope this article was useful. Best regards, Tom Wechsler

P.S. All scripts (#PowerShell, Azure CLI, #Terraform, #ARM) that I use can be found on GitHub! https://github.com/tomwechsler
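If you prefer the Azure CLI, here is a roughly equivalent one-liner as a sketch. To my knowledge, az vm list with --show-details exposes the privateIps and publicIps fields, but verify the field names on your CLI version; it does not include the virtual network and subnet columns from the script above.

az vm list --show-details \
  --query "[].{Name:name, ResourceGroup:resourceGroup, Region:location, PrivateIP:privateIps, PublicIP:publicIps, OsType:storageProfile.osDisk.osType}" \
  --output table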