Latest Discussions
Azure AppService Linux container fails to serve files from mounted file storage
A perfectly working setup suddenly started failing after redeploying the Azure resources involved:

- a Storage account with multiple file shares
- an Azure AppService on a Linux ServicePlan, hosting a Linux Docker container
- the Azure AppService configured with Path Mappings to the various file shares on the storage account

After reprovisioning the resources (end of June 2021), we found that Apache (inside the container) was no longer able to serve files from the mounted storage: it returned an HTTP 502 status. It was still able to persist files to these same mounted file shares, which rules out the hypothesis that our mounted drives were somehow unreachable. When accessing the container inside the AppService over SSH, basic curl commands against these same files returned "Received HTTP/0.9 when not allowed".

We escalated this issue to MS support. The issue was fixed by applying a work-around: we had to identify an empty ResourceGroup, so that MS could make sure internally that our AppService/ServicePlan deployment eventually landed on the proper hosting resources, resulting in correct behavior. If we redeploy our resources as-is, without notifying MS support, we are inevitably confronted with the unwanted behavior again. We've been asking MS Support for an ETA on a structural fix ever since (the last 3 months), but never got a commitment from their end. We continue to be amazed that a fairly trivial scenario like this one seemingly doesn't get more priority. No doubt lots of other customers are impacted in the same way we are. Has anybody experienced similar behavior?

dbevernage · Oct 06, 2021 · Copper Contributor · 1.7K Views · 7 likes · 0 Comments
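For anyone trying to reproduce the check described in this post, here is a hedged sketch of the same probe in Python rather than curl (the URL and file path are hypothetical placeholders for a file served from the mounted share):

import requests

# Hypothetical probe of a file Apache serves from the mounted file share.
# A healthy setup prints "200"; the failure mode described above shows up
# either as an HTTP 502 status or as a protocol-level error.
try:
    response = requests.get("http://localhost/files/test.txt", timeout=5)
    print(response.status_code, response.headers.get("Content-Type"))
except requests.exceptions.ConnectionError as exc:
    # A server replying without a valid status line (HTTP/0.9 style) typically
    # surfaces here, mirroring curl's "Received HTTP/0.9 when not allowed".
    print("Protocol-level failure:", exc)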
Announcing a Serverless and Azure Functions AMA!
We are very excited to announce a Serverless & Azure Functions AMA on October 29! This is the third in a series of AMAs around Azure, all held here in the Tech Community in this discussion space, coinciding with the Microsoft Azure Hack for Social Justice event. Upcoming dates/events are below:

- November 5 - App Services & Static Web Apps

The AMA will take place on Thursday, October 29, 2020 from 9:00 a.m. to 10:00 a.m. PT in the Azure AMA space. Add the event to your calendar and view it in your time zone here. An AMA is a live online event similar to a "YamJam" on Yammer or an "Ask Me Anything" on Reddit. This AMA gives you the opportunity to connect with cloud solution architects who will be on hand to answer your questions and listen to feedback.

EricStarker · Oct 20, 2020 · Former Employee · 6.2K Views · 6 likes · 8 Comments
Introducing Microsoft Playwright Testing private preview
Explore Microsoft Playwright Testing, a new service built for running Playwright tests easily at scale. Playwright is a fast-growing, open-source framework that enables reliable end-to-end testing and automation for modern web apps. Microsoft Playwright Testing uses the cloud to enable you to run Playwright tests with much higher parallelization across different operating system/browser combinations simultaneously. This means getting tests done faster, which can help speed up delivery of features without sacrificing quality. The service is currently in private preview and needs your feedback to help shape it! To get started, join the waitlist. And check out the full blog post for more information. How do you think Microsoft Playwright Testing can help you in your app development?

EricStarker · Aug 22, 2023 · Former Employee · 1.6K Views · 4 likes · 3 Comments
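For readers new to Playwright itself, a minimal test in its Python flavor looks something like the sketch below (the target URL and title assertion are illustrative; the service's value is running many such tests in parallel across browsers in the cloud):

from playwright.sync_api import sync_playwright

# A minimal end-to-end check: open a page in a headless browser and assert on its title.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")
    assert "Example" in page.title()
    browser.close()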
6/8 - Low Code Application Development AMA announcement!
We are very excited to announce a Low Code Application Development AMA! The AMA will take place on Tuesday, June 8, 2021 from 9:00 a.m. to 10:00 a.m. PT in the Azure AMA space. Add the event to your calendar and view it in your time zone here. An AMA is a live text-based online event similar to a "YamJam" on Yammer or an "Ask Me Anything" on Reddit. This AMA gives you the opportunity to connect with Microsoft product experts who will be on hand to answer your questions and listen to feedback. The space will be open 24 hours before the event, so feel free to post your questions anytime beforehand during that period if that fits your schedule or time zone better.

EricStarker · May 10, 2021 · Former Employee · 4.8K Views · 4 likes · 1 Comment
Azure Durable Functions Performance Optimization Techniques
This blog is co-authored by Dr. Magesh Kasthuri, Distinguished Member of Technical Staff (Wipro) and Sanjeev Radhakishin Assudani, Azure COE Principal Architect (Wipro). Performance is a very important aspect of any application: a slow-performing application can result in high running costs in the cloud. In this blog, we explain performance optimization steps with an example.

It is important to review the Durable Functions configuration for your language runtime. Durable Functions configurations are defined in the host.json file; important settings such as maxConcurrentActivityFunctions and maxConcurrentOrchestratorFunctions define the number of activity function and orchestrator instances that can run on a host, and you will have to tune them for your application workloads. Durable Functions also use a storage account for their internal workings, such as state management and task execution; a dedicated V1 storage account is recommended for cost optimization.

Step 1: Optimize Function Code
Use async/await appropriately. Asynchronous programming is key to non-blocking operations. Ensure that you are using the async and await keywords appropriately; avoid blocking calls and prefer async methods where possible to keep the event loop free.
Screenshot: Example of async/await usage in Visual Studio.
Minimize function execution time. Break down large tasks into smaller sub-tasks that can be processed independently. This not only speeds up execution time but also improves reliability and error handling.
Screenshot: Workflow diagram showing task breakdown.

Step 2: Optimize Durable Task Management
Reduce orchestrator function overhead. Ensure that orchestrator functions are lightweight and primarily responsible for orchestrating activities rather than doing the heavy lifting; offload complex processing to activity functions.
Screenshot: Example orchestrator function code.
Parallelize tasks. Where possible, run tasks in parallel to decrease overall execution time. Durable Functions support parallel (fan-out/fan-in) execution patterns, allowing multiple activities to be executed simultaneously, as shown in the sketch below.
Screenshot: Parallel task execution example.
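To make Step 2's fan-out/fan-in advice concrete, here is a minimal sketch of an orchestrator in the Python programming model (the activity names GetWorkItems and ProcessItem are hypothetical, and the function.json bindings are omitted):

import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # Hypothetical activity returning a list of independent work items
    work_items = yield context.call_activity("GetWorkItems", None)

    # Fan out: schedule every activity call without awaiting each one individually
    tasks = [context.call_activity("ProcessItem", item) for item in work_items]

    # Fan in: resume only when all parallel activities have completed
    results = yield context.task_all(tasks)
    return sum(results)

main = df.Orchestrator.create(orchestrator_function)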
Step 3: Optimize Resource Allocation
Scale-out strategies. Configure your Azure Functions to scale out efficiently. Use the Azure Functions Premium plan or a Dedicated (App Service) plan for better scaling options and resource allocation.
Screenshot: Scaling settings in Azure portal.
Use appropriate pricing plans. Choose a pricing plan that aligns best with your workload. For instance, the Premium plan offers better scaling, VNET integration, and always-on capabilities.
Screenshot: Pricing plan options in Azure portal.
Monitor and allocate resources. Regularly monitor function performance using Azure Monitor and Application Insights. Adjust resource allocation based on the observed metrics to ensure optimal performance.
Screenshot: Monitoring function performance in Azure Monitor and Durable Functions Monitor.

Step 4: Optimize Storage and Data Handling
Efficient state management. Durable Functions rely on Azure Storage for state management. Ensure efficient usage by minimizing the size and frequency of state updates, and batch state updates where possible to reduce storage operations.
Screenshot: State management settings in Azure portal.
Optimize input/output operations. Reduce the latency of I/O operations by optimizing data access patterns. Use faster storage solutions like Azure Cosmos DB or Redis for frequently accessed data.
Screenshot: Data access optimization settings.
Manage concurrency. Control the concurrency levels of your functions to prevent throttling and ensure fair usage of resources. Use the maxConcurrentActivityFunctions and maxConcurrentOrchestratorFunctions settings in host.json to manage concurrency effectively.
Screenshot: Concurrency settings in Azure portal.

Step 5: Enhance Error Handling and Retries
Implement robust retries. Configure retry policies for transient errors. Durable Functions support customizable retry policies, allowing you to define the interval and duration of retries, which can improve resilience and performance (sketched at the end of this post).
Screenshot: Retry policy configuration.
Graceful error handling. Ensure that your functions handle errors gracefully. Use try-catch blocks and centralized error-handling mechanisms to capture and log errors effectively. This helps in diagnosing performance issues and improving reliability.
Screenshot: Error handling example in function code.
Monitor and analyze failures. Use Azure Monitor and Application Insights to track and analyze function failures. Understanding the root cause of failures can reveal performance bottlenecks and areas for improvement.
Screenshot: Failure analysis in Application Insights.

Step 6: Leverage Best Practices and Tools
Implement best practices. Follow Azure's best practices for developing serverless applications, including keeping functions stateless, minimizing dependencies, and using managed identities for secure resource access.
Screenshot: Best practices documentation in Azure portal.
Use Application Insights. Application Insights provides powerful telemetry and monitoring capabilities. Use it to gain insight into function performance, request rates, failure rates, and other critical metrics.
Screenshot: Application Insights dashboard.
Regularly review and refactor. Periodically review and refactor your function code and orchestration logic. As workloads evolve, continuous optimization ensures that your functions remain performant and scalable.
Screenshot: Code review session example.

Results
After implementing these key performance optimization techniques for Azure Durable Functions, we observed a significant increase in the total number of transactions processed per minute.
Chart: Transactions per minute.

Sireesha_Mudapaka · Mar 21, 2025 · Microsoft · 724 Views · 3 likes · 0 Comments
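The retry sketch referenced in Step 5 above, again in the Python programming model ("FetchData" is a hypothetical activity name, and the intervals are illustrative):

import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # Retry transient failures: first retry after 5 seconds, at most 3 attempts
    retry_options = df.RetryOptions(first_retry_interval_in_milliseconds=5000,
                                    max_number_of_attempts=3)
    result = yield context.call_activity_with_retry("FetchData", retry_options, "payload")
    return result

main = df.Orchestrator.create(orchestrator_function)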
AZURE CONTAINER APPS: APIS, REDIS CACHE AND MICROSERVICES WITH OPENAI CHAT COMPLETIONS
Build your API endpoints and serve your web apps with the power of Container Apps! Technology is moving on with amazing features and new applications almost every day. Containers are the new path for building and deploying applications, with great flexibility, absolute control and security, and a wide range of hosting options. Before the hosting phase, we need a tool to build our apps; following the containers logic, we build microservices which are the components of our app. One of the most widely used tools is Docker. With Docker we can create a Dockerfile, declare the specifics of our image, make configurations and finally build that image and push it to our hosting platform, be it Kubernetes, Container Registries or Instances and so on. Azure Container Apps has evolved so that we can use a Dockerfile without the need for Docker: we can build and push our containers to Azure Container Registry and pull them directly into an Azure Container Apps managed environment, making it possible to use one tool for the complete lifecycle of our app deployment.

Our workshop has quite some features to display. We are using a Python Flask web app as the frontend and another container image, again in Python, as the backend. The backend is an API endpoint that controls the process behind the scenes. The idea is simple enough: the web app allows a user to select a city from a drop-down menu and get some info about the city, as well as a photo of it. The backend service is responsible for fetching the photograph, stored in a Storage Account, and for calling the OpenAI Chat Completions API with a controlled prompt to get some general info about that city. As you may understand, this can quickly extend into a fully functional tourist/travel web app with security, scaling, redundancy and flexibility, able to serve from a few to a few thousand users. Add Azure Redis Cache and you have an enterprise-scale application ready for production.

Build

For our resources we are going to use a quite simplistic approach: Azure CLI. Before anything, create a Storage Account and add a container with some city photos, e.g. Athens.jpg, Berlin.jpg, Rome.jpg and so on. Let's store our variables and run some mandatory commands. Log in to Azure and select the subscription you are going to use:

$RESOURCE_GROUP="rg-demoapp"
$LOCATION="westeurope"
$ENVIRONMENT="env-web-1"
$FRONTEND="frontend"
$API_NAME="backend"
$ACR_NAME="myrandomnameforacr"

az upgrade
az extension add --name containerapp --upgrade
az provider register --namespace Microsoft.App
az provider register --namespace Microsoft.OperationalInsights

Move on to create a resource group, Azure Redis Cache and an Azure Container Registry:

az group create --name $RESOURCE_GROUP --location "$LOCATION"
az redis create --location "$LOCATION" --name MyRedisCache --resource-group $RESOURCE_GROUP --sku Basic --vm-size c0
az acr create --resource-group $RESOURCE_GROUP --name $ACR_NAME --sku Basic --admin-enabled true

Now let's look at our backend. We are going to deploy the backend first, so the frontend won't fail once it is up and running.
from flask import Flask, request, jsonify
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient
from openai import OpenAI
import os
import redis
import json

app = Flask(__name__)

# Environment variables
OPENAI_API_KEY = os.getenv('OPENAI_API_KEY')
storage_account_name = os.getenv('STORAGE_ACCOUNT_NAME')
container_name = 'cities'

redis_client = redis.Redis(host='xxxxx.redis.cache.windows.net', port=6380, password='xxxxxxxxxxxxxxx', ssl=True)

# Initialize OpenAI with the appropriate API key
client = OpenAI(
    organization='org-xxxxxx',
    api_key=OPENAI_API_KEY  # Use the environment variable here for security
)

# Initialize Azure credentials
credential = DefaultAzureCredential()

# Initialize Azure Blob Service Client with your account name and credential
blob_service_client = BlobServiceClient(account_url=f"https://{storage_account_name}.blob.core.windows.net", credential=credential)

@app.route('/get_city_info', methods=['POST'])
def get_city_info():
    city = request.form['city']

    # Check for a cached response
    cached_response = redis_client.get(city)
    if cached_response:
        return jsonify(json.loads(cached_response))

    # Call the OpenAI API to get the city description using the Chat model
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user", "content": f"Tell me about {city} with 100 words."}
        ]
    )
    print(response.choices[0].message)

    # Extract the response text from the last message in the conversation
    description = response.choices[0].message.content

    # Get the city image from Azure Blob Storage
    blob_client = blob_service_client.get_blob_client(container=container_name, blob=f'{city}.jpg')
    image_url = blob_client.url

    # Cache the result for 24 hours (86400 seconds)
    redis_client.setex(city, 86400, json.dumps({'description': description, 'image_url': image_url}))

    # Return the description and image URL
    return jsonify({
        'description': description,
        'image_url': image_url
    })

if __name__ == '__main__':
    app.run(debug=True)

Create a Dockerfile:

# Use an official Python runtime as a parent image
FROM python:3.11-bullseye

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 5000 available to the world outside this container
EXPOSE 5000

# Define environment variables
ENV FLASK_APP=app.py
ENV FLASK_RUN_HOST=0.0.0.0

# Optionally, if you want to run in production mode
# ENV FLASK_ENV=production

# Run app.py when the container launches
CMD ["flask", "run"]

We can build directly into Azure Container Registry, and then create our Container App:

az acr build --registry $ACR_NAME --image backend .

az containerapp create --name $API_NAME --resource-group $RESOURCE_GROUP --environment $ENVIRONMENT --image $ACR_NAME.azurecr.io/$API_NAME --target-port 5000 --env-vars STORAGE_ACCOUNT_NAME=xxxxxx OPENAI_API_KEY=sxxxx --ingress 'external' --registry-server $ACR_NAME.azurecr.io --query properties.configuration.ingress.fqdn

## Get the URL of the API endpoint:
$API_BASE_URL=(az containerapp show --resource-group $RESOURCE_GROUP --name $API_NAME --query properties.configuration.ingress.fqdn -o tsv)
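Before wiring up the frontend, you can sanity-check the deployed backend directly; a sketch, assuming the FQDN returned by the commands above (the hostname below is a hypothetical placeholder):

import requests

# Hypothetical FQDN captured in $API_BASE_URL above
backend = "https://backend.<env-id>.westeurope.azurecontainerapps.io"

# Post a city name the same way the frontend form does
resp = requests.post(f"{backend}/get_city_info", data={"city": "Athens"}, timeout=30)
print(resp.status_code)
print(resp.json())  # expects the keys: description, image_url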
Now all we need is a Flask (or any other) web app to post our page and allow users to select a city from a drop-down menu. The index file needs an AJAX method to display the data coming back in the response, so keep that in mind.

from flask import Flask, render_template, request, jsonify
import os
import requests

app = Flask(__name__)

storage_account_name = os.getenv('STORAGE_ACCOUNT_NAME')
backend_api_url = os.getenv('BACKEND_API_URL')

@app.route('/')
def index():
    # Just render the initial form
    return render_template('index.html')

@app.route('/get_city_info', methods=['POST'])
def get_city_info():
    city = request.form.get('city')

    # Call the backend service using form data
    response = requests.post(f"{backend_api_url}/get_city_info", data={"city": city})

    if response.status_code == 200:
        data = response.json()
        description = data.get('description', "No description available")
        image_url = data.get('image_url', f"https://{storage_account_name}.blob.core.windows.net/cities/{city}.jpg")
    else:
        # Fallback in case of an error
        description = "Error fetching data"
        image_url = f"https://{storage_account_name}.blob.core.windows.net/cities/{city}.jpg"

    # The AJAX call expects a JSON response
    return jsonify(description=description, image_url=image_url)

if __name__ == '__main__':
    app.run(debug=False)

In the same manner, create a Dockerfile for the frontend:

# Use an official Python runtime as a parent image
FROM python:3.11-bullseye

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 8000 available to the world outside this container
EXPOSE 8000

# Run the app with gunicorn when the container launches
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app:app"]

And build our frontend and create the Container App:

az acr build --registry $ACR_NAME --image frontend .

az containerapp create --name $FRONTEND --resource-group $RESOURCE_GROUP --environment $ENVIRONMENT --image $ACR_NAME.azurecr.io/frontend --target-port 8000 --env-vars BACKEND_API_URL=https://$API_BASE_URL STORAGE_ACCOUNT_NAME=xxxxxx --ingress 'external' --registry-server $ACR_NAME.azurecr.io --query properties.configuration.ingress.fqdn

You will be presented with the URL where the frontend is serving the web page. Of course we can attach a custom domain and our own SSL, but we can always run our tests and check the deployment as-is. Make some tests and watch how Redis Cache brings results back immediately, while newly selected cities need a second or two to return the response and the blob image.

Closing

In a few steps we deployed our web application utilizing Azure Container Apps, the latest technology for applications, using containers for our images, built with the standard Dockerfile but deployed directly to Azure Container Registry and then to Container Apps. We showcased how we can build our own API endpoints and call other APIs, as well as the use of Redis Cache for caching and faster data retrieval. The solution can expand, adding private connectivity, custom domains and firewalls, authentication and Key Vault as well.

KonstantinosPassadis · Jan 07, 2024 · Learn Expert · 2.8K Views · 2 likes · 0 Comments
Welcome to the Azure Application Services AMA!
Welcome to the Azure Application Services Ask Microsoft Anything (AMA)! This live hour gives you the opportunity to ask questions and provide feedback. Please introduce yourself by replying to this thread. Post your questions in a new thread within the Azure AMA space by clicking "Start a New Conversation" at the top of the page.

EricStarker · Jun 10, 2021 · Former Employee · 2.8K Views · 2 likes · 4 Comments
That's a wrap! 6/8 Low Code Application Development AMA
Thank you for joining us and voicing your questions and feedback during this fun and action-packed hour. We'll attach a summary document of what was covered as soon as it is available. See you next time - we have an Azure Application Services AMA in this space on 6/10, so hope to see you there!

EricStarker · Jun 08, 2021 · Former Employee · 662 Views · 2 likes · 0 Comments
6/10 - Azure Application Services AMA!
We are very excited to announce an Azure Application Services AMA! The AMA will take place on Thursday, June 10, 2021 from 9:00 a.m. to 10:00 a.m. PT in the Azure AMA space. Add the event to your calendar and view it in your time zone here. An AMA is a live text-based online event similar to a "YamJam" on Yammer or an "Ask Me Anything" on Reddit. This AMA gives you the opportunity to connect with Microsoft product experts who will be on hand to answer your questions and listen to feedback. The space will be open 24 hours before the event, so feel free to post your questions anytime beforehand during that period if that fits your schedule or time zone better.

EricStarker · May 26, 2021 · Former Employee · 5.5K Views · 2 likes · 7 Comments
Protecting your Identities from attacks like consent phishing
Hi Cloud Friends, today developers build apps by integrating user and enterprise data from cloud platforms to enhance and personalize experiences. These cloud platforms are rich in data, but in turn have attracted malicious actors who attempt to gain unauthorized access to that data. One such attack is consent phishing, in which attackers trick users into granting a malicious app access to sensitive data or other resources. Instead of trying to steal the user's password, an attacker asks for permission for an app controlled by the attacker to access valuable data. These apps are often named to mimic legitimate apps, such as "0365 Access" or "Newsletter App". Here are two ways to counter these attacks:

1. Restrict users from registering new apps in Azure AD.
2. Prevent users from giving consent to apps.

When you make these settings, be aware that you as an administrator will have to make the apps available to the users yourself, which means more work on your side. As the administrator, you should configure consent for the necessary permissions of each app (enterprise application) on behalf of the users. But really do not flip the "big switch" that lets all users give consent to permissions for ALL apps. Training your users is also enormously important: in many cases such apps are not described correctly, or the spelling is wrong, and regular training is another way to counter these attacks. I hope this article was useful.

Best regards, Tom Wechsler

5.4K Views · 2 likes · 2 Comments
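As a hedged illustration of where those two switches live programmatically: both settings surface on the tenant's authorization policy in Microsoft Graph, so a quick audit script might look like this sketch (it assumes an access token with the Policy.Read.All permission; token acquisition is omitted):

import requests

token = "<access-token>"  # hypothetical placeholder; acquire via MSAL or similar

# The authorizationPolicy resource exposes the default user role permissions,
# including whether users may register apps and which consent policies apply.
resp = requests.get(
    "https://graph.microsoft.com/v1.0/policies/authorizationPolicy",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
resp.raise_for_status()
perms = resp.json()["defaultUserRolePermissions"]
print("Users can register apps:", perms.get("allowedToCreateApps"))
print("User consent policies:", perms.get("permissionGrantPoliciesAssigned"))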
Resources
Tags
- web apps (77 topics)
- AMA (47 topics)
- azure functions (40 topics)
- Desktop Apps (11 topics)
- mobile apps (9 topics)
- azure kubernetes service (3 topics)
- community (2 topics)
- azure (1 topic)
- Azure Data Explorer AMA (1 topic)
- Azure SignalR Service (1 topic)