Preventing and recovering from accidental deletion of an Azure Database for MySQL flexible server
Accidental deletion of critical Azure resources, such as Azure Database for MySQL flexible servers, can disrupt operations. To help prevent such deletions, you can use mechanisms including Azure Resource Locks and Azure Policy. This post explains how to implement these mechanisms, and how to revive a dropped MySQL flexible server by using the Azure CLI.

Strengthening Azure DNS Zone Security with RBAC and Resource Locks
DNS security is more than just configuration: it's about protecting critical assets against unauthorized changes and accidental deletions. Managing DNS zones effectively requires a layered security approach. Two powerful mechanisms in Azure help here: Role-Based Access Control (RBAC) and Resource Locks.

Role-Based Access Control (RBAC): Granular DNS Access Control

RBAC ensures controlled access management at both the DNS zone and record set levels. Instead of assigning broad permissions, RBAC enables precise delegation using built-in roles such as:

- Owner: Full control over the DNS zone, including configurations and deletions.
- Contributor: Can modify DNS settings but cannot change access permissions.
- Network Contributor: Can manage networking configurations related to DNS, but not modify records.
- DNS Zone Contributor: Dedicated role for managing DNS zones without broader networking privileges.

Key advantages of RBAC in DNS security:
- Prevents unauthorized modifications by restricting access to only the necessary roles.
- Ensures operational integrity by limiting exposure to critical configurations.
- Improves governance by aligning roles with organizational security policies.

Resource Locks: Guardrails for DNS Protection

Even with well-defined RBAC settings, accidental deletions can still occur. Azure Resource Locks add an additional safeguard by preventing changes to a DNS zone or specific record sets.

- Zone lock: Protects an entire DNS zone from being deleted, preserving all associated record sets.
- SOA lock: Prevents unintentional zone deletion while allowing record modifications within the zone.

How resource locks enhance security:
- Shield DNS zones from accidental or malicious deletions.
- Maintain continuity by ensuring record sets remain intact.
- Strengthen compliance controls for critical infrastructure.
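As a sketch, the role assignment and zone lock described above can be applied with the Azure CLI; the resource group, zone name, and assignee below are illustrative placeholders, not values from this post:

```shell
# Scope a built-in role to a single DNS zone (least privilege)
ZONE_ID=$(az network dns zone show \
  --resource-group rg-dns --name contoso.com \
  --query id --output tsv)

az role assignment create \
  --assignee user@contoso.com \
  --role "DNS Zone Contributor" \
  --scope "$ZONE_ID"

# Add a CanNotDelete lock so the zone survives accidental delete operations
az lock create \
  --name dns-zone-delete-lock \
  --lock-type CanNotDelete \
  --resource-group rg-dns \
  --resource-name contoso.com \
  --resource-type Microsoft.Network/dnszones
```

With the lock in place, delete attempts fail until the lock is explicitly removed, which gives you a deliberate extra step before any destructive change.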
Best Practices for Securing DNS with RBAC & Resource Locks

- Assign least-privilege roles; never give unnecessary access.
- Implement locks on essential zones to prevent configuration errors.
- Regularly audit access permissions using Azure Policy and Activity Logs.
- Use automation and alerts to track modifications for enhanced security.

Implementing RBAC and Resource Locks helps keep your cloud environment secure, operational, and fault-tolerant.

URGENT: Accidental Deletion of Microsoft Clarity Project - Manual Restoration Request
Hello Microsoft Clarity Support Team,

I am writing to request the manual restoration of a project that was accidentally deleted from our account. We urgently need to retrieve the historical session data for business continuity. I attempted to contact support via the email alias, but my messages to email address removed for privacy reasons were rejected with a 550 5.7.124 error ("Sender not allowed"), which is why I am using this forum for immediate assistance.

Please find the necessary project details below:
Account Email: email address removed for privacy reasons
Deleted Project Name (Exact): Aste Helsinki new
Date/Time of Deletion: October 22, 2025, 12:51 AM Finland time (EEST)

Could you please confirm whether a manual restoration from your backend archives is possible for this specific project? Thank you for your prompt attention to this critical issue.

Best regards,
Kateryna Shchuka
Aste Helsinki

Mastering the AZ-104 Exam: Golu's Guide to Passing & Levelling Up Your Cloud Career!
The AZ-104 exam is designed to test your knowledge and skills in administering Azure services. Passing this exam validates your expertise in Azure administration and enhances your career opportunities in cloud computing. As the Global Courseware Lead for the AZ-104 course and a Microsoft Technical Trainer at Microsoft, I have worked with many versions of this course, and with my background in Azure administration and in architecting solutions for customers over a 10+ year IT journey, I want to share my strategy for preparing for this exam; most of these best practices hold true for most Microsoft certifications. You are in the shoes of Golu today and will go through the process of planning and preparing for the exam. To ace the exam, Golu needs to follow a seven-step approach. (Image Source: "Microsoft Copilot")

Step 1: Understand the Exam Objectives

Before you start preparing for the AZ-104 exam, it's essential to understand the exam objectives. The exam tests your knowledge in areas such as identity and governance, compute, storage, networking, and monitoring. Microsoft provides a detailed exam study guide that outlines the topics covered in the exam. Read through the study guide to understand the exam objectives and the skills you need to master. Golu understanding exam objectives (Image Source: "Microsoft Copilot")

Here are the topics and the weight of each; the higher the weight, the more questions you are likely to see on that topic. Always refer to the AZ-104 Study Guide, as these weights change when the content is updated.
Skills Measured

- Manage Azure identities and governance (20–25%)
- Implement and manage storage (15–20%)
- Deploy and manage Azure compute resources (20–25%)
- Implement and manage virtual networking (15–20%)
- Monitor and maintain Azure resources (10–15%)

Step 2: Develop a Study Plan

After you understand the exam objectives, it's time to develop a study plan. The plan should include the time you'll dedicate to studying, the study materials you'll use, and the practice tests you'll take. Allocate enough time to study all the exam topics thoroughly, and break your study sessions into manageable chunks. This will help you stay motivated and focused throughout the study period.

Golu's dilemma: How much time do I need for my prep? (Image Source: "Microsoft Copilot")

- Daily Azure users (2-3+ years): 1 week of focused review may be enough.
- Intermediate (6-12 months): Plan for 4-6 weeks of study.
- Beginners (0-6 months): Allow 6-8 weeks, focusing on hands-on labs and foundational concepts.

Tip: Track your progress weekly and adjust your plan as needed. Consistency is key! (Experience mapping)

Just make sure you create a study plan and study at a specific time that works for you; example times are shown below. Many studies confirm that studying on a consistent schedule every day helps move information from short-term to long-term memory and improves retention of complex topics. (Study Schedule)

PRO TIP: Download your copy of the example study plan. Create a plan covering all topics in the study guide, add time to complete labs and practice exams, and book the exam at the start of the month; that gives you a goal of completing the exam within 1 to 1.5 months. Don't take more than 2 months, or you will lose the motivation to take the exam.
Copilot Prompt: "Create a study plan for the AZ-104 exam in CSV, in a monthly format for the month of <enter month> 2026. The table should contain all topics covered in the study guide at https://learn.microsoft.com/en-us/certifications/resources/study-guides/az-104, plus time to complete labs and practice exams. The plan should be 1 month long." (AZ-104 exam study plan example)

Step 3: Follow Microsoft Learn to Prepare Topic by Topic (Self-Learning)

Golu following the Microsoft Learn path (Image Source: "Microsoft Copilot")

The easiest way to find structured AZ-104 content is to leverage the free Microsoft Learn content for the AZ-104 exam. Study topic by topic as per the study plan Golu created. Golu can also leverage Microsoft Copilot to simplify some topics and generate summaries. Microsoft Learn link: Course AZ-104T00-A: Microsoft Azure Administrator

Copilot Prompt: "Create a table highlighting the major differences between Active Directory Domain Services and Entra ID, and explain these to me like I am studying them for the first time. Here is the link I am studying from: Compare Microsoft Entra ID and Active Directory Domain Services"

Step 4: Familiarize Yourself with Azure Services

Golu understanding Azure services in the exam (Image Source: "Microsoft Copilot")

The AZ-104 exam covers a broad range of Azure services. It's essential to have a solid understanding of each service and how it works. All Azure services are available at this link; Golu can create a custom Excel sheet with a one-liner about each service covered in the exam so that he knows what each service is used for. This is very useful in the exam for eliminating wrong options and making educated guesses; see the example below. Go to Microsoft Copilot and use this prompt:

Copilot Prompt: "Use the study guide at https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-104 and create a table of the services covered in the AZ-104 exam study guide.
The table should contain these columns: service name, category, and purpose of the service." | Try in Copilot Chat (Example output for Azure services covered in the AZ-104 exam)

Step 5: Read FAQs

Golu reading FAQs (Image Source: "Microsoft Copilot")

Frequently Asked Questions (FAQs) about Azure services are a great way to understand important aspects of each service. There you can find the most common questions and answers about a specific topic; this helps you discover aspects of a service that you might not come across otherwise. For example: Virtual Machine FAQs, Virtual Network FAQs. To get the FAQ for any service, search "service name" + "FAQ" + "MS Learn", or use the Copilot prompt below.

Copilot Prompt: "Use the study guide at https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-104 and create a table for the AZ-104 study guide. The table should contain these columns: service name, category, and FAQ link." | Try in Copilot Chat

Step 6: Are You Ready for the Exam?

Is Golu really ready for the exam? Golu confused (Image Source: "Microsoft Copilot")

6a: Take practice tests

Take multiple AZ-104 practice exams and see how you score on them. If you consistently score above 90%, you are ready to take the exam, since you know all the topics covered. Look at the areas where you score poorly during practice and study those topics to better understand what is required to answer those questions. Eliminate weaker areas to create an agile study plan, and repeat this until you score well on these exams; target at least 85%. Microsoft offers a free practice assessment, which is a good way to see how well you know the Azure administration topics. Here is a Copilot prompt that can act as a personal tutor for Golu.
Copilot Prompt: "Give me 5 questions for AZ-104 module 1 from the topics covered in https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-104, with increasing difficulty starting at beginner level. If I answer a question correctly, increase the difficulty to intermediate and then advanced; if I don't answer correctly, decrease the level. Do not give me the answers before I have answered all 5 questions in a section. The questions can be yes/no, multiple choice, case study, or direct textual answers."

Pro tip: You need a score of 700 out of 1000 to pass, but you won't know how many points each of the exam's 40-60 questions is worth.

6b: Identify and address knowledge gaps

After taking practice tests, identify the areas where you need improvement. Focus your study efforts on these areas and use additional study materials to address your knowledge gaps. Revisit the Microsoft Learn documentation and study materials to ensure that you fully understand the topics you struggled with.

6c: Exam strategies

Identify key words in the question and look for options that align with them. For example, if you see SAML, OAuth, OpenID Connect, or cloud identity in the question, think of the Entra ID service; if you see SMB or NFS, think of Azure Files; if you see VHDs, images, log files, or text files, think of Blob storage.

Method of elimination: Eliminate all options that do not feel relevant to the question. This narrows your answer down to one or two options, increasing the probability of choosing the correct one.

Key ask: You may be given a long passage to read; make sure you understand what is required in the question, which is usually the last two lines. Read the question twice, and mark the question for review if you are unsure.

Guess the answer?
Never leave any question unanswered, as there is no negative marking; make an informed guess and move on to the next question. Never go to the next page without answering the question, because you sometimes can't go back to review questions in certain sections.

Time management is key to passing the exam. Case studies take much more time to read and answer than multiple-choice questions, so don't get stuck on any question; you either know it or you don't. Move on and review marked questions later. Check the clock regularly and stay aware of how many questions remain and your time per question, so that you don't miss answering any.

Create mind maps and flashcards during your preparation. Leverage the Copilot prompts below to generate topic-wise flashcards and mind maps. You can also find the neuroscience behind why mind maps and flashcards work, along with many pre-created mind maps and flashcards, on our open-source project website https://aka.ms/MTTBrainwave

Copilot Prompt: "Please develop a mind map in Mermaid for AZ-104 based on https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-104" | Try in Copilot Chat

Mermaid mind map for Module 01 (Image Source: "Mermaid on Microsoft Copilot")

Copilot Prompt: "Please develop flashcards for the AZ-104 course based on https://learn.microsoft.com/en-us/credentials/certifications/resources/study-guides/az-104" | Try in Copilot Chat

Flashcards output in Microsoft Copilot (Image Source: "Microsoft Copilot")

Step 7: Be Prepared for Exam Day

Exam format

The exam format is detailed below, and you can explore the types of questions you will see by visiting aka.ms/ExploreMicrosoftExams. The exam has 40-60 questions with a mix of multiple-choice, drag-and-drop, yes/no, and case study questions; the question types can be tried in the simulation sandbox environment.
Launch the sandbox

Exams with and without labs have different durations, as mentioned below. Exam objectives (Image Source: "Microsoft Learn")

On exam day, make sure you're well rested and have eaten a good meal. If you are taking the exam in person, arrive at the exam center early; a valid government-issued ID will be required. During the exam, take your time to read the questions carefully and double-check your answers before submitting. Remember to stay calm and focused throughout the exam. If you are taking the exam from home or the office, ensure you follow all guidelines to stay compliant by visiting the appropriate vendor website; here is a link to check whether your system is suitable for taking the exam: Test your system

And lastly, there is no failure in not passing the exam; take it as a learning opportunity and prepare well for the next attempt. All the best for your preparation, and do share your stories with me on LinkedIn; it always brings me joy to see your experiences with these exams. Golu is finally ready to pass the exam. (Image Source: "Microsoft Copilot")

About the Instructor

Neeraj Kumar is a Microsoft Technical Trainer based in the Delhi region of India. With over 10 years of experience, he is deeply passionate about Artificial Intelligence, Azure, and Security. Guided by the mantra "Be 1% better every day," Neeraj strives for continuous growth and excellence in his field. Feel free to connect with him on LinkedIn: https://www.linkedin.com/in/neerajtrainer/ #MicrosoftLearn #SkilledByMTT

Overload to Optimal: Tuning Microsoft Fabric Capacity
Co-authored by: Daya Ram, Sr. Cloud Solutions Architect

Optimizing Microsoft Fabric capacity is both a performance and a cost exercise. By diagnosing workloads, tuning cluster and Spark settings, and applying data best practices, teams can reduce run times, avoid throttling, and lower total cost of ownership without compromising SLAs. Use Fabric's built-in observability (Monitoring Hub, Capacity Metrics, Spark UI) to identify hot spots, then apply cluster- and data-level remediations. For capacity planning and sizing guidance, see Plan your capacity size.

Options to Diagnose Capacity Issues

1) Monitoring Hub: Start with the Story of the Run

What to use it for: Browse Spark activity across applications (notebooks, Spark Job Definitions, and pipelines). Quickly surface long-running or anomalous runs; view read/write bytes, idle time, core allocation, and utilization.

How to use it:
- From the Fabric portal, open Monitoring (Monitor Hub).
- Select a Notebook or Spark Job Definition run and choose Historical Runs.
- Inspect the Run Duration chart; click a run to see read/write bytes, idle time, core allocation, overall utilization, and other Spark metrics.

What to look for: Use the application detail monitoring guide to review and monitor your application.

2) Capacity Metrics App: Measure the Whole Environment

What to use it for: Review capacity-wide utilization and system events (overloads, queueing); compare utilization across time windows and identify sustained peaks.

How to use it:
- Open the Microsoft Fabric Capacity Metrics app for your capacity.
- Review the Compute page (ribbon charts, utilization trends) and the System events tab to see overload or throttling windows.
- Use the Timepoint page to drill into a 30-second interval and see which operations consumed the most compute.

What to look for: Use the troubleshooting guide, Monitor and identify capacity usage, to pinpoint the top CU-consuming items.
3) Spark UI: Diagnose at a Deeper Level

Why it matters: Spark UI exposes skew, shuffle, memory pressure, and long stages. Use it after Monitoring Hub and Capacity Metrics to pinpoint the problematic job.

Key tabs to inspect:
- Stages: uneven task durations (data skew), heavy shuffle read/write, large input/output volumes.
- Executors: storage memory, task time (GC), shuffle metrics. High GC time or frequent spills indicate that memory tuning is needed.
- Storage: which RDDs/cached tables occupy memory; any disk spill.
- Jobs: long-running jobs and gaps in the timeline (driver compilation, non-Spark code, driver overload).

What to look for: data skew, memory pressure, and high shuffle volumes. Adjust Apache Spark settings (set via environment Spark properties or session config), e.g. spark.ms.autotune.enabled, spark.task.cpus, and spark.sql.shuffle.partitions.

Section 2: Remediation and Optimization Suggestions

A) Cluster & Workspace Settings

Runtime & Native Execution Engine (NEE): Use Fabric Runtime 1.3 (Spark 3.5, Delta 3.2) and enable the Native Execution Engine to boost performance; enable it at the environment level under Spark compute > Acceleration.

Starter Pools vs. Custom Pools: Starter pools are prehydrated, medium-size pools with fast session starts, good for dev and quick runs. Custom pools let you size nodes, enable autoscale, and use dynamic executors; create them via workspace Spark settings (requires a capacity admin to enable workspace customization).

High Concurrency Session Sharing: Enable high concurrency to share Spark sessions across notebooks (and pipelines), reducing session startup latency and cost; use session tags in pipelines to group notebooks.

Autotune for Spark: Enable Autotune (spark.ms.autotune.enabled = true) to auto-adjust per query: spark.sql.shuffle.partitions, spark.sql.autoBroadcastJoinThreshold, and spark.sql.files.maxPartitionBytes. Autotune is disabled by default and is in preview; enable it per environment or session.
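As a minimal sketch, the session-level settings mentioned above can be applied in a Fabric notebook, where the runtime provides a ready-made `spark` SparkSession; the partition value here is illustrative, not a recommendation, and property names should be checked against the current Fabric docs:

```python
# Session-scoped Spark configuration in a Fabric notebook.
# `spark` is the SparkSession injected by the Fabric runtime.
spark.conf.set("spark.ms.autotune.enabled", "true")    # Autotune (preview): per-query tuning
spark.conf.set("spark.sql.shuffle.partitions", "400")  # illustrative; size to your data volume
spark.conf.set("spark.native.enabled", "true")         # Native Execution Engine (Runtime 1.3+)

# Confirm what the session is actually using
print(spark.conf.get("spark.sql.shuffle.partitions"))
```

Session config only affects the current session; for settings that should apply to every run, set the same properties on the environment item instead.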
B) Data-Level Best Practices

Microsoft Fabric offers several approaches to maintain optimal file sizes in Delta tables; review the documentation here: Table Compaction - Microsoft Fabric.

Intelligent Cache: Enabled by default (Runtime 1.1/1.2) for Spark pools; caches frequently read files at the node level for Delta/Parquet/CSV, improving subsequent read performance and TCO.

OPTIMIZE & Z-Order: Run OPTIMIZE regularly to rewrite files and improve file layout.

V-Order: V-Order (disabled by default in new workspaces) can accelerate reads for read-heavy workloads; enable it via spark.sql.parquet.vorder.default = true.

Vacuum: Run VACUUM to remove unreferenced (stale) files; the default retention is 7 days. Align retention across OneLake to control storage costs and maintain time travel.

Collaboration & Next Steps

Engage the data engineering team to define an optimization playbook. Start by reviewing capacity sizing guidance and cluster-level optimizations (runtime/NEE, pools, concurrency, Autotune), then target data improvements (Z-order, compaction, caching, query refactors).

- Triage: Use Monitoring Hub, then Capacity Metrics, then Spark UI to map workloads and identify high-impact jobs and workloads causing throttling.
- Schedule: Operationalize maintenance: run OPTIMIZE (full or selective) during off-peak windows; enable auto compaction for micro-batch/streaming writes; add VACUUM to your cadence with an agreed retention. Add regular code review sessions to ensure consistent performance patterns.
- Fix: Adjust pool sizing or concurrency; enable Autotune; tune shuffle partitions; refactor problematic queries; re-run compaction.
- Verify: Re-run the job and confirm the change, i.e. reduced run time, lower shuffle, improved utilization.

Introducing native Service Bus message publishing from Azure API Management (Preview)
We're excited to announce a preview capability in Azure API Management (APIM): you can now send messages directly to Azure Service Bus from your APIs using a built-in policy. This enhancement, currently in public preview, simplifies how you connect your API layer with event-driven and asynchronous systems, helping you build more scalable, resilient, and loosely coupled architectures across your enterprise.

Why this matters

Modern applications increasingly rely on asynchronous communication and event-driven designs. With this new integration:
- Any API hosted in API Management can publish to Service Bus, with no SDKs, custom code, or middleware required.
- Partners, clients, and IoT devices can send data through standard HTTP calls, even if they don't support AMQP natively.
- You stay in full control, with authentication, throttling, and logging managed centrally in API Management.
- Your systems scale more smoothly by decoupling front-end requests from backend processing.

How it works

The new send-service-bus-message policy allows API Management to forward payloads from API calls directly into Service Bus queues or topics. High-level flow:
1. A client sends a standard HTTP request to your API endpoint in API Management.
2. The policy executes and sends the payload as a message to Service Bus.
3. Downstream consumers such as Logic Apps, Azure Functions, or microservices process those messages asynchronously.

All configuration happens in API Management; no code changes or new infrastructure are required.

Getting started

You can try it out in minutes:
- Set up a Service Bus namespace and create a queue or topic.
- Enable a managed identity (system-assigned or user-assigned) on your API Management instance.
- Grant the identity the "Service Bus data sender" role in Azure RBAC, scoped to your queue or topic.
Add the policy to your API operation:

<send-service-bus-message queue-name="orders">
    <payload>@(context.Request.Body.As<string>())</payload>
</send-service-bus-message>

Once saved, each API call publishes its payload to the Service Bus queue or topic. Learn more.

Common use cases

This capability makes it easy to integrate your APIs into event-driven workflows:
- Order processing: queue incoming orders for fulfillment or billing.
- Event notifications: trigger internal workflows across multiple applications.
- Telemetry ingestion: forward IoT or mobile app data to Service Bus for analytics.
- Partner integrations: offer REST-based endpoints for external systems while maintaining policy-based control.

Each of these scenarios benefits from simplified integration, centralized governance, and improved reliability.

Secure and governed by design

The integration uses managed identities for secure communication between API Management and Service Bus, so no secrets are required. You can further apply enterprise-grade controls:
- Enforce rate limits, quotas, and authorization through APIM policies.
- Gain API-level logging and tracing for each message sent.
- Use Service Bus metrics to monitor downstream processing.

Together, these tools help you maintain a consistent security posture across your APIs and messaging layer.

Build modern, event-driven architectures

With this feature, API Management can serve as a bridge to your event-driven backbone. Start small by queuing a single API's workload, or extend to enterprise-wide event distribution using topics and subscriptions. You'll reduce architectural complexity while enabling more flexible, scalable, and decoupled application patterns. Learn more: get the full walkthrough and examples in the documentation here.

Trusted Signing Public Preview Update
Nearly a year ago we announced the Public Preview of Trusted Signing, with availability for organizations with 3 or more years of verifiable history to onboard to the service and get a fully managed code signing experience that simplifies the effort for Windows app developers. Over the past year, we've announced new features, including preview support for individual developers, and we highlighted how the service contributes to the Windows security story at Microsoft BUILD 2024 in the Unleash Windows App Security & Reputation with Trusted Signing session.

During the Public Preview, we have obtained valuable insights from our customers into the service's features, the developer experience, and the experience for Windows users. As we incorporate this feedback and learning into our General Availability (GA) release, we are limiting new customer subscriptions as part of the public preview. This approach will allow us to focus on refining the service based on the feedback and data collected during the preview phase.

The limit on new customer subscriptions for Trusted Signing takes effect Wednesday, April 2, 2025, and makes the service available only to US- and Canada-based organizations with 3 or more years of verifiable history. Onboarding for individual developers and all other organizations will not be directly available for the remainder of the preview, and we look forward to expanding service availability as we approach GA. Note that this announcement does not impact any existing subscribers of Trusted Signing; the service will continue to be available to them as it has been throughout the Public Preview.

For additional information about Trusted Signing, please refer to Trusted Signing documentation | Microsoft Learn and Trusted Signing FAQ | Microsoft Learn.