What is the formula for counting each number from 1 to 25 that hits in each drawing?
I want to calculate the daily numbers for the Cash 25 lotto. The numbers are drawn on Monday, Tuesday, Thursday, and Friday of each week. Six numbers from 1 to 25 are drawn each time. I have 157 rows and 7 columns (A-G). The first column is the date, and columns B-G hold the numbers that were drawn. I know I need a count, across the number columns, of the total times each number has been drawn. What formula should I enter? Help, please!

Removing graphics performance overlay
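A minimal formula sketch for the Cash 25 counting question above (the exact cell ranges are assumptions; adjust them to where the draw data actually sits, e.g. rows 2-158 if row 1 holds headers):

```
In a helper column, list the candidate numbers 1 to 25 (say I2:I26).
Then, in J2, count how often that number appears anywhere in B:G:

=COUNTIF($B$2:$G$158, I2)

Fill J2 down to J26. For a per-column count instead, narrow the range:

=COUNTIF($B$2:$B$158, I2)
```

COUNTIF takes a range and a criterion and returns how many cells in the range match, which is all that is needed here since each drawn number occupies its own cell.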
Hi, all: My machine has what appears to be a video card performance overlay in the upper right-hand corner. It looks like this: Problem is, I can't figure out how to remove it. I don't know if it's a Microsoft overlay, an NVIDIA overlay, or what it is; all I know is it's interfering with the top right of all my applications and it's driving me crazy. Any ideas? Thank you in advance!

User profile
It used to be possible to click on a user's icon and find out a little bit about them, including their profile, if they wrote one. Now, the only profile I can see is my own, and I can only see that when I'm logged in. What good is a profile if only the user can see it? Maybe I'm missing a setting someplace. John

AI Security Ideogram: Practical Controls and Accelerated Response with Microsoft
Overview

As organizations scale generative AI, two motions must advance in lockstep: hardening the AI stack ("Security for AI") and using AI to supercharge SecOps ("AI for Security"). This post is a practical map, covering assets, common attacks, scope, solutions, SKUs, and ownership, to help you ship AI safely and investigate faster.

Why both motions matter, at the same time

Security for AI (hereafter "Secure AI") guards prompts, models, apps, data, identities, keys, and networks; it adds governance and monitoring around GenAI workloads (including indirect prompt injection from retrieved documents and tools). Agents add complexity because one prompt can trigger multiple actions, increasing the blast radius if not constrained.

AI for Security uses Security Copilot with Defender XDR, Microsoft Sentinel, Purview, Entra, and threat intelligence to summarize incidents, generate KQL, correlate signals, and recommend fixes and improvements. Promptbooks simplify automation, while plugins enable both out-of-the-box and custom integrations. SKU: Security Compute Units (SCU). Responsibility: Shared (customer uses; Microsoft operates).

The intent of this blog is to cover the Secure AI stack and approaches through matrices and a mind map. It is not intended to cover AI for Security in detail; for that, refer to Microsoft Security Copilot.

The Secure AI stack at a glance

At a high level, the controls align to the following three layers:

AI Usage (SaaS Copilots & prompts): Purview sensitivity labels/DLP for Copilot and Zero Trust access hardening prevent oversharing and inadvertent data leakage when users interact with GenAI.

AI Application (GenAI apps, tools, connectors): Azure AI Content Safety (Prompt Shields, cross-prompt injection detection), policy mediation via API Management, and Defender for Cloud's AI alerts reduce jailbreaks, XPIA/UPIA, and tool-based exfiltration. This layer also includes GenAI agents.
AI Platform & Model (foundation models, data, MLOps): Private Link, Key Vault/Managed HSM, RBAC-controlled workspaces and registries (Azure AI Foundry/AML), GitHub Advanced Security, and platform guardrails (Firewall/WAF/DDoS) harden data paths and the software supply chain end to end.

Let's look at the potential attacks, vulnerabilities, and threats at each layer in more detail:

1) Prompt/model protection (jailbreak, UPIA/system-prompt override, leakage)
Scope: GenAI applications (LLM, apps, data) → Azure AI Content Safety (Prompt Shields, content filters), groundedness detection, safety evaluations in Azure AI Foundry, and Defender for Cloud AI threat protection. Responsibility: Shared (Customer/Microsoft). SKU: Content Safety & Azure OpenAI consumption; Defender for Cloud – AI Threat Protection.

2) Cross-prompt injection (XPIA) via documents & tools
Strict allow-lists for tools/connectors, Content Safety XPIA detection, API Management policies, and Defender for Cloud contextual alerts reduce indirect prompt injection and data exfiltration. Responsibility: Customer (config) & Microsoft (platform signals). SKU: Content Safety, API Management, Defender for Cloud – AI Threat Protection.

3) Sensitive data loss prevention for Copilots (M365)
Use Microsoft Purview (sensitivity labels, auto-labeling, DLP for Copilot) with enterprise data protection and Zero Trust access hardening to prevent PII/IP exfiltration via prompts or Graph grounding. Responsibility: Customer. SKU: M365 E5 Compliance (Purview), Copilot for Microsoft 365.

4) Identity & access for AI services
Entra Conditional Access (MFA/device), ID Protection, PIM, managed identities, role-based access to Azure AI Foundry/AML, and access reviews mitigate over-privilege, token replay, and unauthorized fine-tuning. Responsibility: Customer. SKU: Entra ID P2.
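The strict allow-list control in item 2 above can be sketched as a tiny gate in front of an agent's tool dispatcher. This is a toy illustration in Python; the tool names and the dispatcher are invented for this sketch, not a Microsoft API:

```python
# Toy sketch: a deny-by-default allow-list gate for GenAI agent tool calls.
# Tool names and the dispatcher are hypothetical, for illustration only.

ALLOWED_TOOLS = {"search_docs", "get_weather"}  # everything else is denied

def dispatch(tool_name: str, payload: dict) -> str:
    """Refuse any tool call that is not explicitly allow-listed."""
    if tool_name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool '{tool_name}' is not allow-listed")
    # A real system would invoke the connector here; we just echo the call.
    return f"{tool_name} called with {sorted(payload)}"

print(dispatch("search_docs", {"query": "private link"}))

try:
    # An injected instruction asking for an unapproved connector stops here.
    dispatch("send_email", {"to": "attacker@example.com"})
except PermissionError as exc:
    print("blocked:", exc)
```

Deny-by-default is the point: a prompt-injected instruction can only ever reach connectors that were deliberately enabled, which caps the blast radius described earlier.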
5) Secrets & keys
Protect against key leakage and secrets in code using Azure Key Vault/Managed HSM, rotation policies, Defender for DevOps, and GitHub Advanced Security secret scanning. Responsibility: Customer. SKU: Key Vault (Std/Premium), Defender for Cloud – Defender for DevOps, GitHub Advanced Security.

6) Network isolation & egress control
Use Private Link for Azure OpenAI and data stores, Azure Firewall Premium (TLS inspection, FQDN allow-lists), WAF, and DDoS Protection to prevent endpoint enumeration, SSRF via plugins, and exfiltration. Responsibility: Customer. SKU: Private Link, Firewall Premium, WAF, DDoS Protection.

7) Training data pipeline hardening
Combine Purview classification/lineage, private storage endpoints and encryption, human-in-the-loop review, dataset validation, and safety evaluations pre/post fine-tuning. Responsibility: Customer. SKU: Purview (E5 Compliance / Purview), Azure Storage (consumption).

8) Model registry & artifacts
Use Azure AI Foundry/AML workspaces with RBAC, approval gates, versioning, private registries, and signed inferencing images to prevent tampering and unauthorized promotion. Responsibility: Customer. SKU: AML; Azure AI Foundry (consumption).

9) Supply chain & CI/CD for AI apps
GitHub Advanced Security (CodeQL, Dependabot, secret scanning), Defender for DevOps, branch protection, environment approvals, and policy-as-code guardrails protect pipelines and prompt flows. Responsibility: Customer. SKU: GitHub Advanced Security; Defender for Cloud – Defender for DevOps.

10) Governance & risk management
Microsoft Purview AI Hub, Compliance Manager assessments, Purview DSPM for AI, usage discovery, and policy enforcement govern "shadow AI" and ensure compliant data use. Responsibility: Customer. SKU: Purview (E5 Compliance/add-ons); Compliance Manager.
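To make the secret-scanning control in item 5 concrete, here is a toy pattern matcher in Python. The regexes are simplified stand-ins for illustration; real scanners such as GitHub Advanced Security ship far more precise, vetted rules:

```python
import re

# Simplified, illustrative secret patterns (not the rules real scanners use).
PATTERNS = {
    "azure_storage_key": re.compile(r"AccountKey=[A-Za-z0-9+/=]{20,}"),
    "generic_api_key": re.compile(r"(?i)api[_-]?key\s*=\s*['\"][A-Za-z0-9]{16,}['\"]"),
}

def scan(text: str) -> list[str]:
    """Return the names of the secret patterns found in the given text."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

sample = 'conn = "DefaultEndpointsProtocol=https;AccountKey=abcdEFGH1234abcdEFGH1234=="'
print(scan(sample))  # the storage-key pattern matches here
```

In practice such scanning runs in CI (Defender for DevOps and GitHub Advanced Security do this automatically), and runtime code pulls real secrets from Key Vault instead of embedding them.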
11) Monitoring, detection & incident response
Defender for Cloud ingests Content Safety signals for AI alerts; Defender XDR and Microsoft Sentinel consolidate incidents and enable KQL hunting and automation. Responsibility: Shared. SKU: Defender for Cloud; Sentinel (consumption); Defender XDR (E5/E5 Security).

12) Existing landing zone baseline
Adopt Azure Landing Zones with AI-ready design, Microsoft Cloud Security Benchmark policies, Azure Policy guardrails, and platform automation. Responsibility: Customer (with Microsoft guidance). SKU: Guidance + Azure Policy (included); Defender for Cloud CSPM.

Mapping attacks to controls

This heatmap ties common attack themes (prompt injection, cross-prompt injection, sensitive data loss, identity & keys, network egress, training data, registries, supply chain, governance, monitoring, and landing zone) to the primary Microsoft controls you'll deploy. Use it to drive backlog prioritization.

Quick decision table (assets → attacks → scope → solution)

Use this as a guide during design reviews and backlog planning. The rows below are a condensed extract of the broader map in the workbook.
| Asset Class | Possible Attack | Scope | Solution |
| --- | --- | --- | --- |
| Data | Sensitive info disclosure / Risky AI usage | Microsoft AI | Purview DSPM for AI; Purview DSPM for AI + IRM |
| Data | Unknown interactions for enterprise AI apps | Microsoft AI | Purview DSPM for AI |
| Data | Unethical behavior in AI apps | Microsoft AI | Purview DSPM for AI + Comms Compliance |
| Data | Sensitive info disclosure / Risky AI usage | Non-Microsoft AI | Purview DSPM for AI + IRM |
| Data | Unknown interactions for enterprise AI apps | Non-Microsoft AI | Purview DSPM for AI |
| Data | Unethical behavior in AI apps | Non-Microsoft AI | Purview DSPM for AI + Comms Compliance |
| Models (MaaS) | Supply-chain attacks (ML registry / DevOps of AI) | OpenAI LLM | OOTB built-in; Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time |
| Models (MaaS) | Secure registries/workspaces compromise | OpenAI LLM | OOTB built-in |
| Models (MaaS) | Secure models running inside containers | OpenAI LLM | OOTB built-in |
| Models (MaaS) | Training data poisoning | OpenAI LLM | OOTB built-in |
| Models (MaaS) | Model theft | OpenAI LLM | OOTB built-in |
| Models (MaaS) | Prompt injection (XPIA) | OpenAI LLM | OOTB built-in; Azure AI Foundry – Content Safety / Prompt Shield |
| Models (MaaS) | Crescendo | OpenAI LLM | OOTB built-in |
| Models (MaaS) | Jailbreak | OpenAI LLM | OOTB built-in |
| Models (MaaS) | Supply-chain attacks (ML registry / DevOps of AI) | Non-OpenAI LLM | Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time |
| Models (MaaS) | Secure registries/workspaces compromise | Non-OpenAI LLM | Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time |
| Models (MaaS) | Secure models running inside containers | Non-OpenAI LLM | Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time |
| Models (MaaS) | Training data poisoning | Non-OpenAI LLM | Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time |
| Models (MaaS) | Model theft | Non-OpenAI LLM | Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time |
| Models (MaaS) | Prompt injection (XPIA) | Non-OpenAI LLM | Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time |
| Models (MaaS) | Crescendo | Non-OpenAI LLM | Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time |
| Models (MaaS) | Jailbreak | Non-OpenAI LLM | Azure AI Foundry – Content Safety / Prompt Shield; Defender for AI – Run-time |
| GenAI Applications (SaaS) | Jailbreak | Microsoft Copilot SaaS | OOTB built-in |
| GenAI Applications (SaaS) | Prompt injection (XPIA) | Microsoft Copilot SaaS | OOTB built-in |
| GenAI Applications (SaaS) | Wallet abuse | Microsoft Copilot SaaS | OOTB built-in |
| GenAI Applications (SaaS) | Credential theft | Microsoft Copilot SaaS | OOTB built-in |
| GenAI Applications (SaaS) | Data leak / exfiltration | Microsoft Copilot SaaS | OOTB built-in |
| GenAI Applications (SaaS) | Insecure plugin design | Microsoft Copilot SaaS | Responsibility lies with the plugin provider/creator (see note below) |
| GenAI Applications (SaaS) | Shadow AI | Microsoft Copilot SaaS or non-Microsoft SaaS GenAI | Apps: Purview DSPM for AI (endpoints where the browser extension is installed) + Defender for Cloud Apps. Agents: Entra Agent ID (preview) + Purview DSPM for AI |
| GenAI Applications (SaaS) | Jailbreak | Non-Microsoft GenAI SaaS | SaaS provider |
| GenAI Applications (SaaS) | Prompt injection (XPIA) | Non-Microsoft GenAI SaaS | SaaS provider |
| GenAI Applications (SaaS) | Wallet abuse | Non-Microsoft GenAI SaaS | SaaS provider |
| GenAI Applications (SaaS) | Credential theft | Non-Microsoft GenAI SaaS | SaaS provider |
| GenAI Applications (SaaS) | Data leak / exfiltration | Non-Microsoft GenAI SaaS | Purview DSPM for AI |
| GenAI Applications (SaaS) | Insecure plugin design | Non-Microsoft GenAI SaaS | SaaS provider |
| Agents (Memory) | Memory injection | Microsoft PaaS (Azure AI Foundry) agents | Defender for AI – Run-time* |
| Agents (Memory) | Memory exfiltration | Microsoft PaaS (Azure AI Foundry) agents | Defender for AI – Run-time* |
| Agents (Memory) | Memory injection | Microsoft Copilot Studio agents | Defender for AI – Run-time* |
| Agents (Memory) | Memory exfiltration | Microsoft Copilot Studio agents | Defender for AI – Run-time* |
| Agents (Memory) | Memory injection | Non-Microsoft PaaS agents | Defender for AI – Run-time* |
| Agents (Memory) | Memory exfiltration | Non-Microsoft PaaS agents | Defender for AI – Run-time* |
| Identity | Tool misuse / Privilege escalation | Enterprise | Entra for AI / Entra Agent ID – GSA Gateway |
| Identity | Token theft & replay attacks | Enterprise | Entra for AI / Entra Agent ID – GSA Gateway |
| Identity | Agent sprawl & orphaned agents | Enterprise | Entra for AI / Entra Agent ID – GSA Gateway |
| Identity | AI agent autonomy | Enterprise | Entra for AI / Entra Agent ID – GSA Gateway |
| Identity | Credential exposure | Enterprise | Entra for AI / Entra Agent ID – GSA Gateway |
| PaaS | General AI platform attacks | Azure AI Foundry | Defender for AI (private preview) |
| PaaS | General AI platform attacks | Amazon Bedrock | Defender for AI* (AI-SPM GA; workload protection is on the roadmap) |
| PaaS | General AI platform attacks | Google Vertex AI | Defender for AI* (AI-SPM GA; workload protection is on the roadmap) |
| Network / Protocols (MCP) | Protocol-level exploits (unspecified) | Custom / Enterprise | Defender for AI* |

* Roadmap item. OOTB = out of the box (built-in).

Plugin responsibility note: for a Microsoft plugin, responsibility to secure it lies with Microsoft; for a 3rd-party custom plugin, with the 3rd-party provider; for a customer-created plugin, with the plugin creator.

This table consolidates the mind map into a concise reference showing each asset class, the threats/attacks, whether they are scoped to Microsoft or non-Microsoft ecosystems, and the recommended solutions mentioned in the diagram. Here is a mind map corresponding to the table above, for easier visualization:

Mind map as of 30 Sep 2025 (to be updated in case there are technology enhancements or changes by Microsoft).

OWASP-style risks in SaaS & custom GenAI apps: what's covered

The map calls out seven high-frequency risks in LLM apps (e.g., jailbreaks, cross-prompt injection, wallet abuse, credential theft, data exfiltration, insecure plugin design, and shadow LLM apps/plugins). For Microsoft Security Copilot (SaaS), mitigations are built in (OOTB); for non-Microsoft AI apps, pair Azure AI Foundry (Content Safety, Prompt Shields) with Defender for AI (run-time), AI-SPM via Defender for Cloud CSPM (build-time), and Defender for Cloud Apps to govern unsanctioned use.

What to deploy first (a pragmatic order of operations)

Land the platform: Existing landing zone with Private Link to models/data, Azure Policy guardrails, and Defender for Cloud CSPM.
Lock down identity & secrets: Entra Conditional Access/PIM and Key Vault + secret scanning in code and pipelines.

Protect usage: Purview labels/DLP for Copilot; Content Safety shields and XPIA detection for custom apps; APIM policy mediation.

Govern & monitor: Purview AI Hub and Compliance Manager assessments; Defender for Cloud AI alerts into Defender XDR/Sentinel with KQL hunting & playbooks.

Scale SecOps with AI: Light up Copilot for Security across XDR/Sentinel workflows and Threat Intelligence/EASM.

The table below shows the different AI apps and their respective pricing SKUs. A calculator is available to estimate costs for your AI apps: Pricing - Microsoft Purview | Microsoft Azure. Contact your Microsoft account team to understand how these SKUs map to dollar values.

Conclusion: Microsoft's two-pronged strategy of Security for AI and AI for Security empowers organizations to safely scale generative AI while strengthening incident response and governance across the stack. By deploying layered controls and leveraging integrated solutions, enterprises can confidently innovate with AI while minimizing risk and ensuring compliance.

Create an Active Student badge on Microsoft Learn
Description: I suggest adding an official "Active Student" badge to the Microsoft Community and Microsoft Learn platforms. This badge would:

- Highlight students' commitment to learning.
- Encourage continuous participation through visible recognition.
- Connect learning achievements (Learn) with community contributions (Community Hub).
- Provide a public credential that can be showcased on a CV or professional profile.

Such a symbolic addition would strengthen motivation, visibility, and the bridge between Microsoft Learn and the Community.

Issues Testing Azure Stack HCI on Hyper-V (Single Node) – Onboarding and Azure Arc Integration
I am currently testing Azure Stack HCI in our company environment. I downloaded the latest version of Azure Stack HCI and deployed it on a single-node Hyper-V VM setup for evaluation. While I can access the local Azure Stack HCI portal, I am facing several issues when trying to onboard the host into the Azure portal.

Challenges I am facing:

- I am running Azure Stack HCI in Hyper-V Manager with only one node (lab environment). Could this be a limitation for Azure portal onboarding?
- Even though I downloaded the latest Azure Stack HCI build, the local portal shows the host as "not eligible." I am not sure why this is happening.
- I tried to push two simple VMs to Azure Arc/local using scripts, but I received an error that Hyper-V components are not running, even though Hyper-V is active.
- I also ran into an issue with Windows Admin Center because my subscription is not pay-as-you-go. Is this a blocker for testing scenarios?

My questions:

- Is it possible to push VMs into Azure Arc (or manage them locally) in this single-node Hyper-V test environment?
- Can this be done using infrastructure as code with Terraform and PowerShell, even without the original Azure Stack HCI hardware?
- What are the best practices for testing Azure Stack HCI in a lab or non-production setup without purchasing physical nodes?

Any guidance, examples, or workarounds would be greatly appreciated. Our company is exploring Azure Stack HCI, but we want to validate the setup in a lab before considering hardware purchases.

Windows 11 lock screen bug
We are having an issue where some users cannot log back into Windows 11 after the machine goes to the lock screen. Windows reports that the password or username is incorrect even though the credentials are definitely correct. The only solution I've found so far is to reboot the machine. What could be causing this? Our users are losing unsaved work from having to reboot throughout the day.

Microsoft Learning Rooms Weekly Round up 10/2
Below is a summary of what is happening within our Learning Rooms (Groups) so that you can stay in the loop and discover something new. To get frequent updates of their content, you must follow/join the group that interests you. Make sure to join one of our Learning Rooms (Groups) today!

- Microsoft Hero Community – Event: Get started with a modern zero trust remote access solution: Microsoft Global Secure Access – Saturday, Oct 04, 2025, 09:00 AM PDT – Get started with a modern zero trust remote access solution: Microsoft Global Secure Access | Microsoft Community Hub
- Microsoft Tech Talks – Event: She Powers BI – Women's DIY Mentorship Cohort 🌸 – Monday, Oct 06, 2025, 08:30 AM PDT – She Powers BI – Women's DIY Mentorship Cohort 🌸 | Microsoft Community Hub
- Azure Cloud Commanders – Event: The Integrated World of Microsoft Cloud Frameworks – Tuesday, Oct 07, 2025, 06:00 PM PDT – The Integrated World of Microsoft Cloud Frameworks | Microsoft Community Hub
- Microsoft Hero Community – Event: Unlocking document insights with Azure AI – October 7, 2025, 19:00 AEST / 10:00 AM CET – 📢 https://www.linkedin.com/in/akanksha-malik/ 🖇️ https://streamyard.com/watch/M6qvUYdv58tt?wt.mc_id=MVP_350258
- Microsoft Hero Community – Event: D365 Field Service 101 – October 7, 2025, 18:00 AEST / October 19, 2025, 09:00 AM CET – 📢 https://www.linkedin.com/in/jeevarajankumar/ 🖇️ https://streamyard.com/watch/RtDkftSxhn7P?wt.mc_id=MVP_350258
- Microsoft Hero Community – Event: Azure Functions and network security... Can it be done? – October 11, 2025, 06:00 PM CET – 📢 https://www.linkedin.com/in/rexdekoning/ 🖇️ https://streamyard.com/watch/RHzXr5bpYHFY?wt.mc_id=MVP_350258
- Azure Cloud Commanders – Event: Security Best Practices for Azure Kubernetes Services (AKS) – Monday, Oct 20, 2025, 12:00 PM PDT – Security Best Practices for Azure Kubernetes Services (AKS) | Microsoft Community Hub
- Microsoft Hero Community – Event: FSI and Gen AI: Wealth management advisor with Azure Foundry Agents and MCP – October 21, 2025, 19:00 AEST / 10:00 AM CET – 📢 https://www.linkedin.com/in/priyankashah/ 🖇️ https://streamyard.com/watch/Vb5rUWMBN9YN?wt.mc_id=MVP_350258
- Azure Cloud Commanders – Event: ASO(cial) Superpowers: Streamlined App Deployments with Azure Service Operator – Thursday, Oct 23, 2025, 11:00 AM PDT – ASO(cial) Superpowers: Streamlined App Deployments with Azure Service Operator | Microsoft Community Hub
- Microsoft Hero Community – Event: The Role of Sentence Syntax in Security Copilot: Structured Storytelling for Effective Defence – October 25, 2025, 06:00 PM CET – 📢 https://www.linkedin.com/in/monaghadiri/ 🖇️ https://streamyard.com/watch/EtPkn2EZkauD?wt.mc_id=MVP_350258
- Azure Cloud Commanders – Event: Mastering Full Stack with Azure: Effortless Infrastructure and Well-Architected Framework – Sunday, Nov 02, 2025, 08:00 PM PST – Mastering Full Stack with Azure: Effortless Infrastructure and Well-Architected Framework | Microsoft Community Hub
- Azure Cloud Commanders – Discussion: Microsoft Applied Skills Sweepstakes – Microsoft Applied Skills Sweepstakes | Microsoft Community Hub
- Copilot & Power Platform with Rishona – Discussion: Updates for the week (1/10/25); upcoming webinar: Power BI + Dataverse: From Dashboards to Decisions & Copilot Memory – Updates for the week (1/10/25) | Microsoft Community Hub
- SkillUp with Copilot Studio & Power Platform – Discussion: Blog published on GitHub (Vol. 2: using the Power Platform MSN Weather connector) – Vol. 2 blog (using the Power Platform MSN Weather connector) published on GitHub | Microsoft Community Hub
- SkillUp with Copilot Studio & Power Platform – Blog: Developing a Single Page Application with Power Apps Code Apps (Preview) – Developing a Single Page Application with Power Apps Code Apps (Preview) | Microsoft Community Hub
- Microsoft MSFarsi Community – Blog: Optimizing Azure Firewall: From Rule-Based to Policy-Based – Optimizing Azure Firewall: From Rule-Based to Policy-Based | Microsoft Community Hub
- Microsoft Hero Community – Blog: 🚀✨ Are you ready for a power-packed, productive, and inspiring October? ✨🚀 – 🚀✨ Are you ready for a power-packed, productive, and inspiring October? ✨🚀 | Microsoft Community Hub

Movement in spreadsheet as form after enter
When using an Excel spreadsheet as a form, is there a way to move from one cell location directly to another cell location after pressing Enter in the first cell? What I want to do is have the user enter the width in cell B7, hit Enter, and have B9 become the "active" cell; then enter the length in B9, hit Enter, and have B11 become the "active" cell, and so on. Essentially I want to skip from one input cell to another without having to go through all of the rows. Not all "moves" from one cell to another would be exactly two rows down; other locations in the spreadsheet would require moves of 3 or 4 rows to get to the next "active" cell. All of my moves would be in the same column, though. Thanks to all who look at this.
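One common approach is a small VBA handler in the sheet's code module: Worksheet_Change fires after a cell is edited (i.e., after Enter), so it can jump the selection to the next input cell. The cell map below follows the B7 → B9 → B11 example from the question; the later jump target is an assumption to extend for the real layout:

```vba
' Right-click the sheet tab -> View Code, then paste this into the sheet module.
' After a mapped input cell is edited, the selection jumps to the next input cell.
Private Sub Worksheet_Change(ByVal Target As Range)
    If Target.Cells.Count > 1 Then Exit Sub   ' ignore multi-cell edits
    Dim nextCell As String
    Select Case Target.Address(False, False)
        Case "B7":  nextCell = "B9"
        Case "B9":  nextCell = "B11"
        Case "B11": nextCell = "B15"          ' jumps need not be uniform
        Case Else:  Exit Sub                  ' not an input cell; do nothing
    End Select
    Me.Range(nextCell).Select
End Sub
```

A no-code alternative: unlock only the input cells (Format Cells > Protection), protect the sheet, and Tab will then cycle through just the unlocked cells.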