Partner Case Study | Cognizant
Cognizant, a 2025 Microsoft Partner of the Year award winner and global services company, is redefining enterprise transformation across industries. By democratizing AI and embedding intelligent automation, Cognizant helps organizations modernize systems, reduce costs, and accelerate growth. Their unique approach—combining deep Microsoft expertise, strategic change management, and a culture of innovation—delivers measurable results, including millions in savings, productivity boosts, and record-setting employee engagement.

Organizations across industries face mounting technical debt, fragmented legacy systems, and the urgent need to consolidate onto modern, scalable platforms. Three Cognizant clients—one in healthcare, one in financial services, and one in retail—faced similar pressures. Regulatory compliance and governance added further complexity, especially in healthcare and finance. Cognizant’s clients needed a forward-looking technology roadmap to streamline operations, unlock data-driven insights, and support automation for better employee and customer experiences.

Cognizant’s approach: Empowerment through Microsoft intelligent automation

Using Microsoft Power Platform, Copilot agents, and robust governance frameworks, Cognizant delivers solutions that are secure, scalable, and tailored to each client’s needs. Ongoing training, upskilling, and community-building ensure sustained adoption and a strong digital culture. An essential part of that digital culture is Cognizant’s belief that innovation with AI should be accessible to employees at every level—not just technical teams.

“At Cognizant, intelligent automation is more than a technology capability—it’s a strategic enabler for business transformation,” said Chakradhar Gooty Agraharam, Global Intelligent Automation Leader at Cognizant.
“By combining Microsoft Power Platform, Microsoft Dynamics 365, and Copilot innovations with our deep industry expertise, we help clients accelerate automation at scale, unlock new efficiencies, and deliver measurable outcomes. Our partnership with Microsoft empowers us to lead with innovation, speed, and precision—driving hyper automation strategies that redefine what’s possible for enterprises worldwide.”

For their healthcare, financial services, and retail engagements, Cognizant selected solutions that met each client's immediate needs while supporting long-term scalability. They implemented strong governance frameworks—using the Microsoft Power Platform Center of Excellence Starter Kit alongside their own methodologies—to maintain regulatory compliance and secure operations. By empowering clients to adopt Microsoft intelligent automation tools, including Copilot agents, Microsoft Power Automate, and Power Platform, Cognizant helped their clients streamline processes, unlock insights, and enhance both employee and customer experiences.

Continue reading here. Explore all case studies or submit your own. Subscribe to the case studies tag to follow all new case study posts. Don't forget to follow this blog to receive email notifications of new stories!

📣 Getting Started with AI and MS Copilot - Arabic
👋 Hello, teachers! We hope you are doing well 🌟 Would you like to explore the world of AI with Microsoft Copilot? We invite you to attend the session "Introduction to AI with Microsoft Copilot", designed especially for teachers beginning their journey with Copilot. In this interactive, hands-on session, we will sketch our "dream destination" together using AI, learn the fundamentals of generative AI, how to write effective prompts, and the best ways to apply these tools in the classroom.

📌 The session will be held in Arabic, with real-world examples, ready-to-use materials, and dedicated space for questions and hands-on practice.

📅 The meeting will take place via: Join the meeting now

From AI pilots to public decisions: what it really takes to close the intelligence gap
Across the public sector, the conversation about AI has shifted. The question is no longer whether AI can generate insight—most leaders have already seen impressive pilots. The harder question is whether those insights survive the realities of government: public scrutiny, auditability, cross‑department delivery, and the need to explain decisions in plain language. That challenge was recently articulated by Sadaf Mozaffarian, writing in Smart Cities World, in the context of city‑scale AI deployments.

Governments don’t need more experiments. They need decision‑ready intelligence—intelligence that can be acted on safely, governed consistently, and defended when outcomes are questioned. What’s emerging now is a more operational lens on AI adoption, one that exposes two issues many pilots quietly avoid.

Decision latency is the real enemy

In government, decision latency is not about slow analytics; it is the time lost between having a signal and being able to act on it with confidence. Much of the focus in AI discussions is on accuracy, bias, or model performance. But in cities, the more damaging problem is often this latency. When data is fragmented across departments, policies live in PDFs, and institutional knowledge walks out the door at 5pm, leaders may have insight but still can’t decide fast enough. AI pilots often demonstrate answers in isolation, but they don’t reduce the friction between insight, approval, and execution.

Decision‑ready intelligence directly attacks this problem. It brings together:

- Operational data already trusted by the organization
- Policy and regulatory context that constrains decisions
- Human checkpoints that reflect how accountability actually works

The result isn’t faster answers—it’s faster decisions that stick, because they align with how governments are structured to operate.

Institutional memory is infrastructure

Cities invest heavily in physical infrastructure—roads, pipes, facilities—but far less deliberately in institutional memory. Yet planning rationales, inspection notes, precedent cases, and prior decisions are often what make or break today’s choices. Consider a routine enforcement or permitting decision that looks reasonable on current data but quietly contradicts a prior settlement, a regulator’s interpretation, or a lesson learned during a past inquiry. AI systems that don’t account for this history don’t just miss context; they create risk.

Decision‑ready intelligence treats institutional memory as a first‑class asset. It ensures that when AI supports a decision, it does so with:

- Access to relevant historical records and prior outcomes
- Clear lineage back to source documents and policies
- Logging that preserves not just what was decided, but why

This is what allows governments to move faster without relearning the same lessons under audit pressure.

Why this matters now

Public sector AI initiatives rarely fail because of a lack of ambition. They stall because trust questions—governance, records, explainability—arrive too late. By the time leaders ask, “Can we stand behind this decision?”, the system was never designed to answer. Decision‑ready intelligence flips that sequence: governance is not bolted on after the pilot; it is built into the operating model from the start. That is what allows agencies to scale from a single use case to repeatable patterns across departments.

A practical starting point

The cities making progress aren’t trying to transform everything at once. They start small but visible:

- Identify one cross‑department “moment of truth”
- Define what must be logged, retained, and explainable
- Connect just enough data, policy, and work context to support that decision

From there, they reuse the same patterns—governed data products, policy knowledge bases, and human‑in‑the‑loop workflows—to scale responsibly.

AI in government will ultimately be judged the same way every public investment is judged: by outcomes, fairness, and public confidence. Closing the intelligence gap isn’t about smarter models. It’s about designing decision systems that reflect how governments actually work—and are held accountable.

Learn more by reading Sadaf's full article: Closing the intelligence gap: how cities turn AI experiments into operational impact

Getting Started with AI and MS Copilot - English
🚀 Ready to explore AI and Microsoft Copilot in a fun, hands-on way? Join our session “Introduction to AI and Microsoft Copilot”—designed for educators who are just getting started!

✅ Learn the fundamentals of generative AI
✅ Master the art of creating effective prompts
✅ Discover practical ways to use these tools in your classroom
✅ Access ready-to-use teaching resources
✅ Practice with 10 interactive exercises

📅 Don’t miss this opportunity to boost your teaching with AI!

#MicrosoftCopilot #AIinEducation #Educators #Innovation #TeachingTools

Getting Started with AI and MS Copilot - English | Meeting-Join | Microsoft Teams

Partner Blog | Copilot monetization for SMBs: Start with Copilot Chat, scale with agents
This post kicks off a five-part series for Microsoft partners on the Copilot monetization opportunity for small and medium-sized businesses (SMBs). Each post follows a repeatable approach aligned to the Microsoft Customer Engagement Methodology (MCEM) and the Win Formula—from building credibility as Customer Zero to driving adoption and measurable outcomes, then extending value with agents and specializations. If you’re looking for practical ways to turn your customers’ AI interest into secure, scalable outcomes and repeatable revenue, you’re in the right place.

SMBs want proof, not hype. They want an AI path that fits how they work today, stays governed, and delivers results. SMBs are the backbone of the global economy, accounting for 90% of all firms with around 400 million enterprises worldwide, according to the World Economic Forum’s SME Resource Hub. At the same time, 82% of leaders are rethinking core aspects of their strategy and operations, under constant pressure to do more with less. That combination is driving a shift from AI curiosity to AI decisions.

Continue reading here

Integrating Microsoft Foundry with OpenClaw: Step by Step Model Configuration
Step 1: Deploying Models on Microsoft Foundry

Let us kick things off in the Azure portal. To get our OpenClaw agent thinking like a genius, we need to deploy our models in Microsoft Foundry. For this guide, we are going to focus on deploying gpt-5.2-codex on Microsoft Foundry with OpenClaw. Navigate to your AI Hub, head over to the model catalog, choose the model you wish to use with OpenClaw, and hit deploy. Once your deployment is successful, head to the endpoints section.

Important: Grab your Endpoint URL and your API keys right now and save them in a secure note. We will need these exact values to connect OpenClaw in a few minutes.

Step 2: Installing and Initializing OpenClaw

Next up, we need to get OpenClaw running on your machine. Open your terminal and run the official installation script:

```bash
curl -fsSL https://openclaw.ai/install.sh | bash
```

The wizard will walk you through a few prompts. Here is exactly how to answer them to link up with our Azure setup:

- First page (model selection): choose "Skip for now".
- Second page (provider): select azure-openai-responses.
- Model selection: select gpt-5.2-codex. For now, only the models hosted on Microsoft Foundry shown in the accompanying picture are available for use with OpenClaw.
- Follow the rest of the standard prompts to finish the initial setup.

Step 3: Editing the OpenClaw Configuration File

Now for the fun part. We need to manually configure OpenClaw to talk to Microsoft Foundry. Open your configuration file, located at ~/.openclaw/openclaw.json, in your favorite text editor.
Replace the contents of the models and agents sections with the following code block:

```json
{
  "models": {
    "providers": {
      "azure-openai-responses": {
        "baseUrl": "https://<YOUR_RESOURCE_NAME>.openai.azure.com/openai/v1",
        "apiKey": "<YOUR_AZURE_OPENAI_API_KEY>",
        "api": "openai-responses",
        "authHeader": false,
        "headers": { "api-key": "<YOUR_AZURE_OPENAI_API_KEY>" },
        "models": [
          {
            "id": "gpt-5.2-codex",
            "name": "GPT-5.2-Codex (Azure)",
            "reasoning": true,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 400000,
            "maxTokens": 16384,
            "compat": { "supportsStore": false }
          },
          {
            "id": "gpt-5.2",
            "name": "GPT-5.2 (Azure)",
            "reasoning": false,
            "input": ["text", "image"],
            "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 },
            "contextWindow": 272000,
            "maxTokens": 16384,
            "compat": { "supportsStore": false }
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": { "primary": "azure-openai-responses/gpt-5.2-codex" },
      "models": { "azure-openai-responses/gpt-5.2-codex": {} },
      "workspace": "/home/<USERNAME>/.openclaw/workspace",
      "compaction": { "mode": "safeguard" },
      "maxConcurrent": 4,
      "subagents": { "maxConcurrent": 8 }
    }
  }
}
```

You will notice a few placeholders in that JSON. Here is exactly what you need to swap out:

| Placeholder Variable | What It Is | Where to Find It |
|---|---|---|
| `<YOUR_RESOURCE_NAME>` | The unique name of your Azure OpenAI resource. | Azure Portal, under the Azure OpenAI resource overview. |
| `<YOUR_AZURE_OPENAI_API_KEY>` | The secret key required to authenticate your requests. | Microsoft Foundry, under your project endpoints, or the Azure Portal keys section. |
| `<USERNAME>` | Your local computer's user profile name. | Open your terminal and type `whoami`. |

Step 4: Restart the Gateway

After saving the configuration file, you must restart the OpenClaw gateway for the new Foundry settings to take effect.
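Before restarting, it can help to sanity-check the edited file. The sketch below is an illustrative helper, not part of OpenClaw; it assumes the config lives at the default `~/.openclaw/openclaw.json` path and simply flags leftover `<PLACEHOLDER>` values and the two Azure auth pitfalls called out in this post (the key must appear in both `apiKey` and `headers.api-key`, and `authHeader` must stay `false`):

```python
import json
import re
from pathlib import Path

def check_config(text: str) -> list[str]:
    """Return a list of problems found in an openclaw.json payload."""
    problems = []
    cfg = json.loads(text)  # raises ValueError if the JSON is malformed

    # Any surviving <ALL_CAPS> token means a placeholder was never filled in.
    problems += [f"unreplaced placeholder: {p}" for p in re.findall(r"<[A-Z_]+>", text)]

    provider = cfg["models"]["providers"]["azure-openai-responses"]
    # The post notes the key must appear in BOTH fields for Azure auth to work.
    if provider["apiKey"] != provider["headers"]["api-key"]:
        problems.append("apiKey and headers.api-key do not match")
    if provider.get("authHeader") is not False:
        problems.append("authHeader should be false so no Bearer header is sent")
    return problems

if __name__ == "__main__":
    path = Path.home() / ".openclaw" / "openclaw.json"
    if path.exists():
        for problem in check_config(path.read_text()):
            print("WARNING:", problem)
```

A clean file prints nothing; any warning is worth fixing before you restart the gateway.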
Run this simple command:

```bash
openclaw gateway restart
```

Configuration Notes & Deep Dive

If you are curious about why we configured the JSON that way, here is a quick breakdown of the technical details.

Authentication Differences

Azure OpenAI uses the `api-key` HTTP header for authentication. This is entirely different from the standard OpenAI `Authorization: Bearer` header. Our configuration file addresses this in two ways:

- Setting `"authHeader": false` completely disables the default Bearer header.
- Adding `"headers": { "api-key": "<key>" }` forces OpenClaw to send the API key via Azure's native header format.

Important note: your API key must appear in both the `apiKey` field and the `headers.api-key` field within the JSON for this to work correctly.

The Base URL

Azure OpenAI's v1-compatible endpoint follows this specific format:

```
https://<your_resource_name>.openai.azure.com/openai/v1
```

The beautiful thing about this v1 endpoint is that it is largely compatible with the standard OpenAI API and does not require you to manually pass an `api-version` query parameter.

Model Compatibility Settings

- `"compat": { "supportsStore": false }` disables the `store` parameter, since Azure OpenAI does not currently support it.
- `"reasoning": true` enables the thinking mode for GPT-5.2-Codex. This supports low, medium, high, and xhigh levels.
- `"reasoning": false` is set for GPT-5.2 because it is a standard, non-reasoning model.

Model Specifications & Cost Tracking

If you want OpenClaw to accurately track your token usage costs, you can update the cost fields from 0 to the current Azure pricing.
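To see what those per-1M-token cost fields translate to in dollars, here is a small illustrative helper (not part of OpenClaw; the prices in the example are the Azure list figures quoted in this post and will drift over time):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  price_in: float, price_out: float) -> float:
    """Estimate spend in USD, given prices quoted per 1M tokens."""
    return (input_tokens * price_in + output_tokens * price_out) / 1_000_000

# Example: 500k input + 50k output tokens at $1.75 / $14.00 per 1M
# (the gpt-5.2-codex prices listed below)
print(f"${estimate_cost(500_000, 50_000, 1.75, 14.00):.2f}")
```

The same arithmetic is what OpenClaw's usage tracking performs once the `cost` fields are non-zero, so filling them in accurately is worth the minute it takes.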
Here are the specs and costs for the models we just deployed:

Model Specifications

| Model | Context Window | Max Output Tokens | Image Input | Reasoning |
|---|---|---|---|---|
| gpt-5.2-codex | 400,000 tokens | 16,384 tokens | Yes | Yes |
| gpt-5.2 | 272,000 tokens | 16,384 tokens | Yes | No |

Current Cost (Adjust in JSON)

| Model | Input (per 1M tokens) | Output (per 1M tokens) | Cached Input (per 1M tokens) |
|---|---|---|---|
| gpt-5.2-codex | $1.75 | $14.00 | $0.175 |
| gpt-5.2 | $2.00 | $8.00 | $0.50 |

Conclusion

And there you have it! You have successfully bridged the gap between the enterprise-grade infrastructure of Microsoft Foundry and the local autonomy of OpenClaw. By following these steps, you are not just running a chatbot; you are running a sophisticated agent capable of reasoning, coding, and executing tasks with the full power of GPT-5.2-codex behind it.

The combination of Azure's reliability and OpenClaw's flexibility opens up a world of possibilities. Whether you are building an automated devops assistant, a research agent, or just exploring the bleeding edge of AI, you now have a robust foundation to build upon. Now it is time to let your agent loose on some real tasks. Go forth, experiment with different system prompts, and see what you can build. If you run into any interesting edge cases or come up with a unique configuration, let me know in the comments below. Happy coding!

AgentCon New York - Come One Come All for FREE
On March 9, 2026, #AgentCon lands at Nasdaq, Times Square, bringing together developers, engineers, and innovators shaping the future of AI agents. Expect deep‑dive talks, hands‑on learning, practical demos, and plenty of networking with the AI community. This isn’t just another AI event; it’s where builders meet to talk real code.

➡️ Register now!

📣 MSLE Office Hours — Português
Hello, 👋 We hope you are doing well! Do you have questions about the MSLE Program? We have the answers! Join our MSLE Office Hours: a space to connect, learn, and receive personalized support.

✅ Get your questions about the program answered
✅ Explore resources and best practices
✅ Connect with other educators and with our MSLE team

Bring your questions, ideas, and curiosities — we are here to help you get the most out of your MSLE experience! At the scheduled time, please access the link: Teams meeting.
📣 Getting Started with AI and MS Copilot — Português
Hello, 👋 📢 Want to explore AI and Microsoft Copilot in a practical way for learning? Join the session "Introduction to AI using MS Copilot", designed especially for teachers who are starting to use Copilot. We will learn the fundamentals of generative AI, how to write good prompts, and how to apply these tools in the classroom.

📌 A session with practical examples, ready-to-use materials, and an ideal space to practice and ask questions. At the scheduled time, please access the link: Teams meeting.