AI in Windows 11
Access Copilot and agents right from the taskbar; find answers across your files, email, and meetings, and turn ideas into polished content using voice or text. AI is right there where you already work, so you can move faster, stay in your flow, and make better decisions without switching context, opening other apps, or moving to the browser. And if you do have a Copilot+ PC, you can use fluid voice dictation across apps, find files with natural language search, take action on anything on your screen, and refine writing anywhere, even offline. Jeremy Chapman, Microsoft 365 Director, shows how, whether you're planning projects, collaborating with teammates, or building solutions, you can move faster, stay focused, and turn context into real outcomes.

Stop searching across apps. New Copilot capabilities in Windows Search understand your work context and surface answers using data from your Microsoft 365 environment. Get started with Copilot experiences in Windows 11.

Run AI tasks without interrupting your workflow. Agents stay visible and trackable in the Windows 11 taskbar. Watch here.

Interact with content on your screen using Click to Do. Extract text, send content to Microsoft 365 Copilot, or convert a static table into a usable Excel file. Take a look.

QUICK LINKS:
00:00 — Ask Copilot
00:55 — Use voice with Copilot
02:30 — Agents on Windows 11 taskbar
04:19 — Copilot in File Explorer
05:19 — Copilot+ PC capabilities
07:04 — Click to Do
07:52 — Writing Assistance with Copilot
09:15 — Wrap up

Link References
Check out https://aka.ms/Windows11AI

Unfamiliar with Microsoft Mechanics? As Microsoft's official video series for IT, you can watch and share valuable content and demos of current and upcoming tech from the people who build it at Microsoft.
Subscribe to our YouTube: https://www.youtube.com/c/MicrosoftMechanicsSeries
Talk with other IT Pros, join us on the Microsoft Tech Community: https://techcommunity.microsoft.com/t5/microsoft-mechanics-blog/bg-p/MicrosoftMechanicsBlog
Watch or listen from anywhere, subscribe to our podcast: https://microsoftmechanics.libsyn.com/podcast
Keep getting this insider knowledge, join us on social:
Follow us on Twitter: https://twitter.com/MSFTMechanics
Share knowledge on LinkedIn: https://www.linkedin.com/company/microsoft-mechanics/
Enjoy us on Instagram: https://www.instagram.com/msftmechanics/
Loosen up with us on TikTok: https://www.tiktok.com/@msftmechanics

Video Transcript:

-Windows does a lot more than launch and run apps. Now with built-in AI, it can do much more for you, and you don't need special skills to make that happen. There are capabilities that light up on any hardware that runs Windows 11, and some that go even further on Copilot+ PCs with on-device AI processing. Let's go ahead and start with what anyone running Windows 11 can use right now. So to pull up AI experiences with advanced reasoning, you'll start with the Search box in the taskbar, where your familiar search still remains the same, but now you can also use it with AI prompts. So here, I'll type, "When is my performance review due?" And by drawing on information from my Microsoft Teams and Outlook calendar, Copilot identifies my performance review meeting so I can prepare for it. It's an experience powered by Microsoft 365 Copilot, which uses Work IQ to understand my work context. -Next, with AI it's often easier just to say what you want, and you can now use voice with Microsoft 365 Copilot because of its multi-modal support. You'll use a long press on the Copilot key, or Windows key + C if your device doesn't have that, to activate voice control. And now I can interact with Copilot to help build a presentation that I'm working on.
So for this slide I'll ask, "Robin sent me a stat about incorporating organic design. I think it was in email, maybe Teams?" It takes my voice command and uses Work IQ with Microsoft 365 Copilot to run intelligent searches, figuring out which Robin I mean while pulling in relevant context and shared information about the meeting from across my Microsoft 365 apps.

- [Copilot] Robin mentioned that incorporating organic design has been shown to boost employee creativity by 15%. That's a pretty cool stat!

- Not bad. Can you turn that into a catchy statement on this slide here?

- [Copilot] Absolutely. How about this? Creativity matters. Create the space for it.

- Love it. I'll need Amber to sign off on this. So when's my next meeting with Amber?

- [Copilot] Your next one-on-one with Amber is on Thursday at 10:30 in the morning. That should be a great time to review it together.

- [Jeremy] Thanks, Copilot.

- [Copilot] You got it. Happy to help. Let me know if there's anything else you want to fine-tune before that one-on-one.

-This uses advanced speech-to-text and tightly integrates on-device input with cloud AI, so it works on any connected Windows 11 device. Now let's try something more challenging. Some AI tasks take longer than a quick prompt-and-response, and some need to run in parallel while you keep working. That's where Agents on the Windows 11 taskbar can help. So I'm going to start by tapping into the new Windows Search box. Now, this uses new Windows shell integration, so that long-running agents can be viewed similar to apps. I just need to start with the @ symbol to pull up my agents. Now I can find, open, monitor, and work with my agents directly from the taskbar. So in this case, I'm going to choose the Researcher agent. I'll ask Researcher to compare public sentiment with our design principles. I like the direction it's thinking, so I'll go ahead and confirm.
And this agent works hard, often for 10 minutes or more, to research and generate its content. And you can work on other things or with other agents while each performs its work. -As agents run, there are status indicators directly on the taskbar, similar to when you download large files, where you can track progress and see when it's complete. So, your agents stay visible and easy to check on as you work, not buried in browser tabs. Now let's return to our completed Researcher run. The notification tells me that Researcher is finished with this turn, and in the taskbar I can even see a green checkmark on the Researcher icon. When I zoom in, there's a short summary. And I can tap in to review it. -Now, this actually took around eight or so minutes to process in real time. Everything here was grounded using Work IQ for information that was in my company. And you'll see its answer is very well-informed and extremely comprehensive: using our study of public sentiment vs. core design principles, it lays out its reasoning and all of its cited sources. Of course, Windows is also where you can go to find and open your files, and now your SharePoint and OneDrive cloud files will show up right inside File Explorer. Using File Explorer Home, you can easily get to your recent files, your favorites, and files shared with you. -Then the new Copilot control lets you ask Microsoft 365 Copilot for file insights like summaries, context, or next steps for documents. So for this Design Principles doc here, I'll ask Copilot to review it and tell me what percentage of employees prefer workspaces that incorporate sustainable materials. And in just a few seconds, based on information deeply nested within that document, it finds that over 70% say they do and even provides supporting context. So, you don't have to open the file or leave your flow to find the right one, whether that's local or in the cloud.
And everything I've shown so far works on any Windows 11 device with a Microsoft 365 work or school account and access to Copilot. -Now let's look at what's unique to Copilot+ PCs, where on-device AI and small language models deliver fast, private processing. So I'll highlight a few of the capabilities that work on a Copilot+ PC even if you don't have Microsoft 365. First, the new Fluid Dictation works across all apps and uses on-device models for quicker, more natural voice typing. You can enable voice access in Settings, which on first run guides you through the experience and what it can do to interact with Windows. -So I'm going to show an experience working across two common text editors, Notepad and Word. You can start it using either the microphone icon in the taskbar, or by saying, "Voice access, wake up. Open Notepad." It uses powerful AI running on your local device to automatically correct grammar, add punctuation, and, um, even remove filler words that you, uh, speak. Select all. Copy. Open Word. Paste. And that was just scratching the surface of what Voice access with Fluid Dictation can do. And here are some of the common commands that you can use to interact with Windows and your apps. -Second, to help you quickly find your files anywhere, improved Windows search uses semantic understanding across local files and Microsoft 365. You don't need exact names, just describe what you remember. For example, this broad search here for project updates pulls up relevant files and folders of content using hybrid semantic search, and they might contain the word project or maybe synonyms, or contain related content in context of the files or even images within the files. -Next, Click to Do lets you interact with anything on your screen. You can take actions on content or ask Microsoft 365 Copilot a question about what's on your screen without needing to switch context.
So in this case, I'm going to pull up this PDF file and you'll see that it opens the file in the Edge browser. Now, if I scroll down, you can see that I have a stylized table on my screen, which, by the way, could be text or an image. So I'll hit the Windows key + left mouse click to open Click to Do. And you can also use Windows key + Q. Now you'll see that it's recognizing all of the text in the screenshot. I can copy it as a CSV, save it, or share it. I'll use Convert to table with Excel. And it instantly opens Excel and becomes a usable table, and you can work directly with the data. -From here, if you also use Microsoft 365 at work or school with a Copilot+ PC, even more powerful capabilities light up. Writing Assistance with Microsoft 365 Copilot helps you quickly craft content with AI-powered rewriting and proofreading, and because it runs locally, it even works offline. This enables you to use generative AI from any app with text field input. So I'm going to go ahead and use our line-of-business app here for project planning. There's a description and business justification field, and I'll add a bit more detail here. -And this works everywhere, kind of like your clipboard, so when I select text, the Writing Assistance button appears. Now with it, I can choose options to rewrite it in different ways. In this case, I'll choose professional. It rewrites my text entry and then gives me three options. So I'll go ahead and choose the third option here. I like that one, so I'll replace my previous text with it. And that can be used on any line-of-business or other app without any code changes because it's just built into Windows. -And finally, if you are a developer, new native support for the Model Context Protocol in Windows gives your agents a standardized way to connect with apps, tools, and files to automate tasks.
You can use built-in agent connectors for File Explorer and Windows Settings, allowing your agents to manage local file operations and to modify defined device configurations. -Windows 11's built-in AI moves the intelligence closer to you, right in the flow of your work. To learn more, check out aka.ms/Windows11AI and keep watching Microsoft Mechanics for the latest updates. Thanks for watching.

Building with Azure OpenAI Sora: A Complete Guide to AI Video Generation
In this comprehensive guide, we'll explore how to integrate both Sora 1 and Sora 2 models from Azure OpenAI Service into a production web application. We'll cover API integration, request body parameters, cost analysis, limitations, and the key differences between using Azure AI Foundry endpoints versus OpenAI's native API.

Table of Contents

Introduction to Sora Models
Azure AI Foundry vs. OpenAI API Structure
API Integration: Request Body Parameters
Video Generation Modes
Cost Analysis per Generation
Technical Limitations & Constraints
Resolution & Duration Support
Implementation Best Practices

Introduction to Sora Models

Sora is OpenAI's groundbreaking text-to-video model that generates realistic videos from natural language descriptions. Azure AI Foundry provides access to two versions:

Sora 1: The original model, focused primarily on text-to-video generation, with extensive resolution options (480p to 1080p) and flexible duration (1-20 seconds)
Sora 2: The enhanced version, with native audio generation and multiple generation modes (text-to-video, image-to-video, video-to-video remix), but more constrained resolution options (720p only in public preview)

Azure AI Foundry vs. OpenAI API Structure

Key Architectural Differences

Sora 1 uses Azure's traditional deployment-based API structure:

Endpoint Pattern: https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/...
Parameters: Uses Azure-specific naming like n_seconds, n_variants, and separate width/height fields
Job Management: Uses /jobs/{id} for status polling
Content Download: Uses /video/generations/{generation_id}/content/video

Sora 2 adapts OpenAI's v1 API format while still being hosted on Azure:

Endpoint Pattern: https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/videos
Parameters: Uses OpenAI-style naming like seconds (string) and size (a combined dimension string like "1280x720")
Job Management: Uses /videos/{video_id} for status polling
Content Download: Uses /videos/{video_id}/content

Why This Matters

This architectural difference requires conditional request formatting in your code:

const isSora2 = deployment.toLowerCase().includes('sora-2');

if (isSora2) {
  requestBody = {
    model: deployment,
    prompt,
    size: `${width}x${height}`,        // Combined format
    seconds: duration.toString(),      // String type
  };
} else {
  requestBody = {
    model: deployment,
    prompt,
    height,                            // Separate dimensions
    width,
    n_seconds: duration.toString(),    // Azure naming
    n_variants: variants,
  };
}

API Integration: Request Body Parameters

Sora 1 API Parameters

Standard Text-to-Video Request:

{
  "model": "sora-1",
  "prompt": "Wide shot of a child flying a red kite in a grassy park, golden hour sunlight, camera slowly pans upward.",
  "height": "720",
  "width": "1280",
  "n_seconds": "12",
  "n_variants": "2"
}

Parameter Details:

model (String, Required): Your Azure deployment name
prompt (String, Required): Natural language description of the video (max 32000 chars)
height (String, Required): Video height in pixels
width (String, Required): Video width in pixels
n_seconds (String, Required): Duration (1-20 seconds)
n_variants (String, Optional): Number of variations to generate (1-4, constrained by resolution)

Sora 2 API Parameters

Text-to-Video Request:

{
  "model": "sora-2",
  "prompt": "A serene mountain landscape with cascading waterfalls, cinematic drone shot",
  "size": "1280x720",
  "seconds": "12"
}

Image-to-Video Request (uses FormData):

const formData = new FormData();
formData.append('model', 'sora-2');
formData.append('prompt', 'Animate this image with gentle wind movement');
formData.append('size', '1280x720');
formData.append('seconds', '8');
formData.append('input_reference', imageFile); // JPEG/PNG/WebP

Video-to-Video Remix Request:

Endpoint: POST .../videos/{video_id}/remix
Body: Only { "prompt": "your new description" }
The original video's structure, motion, and framing are reused while applying the new prompt.

Parameter Details:

model (String, Optional): Your deployment name
prompt (String, Required): Video description
size (String, Optional): Either "720x1280" or "1280x720" (defaults to "720x1280")
seconds (String, Optional): "4", "8", or "12" (defaults to "4")
input_reference (File, Optional): Reference image for image-to-video mode
remix_video_id (String, URL parameter): ID of the video to remix

Video Generation Modes

1. Text-to-Video (Both Models)

The foundational mode, where you provide a text prompt describing the desired video.

Implementation:

const response = await fetch(endpoint, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'api-key': apiKey,
  },
  body: JSON.stringify({
    model: deployment,
    prompt: "A train journey through mountains with dramatic lighting",
    size: "1280x720",
    seconds: "12",
  }),
});

Best Practices:

Include shot type (wide, close-up, aerial)
Describe subject, action, and environment
Specify lighting conditions (golden hour, dramatic, soft)
Add camera movement if desired (pans, tilts, tracking shots)

2. Image-to-Video (Sora 2 Only)

Generate a video anchored to or starting from a reference image.
Key Requirements:

Supported formats: JPEG, PNG, WebP
Image dimensions must exactly match the selected video resolution
Our implementation automatically resizes uploaded images to match

Implementation Detail:

// Resize image to match video dimensions
const targetWidth = parseInt(width);
const targetHeight = parseInt(height);
const resizedImage = await resizeImage(inputReference, targetWidth, targetHeight);

// Send as multipart/form-data
formData.append('input_reference', resizedImage);

3. Video-to-Video Remix (Sora 2 Only)

Create variations of existing videos while preserving their structure and motion.

Use Cases:

Change weather conditions in the same scene
Modify time of day while keeping camera movement
Swap subjects while maintaining composition
Adjust artistic style or color grading

Endpoint Structure:

POST {base_url}/videos/{original_video_id}/remix?api-version=2024-08-01-preview

Implementation:

let requestEndpoint = endpoint;
if (isSora2 && remixVideoId) {
  const [baseUrl, queryParams] = endpoint.split('?');
  const root = baseUrl.replace(/\/videos$/, '');
  requestEndpoint = `${root}/videos/${remixVideoId}/remix${queryParams ? '?' + queryParams : ''}`;
}

Cost Analysis per Generation

Sora 1 Pricing Model

Base Rate: ~$0.05 per second per variant at 720p
Resolution Scaling: Cost scales linearly with pixel count

Formula:

const basePrice = 0.05;
const basePixels = 1280 * 720; // Reference resolution
const currentPixels = width * height;
const resolutionMultiplier = currentPixels / basePixels;
const totalCost = basePrice * duration * variants * resolutionMultiplier;

Examples:

720p (1280×720), 12 seconds, 1 variant: $0.60
1080p (1920×1080), 12 seconds, 1 variant: $1.35
720p, 12 seconds, 2 variants: $1.20

Sora 2 Pricing Model

Flat Rate: $0.10 per second per variant (no resolution scaling in public preview)

Formula:

const totalCost = 0.10 * duration * variants;

Examples:

720p (1280×720), 4 seconds: $0.40
720p (1280×720), 12 seconds: $1.20
720p (720×1280), 8 seconds: $0.80

Note: Since Sora 2 currently only supports 720p in public preview, resolution doesn't affect cost; only duration matters.

Cost Comparison

Scenario        | Sora 1 (720p)   | Sora 2 (720p) | Winner
4s video        | $0.20           | $0.40         | Sora 1
12s video       | $0.60           | $1.20         | Sora 1
12s + audio     | N/A (no audio)  | $1.20         | Sora 2 (unique)
Image-to-video  | N/A             | $0.40-$1.20   | Sora 2 (unique)

Recommendation: Use Sora 1 for cost-effective silent videos at various resolutions. Use Sora 2 when you need audio, image/video inputs, or remix capabilities.
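The two pricing formulas above can be folded into a single helper so your app can show an estimate before a job is submitted. This is a sketch based only on the rates listed in this section; the `estimateCost` name and the rounding to whole cents are my own choices, not part of the Azure API:

```typescript
// Estimate the cost of one generation, using the rates described above:
// - Sora 1: ~$0.05 per second per variant at 720p, scaling linearly with pixel count
// - Sora 2: flat $0.10 per second per variant (720p only in public preview)
function estimateCost(
  model: 'sora-1' | 'sora-2',
  width: number,
  height: number,
  seconds: number,
  variants: number = 1,
): number {
  let cost: number;
  if (model === 'sora-2') {
    cost = 0.10 * seconds * variants;
  } else {
    const basePixels = 1280 * 720; // reference resolution for the $0.05/s base rate
    const multiplier = (width * height) / basePixels;
    cost = 0.05 * seconds * variants * multiplier;
  }
  return Math.round(cost * 100) / 100; // round to whole cents
}

console.log(estimateCost('sora-1', 1280, 720, 12));    // → 0.6
console.log(estimateCost('sora-1', 1920, 1080, 12));   // → 1.35
console.log(estimateCost('sora-1', 1280, 720, 12, 2)); // → 1.2
console.log(estimateCost('sora-2', 1280, 720, 12));    // → 1.2
```

These figures match the examples in the tables above; if Microsoft revises preview pricing, only the two rate constants need to change.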
Technical Limitations & Constraints

Sora 1 Limitations

Resolution Options:
9 supported resolutions, from 480×480 to 1920×1080
Includes square, portrait, and landscape formats
Full list: 480×480, 480×854, 854×480, 720×720, 720×1280, 1280×720, 1080×1080, 1080×1920, 1920×1080

Duration:
Flexible: 1 to 20 seconds
Any integer value within range

Variants:
Depends on resolution:
1080p: Variants disabled (n_variants must be 1)
720p: Max 2 variants
Other resolutions: Max 4 variants

Concurrent Jobs: Maximum 2 jobs running simultaneously
Job Expiration: Videos expire 24 hours after generation
Audio: No audio generation (silent videos only)

Sora 2 Limitations

Resolution Options (Public Preview):
Only 2 options: 720×1280 (portrait) or 1280×720 (landscape)
No square formats
No 1080p support in the current preview

Duration:
Fixed options only: 4, 8, or 12 seconds
No custom durations
Defaults to 4 seconds if not specified

Variants:
Not prominently supported in the current API documentation
Focus is on single high-quality generations with audio

Concurrent Jobs: Maximum 2 jobs (same as Sora 1)
Job Expiration: 24 hours (same as Sora 1)
Audio: Native audio generation included (dialogue, sound effects, ambience)

Shared Constraints

Concurrent Processing: Both models enforce a limit of 2 concurrent video jobs per Azure resource. You must wait for one job to complete before starting a third.

Job Lifecycle: queued → preprocessing → processing/running → completed

Download Window: Videos are available for 24 hours after completion. After expiration, you must regenerate the video.
Generation Time:
Typical: 1-5 minutes depending on resolution, duration, and API load
Can occasionally take longer during high demand

Resolution & Duration Support Matrix

Sora 1 Support Matrix

Resolution | Aspect Ratio | Max Variants | Duration Range | Use Case
480×480    | Square       | 4            | 1-20s          | Social thumbnails
480×854    | Portrait     | 4            | 1-20s          | Mobile stories
854×480    | Landscape    | 4            | 1-20s          | Quick previews
720×720    | Square       | 4            | 1-20s          | Instagram posts
720×1280   | Portrait     | 2            | 1-20s          | TikTok/Reels
1280×720   | Landscape    | 2            | 1-20s          | YouTube shorts
1080×1080  | Square       | 1            | 1-20s          | Premium social
1080×1920  | Portrait     | 1            | 1-20s          | Premium vertical
1920×1080  | Landscape    | 1            | 1-20s          | Full HD content

Sora 2 Support Matrix

Resolution | Aspect Ratio | Duration Options | Audio  | Generation Modes
720×1280   | Portrait     | 4s, 8s, 12s      | ✅ Yes | Text, Image, Video Remix
1280×720   | Landscape    | 4s, 8s, 12s      | ✅ Yes | Text, Image, Video Remix

Note: Sora 2's limited resolution options in public preview are expected to expand in future releases.

Implementation Best Practices

1. Job Status Polling Strategy

Implement adaptive backoff to avoid overwhelming the API:

const maxAttempts = 180; // 15 minutes max
let attempts = 0;
const baseDelayMs = 3000; // Start with 3 seconds

while (attempts < maxAttempts) {
  const response = await fetch(statusUrl, {
    headers: { 'api-key': apiKey },
  });

  if (response.status === 404) {
    // Job not ready yet, wait longer
    const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
    await new Promise(r => setTimeout(r, delayMs));
    attempts++;
    continue;
  }

  const job = await response.json();

  // Check completion (different status values for Sora 1 vs 2)
  const isCompleted = isSora2
    ? job.status === 'completed'
    : job.status === 'succeeded';
  if (isCompleted) break;

  // Adaptive backoff
  const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
  await new Promise(r => setTimeout(r, delayMs));
  attempts++;
}

2. Handling Different Response Structures

Sora 1 Video Download:

const generations = Array.isArray(job.generations) ? job.generations : [];
const genId = generations[0]?.id;
const videoUrl = `${root}/${genId}/content/video`;

Sora 2 Video Download:

const videoUrl = `${root}/videos/${jobId}/content`;

3. Error Handling

try {
  const response = await fetch(endpoint, fetchOptions);
  if (!response.ok) {
    const error = await response.text();
    throw new Error(`Video generation failed: ${error}`);
  }
  // ... handle successful response
} catch (error) {
  console.error('[VideoGen] Error:', error);
  // Implement retry logic or user notification
}

4. Image Preprocessing for Image-to-Video

Always resize images to match the target video resolution:

async function resizeImage(file: File, targetWidth: number, targetHeight: number): Promise<File> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');
    img.onload = () => {
      canvas.width = targetWidth;
      canvas.height = targetHeight;
      ctx.drawImage(img, 0, 0, targetWidth, targetHeight);
      canvas.toBlob((blob) => {
        if (blob) {
          const resizedFile = new File([blob], file.name, { type: file.type });
          resolve(resizedFile);
        } else {
          reject(new Error('Failed to create resized image blob'));
        }
      }, file.type);
    };
    img.onerror = () => reject(new Error('Failed to load image'));
    img.src = URL.createObjectURL(file);
  });
}

5. Cost Tracking

Implement cost estimation before generation and tracking after:

// Pre-generation estimate
const estimatedCost = calculateCost(width, height, duration, variants, soraVersion);

// Save generation record
await saveGenerationRecord({
  prompt,
  soraModel: soraVersion,
  duration: parseInt(duration),
  resolution: `${width}x${height}`,
  variants: parseInt(variants),
  generationMode: mode,
  estimatedCost,
  status: 'queued',
  jobId: job.id,
});

// Update after completion
await updateGenerationStatus(jobId, 'completed', { videoId: finalVideoId });

6. Progressive User Feedback

Provide detailed status updates during the generation process:

const statusMessages: Record<string, string> = {
  'preprocessing': 'Preprocessing your request...',
  'running': 'Generating video...',
  'processing': 'Processing video...',
  'queued': 'Job queued...',
  'in_progress': 'Generating video...',
};

onProgress?.(statusMessages[job.status] || `Status: ${job.status}`);

Conclusion

Building with Azure OpenAI's Sora models requires understanding the nuanced differences between Sora 1 and Sora 2, both in API structure and capabilities. Key takeaways:

Choose the right model: Sora 1 for resolution flexibility and cost-effectiveness; Sora 2 for audio, image inputs, and remix capabilities
Handle API differences: Implement conditional logic for parameter formatting and status polling based on model version
Respect limitations: Plan around concurrent job limits, resolution constraints, and 24-hour expiration windows
Optimize costs: Calculate estimates upfront and track actual usage for better budget management
Provide great UX: Implement adaptive polling, progressive status updates, and clear error messages

The future of AI video generation is exciting, and Azure AI Foundry provides production-ready access to these powerful models. As Sora 2 matures and limitations are lifted (especially resolution options), we'll see even more creative applications emerge.

Resources:
Azure AI Foundry Sora Documentation
OpenAI Sora API Reference
Azure OpenAI Service Pricing

This blog post is based on real-world implementation experience building LemonGrab, my AI video generation platform that integrates both Sora 1 and Sora 2 through Azure AI Foundry. The code examples are extracted from production usage.

How I can app for my Bonus card on Microsoft
Applying for your Bonus Card on Microsoft platforms is simple and convenient. You can access exclusive deals, track your savings, and manage your purchases seamlessly by integrating your Bonus Card with Microsoft services. Visit the official Microsoft Store or AppSource to download the Bonus Card application and start saving instantly. Stay connected with the latest offers by linking your card to Microsoft Rewards for additional benefits. http://www.bonusah.nl

AI Agents in Production: From Prototype to Reality - Part 10
This blog post, the tenth and final installment in a series on AI agents, focuses on deploying AI agents to production. It covers evaluating agent performance, addressing common issues, and managing costs. The post emphasizes the importance of a robust evaluation system, providing potential solutions for performance issues, and outlining cost management strategies such as response caching, using smaller models, and implementing router models.

Webinar Series for Microsoft AI Agents
Join us for an exciting and insightful webinar series where we delve into the revolutionary world of Microsoft Copilot Agents in SharePoint, Agent builder, Copilot Studio, and Azure AI Foundry! Discover how the integration of AI and intelligent agents is set to transform the future of business processes, making them more efficient, intelligent, and adaptive. In this webinar series, we will explore:

The Power of Microsoft Copilot Agents: Learn how these advanced AI-driven agents can assist you in automating routine tasks, providing intelligent insights, and enhancing collaboration within your organization.
Seamless Integration with Microsoft Graph: See how Copilot Agents work seamlessly with Microsoft Graph data to improve information retrieval, boost productivity, and automate mundane tasks.
Real-World Applications: See real-world examples of how businesses are leveraging Copilot Agents to drive innovation and achieve their goals.
Future Trends and Innovations: Get a glimpse into the future of AI in business processes and how it will continue to evolve and shape the way we work.

Join us for the webinars every week, at 11:30am PST/1:30pm CST/2:30pm EST. (Click on the webinar name to join the live meeting on the actual date/time, or use the .ics file at the bottom of the page to save the date on your calendar.)

April 2nd: Agents with SharePoint - Watch this webinar recording for an overview of SharePoint Agents and its key capabilities to enable your organization with powerful agents, helping you search for information within seconds in large SharePoint libraries with hundreds of documents.

April 9th: Agents with Agent Builder - Watch this webinar recording for an overview of Agent Builder and its key capabilities to enable your organization with "no code" agents that can be created by any business user within minutes.
April 16th: Agents with Copilot Studio - Join us for an overview of Copilot Studio and its key capabilities to enable your organization with "low code" agents that can help create efficiency with existing business processes. We will feature a few real-life demo examples and answer any questions.

April 24th: Agents with Azure AI Foundry - Join us for an overview of Azure AI Foundry and its key capabilities to enable your organization with AI agents. We will feature a demo of AI agents for prior authorization and provide resources to accelerate your next project.

Don't miss this opportunity to stay ahead of the curve and unlock the full potential of AI and Copilot Agents in your organization. Register now and be part of the future of business transformation!

Speakers:
Jaspreet Dhamija, Sr. MW Copilot Specialist - LinkedIn
Michael Gannotti, Principal MW Copilot Specialist - LinkedIn
Melissa Nelli, Sr. Biz Apps Technical Specialist - LinkedIn
Matthew Anderson, Director Azure Apps - LinkedIn
Marcin Jimenez, Sr. Cloud Solution Architect - LinkedIn

Thank you!

What runs GPT-4o and Microsoft Copilot? | Largest AI supercomputer in the cloud | Mark Russinovich
Microsoft has built the world's largest cloud-based AI supercomputer that is already exponentially bigger than it was just 6 months ago, paving the way for a future with agentic systems.