# Building with Azure OpenAI Sora: A Complete Guide to AI Video Generation
In this comprehensive guide, we'll explore how to integrate both the Sora 1 and Sora 2 models from Azure OpenAI Service into a production web application. We'll cover API integration, request body parameters, cost analysis, limitations, and the key differences between using Azure AI Foundry endpoints versus OpenAI's native API.

## Table of Contents

1. Introduction to Sora Models
2. Azure AI Foundry vs. OpenAI API Structure
3. API Integration: Request Body Parameters
4. Video Generation Modes
5. Cost Analysis per Generation
6. Technical Limitations & Constraints
7. Resolution & Duration Support
8. Implementation Best Practices

## Introduction to Sora Models

Sora is OpenAI's groundbreaking text-to-video model that generates realistic videos from natural language descriptions. Azure AI Foundry provides access to two versions:

- **Sora 1**: The original model, focused primarily on text-to-video generation, with extensive resolution options (480p to 1080p) and flexible duration (1-20 seconds)
- **Sora 2**: The enhanced version with native audio generation and multiple generation modes (text-to-video, image-to-video, video-to-video remix), but more constrained resolution options (720p only in public preview)

## Azure AI Foundry vs. OpenAI API Structure

### Key Architectural Differences

Sora 1 uses Azure's traditional deployment-based API structure:

- **Endpoint pattern**: `https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/...`
- **Parameters**: Azure-specific naming such as `n_seconds` and `n_variants`, with separate `width`/`height` fields
- **Job management**: `/jobs/{id}` for status polling
- **Content download**: `/video/generations/{generation_id}/content/video`

Sora 2 adopts OpenAI's v1 API format while still being hosted on Azure:

- **Endpoint pattern**: `https://{resource-name}.openai.azure.com/openai/deployments/{deployment-name}/videos`
- **Parameters**: OpenAI-style naming such as `seconds` (string) and `size` (a combined dimension string like `"1280x720"`)
- **Job management**: `/videos/{video_id}` for status polling
- **Content download**: `/videos/{video_id}/content`

### Why This Matters

This architectural difference requires conditional request formatting in your code:

```javascript
const isSora2 = deployment.toLowerCase().includes('sora-2');

if (isSora2) {
  requestBody = {
    model: deployment,
    prompt,
    size: `${width}x${height}`,   // Combined format
    seconds: duration.toString(), // String type
  };
} else {
  requestBody = {
    model: deployment,
    prompt,
    height,                         // Separate dimensions
    width,
    n_seconds: duration.toString(), // Azure naming
    n_variants: variants,
  };
}
```

## API Integration: Request Body Parameters

### Sora 1 API Parameters

Standard text-to-video request:

```json
{
  "model": "sora-1",
  "prompt": "Wide shot of a child flying a red kite in a grassy park, golden hour sunlight, camera slowly pans upward.",
  "height": "720",
  "width": "1280",
  "n_seconds": "12",
  "n_variants": "2"
}
```

Parameter details:

- `model` (string, required): Your Azure deployment name
- `prompt` (string, required): Natural language description of the video (max 32,000 characters)
- `height` (string, required): Video height in pixels
- `width` (string, required): Video width in pixels
- `n_seconds` (string, required): Duration (1-20 seconds)
- `n_variants` (string, optional): Number of variations to generate (1-4, constrained by resolution)

### Sora 2 API Parameters

Text-to-video request:

```json
{
  "model": "sora-2",
  "prompt": "A serene mountain landscape with cascading waterfalls, cinematic drone shot",
  "size": "1280x720",
  "seconds": "12"
}
```

Image-to-video request (uses FormData):

```javascript
const formData = new FormData();
formData.append('model', 'sora-2');
formData.append('prompt', 'Animate this image with gentle wind movement');
formData.append('size', '1280x720');
formData.append('seconds', '8');
formData.append('input_reference', imageFile); // JPEG/PNG/WebP
```

Video-to-video remix request:

- Endpoint: `POST .../videos/{video_id}/remix`
- Body: only `{ "prompt": "your new description" }`
- The original video's structure, motion, and framing are reused while applying the new prompt

Parameter details:

- `model` (string, optional): Your deployment name
- `prompt` (string, required): Video description
- `size` (string, optional): Either `"720x1280"` or `"1280x720"` (defaults to `"720x1280"`)
- `seconds` (string, optional): `"4"`, `"8"`, or `"12"` (defaults to `"4"`)
- `input_reference` (file, optional): Reference image for image-to-video mode
- `remix_video_id` (string, URL parameter): ID of the video to remix

## Video Generation Modes

### 1. Text-to-Video (Both Models)

The foundational mode, where you provide a text prompt describing the desired video.

Implementation:

```javascript
const response = await fetch(endpoint, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'api-key': apiKey,
  },
  body: JSON.stringify({
    model: deployment,
    prompt: "A train journey through mountains with dramatic lighting",
    size: "1280x720",
    seconds: "12",
  }),
});
```

Best practices:

- Include shot type (wide, close-up, aerial)
- Describe subject, action, and environment
- Specify lighting conditions (golden hour, dramatic, soft)
- Add camera movement if desired (pans, tilts, tracking shots)

### 2. Image-to-Video (Sora 2 Only)

Generate a video anchored to or starting from a reference image.
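Because Sora 2 rejects reference images whose dimensions don't match the selected video resolution, a cheap pre-flight check before upload can save a failed round trip. Here's a sketch; `checkReferenceImage` is a hypothetical helper (not part of any SDK), and the allowed sizes are the public-preview values quoted in this article:

```javascript
// Sora 2 public preview accepts only these two size strings (assumption
// based on the constraints described in this article; subject to change).
const SORA2_SIZES = ["720x1280", "1280x720"];

// Hypothetical pre-flight check: returns { ok } or { ok, reason } so the
// caller can resize or reject the image before building the FormData body.
function checkReferenceImage(imageWidth, imageHeight, size) {
  if (!SORA2_SIZES.includes(size)) {
    return { ok: false, reason: `unsupported size "${size}"` };
  }
  const [w, h] = size.split("x").map(Number);
  if (imageWidth !== w || imageHeight !== h) {
    return {
      ok: false,
      reason: `image is ${imageWidth}x${imageHeight}, expected ${w}x${h}; resize before upload`,
    };
  }
  return { ok: true };
}
```

If the check fails, you can fall back to a client-side resize (as shown in the implementation detail that follows) rather than letting the API reject the request.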
**Key requirements:**

- Supported formats: JPEG, PNG, WebP
- Image dimensions must exactly match the selected video resolution
- Our implementation automatically resizes uploaded images to match

Implementation detail:

```javascript
// Resize the image to match the video dimensions
const targetWidth = parseInt(width);
const targetHeight = parseInt(height);
const resizedImage = await resizeImage(inputReference, targetWidth, targetHeight);

// Send as multipart/form-data
formData.append('input_reference', resizedImage);
```

### 3. Video-to-Video Remix (Sora 2 Only)

Create variations of existing videos while preserving their structure and motion.

Use cases:

- Change weather conditions in the same scene
- Modify time of day while keeping camera movement
- Swap subjects while maintaining composition
- Adjust artistic style or color grading

Endpoint structure:

```
POST {base_url}/videos/{original_video_id}/remix?api-version=2024-08-01-preview
```

Implementation:

```javascript
let requestEndpoint = endpoint;
if (isSora2 && remixVideoId) {
  const [baseUrl, queryParams] = endpoint.split('?');
  const root = baseUrl.replace(/\/videos$/, '');
  requestEndpoint = `${root}/videos/${remixVideoId}/remix${queryParams ? '?' + queryParams : ''}`;
}
```

## Cost Analysis per Generation

### Sora 1 Pricing Model

- **Base rate**: ~$0.05 per second per variant at 720p
- **Resolution scaling**: Cost scales linearly with pixel count

Formula:

```javascript
const basePrice = 0.05;
const basePixels = 1280 * 720; // Reference resolution
const currentPixels = width * height;
const resolutionMultiplier = currentPixels / basePixels;
const totalCost = basePrice * duration * variants * resolutionMultiplier;
```

Examples:

- 720p (1280×720), 12 seconds, 1 variant: $0.60
- 1080p (1920×1080), 12 seconds, 1 variant: $1.35
- 720p, 12 seconds, 2 variants: $1.20

### Sora 2 Pricing Model

- **Flat rate**: $0.10 per second per variant (no resolution scaling in public preview)

Formula:

```javascript
const totalCost = 0.10 * duration * variants;
```

Examples:

- 720p (1280×720), 4 seconds: $0.40
- 720p (1280×720), 12 seconds: $1.20
- 720p (720×1280), 8 seconds: $0.80

Note: Since Sora 2 currently only supports 720p in public preview, resolution doesn't affect cost; only duration and variant count matter.

### Cost Comparison

| Scenario | Sora 1 (720p) | Sora 2 (720p) | Winner |
| --- | --- | --- | --- |
| 4s video | $0.20 | $0.40 | Sora 1 |
| 12s video | $0.60 | $1.20 | Sora 1 |
| 12s + audio | N/A (no audio) | $1.20 | Sora 2 (unique) |
| Image-to-video | N/A | $0.40-$1.20 | Sora 2 (unique) |

**Recommendation**: Use Sora 1 for cost-effective silent videos at various resolutions. Use Sora 2 when you need audio, image/video inputs, or remix capabilities.
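The two pricing formulas above can be folded into a single helper for pre-generation estimates. This is a sketch under the rates quoted in this article (preview pricing, likely to change); `estimateSoraCost` is a hypothetical function name, not an SDK API:

```javascript
// Estimate generation cost from the pricing rules described above.
// Rates are this article's public-preview figures, not authoritative.
function estimateSoraCost({ model, width, height, seconds, variants = 1 }) {
  if (model === "sora-2") {
    // Flat $0.10/s per variant; resolution is fixed at 720p in preview.
    return 0.10 * seconds * variants;
  }
  // Sora 1: ~$0.05/s per variant at 720p, scaled linearly by pixel count.
  const basePrice = 0.05;
  const resolutionMultiplier = (width * height) / (1280 * 720);
  return basePrice * seconds * variants * resolutionMultiplier;
}
```

Running this against the examples above reproduces them: a 12-second 1080p Sora 1 video comes out at $1.35 (multiplier 2.25), and a 12-second Sora 2 video at $1.20 regardless of orientation.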
## Technical Limitations & Constraints

### Sora 1 Limitations

- **Resolution options**: 9 supported resolutions from 480×480 to 1920×1080, including square, portrait, and landscape formats. Full list: 480×480, 480×854, 854×480, 720×720, 720×1280, 1280×720, 1080×1080, 1080×1920, 1920×1080
- **Duration**: Flexible, 1 to 20 seconds; any integer value within range
- **Variants**: Depends on resolution. 1080p: variants disabled (`n_variants` must be 1); 720p: max 2 variants; other resolutions: max 4 variants
- **Concurrent jobs**: Maximum 2 jobs running simultaneously
- **Job expiration**: Videos expire 24 hours after generation
- **Audio**: No audio generation (silent videos only)

### Sora 2 Limitations

- **Resolution options (public preview)**: Only 2 options, 720×1280 (portrait) or 1280×720 (landscape); no square formats and no 1080p support in the current preview
- **Duration**: Fixed options only, 4, 8, or 12 seconds; no custom durations; defaults to 4 seconds if not specified
- **Variants**: Not prominently supported in the current API documentation; the focus is on single high-quality generations with audio
- **Concurrent jobs**: Maximum 2 jobs (same as Sora 1)
- **Job expiration**: 24 hours (same as Sora 1)
- **Audio**: Native audio generation included (dialogue, sound effects, ambience)

### Shared Constraints

- **Concurrent processing**: Both models enforce a limit of 2 concurrent video jobs per Azure resource. You must wait for one job to complete before starting a third.
- **Job lifecycle**: queued → preprocessing → processing/running → completed
- **Download window**: Videos are available for 24 hours after completion. After expiration, you must regenerate the video.
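One way to stay under the two-concurrent-job limit client-side is a small semaphore that queues excess requests instead of firing them at the API. A sketch, assuming you wrap each generation call in `slots.run(...)`; `JobSlots` is a hypothetical helper, not part of any Azure SDK:

```javascript
// Minimal client-side guard for the 2-concurrent-job limit described above.
class JobSlots {
  constructor(max = 2) {
    this.max = max;       // concurrent-job cap (2 per Azure resource)
    this.active = 0;      // jobs currently running
    this.waiters = [];    // resolvers for queued jobs
  }

  async acquire() {
    if (this.active < this.max) {
      this.active++;
      return;
    }
    // Queue until a running job releases its slot.
    await new Promise((resolve) => this.waiters.push(resolve));
    this.active++;
  }

  release() {
    this.active--;
    const next = this.waiters.shift();
    if (next) next(); // wake exactly one queued job
  }

  // Run fn while holding a slot; the slot is released even on failure.
  async run(fn) {
    await this.acquire();
    try {
      return await fn();
    } finally {
      this.release();
    }
  }
}
```

Usage would look like `const slots = new JobSlots(2); await slots.run(() => startVideoJob(requestBody));` so a third submission simply waits rather than hitting the API's concurrency error.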
- **Generation time**: Typically 1-5 minutes depending on resolution, duration, and API load; can occasionally take longer during high demand

## Resolution & Duration Support Matrix

### Sora 1 Support Matrix

| Resolution | Aspect Ratio | Max Variants | Duration Range | Use Case |
| --- | --- | --- | --- | --- |
| 480×480 | Square | 4 | 1-20s | Social thumbnails |
| 480×854 | Portrait | 4 | 1-20s | Mobile stories |
| 854×480 | Landscape | 4 | 1-20s | Quick previews |
| 720×720 | Square | 4 | 1-20s | Instagram posts |
| 720×1280 | Portrait | 2 | 1-20s | TikTok/Reels |
| 1280×720 | Landscape | 2 | 1-20s | YouTube shorts |
| 1080×1080 | Square | 1 | 1-20s | Premium social |
| 1080×1920 | Portrait | 1 | 1-20s | Premium vertical |
| 1920×1080 | Landscape | 1 | 1-20s | Full HD content |

### Sora 2 Support Matrix

| Resolution | Aspect Ratio | Duration Options | Audio | Generation Modes |
| --- | --- | --- | --- | --- |
| 720×1280 | Portrait | 4s, 8s, 12s | ✅ Yes | Text, Image, Video Remix |
| 1280×720 | Landscape | 4s, 8s, 12s | ✅ Yes | Text, Image, Video Remix |

Note: Sora 2's limited resolution options in public preview are expected to expand in future releases.

## Implementation Best Practices

### 1. Job Status Polling Strategy

Implement adaptive backoff to avoid overwhelming the API:

```javascript
const maxAttempts = 180;  // Cap on polling attempts
let attempts = 0;
const baseDelayMs = 3000; // Start with 3 seconds

while (attempts < maxAttempts) {
  const response = await fetch(statusUrl, {
    headers: { 'api-key': apiKey },
  });

  if (response.status === 404) {
    // Job not ready yet; wait longer
    const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
    await new Promise(r => setTimeout(r, delayMs));
    attempts++;
    continue;
  }

  const job = await response.json();

  // Check completion (different status values for Sora 1 vs. 2)
  const isCompleted = isSora2
    ? job.status === 'completed'
    : job.status === 'succeeded';
  if (isCompleted) break;

  // Adaptive backoff
  const delayMs = Math.min(15000, baseDelayMs + attempts * 1000);
  await new Promise(r => setTimeout(r, delayMs));
  attempts++;
}
```

### 2. Handling Different Response Structures

Sora 1 video download:

```javascript
const generations = Array.isArray(job.generations) ? job.generations : [];
const genId = generations[0]?.id;
const videoUrl = `${root}/${genId}/content/video`;
```

Sora 2 video download:

```javascript
const videoUrl = `${root}/videos/${jobId}/content`;
```

### 3. Error Handling

```javascript
try {
  const response = await fetch(endpoint, fetchOptions);
  if (!response.ok) {
    const error = await response.text();
    throw new Error(`Video generation failed: ${error}`);
  }
  // ... handle successful response
} catch (error) {
  console.error('[VideoGen] Error:', error);
  // Implement retry logic or user notification
}
```

### 4. Image Preprocessing for Image-to-Video

Always resize images to match the target video resolution:

```typescript
async function resizeImage(file: File, targetWidth: number, targetHeight: number): Promise<File> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    const canvas = document.createElement('canvas');
    const ctx = canvas.getContext('2d');

    img.onload = () => {
      canvas.width = targetWidth;
      canvas.height = targetHeight;
      ctx.drawImage(img, 0, 0, targetWidth, targetHeight);
      canvas.toBlob((blob) => {
        if (blob) {
          const resizedFile = new File([blob], file.name, { type: file.type });
          resolve(resizedFile);
        } else {
          reject(new Error('Failed to create resized image blob'));
        }
      }, file.type);
    };
    img.onerror = () => reject(new Error('Failed to load image'));
    img.src = URL.createObjectURL(file);
  });
}
```

### 5. Cost Tracking

Implement cost estimation before generation and tracking after:

```javascript
// Pre-generation estimate
const estimatedCost = calculateCost(width, height, duration, variants, soraVersion);

// Save generation record
await saveGenerationRecord({
  prompt,
  soraModel: soraVersion,
  duration: parseInt(duration),
  resolution: `${width}x${height}`,
  variants: parseInt(variants),
  generationMode: mode,
  estimatedCost,
  status: 'queued',
  jobId: job.id,
});

// Update after completion
await updateGenerationStatus(jobId, 'completed', { videoId: finalVideoId });
```

### 6. Progressive User Feedback

Provide detailed status updates during the generation process:

```typescript
const statusMessages: Record<string, string> = {
  'preprocessing': 'Preprocessing your request...',
  'running': 'Generating video...',
  'processing': 'Processing video...',
  'queued': 'Job queued...',
  'in_progress': 'Generating video...',
};

onProgress?.(statusMessages[job.status] || `Status: ${job.status}`);
```

## Conclusion

Building with Azure OpenAI's Sora models requires understanding the nuanced differences between Sora 1 and Sora 2, both in API structure and in capabilities. Key takeaways:

- **Choose the right model**: Sora 1 for resolution flexibility and cost-effectiveness; Sora 2 for audio, image inputs, and remix capabilities
- **Handle API differences**: Implement conditional logic for parameter formatting and status polling based on model version
- **Respect limitations**: Plan around concurrent job limits, resolution constraints, and 24-hour expiration windows
- **Optimize costs**: Calculate estimates upfront and track actual usage for better budget management
- **Provide great UX**: Implement adaptive polling, progressive status updates, and clear error messages

The future of AI video generation is exciting, and Azure AI Foundry provides production-ready access to these powerful models. As Sora 2 matures and limitations are lifted (especially around resolution options), we'll see even more creative applications emerge.

Resources:

- Azure AI Foundry Sora Documentation
- OpenAI Sora API Reference
- Azure OpenAI Service Pricing

*This blog post is based on real-world implementation experience building LemonGrab, my AI video generation platform that integrates both Sora 1 and Sora 2 through Azure AI Foundry. The code examples are extracted from production usage.*

# Inquiry Regarding Existing Microsoft Applications for End-to-End Operational Management
I would like to inquire whether Microsoft offers any pre-built, production-ready applications (preferably within the Dynamics 365 ecosystem) that are currently in use by customers and proven to be stable, and that support the following functionalities:

- Work Order Management
- Operational Management
- Production Planning and Control
- Resource Management
- Asset Management
- Quality Management
- Inventory Management
- Barcode Scanning for real-time job tracking (start/finish)
- Profitability and Financial Reporting
- Hours Variation Analysis (Planned vs. Actual)
- Cost Variation Analysis (Planned vs. Actual)

We are seeking a solution that integrates these capabilities into a unified platform, ideally with real-time data capture and reporting features. If such a solution exists, we would appreciate details regarding its availability, deployment options, licensing, and customer success stories. Looking forward to your guidance.

# New MCPP Subscription in Azure
Hi, apologies if this is the wrong forum. Earlier this year I renewed my Action Pack subscription and had to link my billing profile at the same time. Once that had gone through, I noticed that I had a new Azure subscription called 'MCPP Subscription'. I still also have my Microsoft Partner Network subscription, which has all my resource groups etc. in it.

My query is: what is this new subscription? Should I start using it? My credits are linked to the old MPN subscription. I've not been able to find anything about it so far, either in general internet searches or in this community.

Many thanks

# PSA: Verified Publisher (Wrong) Assumptions
First and foremost, thank you, Nilesh Vishwakarma and Deepack, for sharing your internal knowledge to dispel the unnecessarily confusing topic of App Verified Publisher. I'll dive into the details of the issues, the confusion, and what I gathered during a few meetings with Microsoft Support. But first, for those seeking the simple TLDR guidance (specifically for the verified MPN):

1. The organization/tenant the App is under must have its own Partner Account; you can't come in with your own MPN for your clients. It needs to be their (your clients') MPN Account.
2. Even if you created the App (the App Owner), you need some sort of Global Admin-type access, and it seems you also need some access in the associated MPN account.
3. VERY IMPORTANT: Even if you can see and click on the Verified Publisher link to add an MPN ID, any errors you encounter are likely due to lacking rights in one or both systems.
4. VERY IMPORTANT: The Microsoft documentation for troubleshooting this issue may lead you astray.

After several days of troubleshooting on my own (reading, researching, and even involving Microsoft Support), the issue was finally resolved when my client created an MPN account, logged into Entra, clicked the Verify link, and entered their MPN ID. The takeaway here? Assume you're at the mercy of the zero trust hammer, and that for something as simple as single sign-on authentication, your clients must become Microsoft Partners.

For those not interested in the juicy details, or in listening to a grumpy gray beard rant, you are hereby released; the above are the main points to remember when applying a Verified Publisher to your App. For the rest of you, read on, and hopefully you'll pick up more details and maybe even find it entertaining...

First, if you are creating a simple App for SSO (work/school) for use on your client's website, you need to create the App on the tenant that holds the AD/Entra accounts. This makes sense.
Furthermore, a publisher is required to be on the hook for, and prove the validity of, the App, and is visible to users during consent. This also makes sense. However, what doesn't make sense is that the App Owner cannot assign the MPN ID; even more so, that anyone creating the App for a client also needs access to that client's MPN Account; and, for that matter, that the client should even need an MPN Account in the first place.

Let's start with an extreme example... I'm building a website for a construction company that doesn't have an IT team. Because they have AD/Entra and I'm a loyal "Microsoft Partner" (see what I did there), we decided to go with Azure and Entra. First, I need to explain the access I think I need to the client (who is not technical). I'm on my way, creating the App and the website, and I get to this Verified Publisher thing. Okay, no problem... I can click on this link to add my MPN ID. Nope, that's the first error.

After finally figuring out that they need to become a Microsoft Partner, you walk your (again, non-technical) client through this process. Okay, so you are back at the Verified Publisher with your (ahem, I mean their) new MPN ID. Again, you click on the link and add their MPN ID, and another error occurs: "The MPN ID you provided does not exist, or you do not have access to it. Please provide a valid MPN ID and try again." Okay, first, the MPN ID is valid; second, it exists; and third, what access do I need? This kitchen-sink error message is not very helpful.

But hey, there's a "learn more" link. Okay, so you spend hours rummaging through the docs, and nothing is really apparent, but look at this: there's a Troubleshooting section. In the first bullet point, step four provides a link to verify the MPN (https://partner.microsoft.com/en-us/pcv/accountsettings/connectedpartnerprofile), but this reveals a "Sorry, something went wrong. Please try again later" error page.
With humility, you create one of your two allowed support tickets... and when going through the same steps, they mention something about access and start naming the people who could apply the Verified Publisher... so we have to call our (non-technical) client again to join the chat and be walked through adding the MPN ID, and boom, it works. At this point, you're probably part happy, part embarrassed (for no reason), and part angry.

Here's the problem (well, one of them). If you don't have access to do something, we are all trained to see disabled buttons/links; if the developers are really good, they change the CSS/cursor on hover, and if they're really, really good, they provide a tooltip explaining the access that is needed to click the button or link. BUT, for all that is good, don't provide a clickable link, a non-disabled textbox, and a non-disabled button to add something we can't add in the first place. Furthermore, do not provide an overly elaborate yet somehow vague validation message. I expect more from Microsoft.

Secondly (and this is a self-preservation thing), requiring our clients to also become Microsoft Partners dilutes the value of being a Microsoft Partner. I picture a cartoon with a word bubble above a technical person saying, "I'm a Microsoft Partner," and in the next frame a roofer, a janitor, a homemaker, and a plethora of a thousand other people, all with the same word bubble saying, "Yeah, so are we!!!" Although this is a funny view of it, the reality is that it was hard enough to explain to clients why you want to add your MPN ID to their Azure subscription, or what that means... but now, why would they, when they have that same feeling of ownership as with the App, and are now armed with their own MPN ID? It truly dilutes the value of a Microsoft Partnership and, further, makes it harder to justify adding our MPN ID to signal to Microsoft who we are serving. This is directly tied to how we can obtain a Silver or Gold Partnership.
This seems to have been built without consideration for agencies, small businesses, and clients outside the tech field.

# I can't see my customer in the reports in partner earned credit
I have a client with an Azure subscription (Azure Plan), and my guest user has an Admin RBAC role at the subscription scope, but I can't see my customer in the reports on partner earned credit. For that reason, I am not able to reach the partner designation, because my partner score for the designation does not appear. My user is a guest in the customer tenant and has already linked my partner ID in the Management Partner blade. I need help. Thanks in advance for your time.

# Registering for the Cloud Partner Program (Registrieren im Cloud-Partner-Programm)
(Translated from German:)

Hello, I have an Azure account and now want to register for the Cloud Partner Program. However, no e-mail account is accepted. I can't sign in with the Azure account, nor can I create a new one. What am I doing wrong?

Thank you very much,
Klaus Götzer

# How to integrate Microsoft User Authentication using Microsoft Entra ID: A Step-by-Step Guide to Use
Microsoft Entra ID, also known as Azure AD (Active Directory), offers numerous advantages. Whether you're prioritizing security or seeking a well-organized and automated user management system, this tool is your go-to for building a secure authentication system, be it for a web app, mobile app, or any other application.