[05/29] From concept to production: transform how you bring ideas to life
We're excited to announce Sora, now available in AI Foundry. Learn more about Sora in Azure OpenAI, and its API and video playground, here. To dive deeper into the video playground experience, check out this blog post.
Why does this matter?
AI is no longer *just* about text. Here’s why: multimodal models enable deeper understanding. Today AI doesn’t just understand words: it understands context, visuals, and motion. From prompt to product, imagine going from a product description to a full marketing campaign. In many ways, the rise of multimodal AI today is comparable to the inception of photography in the 19th century—introducing a new creative medium that, like photography, didn’t replace painting but expanded the boundaries of artistic expression.
Just a little over a month ago, we released OpenAI's GPT-image-1, and now we're thrilled to announce Sora, available in public preview in AI Foundry. These models are designed to unlock creativity, accelerate content creation, and empower entire industries. Video in particular has remained the most complex and underdeveloped modality: teams across industries, from marketing to education, are bottlenecked by the cost, time, and tools required to create high-fidelity video content.

What makes these models especially powerful on Azure is the enterprise-grade foundation they run on. Customers get the benefit of first-party infrastructure, network security, identity, billing, and governance—all in one place. That's huge when you're deploying generative AI at scale. Both models are API-first, meaning they can be deeply integrated into workflows, apps, or customer-facing tools. And for teams looking to explore before they build, we offer an intuitive Image Playground and Video Playground for experimenting with different models side by side, so you can move from experimentation to production seamlessly in iterative workflows like video generation.
Of course, it’s not just about capabilities—it’s also about responsibility. These models are designed with safety and provenance in mind, including features like C2PA integration and abuse monitoring, so enterprises can innovate with confidence.
Automate pipelines and scale production workflows
Sora in Azure OpenAI is uniquely available as an API, so creative teams can build it directly into their tools. Sora currently supports text-to-video (with image-to-video coming soon), clips up to 20 seconds in duration, resolutions up to 1080p, and landscape, portrait, and square aspect ratios. This level of integration is especially valuable for customers like WPP, where the inability to easily show early concepts and scale big ideas through production in video has long been a creative and operational bottleneck. Through the Sora API, customers can collaborate more effectively with clients by delivering personalized, scalable solutions. The Sora API is particularly well suited to asynchronous tasks.
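To make that asynchronous pattern concrete, here is a minimal Python sketch of submitting a text-to-video job. The endpoint path, api-version value, and parameter names (width, height, n_seconds) are assumptions based on the preview API shape, so check the current Azure OpenAI reference before relying on them.

```python
import os
import requests

# Minimal sketch: submit an asynchronous text-to-video generation job.
# The endpoint path, api-version, and parameter names below are assumptions
# about the preview API; verify against the current Azure OpenAI reference.
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]  # e.g. https://<resource>.openai.azure.com
api_key = os.environ["AZURE_OPENAI_API_KEY"]

response = requests.post(
    f"{endpoint}/openai/v1/video/generations/jobs?api-version=preview",
    headers={"api-key": api_key, "Content-Type": "application/json"},
    json={
        "model": "sora",
        "prompt": "A drone shot gliding over a coastal wind farm at sunrise",
        "width": 1920,      # 1080p landscape
        "height": 1080,
        "n_seconds": 10,    # preview supports clips up to 20 seconds
    },
)
response.raise_for_status()
job = response.json()
print("Submitted job", job["id"], "with status", job["status"])
```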
"API access integrates and speeds up my workflow in a way that other services just couldn't. It just makes my life so much easier” - WPP
Want to see this in action? Check out our Visionary Lab, which uses both GPT-image-1 and Sora, tailored to industry use cases.
Go from experimentation to production with video playground
With built-in features such as porting to VS Code, customers can seamlessly go from experimentation to production:
- Iterate faster: Experiment with text prompts and adjust generation controls like aspect ratio, resolution, and duration
- Optimize prompts: Rewrite prompt syntax with AI and visually compare outcomes across variations using prebuilt industry prompts
- API-based interface: What works in video playground translates directly into code, with predictability (see the sketch after this list)
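As a rough illustration of that playground-to-code path, the sketch below continues the job-submission example: it polls the job until it completes and then downloads the resulting MP4. The status values, the generations field, and the content download route are assumptions about the preview API rather than a confirmed contract.

```python
import os
import time
import requests

# Continuing the job-submission sketch: poll until the job finishes, then
# download the generated MP4. Status values, the "generations" field, and the
# content route are assumptions about the preview API; check the reference.
endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
api_key = os.environ["AZURE_OPENAI_API_KEY"]
headers = {"api-key": api_key}
job_id = "<id returned when the job was submitted>"

status = None
while status not in ("succeeded", "failed", "cancelled"):
    time.sleep(5)  # generation runs asynchronously, so poll periodically
    job = requests.get(
        f"{endpoint}/openai/v1/video/generations/jobs/{job_id}?api-version=preview",
        headers=headers,
    ).json()
    status = job["status"]
    print("Job status:", status)

if status == "succeeded":
    generation_id = job["generations"][0]["id"]
    video = requests.get(
        f"{endpoint}/openai/v1/video/generations/{generation_id}/content/video?api-version=preview",
        headers=headers,
    )
    video.raise_for_status()
    with open("output.mp4", "wb") as f:
        f.write(video.content)
    print("Saved output.mp4")
```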
Industry Use Cases
Sora
- 🌱 Sustainability
- 📚 Education
- ✈️ Travel & Lifestyle
GPT-image-1
- 🎮 Gaming
- 📢 Advertising & Marketing
- 🛒 E-commerce
Explore how you can integrate these powerful tools into your operations and start transforming your business today.