Securing AI apps and agents on Microsoft Marketplace
Why security must be designed in—not validated later

AI apps and agents expand the security surface beyond that of traditional applications. Prompt inputs, agent reasoning, tool execution, and downstream integrations introduce opportunities for misuse or unintended behavior when security assumptions are implicit. These risks surface quickly in production environments where AI systems interact with real users and data.

Deferring security decisions until late in the lifecycle often exposes architectural limitations that restrict where controls can be enforced. Retrofitting security after deployment is costly and can force tradeoffs that affect reliability, performance, or customer trust. Designing security early establishes clear boundaries, enables consistent enforcement, and reduces friction during Marketplace review, onboarding, and long‑term operation. In the Marketplace context, security is a foundational requirement for trust and scale.

This post is part of a series on building and publishing well-architected AI apps and agents on Microsoft Marketplace.

How AI apps and agents expand the attack surface

AI apps and agents introduce security risks that extend beyond those of traditional applications. AI systems accept open‑ended prompts, reason dynamically, and often act autonomously across systems and data sources. Without a clear view of where trust boundaries exist and how behavior propagates across systems, security controls risk being applied too narrowly or too late.
These interaction patterns expand the attack surface in several important ways:

- New trust boundaries introduced by prompts and inputs, where unstructured user input can influence reasoning and downstream actions
- Autonomous behavior, which increases the blast radius when authentication or authorization gaps exist
- Tool and integration execution, where agents interact with external APIs, plugins, and services across security domains
- Dynamic model responses, which can unintentionally expose sensitive data or amplify errors if guardrails are incomplete

Each API, plugin, or external dependency becomes a security choke point where identity validation, audit logging, and data handling must be enforced consistently—especially when AI systems span tenants, subscriptions, or ownership boundaries.

Using OWASP GenAI Top 10 as a threat lens

The OWASP GenAI Top 10 provides a practical, industry‑recognized lens for identifying and categorizing AI‑specific security threats that extend beyond traditional application risks. Rather than serving as a checklist, the OWASP GenAI Top 10 helps teams ask the right questions early in the design process. It highlights where assumptions about trust, input handling, autonomy, and data access can break down in AI‑driven systems—often in ways that are difficult to detect after deployment.

Common risk categories highlighted by OWASP include:

- Prompt injection and manipulation, where malicious input influences agent behavior or downstream actions
- Sensitive data exposure, including leakage through prompts, responses, logs, or tool outputs
- Excessive agency, where agents are granted broader permissions or action scope than intended
- Insecure integrations, where tools, plugins, or external systems become unintended attack paths

Highly regulated industries, sensitive data domains, or mission‑critical workloads may require additional risk assessment and security considerations that extend beyond the OWASP categories.
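To make the first category concrete, publishers often screen untrusted prompt input before it reaches the model. The sketch below is a deliberately minimal, illustrative guardrail; the patterns and function name are assumptions, not a complete or recommended defense. Production systems layer classifiers, allowlists, and model-side mitigations on top of anything like this.

```python
import re

# Illustrative patterns only -- real deployments combine ML classifiers,
# allowlists, and model-side mitigations rather than relying on regexes.
SUSPICIOUS_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"reveal your system prompt",
    r"act as .* with no restrictions",
]

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_patterns) for an untrusted prompt."""
    matches = [p for p in SUSPICIOUS_PATTERNS
               if re.search(p, prompt, re.IGNORECASE)]
    return (len(matches) == 0, matches)
```

Regex screening alone is easy to evade; its value here is illustrating that prompts cross a trust boundary and deserve explicit validation like any other untrusted input.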
The OWASP GenAI Top 10 allows teams to connect high‑level risks to architectural decisions by creating a shared vocabulary that sets the foundation for designing guardrails that are enforceable both at design time and at runtime.

Designing security guardrails into the architecture

Security guardrails must be designed into the architecture, shaping where and how policies are enforced, evaluated, and monitored throughout the solution lifecycle. Guardrails operate at two complementary layers:

- Design time, where architectural decisions determine what is possible, permitted, or blocked by default
- Runtime, where controls actively govern behavior as the AI app or agent interacts with users, data, and systems

When architectural boundaries are not defined early, teams often discover that critical controls—such as input validation, authorization checks, or action constraints—cannot be applied consistently without redesign. Key boundaries to define up front include:

- Tenancy boundaries, defining how isolation is enforced between customers, environments, or subscriptions
- Identity boundaries, governing how users, agents, and services authenticate and what actions they can perform
- Environment separation, limiting the blast radius of experimentation, updates, or failures
- Control planes, where configuration, policy, and behavior can be adjusted without redeploying core logic
- Data planes, controlling how data is accessed, processed, and moved across trust boundaries

Designing security guardrails into the architecture transforms security from reactive to preventative, while also reducing friction later in the Marketplace journey. Clear enforcement boundaries simplify review, clarify risk ownership, and enable AI apps and agents to evolve safely as capabilities and integrations expand.

Identity as a security boundary for AI apps and agents

Identity defines who can access the system, what actions can be taken, and which resources an AI app or agent is permitted to interact with across tenants, subscriptions, and environments.
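As a hedged sketch of identity as a boundary, the fragment below models delegated access and auditability: every action an agent attempts is checked against scopes the user actually granted, and every decision is logged. The class and field names are hypothetical, not a particular identity platform's token schema.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DelegatedContext:
    # Hypothetical claims extracted from an already-validated token;
    # the field names are illustrative, not a specific provider's schema.
    user_id: str
    agent_id: str
    granted_scopes: frozenset

def authorize(ctx: DelegatedContext, required_scope: str, audit_log: list) -> bool:
    """Zero-trust check: the agent never acts with more permission
    than the user delegated, and every decision is traceable."""
    decision = required_scope in ctx.granted_scopes
    audit_log.append((ctx.agent_id, ctx.user_id, required_scope, decision))
    return decision
```

The design choice illustrated here is that authorization and auditability share one code path, so no action can be taken without leaving a record tied to an identity.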
Agents often act on behalf of users, invoke tools, and access downstream systems autonomously. Without clear identity boundaries, these actions can unintentionally bypass least‑privilege controls or expand access beyond what users or customers expect. Strong identity design shapes security in several key ways:

- Authentication and authorization, which determine how users, agents, and services establish trust and what operations they are allowed to perform
- Delegated access, which constrains agents to act with permissions tied to user intent and context
- Service‑to‑service trust, which ensures that all interactions between components are explicitly authenticated and authorized
- Auditability, which traces actions taken by agents back to identities, roles, and decisions

A zero-trust approach is essential in this context. Every request—whether initiated by a user, an agent, or a backend service—should be treated as untrusted until proven otherwise. Identity becomes the primary control plane for enforcing least privilege, limiting blast radius, and reducing downstream integration risk. This foundation not only improves security posture, but also supports compliance, simplifies Marketplace review, and enables AI apps and agents to scale safely as integrations and capabilities evolve.

Protecting data across boundaries

Data may reside in customer‑owned tenants, subscriptions, or external systems, while the AI app or agent runs in a publisher‑managed environment or a separate customer environment. Protecting data across boundaries requires teams to reason about more than storage location.
Several factors shape the security posture:

- Data ownership, including whether data is owned and controlled by the customer, the publisher, or a third party
- Boundary crossings, such as cross‑tenant, cross‑subscription, or cross‑environment access patterns
- Data sensitivity, particularly for regulated, proprietary, or personally identifiable information
- Access duration and scope, ensuring data access is limited to the minimum required context and time

When these factors are implicit, AI systems can unintentionally broaden access through prompts, retrieval‑augmented generation, or agent‑initiated actions. This risk increases when agents autonomously select data sources or chain actions across multiple systems.

To mitigate these risks, access patterns must be explicit, auditable, and revocable. Data access should be treated as a continuous security decision, evaluated on every interaction rather than trusted by default once a connection exists. This approach aligns with zero-trust principles, where no data access is implicitly trusted and every request is validated based on identity, context, and intent.

Runtime protections and monitoring

For AI apps and agents, security does not end at deployment. In customer environments, these systems interact continuously with users, data, and external services, making runtime visibility and control essential to a strong security posture. AI behavior is also dynamic: the same prompt, context, or integration can produce different outcomes over time as models, data sources, and agent logic evolve, so monitoring must extend beyond infrastructure health to include behavioral signals that indicate misuse, drift, or unintended actions.
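As one illustrative behavioral signal, a sliding-window rate check can flag an agent issuing far more tool calls than expected. The sketch below is simplified (the class name and thresholds are assumptions); a real deployment would feed such signals into broader detection and alerting pipelines rather than act on them in isolation.

```python
from collections import deque
import time

class ActionRateMonitor:
    """Flag agents whose tool-call frequency exceeds an expected baseline."""

    def __init__(self, max_calls: int, window_seconds: float):
        self.max_calls = max_calls
        self.window = window_seconds
        self.calls = deque()  # timestamps of recent actions

    def record(self, now=None) -> bool:
        """Record one action; return True while the rate stays within bounds."""
        now = time.time() if now is None else now
        self.calls.append(now)
        # Drop timestamps that have aged out of the sliding window.
        while self.calls and now - self.calls[0] > self.window:
            self.calls.popleft()
        return len(self.calls) <= self.max_calls
```

A `False` return here is a signal, not a verdict: excessive frequency may indicate misuse, a runaway loop, or a legitimate burst, which is why such signals need the investigative context described below.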
Effective runtime protections focus on five core capabilities:

- Vulnerability management, including regular scanning of the full solution to identify missing patches, insecure interfaces, and exposure points
- Observability, so agent decisions, actions, and outcomes can be traced and understood in production
- Behavioral monitoring, to detect abnormal patterns such as unexpected tool usage, unusual access paths, or excessive action frequency
- Containment and response, enabling rapid intervention when risky or unauthorized behavior is detected
- Forensics readiness, ensuring system-state replicability and chain-of-custody are retained to investigate what happened, why it happened, and what was impacted

Monitoring that only tracks availability or performance is insufficient. Runtime signals must provide enough context to explain not just what happened, but why an AI app or agent behaved the way it did, and which identities, data sources, or integrations were involved. Equally important is integration with broader security event and incident management workflows. Runtime insights should flow into existing security operations so AI-related incidents can be triaged, investigated, and resolved alongside other enterprise security events—otherwise AI solutions risk becoming blind spots in a customer’s operating environment.

Preparing for incidents and abuse scenarios

No AI app or agent operates in a perfectly controlled environment. Once deployed, these systems are exposed to real users, unpredictable inputs, evolving data, and changing integrations. Preparing for incidents and abuse scenarios is therefore a core security requirement, not a contingency plan. AI apps and agents introduce unique incident patterns compared to traditional software. In addition to infrastructure failures, teams must be prepared for prompt abuse, unintended agent actions, data exposure, and misuse of delegated access.
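Containment, the ability to pause, constrain, or revoke agent capabilities at runtime, can be sketched as a small controller that gates every action. This is an illustrative structure under assumed names, not a prescribed design; the point is that intervention paths exist without redeploying the agent.

```python
class AgentController:
    """Containment sketch: capabilities can be narrowed, paused, or
    restored at runtime without redeploying the agent itself."""

    def __init__(self, capabilities):
        self._capabilities = set(capabilities)
        self._paused = False

    def constrain(self, capability: str):
        self._capabilities.discard(capability)   # revoke one capability

    def pause(self):
        self._paused = True                      # halt all actions pending review

    def resume(self):
        self._paused = False

    def can_act(self, capability: str) -> bool:
        # Every action is gated here; a paused agent can do nothing.
        return not self._paused and capability in self._capabilities
```

Gating every action through one controller is what makes containment proportionate: a single risky capability can be revoked while the rest of the system keeps operating.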
Because agents may act autonomously or continuously, incidents can propagate quickly if safeguards and response paths are unclear. Effective incident readiness starts with acknowledging that:

- Abuse is not always malicious: misuse can stem from ambiguous prompts, unexpected context, or misunderstood capabilities
- Agent autonomy may increase impact, especially when actions span multiple systems or data sources
- Security incidents may be behavioral, not just technical, requiring interpretation of intent and outcomes

Preparing for these scenarios requires clearly defined response strategies that account for how AI systems behave in production. AI solutions should be designed to support pausing, constraining, or revoking agent capabilities when risk is detected, and to do so without destabilizing the broader system or customer environment.

Incident response must also align with customer expectations and regulatory obligations. Customers need confidence that AI‑related issues will be handled transparently, proportionately, and in accordance with applicable security and privacy standards. Clear boundaries around responsibility, communication, and remediation help preserve trust when issues arise.

How security decisions shape Marketplace readiness

From initial review to customer adoption and long‑term operation, security posture is a visible and consequential signal of readiness. AI apps and agents with clear boundaries—around identity, data access, autonomy, and runtime behavior—are easier to evaluate, onboard, and trust. When security assumptions are explicit, Marketplace review becomes more predictable, customer expectations are clearer, and operational risk is reduced. Ambiguous trust boundaries, implicit data access, or uncontrolled agent actions can introduce friction during review, delay onboarding, or undermine customer confidence after deployment. Marketplace‑ready security is therefore not about meeting a minimum bar. It is about enabling scale.
Well-designed security allows AI apps and agents to integrate into enterprise environments, align with customer governance models, and evolve safely as capabilities expand. When security is treated as a first‑class architectural concern, it becomes an enabler rather than a blocker—supporting faster time to market, stronger customer trust, and sustainable growth through Microsoft Marketplace.

What’s next in the journey

Security for AI apps and agents is not a one‑time decision, but an ongoing design discipline that evolves as systems, data, and customer expectations change. By establishing clear boundaries, embedding guardrails into the architecture, and preparing for real‑world operation, publishers create a foundation that supports safe iteration, predictable behavior, and long‑term trust. This mindset enables AI apps and agents to scale confidently within enterprise environments while meeting the expectations of customers adopting solutions through Microsoft Marketplace.

Key resources

- See curated, step-by-step guidance to help you build, publish, or sell your app or agent (no matter where you start) in App Advisor
- Quick-Start Development Toolkit
- Microsoft AI Envisioning Day Events
- How to build and publish AI apps and agents for Microsoft Marketplace
- Get over $126K USD in benefits and technical consultations to help you replicate and publish your app with ISV Success

Seamless Marketplace private offers: creation to customer use
Private offers are a core mechanism for bringing negotiated commercial terms into Microsoft Marketplace. They allow publishers and channel partners to offer negotiated pricing, flexible billing structures, and custom terms, while enabling customers to purchase through the same Microsoft‑governed procurement, billing, and subscription experience they already use for Azure purchases. As Marketplace adoption grows, private offers increasingly involve channel partners, including resellers, system integrators, and Cloud Solution Providers. While commercial relationships vary, the Marketplace lifecycle remains consistent. Understanding that lifecycle—and where responsibilities differ by selling model—is essential to executing private offers efficiently and at scale.

Join us April 15 for Marketplace Partner Office Hours, where Microsoft Marketplace experts Stephanie Brice and Christine Brown walk through how to execute private offers end to end—from creation to customer purchase and activation—across direct and partner‑led selling models. The session will include a live demonstration and Q&A, with practical guidance on flexible billing, channel scenarios, and common pitfalls.

This article walks through the private offer lifecycle to help partners establish a clear, repeatable operating model to successfully transact in Microsoft Marketplace.

Why private offers are structured the way they are

Private offers are designed to align with how enterprise customers already procure software through Microsoft. Customers purchase through governed billing accounts, roles enforced through Azure role-based access control (RBAC), and Azure subscriptions that support cost management and compliance. Rather than bypassing these controls, private offers integrate negotiated deals directly into Microsoft Marketplace.
This allows customers to:

- Apply purchases to existing Microsoft agreements (Microsoft Customer Agreement (MCA) or Enterprise Agreement (EA))
- Preserve internal approval workflows
- Manage Marketplace subscriptions alongside other Azure resources

Private offers also support flexible billing schedules. This is especially important for enterprise customers managing budget cycles, approvals, and cash flow. Flexible billing allows partners to align charges to agreed timelines—such as billing on a specific day of the month or spreading payments across defined milestones—while still transacting through Microsoft Marketplace. Customers can align Marketplace charges with internal finance processes without requiring separate contracts or off‑platform invoicing.

For publishers and partners, this design creates a predictable lifecycle that scales across direct and channel‑led motions. Each stage exists for a specific reason, and understanding that intent helps reduce delays and rework.

Learn more: Private offers overview

One lifecycle, multiple selling models

All private offers—regardless of selling model—follow the same three stages:

1. Creation of a private offer based on a publicly transactable Marketplace offer
2. Acceptance, purchase, and configuration of the private offer
3. Activation or deployment, based on how the solution is delivered

What varies by model is who creates the offer, who sets margin, and who owns the customer relationship—not how Microsoft Marketplace processes the transaction.

1. Creation: Starting with a transactable public offer

Every private offer begins with a publicly transactable Marketplace offer enabled for Sell through Microsoft. Private offers inherit the structure, pricing model, and delivery architecture of that public offer and its associated plan. If a public offer is listed as Contact me or otherwise non‑transactable, it must be updated before any private offers—direct to customer or channel‑led—can be created.
Creation flows by selling model:

Customer private offers (CPO): The publisher creates a private offer in Partner Center for a specific customer, based on the Azure subscription (Customer Azure Billing ID) provided by the customer. The publisher defines negotiated pricing, duration, billing terms (including any flexible billing schedule), and custom conditions.

Multiparty private offers (MPO): The publisher creates a private offer in Partner Center and extends it to a specific channel partner. The partner adds margin and completes the offer before sending it to the customer.

Resale enabled offers (REO): The publisher authorizes a channel partner in Partner Center to resell a publicly transactable Marketplace offer. Once authorized, the channel partner can independently create private offers for customers without publisher involvement in each deal.

Cloud Solution Provider (CSP) private offers: A CSP hosts the customer’s Azure environment (typically for SMB customers) and acts on behalf of the customer. The publisher creates a private offer in Partner Center for a CSP partner, extending margin so the CSP can sell the solution to customers through the CSP motion.

In all cases, the private offer remains anchored to the same underlying public Marketplace offer.

2. Acceptance and purchase: What happens in Marketplace

Microsoft Marketplace provides a consistent purchasing experience while supporting different partner‑led models behind the scenes.

Customer private offers, multiparty private offers, and resale enabled offers

For these models, the customer experience is the same and includes three steps:

Accepting the private offer: The customer accepts the negotiated terms (price, duration, custom terms) in the Azure portal. This is the legal acceptance step under the customer’s MCA or EA.

Purchasing or subscribing: The customer associates the offer to the appropriate billing account and Azure subscription. This enables billing and fulfillment.
Configuring the solution: After subscription, the customer is redirected to the partner’s landing page. This step connects the Marketplace purchase to the partner’s system, enabling provisioning, subscription activation, and setup.

Learn more:
- Accept the private offer
- Purchase and subscribe to the private offer

In large enterprises, acceptance and purchase are often completed by different roles, supporting governance and auditability.

CSP private offers

In the CSP model, the CSP partner—not the end customer—accepts and purchases the private offer on the customer’s behalf. Microsoft invoices the CSP partner, and the CSP bills the end customer under their existing CSP relationship. Key distinctions:

- The end customer does not interact with the Marketplace private offer
- CSP private offers do not decrement a customer’s Microsoft Azure Consumption Commitment (MACC) because there is no MACC in the CSP agreement
- Customer pricing and billing occur outside Marketplace

Learn more: ISV to CSP private offers

3. Activation or deployment: Defined by delivery model, not selling motion

Activation or deployment is determined by how the solution is built, not whether the deal is direct to customer or channel‑led.

SaaS offers: The solution runs in the publisher’s environment. After subscription, activation occurs through the SaaS fulfillment process, typically involving customer onboarding or account configuration. No Azure resources are deployed into the customer’s tenant.

Deployable offer types (virtual machines, containers, Azure managed applications): The solution runs in the customer’s Azure tenant. Deployment provisions resources into the selected Azure subscription according to the offer’s architecture.

Channel partners may support onboarding or deployment, but Marketplace activation or deployment reflects the technical delivery model—not the commercial route.
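For SaaS offers specifically, the landing-page redirect and activation step are backed by the Marketplace SaaS Fulfillment APIs. The sketch below builds (but does not send) the two key requests: resolving the marketplace token passed to the landing page, and activating the subscription so billing can begin. The endpoint and api-version shown reflect the v2 fulfillment API as we understand it; verify both against current Microsoft documentation before relying on them.

```python
# Hedged sketch of Marketplace SaaS Fulfillment API v2 requests a publisher's
# landing page typically makes after a customer subscribes. Request construction
# only -- no network calls; verify endpoints against current Microsoft docs.
BASE = "https://marketplaceapi.microsoft.com/api/saas"
API_VERSION = "2018-08-31"

def resolve_request(marketplace_token: str, bearer_token: str):
    """Build the request that exchanges the landing-page token
    for the purchased subscription's details."""
    url = f"{BASE}/subscriptions/resolve?api-version={API_VERSION}"
    headers = {
        "x-ms-marketplace-token": marketplace_token,  # token from the redirect
        "authorization": f"Bearer {bearer_token}",
    }
    return "POST", url, headers

def activate_request(subscription_id: str, plan_id: str, bearer_token: str):
    """Build the request that activates the subscription so billing can begin."""
    url = f"{BASE}/subscriptions/{subscription_id}/activate?api-version={API_VERSION}"
    headers = {
        "authorization": f"Bearer {bearer_token}",
        "content-type": "application/json",
    }
    body = {"planId": plan_id}
    return "POST", url, headers, body
```

A real landing page would send these with an HTTP client after authenticating the publisher's app registration to obtain the bearer token, then surface onboarding to the customer.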
Setting expectations that scale

Successful partners set expectations early by separating commercial steps from technical activation:

- The customer transacts under an Enterprise Agreement (EA) or Microsoft Customer Agreement (MCA)
- The private offer includes custom pricing and any flexible billing schedule based on the publicly transactable offer
- The customer accepts negotiated terms in Microsoft Marketplace
- The purchase and subscribe steps associate the offer to the billing account and Azure subscription; the configure step triggers the notification to activate or deploy the solution for customer use
- Billing starts based on SaaS fulfillment or Azure resource deployment

Choosing the right model

While the lifecycle is consistent, each model supports different strategies:

- Customer private offers allow the publisher to negotiate terms directly with the customer
- Multiparty private offers enable close channel collaboration while sharing margin
- Resale enabled offers support scale by empowering channel partners to transact independently
- CSP private offers align with customer segments best served through the CSP motion

The right choice depends on partner strategy, not on how Marketplace processes the transaction.

Learn more: Transacting on Microsoft Marketplace

Bringing it all together

Private offers turn negotiated agreements into scalable, governed transactions inside Microsoft Marketplace. Regardless of whether a deal is direct or channel‑led, the underlying lifecycle remains the same: rooted in a transactable public offer, executed through Microsoft‑managed purchasing, and activated based on how the solution is delivered. By understanding that lifecycle and intentionally choosing the right direct or channel model and billing structure, partners can reduce friction, set clearer expectations, and scale Marketplace transactions with confidence.
When aligned correctly, private offers become more than a deal construct; they become a repeatable operating model for Marketplace growth.

How Microsoft Marketplace and ecosystem partnerships are reshaping enterprise go-to-market
Author

Juhi Saha is CEO at Partner1, a two-time Inc. Power Partner Award winner and an official Microsoft Partner Led Network. Partner1 helps B2B software and services companies maximize the value of their partner ecosystems and transform partnerships into scalable profit engines. Specializing in channel development and strategic alliances, Partner1 empowers organizations to unlock their partnership potential through expert guidance, partnership program design, and actionable growth strategies. By focusing on partner-driven growth, Partner1 helps businesses, from startups to scale-ups, maximize revenue, accelerate market expansion, and build a lasting competitive advantage.

Key takeaways from recent NYC founder and investor events

“It’s no longer the era of go fast. It’s the era of go faster.” That sentiment, shared by an investor during one of our recent New York City gatherings, captures a broader shift underway in how startups are expected to scale. Speed is no longer just a function of product development or hiring. It is increasingly a function of how effectively companies leverage platforms, ecosystems, and commercial infrastructure that already exist.

Over the past several weeks, Partner1 hosted two curated events bringing together founders, investors, and ecosystem leaders to explore how startups are accessing enterprise customers and accelerating growth through partnerships. The conversations centered on a practical question that continues to surface across early-stage and growth-stage companies: how do startups break into enterprise and scale in a market defined by AI, platforms, and increasingly complex buying environments?
What emerged from these discussions is a clear pattern: the traditional model of building a product, hiring a sales team, and scaling through direct enterprise relationships is being supplemented, and in many cases replaced, by ecosystem-led growth. Partnerships are no longer a downstream channel decision. They are becoming a primary system through which companies access customers, accelerate revenue, and compete. Across both sessions, with perspectives from leaders at Microsoft, NVIDIA, Plug and Play Tech Center, and investors including Trajectory Ventures, several consistent themes emerged around how this shift is playing out in practice.

Marketplace is becoming the default commercial infrastructure

Evaluate your Marketplace readiness: understand how Microsoft Marketplace supports discovery, procurement, and scalable growth, and where your solution fits today.

One of the most concrete shifts discussed was the role of Marketplace as the commercial backbone for modern software transactions. Marketplace is no longer positioned as an optional distribution channel. It is increasingly how Microsoft goes to market with software companies of all sizes, and how customers expect to discover, evaluate, and procure solutions.

This shift is being driven by practical realities. Enterprise procurement has historically been one of the most significant sources of friction in software sales. Vendor onboarding, legal negotiations, billing complexity, and fragmented purchasing processes extend deal cycles and introduce risk. Marketplace addresses these issues directly by standardizing terms, consolidating billing, and pre-vetting vendors through the publisher agreement. These are not cosmetic improvements. They materially change how quickly transactions can occur. During the discussions, the Marketplace opportunity was reinforced with both data and real examples.
Marketplace is enabling larger deals, faster sales cycles, and measurable revenue growth for companies that treat it as a core go-to-market motion, and speakers shared examples from companies like Neo4j, Pangaea Data, and ShookIoT. The examples ranged from small, niche startups closing their largest deals through Marketplace to companies significantly expanding their customer base by leveraging Microsoft’s commercial infrastructure. What stands out is that these outcomes are not isolated. They are becoming repeatable. As customer awareness of Marketplace increases, it is increasingly seen as the fastest path to the right solution, regardless of who built it. Several startups shared how their deals had languished in procurement and were excited to hear from other companies in attendance about how they successfully used Marketplace to speed up procurement.

Rethinking scale: why “Microsoft is too big” is the wrong assumption

A recurring concern from founders was whether they are too early or too small to meaningfully engage with Microsoft. This perception is common, but it does not reflect how the ecosystem is evolving. The perspective shared by Microsoft leaders was clear: AI-native startups are not peripheral to the ecosystem. They are central to it. Supporting startups is not about proximity to large partners. It is about helping early-stage companies build faster, reduce risk, and reach enterprise customers sooner.

This dynamic was described as a balance. Startups bring speed, specialization, and differentiated AI use cases. Microsoft brings global reach, enterprise relationships, and a mature commercial engine. When aligned, that combination becomes a multiplier. Multiple conversations touched on how Marketplace is where this alignment materializes. It serves as the convergence point between innovation and demand. Whether a company is early-stage or scaling, it provides a consistent path to reach customers and transact at enterprise scale.
The implication is direct. Companies should not wait to be “big enough.” They should start early with Microsoft Marketplace and design for this motion from the beginning. The result is reduced friction and a faster path to enterprise customers.

Co-sell is evolving from access to alignment

Many founders approach partnerships with a familiar question: how do we get Microsoft sellers to pay attention to us? That framing is increasingly misaligned with how the system actually works. The more scalable model described in the sessions is based on alignment rather than attention. Becoming co-sell eligible is important, particularly as solutions begin to align with Azure consumption and commercial priorities. However, co-sell eligibility is a starting point. It allows a solution to be recognized within Microsoft’s system and to count toward seller objectives.

The more important shift is where growth actually comes from. The fastest growing motion is not seller-led. It is partner-to-partner. System integrators and channel partners already have established customer relationships. They are the ones driving adoption at scale. Microsoft’s investment in channel-led growth reflects this, with partner-led motions representing one of the highest growth vectors. The takeaway for founders is practical: instead of asking how to get seller attention, the better question is how to become easy for partners to sell. Alignment to platform, customer need, and partner incentives drives outcomes more reliably than individual relationships.

Partnerships are not a channel. They are a go-to-market system

One of the most consistent misconceptions observed across attendees was treating partnerships as a secondary channel. Insights from the panelists, as well as conversations during networking sessions, highlighted how partnerships function as an integrated system that shapes how companies build, sell, and scale.
Marketplace, co-sell eligibility, and partner-to-partner relationships are interconnected. Product decisions influence how easily a solution can be transacted. Marketplace presence influences discovery and procurement. Partner relationships determine how widely a solution can be distributed. This system view is especially important in AI. As solutions become more complex, both buyers and sellers are optimizing for simplicity and speed. Centralized platforms and ecosystems provide a way to meet those requirements. Companies that treat partnerships as a system create compounding advantages. Those that treat them as an add-on often struggle to gain traction, even with strong products.

Expanding beyond enterprise: a multi-segment opportunity

While many startups initially focus on landing large enterprise customers, the opportunity within the Microsoft ecosystem is broader. Microsoft’s reach extends across enterprise, mid-market, and SMB segments. With the rise of AI and agent-based solutions, there is increasing focus on embedding applications into environments where customers already operate, such as Microsoft 365, and leveraging channel partners to scale distribution. This creates a unified go-to-market path that spans multiple segments. Startups can reach enterprise customers while also expanding into mid-market and SMB through the same ecosystem infrastructure. Channel partners play a critical role in this expansion. They provide access, distribution, and scale that would be difficult to replicate through direct sales alone. For startups, this represents a meaningful opportunity to grow faster and more efficiently across segments.

Investor perspective: partnerships as a signal of maturity

From an investor standpoint, partnerships are increasingly a signal of go-to-market maturity. The ability to leverage platforms, align with ecosystem dynamics, and accelerate revenue through structured partnerships is becoming a differentiator.
The investor's comment that "It's no longer the era of go fast. It's the era of go faster. I am going to ask all my portfolio companies about their marketplace strategy" reflects a broader shift in evaluation criteria. Marketplace and ecosystem alignment are not viewed as optional enhancements. They are becoming central to how companies compress time to revenue and scale efficiently.

When evaluating companies with similar technical capabilities, investors are looking closely at how founders approach distribution. Companies with a clear strategy for leveraging ecosystems and Marketplace are often better positioned to scale with less friction and more capital efficiency.

A practical starting point

The guidance shared across both events was consistent and actionable:

- Start early. Do not wait for a specific stage to engage with the ecosystem.
- Build on the platform with clear, differentiated use cases that solve real customer problems.
- Treat Marketplace as a core go-to-market motion. This includes investing in strong listings, clear pricing, and a working knowledge of Marketplace capabilities such as private offers and partner-led transactions.
- Design for partner-to-partner distribution. Ensure that your solution is easy for others to position, sell, and deploy within existing customer environments.

At a fundamental level, the objective is to reduce friction. Companies that are easy to buy, easy to deploy, and easy for partners to sell are the ones that scale most effectively.

Enterprise growth is no longer driven solely by direct sales execution. It is increasingly shaped by how well a company integrates into an ecosystem that already has distribution, demand, and commercial infrastructure. For startups building in AI and enterprise software, the question is no longer whether to engage with platforms like Microsoft. It is how early and how intentionally they design for it.
The companies that do this well are not simply participating in the ecosystem. They are using it to accelerate outcomes that would be difficult to achieve on their own.

Live in NYC on April 21st: Hear from Redis, Datadog, Eden and Microsoft on how strategic Marketplace partnerships are built and scaled in practice

Strategic partnerships across hyperscalers, database providers, observability platforms, and application ecosystems are no longer abstract concepts; they are important GTM relationships. As customers' infrastructure becomes more complex, they require solutions that are interoperable, scalable, and easy to implement. With the rise of AI, marketplaces have become critical enablers of technology adoption. With each product offering a wide range of integrations, it is the first-party relationships between providers that set these solutions apart, delivering best-in-class support for customers' infrastructure.

Partnerships like those between Microsoft, Datadog, Eden, and Redis accelerate and derisk enterprise cloud transformations, with the Microsoft Marketplace playing a central role in how services are delivered and scaled. Eden's migration platform, Exodus, enables zero-downtime database migrations, while Datadog is deeply integrated to ensure that these autonomous migrations are fully observed. Azure Managed Redis is a first-party Azure service that is becoming foundational for customers optimizing their data infrastructure for modern and agentic AI workloads. Eden and Datadog's autonomous migration service for Azure Managed Redis is now available on Microsoft Marketplace, making it easy for enterprises to get the most out of new Redis products.

As enterprises make this shift, a broader pattern is emerging in which marketplaces are not just procurement vehicles but also enablers of ecosystem execution, particularly in the context of AI.
Many AI initiatives fall short not because of model capability, but because underlying infrastructure and data environments are not properly optimized. Migrations, when executed well, become an opportunity to modernize architecture, improve performance, and prepare for scalable AI and agent deployments. Through coordinated partnerships across Microsoft, Eden, Datadog, and Redis, companies are aligning product, sales, and delivery into a unified operating model that accelerates time to value and reduces risk for enterprise customers. This is all before discussing AI as an autonomous agent for deploying new infrastructure via marketplaces.

If you want to understand how these partnership models are being built in practice, and how to use marketplaces and ecosystem alignment to unlock growth and AI readiness in your own organization, this event will provide a direct view into how leading companies are executing today. Sign up here and follow for more events with partners, for partners, by Partner1 and Microsoft.

Resources

- Marketplace readiness assessment
- Learn more about Microsoft Marketplace: Microsoft Marketplace overview - Marketplace customer documentation | Microsoft Learn
- Explore Microsoft Marketplace: Microsoft Marketplace | cloud solutions, AI apps, and agents
- Join the Microsoft Marketplace community: Microsoft Marketplace community | Microsoft Community Hub

Production ready architectures for AI apps and agents on Marketplace
Why "production-ready" architecture matters for Marketplace AI apps and agents

A working AI prototype is not the same as a production-ready AI app in Microsoft Marketplace. Marketplace solutions are expected to operate reliably in real customer environments, alongside mission-critical workloads and under enterprise constraints. As a result, AI apps published through Marketplace must meet a higher bar than "it works in a demo."

Production-ready Marketplace AI apps must assume:

- Alignment with enterprise expectations and the Azure Well-Architected Framework, including cost optimization, security, reliability, operational excellence, and performance efficiency
- Architectural decisions made early are difficult to reverse, especially once customers, tenants, and billing relationships are in place
- A higher trust bar from customers, who expect Marketplace solutions to be Microsoft-vetted, certified, and safe to run in production

Customers come to Marketplace expecting solutions that are ready to run, ready to scale, and ready to be supported, not experiments. This post focuses on the architectural principles and patterns required to meet those expectations. Specific services and implementation details are covered later in the series.

This post is part of a series on building and publishing well-architected AI apps and Agents on Microsoft Marketplace.

Aligning offer type and architecture early sets you up for success

A strong indicator of a smooth Marketplace journey is early alignment between offer type and solution architecture. Offer type defines more than how an AI app is listed: it establishes clear roles and responsibilities between publishers and customers, which in turn shape architectural boundaries.

Across all offer types, architecture must clearly answer three questions:

- Who owns the runtime?
- Where does the AI execute?
- Who controls updates and ongoing operations?
These decisions will vary depending on whether the solution resides in the customer's or publisher's tenant, based on the attributes associated with the following transactable Marketplace offer types:

- SaaS offers, where the AI runtime lives in the publisher's environment and architecture must support multi-tenancy, strong isolation, and centralized operations
- Container offers, where workloads run in the customer's Kubernetes environment and architecture emphasizes portability and clear operational assumptions
- Virtual Machine offers, where preconfigured environments run in the customer's subscription and architecture is more tightly coupled to the OS and infrastructure footprint
- Azure Managed Applications, where the solution is deployed into the customer's subscription and architecture must balance customer control with defined lifecycle boundaries

What makes the managed application model distinctive is its flexibility: an Azure Managed Application can package containers, virtual machines, or a combination of both, making it a natural fit for solutions that require customer-controlled infrastructure without sacrificing publisher-managed operations. The packaging choice shapes the underlying architecture, but the managed application wrapper is what defines how the solution is deployed, updated, and governed within the customer's environment.

Architecture decisions made this way naturally reinforce Marketplace requirements and reduce certification and operational friction later. Key factors that benefit from early alignment include:

- Roles and responsibilities, such as who operates the AI runtime and who is responsible for uptime, patching, scaling, and ongoing operations
- Proximity to data, particularly for AI solutions that rely on customer-specific or proprietary data, where placement affects performance, data movement, and compliance

Core architectural building blocks of AI apps

Designing a production-ready AI app starts with treating the solution as a system, not a single service.
AI apps, and especially agent-based solutions, are composed of multiple cooperating layers that together enable reasoning, action, and safe operation at scale. At a high level, most production-ready AI apps include the following building blocks:

- Interaction layer, which serves as the entry point for users or systems and is responsible for authentication, request shaping, and consistent responses
- Orchestration layer, which coordinates reasoning, tool selection, workflow execution, and retrieval-augmented generation (RAG) flows across multi-step interactions
- Model endpoints, which provide inference and generation capabilities and introduce distinct latency, cost, and dependency characteristics
- Data sources, including vector stores, operational data, documents, and logs that the AI system reasons over
- Control planes, such as identity, configuration, policy enforcement, feature flags, and secrets management, which govern behavior without redeploying core logic
- Observability, which enables tracing, monitoring, and diagnosis of agent decisions, actions, and outcomes
- Networking, which connects components using a zero-trust posture where every call is authenticated and outbound access is explicitly controlled

Together, these components form the foundation of most Marketplace-ready AI architectures. How they are composed, and where boundaries are drawn, varies by offer type, tenancy model, and customer requirements. Specific services, patterns, and implementation guidance for each layer are explored later in the series.

Tenancy design choices as an early architectural decision

One of the earliest and most consequential architectural decisions is where the AI solution is hosted. Does it run in the publisher's tenant, or is it deployed into the customer's tenant? This choice establishes foundational boundaries and is difficult to change later without significant redesign.
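In a publisher-hosted, multi-tenant design, logical isolation means every data access path is scoped to a tenant before any record is touched, and usage is attributed per tenant. A minimal sketch of the idea (class and method names are illustrative, not a prescribed implementation):

```python
class TenantScopedStore:
    """Toy multi-tenant store: records live in per-tenant partitions,
    every read and write is scoped by tenant ID, and operations are
    counted per tenant for cost attribution. Illustrative only."""

    def __init__(self) -> None:
        self._partitions: dict[str, dict[str, object]] = {}
        self._usage: dict[str, int] = {}

    def put(self, tenant_id: str, key: str, value: object) -> None:
        self._partitions.setdefault(tenant_id, {})[key] = value
        self._usage[tenant_id] = self._usage.get(tenant_id, 0) + 1

    def get(self, tenant_id: str, key: str) -> object:
        # Lookups only ever see the caller's own partition, so one
        # tenant's keys are invisible to another tenant by construction.
        partition = self._partitions.get(tenant_id, {})
        if key not in partition:
            raise KeyError(f"{key!r} not found for tenant {tenant_id!r}")
        self._usage[tenant_id] = self._usage.get(tenant_id, 0) + 1
        return partition[key]

    def usage(self, tenant_id: str) -> int:
        """Operations attributed to a tenant, e.g. for metering or chargeback."""
        return self._usage.get(tenant_id, 0)
```

In a real solution the same scoping would be enforced in the data tier itself (partition keys, row-level security, per-tenant containers), but the principle is identical: tenant identity is a mandatory parameter on every access path, never an afterthought.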
If the solution runs in the publisher's tenant, it is inherently multi-tenant and must be designed with strong logical isolation across customers. If it runs in the customer's tenant, deployments are typically single-tenant by default, with isolation provided through infrastructure boundaries. Many Marketplace AI apps fall between these extremes, making it essential to define the tenancy model early.

Common tenancy approaches include:

- Publisher-hosted, multi-tenant solutions, where a shared AI runtime serves multiple customers and requires strict isolation of customer data, inference requests, identity, and cost attribution
- Customer-hosted, single-tenant deployments, where each customer operates an isolated instance within their own Azure subscription, often preferred for regulated or tightly controlled environments
- Hybrid models, which combine centralized AI services with customer-hosted data or execution layers and require carefully defined trust and access boundaries

Tenancy decisions influence several core architectural dimensions, including:

- Identity and access boundaries, which define how users and agents authenticate and act across tenants
- Data isolation, including how customer data is stored, processed, and protected
- Model usage patterns, such as shared models versus tenant-specific models
- Cost allocation and scale, including how usage is tracked and attributed per customer

These considerations are not implementation details; they shape how the AI system behaves, scales, and is governed in production. Reference architecture guidance for multi-tenant AI and machine learning solutions in the Azure Architecture Center explores these tradeoffs in more detail.

Understanding your customer's needs

Designing a production-ready AI architecture starts with understanding the environment your customers expect your solution to operate in. Marketplace customers vary widely in their security posture, compliance obligations, operational practices, and tolerance for change.
Architectures that reflect those realities reduce friction during onboarding, certification, and long-term operation. Key customer considerations that shape architecture include:

- Security and compliance expectations, such as industry regulations, internal governance policies, or regional data requirements
- Target environments, including whether customers expect solutions to run in their own Azure subscription or are comfortable consuming centrally hosted services
- Change and outage windows, where operational constraints or seasonal restrictions require predictable and controlled updates

Architectural alignment with customer needs is not about designing for every edge case. It is about making intentional tradeoffs that reflect how customers will deploy, operate, and depend on your AI solution in production. Specific security controls, compliance enforcement mechanisms, and operational policies are explored later in the series. This section establishes the architectural mindset required to support them.

Separating environments for safe iteration

Production AI systems must evolve continuously while remaining stable for customers. Separating environments is how publishers enable safe iteration without destabilizing live usage, and how customers maintain confidence when adopting and operating AI solutions in their own environments.
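One way to make environments reproducible is to treat prompts, model choices, and related settings as versioned, validated configuration per environment rather than values baked into code. A minimal sketch, assuming a simple JSON layout; the file layout and key names are illustrative assumptions:

```python
import json
from pathlib import Path

# Keys every environment config must define; illustrative, not exhaustive.
REQUIRED_KEYS = ("prompt_version", "system_prompt", "model")


def validate_prompt_config(config: dict, source: str = "<inline>") -> dict:
    """Fail fast on incomplete configs so gaps surface at deployment
    time rather than in production conversations."""
    for key in REQUIRED_KEYS:
        if key not in config:
            raise KeyError(f"{source} is missing required key {key!r}")
    return config


def load_prompt_config(environment: str, config_dir: Path) -> dict:
    """Load the config for one environment (e.g. dev / staging / prod)
    from a versioned file such as config/staging.json."""
    path = config_dir / f"{environment}.json"
    return validate_prompt_config(
        json.loads(path.read_text(encoding="utf-8")), source=path.name
    )
```

Because the same loader runs in every environment, promoting a change from staging to production becomes a config change that can be reviewed and rolled back, not a redeployment of orchestration code.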
From the publisher's perspective, environment separation enables:

- Iteration on prompts, models, and orchestration logic without impacting production customers
- Validation of behavior changes before rollout, especially for AI-driven systems where small changes can produce materially different outcomes
- Controlled release strategies that reduce operational risk

From the customer's perspective, environment separation shapes how the solution fits into their own development and operational practices:

- Where the solution is deployed across development, staging, and production environments
- How deployments are repeated or promoted, particularly when the solution runs in the customer's tenant
- Whether environments can be recreated predictably, or whether customers are forced to manually reconfigure deployments with each iteration

When AI solutions are deployed into the customer's tenant, environment design becomes especially important. Customers should not be required to reverse-engineer deployment logic, recreate environments from scratch, or re-establish trust boundaries every time the solution evolves. These concerns should be addressed architecturally, not deferred to operational workarounds.

Environment separation is therefore not just a DevOps choice; it is an architectural decision. It influences identity boundaries, deployment topology, validation strategies, and the shared operational contract between publisher and customer.

Designing for AI-specific scalability patterns

AI workloads do not scale like traditional web or CRUD-based applications. While front-end and API layers may follow familiar scaling patterns, AI systems introduce behaviors that require different architectural assumptions.
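One common shape for absorbing bursty inference demand is admission control in front of model endpoints while other layers scale horizontally. A token-bucket limiter is a minimal sketch of this idea; capacity and refill values are assumptions, and the clock is injectable for testing:

```python
import time


class TokenBucket:
    """Admission control for a model endpoint: requests spend tokens,
    tokens refill at a steady rate, and bursts beyond `capacity` are
    rejected (or queued by the caller) instead of overloading the
    endpoint. Illustrative sketch, not a production limiter."""

    def __init__(self, capacity: int, refill_per_sec: float, clock=time.monotonic):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.clock = clock
        self._last = clock()

    def try_acquire(self, cost: float = 1.0) -> bool:
        now = self.clock()
        # Refill based on elapsed time, capped at bucket capacity.
        self.tokens = min(
            self.capacity, self.tokens + (now - self._last) * self.refill_per_sec
        )
        self._last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

What happens when `try_acquire` returns `False` is a policy decision that belongs in the orchestration layer: queue the request, shed load, or fall back to a cheaper model.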
Production-ready AI architectures must account for:

- Bursty inference demand, where usage can spike unpredictably based on user behavior or downstream automation
- Long-running or multi-step agent workflows, which may span tools, data sources, and time
- Model-driven latency and cost characteristics, which influence throughput and responsiveness independently of application logic

As a result, scalability decisions often vary by layer. Horizontal scaling is typically most effective in interaction, orchestration, and retrieval components, while model endpoints may require separate capacity planning, isolation, or throttling strategies.

Treating identity as an architectural boundary

Identity is foundational to Marketplace AI apps, and architecture must plan for it explicitly. Identity decisions define trust boundaries across users, agents, and services, and shape how the solution scales, secures access, and meets compliance requirements.

Key architectural considerations include:

- Microsoft Entra ID as a foundation, where identity is treated as a core control plane rather than a late-stage integration
- How users sign in, including:
  - Their own corporate Microsoft Entra ID tenant
  - B2B scenarios where one Entra ID tenant trusts another
  - B2C identity providers for customer-facing experiences
- How tenants authenticate, particularly in multi-tenant or cross-organization scenarios
- How AI agents act on behalf of users, including delegated access, authorization scope, and auditability
- How services communicate securely, using a zero-trust posture where every call is authenticated and authorized

Treating identity as an architectural boundary helps ensure that trust relationships remain explicit, enforceable, and consistent across tenants and environments. This foundation is critical for supporting secure operation, compliance enforcement, and future tenant-linking scenarios.

Designing for observability and auditability

Production-ready AI apps must be observable and auditable by design.
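A useful starting point is to give every agent action a structured, tenant-attributed audit record at the moment it executes. A minimal sketch; field names are illustrative, and a real system would ship these records to a durable, tenant-aware log sink:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)
class AgentAuditRecord:
    """One auditable agent action: who initiated it, which tenant it
    belongs to, what the agent did, against what, and the outcome."""
    tenant_id: str
    initiating_user: str
    action: str        # e.g. "tool_invocation"
    target: str        # e.g. the tool or downstream API called
    outcome: str       # e.g. "success", "denied", "error"
    timestamp: str


def audit_line(tenant_id: str, user: str, action: str, target: str, outcome: str) -> str:
    """Serialize one action as a JSON line, ready for a log pipeline."""
    record = AgentAuditRecord(
        tenant_id=tenant_id,
        initiating_user=user,
        action=action,
        target=target,
        outcome=outcome,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))
```

Emitting the record as structured JSON keeps it queryable per tenant without exposing one tenant's actions to another, and makes "who did what, on whose behalf" answerable during a security review.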
Marketplace customers expect visibility into how systems behave in production, and publishers need clear insight to diagnose issues, operate reliably, and meet enterprise trust and compliance expectations.

Key architectural considerations include:

- End-to-end observability, covering user interactions, agent reasoning steps, tool invocations, and downstream service calls
- Clear audit trails, capturing who initiated an action, what the AI system did, and how decisions were executed, especially when agents act on behalf of users
- Tenant-aware visibility, ensuring logs, metrics, and traces are correctly attributed without exposing data across tenants
- Operational transparency, enabling effective troubleshooting, incident response, and continuous improvement without ad-hoc instrumentation

For AI systems, observability goes beyond infrastructure health. It must also account for AI-specific behavior, such as prompt execution, model selection, retrieval outcomes, and tool usage. Without this visibility, diagnosing failures, validating changes, or explaining outcomes becomes difficult in real customer environments.

Auditability is equally critical. Identity, access, and action histories must be traceable to support security reviews, regulatory obligations, and customer trust, particularly in regulated or enterprise settings.

Common architectural pitfalls in Marketplace AI apps

Even experienced teams run into similar challenges when moving from an AI prototype to a production-ready Marketplace solution. The following pitfalls often surface when architectural decisions are deferred or made implicitly.
Common pitfalls include:

- Treating AI as a single service instead of a system, where model inference is implemented without considering orchestration, data access, identity, observability, and operational boundaries
- Hard-coding tenant assumptions, such as assuming a single tenant, identity model, or deployment topology, which becomes difficult to unwind as customer requirements diversify
- Not planning for a resilient model strategy, leaving the architecture fragile when model versions change, capabilities evolve, or providers introduce breaking behavior
- Assuming data lives within the same boundary as the solution, when in practice it may reside in a different tenant, subscription, or control plane
- Tightly coupling prompt logic to application code, making it harder to iterate on AI behavior, validate changes, or manage risk without full redeployments
- Assuming issues can be fixed after go-live, which underestimates the cost and complexity of changing architecture once customers, subscriptions, and trust relationships are in place

These pitfalls rarely reflect a lack of technical skill. They typically emerge when architectural decisions are postponed in favor of speed, or when AI behavior is treated as an isolated concern rather than part of a production system.

What's next in the journey

The architectural decisions made early, around offer type, tenancy, identity, environments, and observability, establish the foundation on which everything else is built. When these choices are intentional, they reduce friction as the solution evolves, scales, and adapts to real customer needs. The next set of posts builds on this foundation, exploring different dimensions of operating, securing, and evolving Marketplace AI apps in production.
Key resources

- See curated, step-by-step guidance to help you build, publish, or sell your app or agent (no matter where you start) in App Advisor
- The Quick-Start Development Toolkit can connect you with code templates for AI solution patterns
- Microsoft AI Envisioning Day Events: How to build and publish AI apps and agents for Microsoft Marketplace
- Get over $126K USD in benefits and technical consultations to help you replicate and publish your app with ISV Success

OneDrive, Assignments, and Learning Accelerators are now Generally Available in Microsoft 365 LTI
Enhance your LMS with the power of Microsoft 365

Today, Microsoft is announcing general availability of the OneDrive and Assignments (including Learning Accelerators) experiences as part of Microsoft 365 LTI®, bringing seamless integration of Microsoft 365 tools into learning platforms to simplify workflows and enhance teaching and learning, whether you're using Canvas, Schoology, Brightspace, Blackboard, Moodle, or other LMS platforms.

Microsoft 365 LTI makes it easier than ever for educators and students to leverage the full suite of Microsoft 365 Education tools within existing workflows. The OneDrive, Assignments, and Learning Accelerators (Reading Progress, Speaker Progress, and more) experiences previewed in July with the new Microsoft 365 LTI build on all the capabilities of the classic tools and add new features in one convenient tool. Educators and students benefit from a more seamless and up-to-date LMS experience with Microsoft 365 Education, and can teach and learn with confidence knowing that Microsoft 365 LTI is backed by Microsoft's industry-leading security and compliance tools.

Deploy and access the new Microsoft 365 LTI in your LMS today with the overview and deployment guides.

IMPORTANT: If you have deployed the Microsoft 365 LTI previously, you do not need to redeploy it in your LMS. However, we do recommend reviewing the deployment guide for any new recommendations or deployment guidance, and revisiting your Admin Settings to check your M365 Admin Consent status and review the apps your educators have access to in their courses.

Classic LTI retirements

Microsoft OneDrive LTI, OneNote LTI, Teams Assignments LTI, and Reflect LTI are set to retire on September 17, 2026. The Microsoft 365 LTI replaces these separate LTI tools going forward, and we encourage you to start proactively migrating your courses this term. You will find migration guidance in our admin documentation to help you take steps now.
We will continue to provide additional migration guidance as necessary.

OneDrive and Microsoft 365 files with embedded editors and new placements

The new Microsoft 365 LTI tool expands beyond the capabilities of the existing OneDrive LTI tool with support for Word, PowerPoint, and Excel, including Microsoft 365 Copilot. Within your LMS, you can embed or link documents, videos, PDFs, and images into course materials such as assignments, discussions, modules, announcements, and more. Microsoft 365 LTI orchestrates management of permissions to prevent oversharing, with dedicated course-level storage to support proper document lifecycle management, assignment workflows, and use of Microsoft 365 Copilot. With Canvas, Collaborations are supported, and students can edit and submit Microsoft 365 documents as an external tool assignment without leaving your LMS. This functionality replaces the classic Microsoft OneDrive LTI, which will retire September 17, 2026.

Learning Accelerators and AI-enhanced assignments available in your LMS - without the requirement for Microsoft Teams

With Assignments in Microsoft 365 LTI, you can use Learning Accelerators, multiple-document submissions, AI rubric and instructions generation, AI-assisted feedback, auto-graded Forms, and other assignment capabilities directly within your learning management system (LMS), without the need to create and sync a Microsoft Team for your class. Assignments in Microsoft 365 LTI no longer require Teams access, enabling more LMS users to benefit from AI-enhanced experiences that were formerly exclusive to Microsoft Teams for Education. Assignments can be created, managed, completed, and graded without leaving your LMS, with grades and feedback available to sync automatically to the LMS gradebook.
New: Improve student speaking and presenting skills in 13 languages with Speaker Progress

Along with exciting new AI feedback features for educators, students can practice for in-class presentations or save class time by presenting and turning in their presentations for grading. This capability is included automatically in the new Microsoft 365 LTI tool. Existing, Teams-based assignments will continue to work and can be copied to new courses, so no migration is necessary. The assignments functionality in Microsoft 365 LTI replaces the classic Teams Assignments, which will retire September 17, 2026.

Dive into the new Microsoft 365 LTI to streamline your LMS experience

We are bringing our Microsoft 365 Education capabilities for learning management systems together into a single, unified tool to streamline the user experience. Educators will be able to access Learning Accelerators, Reflect, OneDrive, Teams, and more in their LMS courses without having to enable multiple tools separately, and without overcrowding the menus where LTI tools surface. Whether adding content to a module, creating an assignment, or scheduling a meeting for a class, you will be able to easily access Microsoft 365 Education features directly in your LMS workflow.

Microsoft 365 LTI is available for supported LMS platforms, including Canvas by Instructure, PowerSchool Schoology Learning, Blackboard by Anthology, D2L/Brightspace, Moodle™, and any LTI 1.3 Advantage compliant platform.

Migration guidance and tools

Guidance for migrating users from the classic LTI tools to the Microsoft 365 LTI can be found in our First Time Configuration guide. We strongly recommend guiding users to the new experiences for OneDrive, Assignments, Reflect, and OneNote Class Notebooks in the Microsoft 365 LTI, as the classic experiences are set to retire on September 17th, 2026.
We are working on additional guidance to help with migration of existing content ahead of the classic LTI retirements, and more information will be available soon.

Compliance and regulatory resources

Visit the Microsoft Service Trust Portal to learn how Microsoft cloud services protect your data, and how you can manage cloud data security and compliance for your organization. You will find our latest HECVAT assessment along with other resources for Microsoft 365 LTI and all Microsoft apps and services.

For more information, and to keep up with future product announcements

Please visit the Microsoft Tech Community Education Blog and subscribe to keep up with what's new in Microsoft Education. We also hold office hours every first and third Thursday, where many LMS + Microsoft 365 customers come to discuss scenarios and get assistance from peers, so please join us.

Microsoft 365 LTI Office Hours: 1st and 3rd Thursday of each month at 11am EST
Join link: https://aka.ms/LTIOfficeHours

How to get help or send feedback

For any issues deploying the integration, our Education Support team is here to help. Please visit https://aka.ms/EduSupport

Once deployed, there are links to Contact Support and Send Feedback right within the app. These can be found in the user voice menu in the upper right on any view that appears within the LMS. Learn more about Microsoft feedback for your organization.

Learning Tools Interoperability® (LTI®) is a trademark of the 1EdTech Consortium, Inc. (1edtech.org)
The word Moodle and associated Moodle logos are trademarks or registered trademarks of Moodle Pty Ltd or its related affiliates.