Join Marketplace at Microsoft Build!
The Marketplace team will be at Microsoft Build, June 2-3 in San Francisco, CA! We hope you'll join us in the Hub to meet with experts and learn how to build, publish, and monetize apps and agents with Microsoft Marketplace. "Favorite" the Marketplace lightning talk, which covers the start-to-finish publishing process and highlights benefits and incentives available from Microsoft for software developers: Monetize apps and agents with Microsoft Marketplace. Check out the full catalog to explore sessions across topics including Cloud Platform & Data, Developer Tools & Frameworks, Apps & Agents, Model Training, Windows, and Responsible AI. Can't make it to San Francisco? You can always register for the digital experience. See you there!

Why governance is essential for scaling AI apps and agents in Microsoft Marketplace
As AI apps and agents become more autonomous and integrated across enterprise environments, governance is no longer a secondary consideration—it is foundational to building solutions customers can confidently adopt and operate at scale. In this Microsoft Marketplace blog, learn how governance transforms powerful AI capabilities into controlled, accountable solutions by establishing responsibility for system actions, defining acceptable behavior boundaries, and enabling ongoing review and auditability. The article outlines how effective governance for AI apps and agents spans three core dimensions—policy, enforcement, and evidence—ensuring that AI behavior in production environments remains intentional, explainable, and aligned with customer expectations.

For software development companies building and publishing AI-powered solutions through Microsoft Marketplace, readiness is increasingly defined not by raw technical capability, but by control, accountability, and trust in real-world deployment scenarios. If you’re designing, publishing, or scaling AI solutions through Microsoft Marketplace, this guidance can help you strengthen enterprise trust and ensure your apps and agents are built for long-term operational success. Read the full article: Governing AI apps and agents for Marketplace | Microsoft Community Hub

Quality and evaluation framework for successful AI apps and agents in Microsoft Marketplace
Why quality in AI is different — and why it matters for Marketplace

Traditional software quality spans many dimensions — from performance and reliability to correctness, fault tolerance, and adherence to specifications — but once those characteristics are specified and validated, system behavior is generally stable and repeatable.

AI apps and agents change this equation. Their behavior is inherently non-deterministic and context‑dependent. The same prompt can produce different responses depending on model version, retrieval context, prior interactions, or environmental conditions. For agentic systems, quality also depends on reasoning paths, tool selection, and how decisions unfold across multiple steps — not just on the final output. This means an AI app can appear functional while still falling short on quality: producing responses that are inconsistent, misleading, misaligned with intent, or unsafe in edge cases. Without a structured evaluation framework, these gaps often surface only in production — in customer environments, after trust has already been extended.

For Microsoft Marketplace, this distinction matters. Buyers expect AI apps and agents to behave predictably, operate within clear boundaries, and remain fit for purpose as they scale. Quality measurement is what turns those expectations into something observable — and that visibility is what determines Marketplace readiness. This post is part of a series on building and publishing well-architected AI apps and agents on Microsoft Marketplace.

How quality measurement shapes Marketplace readiness

AI apps and agents that can demonstrate quality — with documented evaluation frameworks, defined release criteria, and evidence of ongoing measurement — are easier to evaluate, trust, and adopt.
Quality evidence reduces friction during Marketplace review, clarifies expectations during customer onboarding, and supports long-term confidence in production. When quality is visible and traceable, the conversation shifts from "does this work?" to "how do we scale it?" — which is exactly where publishers want to be. Publishers who treat quality as a first-class discipline build the foundation for safe iteration, customer retention, and sustainable growth through Microsoft Marketplace. That foundation is built through the decisions, frameworks, and evaluation practices established long before a solution reaches review.

What "quality" means for AI apps and agents

Quality for AI apps and agents is not a single metric — it spans interconnected dimensions that together define whether a system is doing what it was built to do, for the people it was built to serve. The HAX Design Library — Microsoft's collection of human-AI interaction design patterns — offers practical guidance for each one. These dimensions must be defined before evaluation begins. You can only measure what you have first described.

Accuracy and relevance — does the output reflect the right answer, grounded in the right context? HAX patterns Make clear what the system can do (G1) and Notify users when the AI is uncertain (G10) help publishers design systems where accuracy is visible and outputs are understood in the right context — not treated as universally authoritative.

Safety and alignment — does the output stay within intended use, without harmful, biased, or policy-violating content? HAX patterns Mitigate social biases (G6) and Support efficient correction (G9) help ensure outputs stay within acceptable boundaries — and that users can identify and address issues before they cause downstream harm.

Consistency and reliability — does the system behave predictably across users, sessions, and environments?
HAX patterns Remember recent interactions (G12) and Notify users about changes (G18) keep behavior coherent within sessions and ensure updates to the model or prompts are never silently introduced.

Fitness for purpose — does the system do what it was designed to do, for the people it was designed to serve, in the conditions it will actually operate in? HAX patterns Make clear how well the system can do what it does (G2) and Act on the user's context and goals (G4) ensure the system responds to what users actually need — not just what they literally typed.

These dimensions work together — and gaps in any one of them will surface in production, often in ways that are difficult to trace without a deliberate evaluation framework.

Designing an evaluation framework before you ship

Evaluation frameworks should be built alongside the solution; gaps discovered at the end are harder and costlier to close. The discipline mirrors the design-in approach that applies to security and governance: decisions made early shape what is measurable, what is improvable, and what is ready to ship. A well-structured evaluation framework defines five things:

What to measure — the quality dimensions that matter most for this solution and its intended use cases. For AI apps and agents, this typically includes task adherence, response coherence, groundedness, and safety — alongside the fitness-for-purpose dimensions defined in the previous section.

How to measure it — the methods, tools, and benchmarks used to assess quality consistently. Effective evaluation combines AI-assisted evaluators (which use a model as a judge to score outputs), rule-based evaluators (which apply deterministic logic), and human review for edge cases and safety-relevant responses that automated methods cannot fully capture.

Who evaluates — the right combination of automated metrics, human review, and structured customer feedback.
No single method is sufficient; the framework defines how each is applied and when human judgment takes precedence.

When to evaluate — at defined milestones: during development to establish a baseline, pre-release to validate against acceptance thresholds, at rollout to catch regression, and continuously in production to detect drift as models, prompts, and data evolve.

What triggers re-evaluation — model updates, prompt changes, new data sources, tool additions, or meaningful shifts in customer usage patterns. Re-evaluation should be a scheduled and triggered discipline, not an ad hoc response to visible failures.

The framework becomes a shared artifact — used by the publisher to release safely, and by customers to understand what quality commitments they are adopting when they deploy the solution in their environment. See Evaluate your AI agents - Microsoft Foundry | Microsoft Learn.

Evaluation methods for AI apps and agents

Quality must be assessed across complementary approaches — each designed to surface a different category of risk, at a different stage of the solution lifecycle.

Automated metric evaluation — evaluators assess agent responses against defined criteria at scale. Some use AI models as judges to score outputs like task adherence, coherence, and groundedness; others apply deterministic rules or text similarity algorithms. Automated evaluation is most effective when acceptance thresholds are defined upfront — for example, a minimum task adherence pass rate before a release proceeds.

Safety evaluation — a dedicated evaluation category that identifies potential content risks, policy violations, and harmful outputs in generated responses. Safety evaluators should run alongside quality evaluators, not as a separate afterthought.

Human-in-the-loop evaluation — structured expert review of edge cases, borderline outputs, and safety-relevant responses that automated metrics cannot fully capture.
Human judgment remains essential for interpreting context, intent, and impact.

Red-teaming and adversarial testing — probing the system with challenging, unexpected, or intentionally misused inputs (including prompt injection attempts and tool misuse) to surface failure modes before customers encounter them. Microsoft provides dedicated AI red teaming guidance for agent-based systems.

Customer feedback loops — structured collection of real-world signals from users interacting with the system in production. Production feedback closes the gap between what was tested and what customers actually experience.

Each method has a distinct role. The evaluation framework defines when and how each is applied — and which results are required before a release proceeds, a change is accepted, or a capability is expanded.

Defining release criteria and ongoing quality gates

Quality evaluation only drives improvement when it is connected to clear release criteria. In an LLMOps model, those criteria are automated gates embedded directly into the CI/CD pipeline, applied consistently at every stage of the release cycle. In continuous integration (CI), automated evaluations run with every change — whether that change is a prompt update, a model version, a new tool, or a data source modification. CI gates catch regressions early, before they reach customers, by validating outputs against predefined quality thresholds for task adherence, coherence, groundedness, and safety. In continuous deployment (CD), quality gates determine whether a build is eligible to proceed.
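Concretely, a CI quality gate reduces to comparing aggregate evaluation scores against predefined minimums. A minimal sketch in Python, assuming scores have already been computed by upstream evaluators; the metric names and threshold values here are illustrative examples, not Marketplace requirements:

```python
# Illustrative CI quality gate: block a release when aggregate evaluation
# scores fall below predefined thresholds. Names and values are examples.

RELEASE_THRESHOLDS = {
    "task_adherence": 0.90,   # share of test cases judged on-task
    "coherence": 0.85,
    "groundedness": 0.90,
    "safety": 1.00,           # any safety failure blocks the release
}

def gate_release(scores: dict[str, float]) -> tuple[bool, list[str]]:
    """Return (passed, failures) for a candidate build's evaluation run."""
    failures = [
        f"{metric}: {scores.get(metric, 0.0):.2f} < {minimum:.2f}"
        for metric, minimum in RELEASE_THRESHOLDS.items()
        if scores.get(metric, 0.0) < minimum
    ]
    return (not failures, failures)

passed, failures = gate_release(
    {"task_adherence": 0.93, "coherence": 0.88, "groundedness": 0.91, "safety": 1.0}
)
print(passed)  # True: every metric meets its threshold
```

In a pipeline, a failed gate would fail the build step and surface the per-metric shortfalls, so regressions in a prompt, model version, or data source are caught before they reach customers.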
Release criteria should define:

Minimum acceptable thresholds for each quality dimension — a release does not proceed until those thresholds are met

Known failure modes that block release outright versus those that are tracked, monitored, and accepted within defined risk tolerances

Deployment constraints — conditions under which a release is paused, rolled back, or progressively expanded to a subset of users before full rollout

Ongoing evaluation must be scheduled and triggered. As models, prompts, tools, and customer usage patterns evolve, the baseline shifts. LLMOps treats re-evaluation as a continuous discipline: run evaluations, identify weak areas, adjust, and re-evaluate before changes propagate. This connects directly to governance. Quality evidence — the record of what was measured, when, and against what criteria — is part of the audit trail that makes AI behavior accountable, explainable, and trustworthy over time. For more on the governance foundation this builds on, see Governing AI apps and agents for Marketplace readiness.

Quality across the publisher-customer boundary

Clear quality ownership reduces friction at onboarding, builds confidence during operation, and protects both parties when behavior deviates. In the Marketplace context, quality is a shared responsibility — but the boundaries are distinct.
Publishers are responsible for:

Designing and running the evaluation framework during development and release

Defining quality dimensions and thresholds that reflect the solution's intended use

Providing customers with transparency into what quality means for this solution — without exposing proprietary prompts or internal logic

Customers are responsible for:

Validating that the solution performs appropriately in their specific environment, with their data and their users

Configuring feedback and monitoring mechanisms that surface quality signals in their tenant

Treating quality evaluation as a shared ongoing responsibility, not a one-time publisher guarantee

When both sides understand their role, quality stops being a handoff and becomes a foundation — one that supports adoption, sustains trust, and enables both parties to respond confidently when behavior shifts.

What's next in the journey

A strong quality framework sets the baseline — but keeping that quality visible as solutions scale is its own discipline. The next posts in this series explore what comes after the framework is in place: API resilience, performance optimization, and operational observability for AI apps and agents running in production environments.

Key resources

See curated, step-by-step guidance to help you build, publish, or sell your app or agent (no matter where you start) in App Advisor

Quick-Start Development Toolkit can connect you with code templates for AI solution patterns

Microsoft AI Envisioning Day Events

How to build and publish AI apps and agents for Microsoft Marketplace

Get over $126K USD in benefits and technical consultations to help you replicate and publish your app with ISV Success

Designing AI guardrails for apps and agents in Marketplace
Why guardrails are essential for AI apps and agents

AI apps and agents introduce capabilities that go beyond traditional software. They reason over natural language, interact with data across boundaries, and—in the case of agents—can take autonomous actions using tools and APIs. Without clearly defined guardrails, these capabilities can unintentionally compromise confidentiality, integrity, and availability, the foundational pillars of information security.

From a confidentiality perspective, AI systems often process sensitive prompts, contextual data, and outputs that may span customer tenants, subscriptions, or external systems. Guardrails ensure that data access is explicit, scoped, and enforced—rather than inferred through prompts or emergent model behavior.

From an availability perspective, AI apps and agents can fail in ways traditional software does not — such as runaway executions, uncontrolled chains of tool calls, or usage spikes that drive up cost and degrade service. Guardrails address this by setting limits on how the system executes, how often it calls tools, and how it behaves when something goes wrong.

For Marketplace-ready AI apps and agents, guardrails are foundational design elements that balance innovation with security, reliability, and responsible AI practices. By making behavioral boundaries explicit and enforceable, guardrails enable AI systems to operate safely at scale—meeting enterprise customer expectations and Marketplace requirements from day one. This post is part of a series on building and publishing well-architected AI apps and agents on Microsoft Marketplace.

Using the Open Worldwide Application Security Project (OWASP) GenAI Top 10 as a guardrail design lens

The OWASP GenAI Top 10 provides a practical framework for reasoning about AI‑specific risks that are not fully addressed by traditional application security models.
It helps teams identify where assumptions about trust, input handling, autonomy, and data access are most likely to break down in AI‑driven systems. However, not all OWASP risks apply equally to every AI app or agent. Their relevance depends on factors such as:

Agent autonomy, including whether the system can take actions without human approval

Data access patterns, especially cross‑tenant, cross‑subscription, or external data retrieval

Integration surface area, meaning the number and type of tools, APIs, and external systems the agent connects to

Because of this variability, OWASP should not be treated as a checklist to implement wholesale. Doing so can lead teams to over‑engineer controls in low‑risk areas while leaving critical gaps in places where autonomy, data movement, or tool execution create real exposure. Instead, OWASP is most effective when used as a design lens — to inform where guardrails are needed and what behaviors require explicit boundaries. Understanding risks and enforcing boundaries are two different things. OWASP tells you where to look; guardrails are what you actually build. The goal is not to eliminate all risk, but to use OWASP insights to design selective, intentional guardrails that align with the system's architecture, autonomy, and operating context.

Translating AI risks into architectural guardrails

The OWASP GenAI Top 10 helps identify where AI systems are vulnerable, but guardrails are what make those risks enforceable in practice. Guardrails are most effective when they are implemented as architectural constraints—designed into the system—rather than as runtime patches added after risky behavior appears. In AI apps and agents, many risks emerge not from a single component, but from how prompts, tools, data, and actions interact. Architectural guardrails establish clear boundaries around these interactions, ensuring that risky behavior is prevented by design rather than detected too late.
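To make "prevented by design" concrete, here is a minimal sketch of a guard that sits between the model and its tools: only registered tools may be invoked, and parameters are checked against the caller's scope before the call proceeds. The tool names and the tenant check are hypothetical examples, not a specific SDK:

```python
# Illustrative guardrail at the model-to-tool trust boundary.
# Tool names and scopes here are hypothetical examples.

ALLOWED_TOOLS = {
    "search_documents",   # read-only retrieval
    "create_ticket",      # scoped write
}

def authorize_tool_call(tool: str, params: dict, caller_tenant: str) -> None:
    """Raise instead of executing when a call falls outside the boundary."""
    if tool not in ALLOWED_TOOLS:
        # Block any attempt to reach an unregistered endpoint.
        raise PermissionError(f"tool '{tool}' is not on the allowlist")
    # The prompt must never be able to widen the data boundary:
    # reject parameters that reference a tenant other than the caller's.
    if params.get("tenant_id", caller_tenant) != caller_tenant:
        raise PermissionError("cross-tenant parameter rejected")

# A permitted call passes silently; anything else is stopped before execution.
authorize_tool_call("search_documents", {"tenant_id": "contoso"}, "contoso")
```

Because the check runs in the execution path rather than in the prompt, a prompt-injected instruction cannot talk the agent into calling an unregistered tool or reaching into another tenant's data.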
Common guardrail categories map naturally to the types of risks highlighted in OWASP:

Input and prompt constraints
Address risks such as prompt injection, system prompt leakage, and unintended instruction override by controlling how inputs are structured, validated, and combined with system context.

Action and tool‑use boundaries
Mitigate risks related to excessive agency and unintended actions by explicitly defining which tools an AI app or agent can invoke, under what conditions, and with what scope.

Data access restrictions
Reduce exposure to sensitive information disclosure and cross‑boundary leakage by enforcing identity‑aware, context‑aware access to data sources rather than relying on prompts to imply intent.

Output validation and moderation
Help contain risks such as misinformation, improper output handling, or policy violations by treating AI output as untrusted and subject to validation before it is acted on or returned to users.

What matters most is where these guardrails live in the architecture. Effective guardrails sit at trust boundaries—between users and models, models and tools, agents and data sources, and control planes and data planes. When guardrails are embedded at these boundaries, they can be applied consistently across environments, updates, and evolving AI capabilities. By translating identified risks into architectural guardrails, teams move from risk awareness to behavioral enforcement. This shift is foundational for building AI apps and agents that can operate safely, predictably, and at scale in Marketplace environments.
Runtime guardrails: enforcing boundaries as systems operate

For Marketplace publishers, the key distinction between monitoring and runtime guardrails is simple:

Monitoring tells you what happened after the fact.

Runtime guardrails are inline controls that can block, pause, throttle, or require approval before an action completes.

If you want prevention, the control has to sit in the execution path. At runtime, guardrails should constrain three areas:

Agent decision paths (prevent runaway autonomy)

Cap planning and execution. Limit the agent to a maximum number of steps per request, enforce a maximum wall‑clock time, and stop repeated loops.

Apply circuit breakers. Terminate execution after a specified number of tool failures or when downstream services return repeated throttling errors.

Require explicit escalation.
When the agent’s plan shifts from “read” to “write,” pause and require approval before continuing.

Tool invocation patterns (control what gets called, how, and with what inputs)

Enforce allowlists. Allow only approved tools and operations, and block any attempt to call unregistered endpoints.

Validate parameters. Reject tool calls that include unexpected tenant identifiers, subscription scopes, or resource paths.

Throttle and quota. Rate‑limit tool calls per tenant and per user, and cap token/tool usage to prevent cost spikes and degraded service.

Cross‑system actions (constrain outbound impact at the boundary you control)

Runtime guardrails cannot “reach into” external systems and stop independent agents operating elsewhere. What publishers can do is enforce policy at your solution’s outbound boundary: the tool adapter, connector, API gateway, or orchestration layer that your app or agent controls. Concrete examples include:

Block high‑risk operations by default (delete, approve, transfer, send) unless a human approves.

Restrict write operations to specific resources (only this resource group, only this SharePoint site, only these CRM entities).

Require idempotency keys and safe retries so repeated calls do not duplicate side effects.

Log every attempted cross‑system write with identity, scope, and outcome, and fail closed when policy checks cannot run.

Done well, runtime guardrails produce evidence, not just intent. They show reviewers that your AI app or agent enforces least privilege, prevents runaway execution, and limits blast radius—even when the model output is unpredictable.

Guardrails across data, identity, and autonomy boundaries

Guardrails don't work in silos. They are only effective when they align across the three core boundaries that shape how an AI app or agent operates — identity, data, and autonomy.
Guardrails must align across:

Identity boundaries (who the agent acts for) — represent the credentials the agent uses, the roles it assumes, and the permissions that flow from those identities. Without clear identity boundaries, agent actions can appear legitimate while quietly exceeding the authority that was actually intended.

Data boundaries (what the agent can see or retrieve) — ensuring access is governed by explicit authorization and context, not by what the model infers or assumes. A poorly scoped data boundary doesn't just create exposure — it creates exposure that is hard to detect until something goes wrong.

Autonomy boundaries (what the agent can decide or execute) — defining which actions require human approval, which can proceed automatically, and which are never permitted regardless of context. Autonomy without defined limits is one of the fastest ways for behavior to drift beyond what was ever intended.

When these boundaries are misaligned, the consequences are subtle but serious. An agent may act under the authority of one identity, access data scoped to another, and execute with broader autonomy than was ever granted — not because a single control failed, but because the boundaries were never reconciled with each other. This is how unintended privilege escalation happens in well-intentioned systems.

Balancing safety, usefulness, and customer trust

Getting guardrails right is less about adding controls and more about placing them well. Too restrictive, and legitimate workflows break down, safe autonomy shrinks, and the system becomes more burden than benefit. Too permissive, and the risks accumulate quietly — surfacing later as incidents, audit findings, or eroded customer trust.
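Striking that balance is easier when limits are explicit in code rather than implied by prompts. As one illustration, the runtime controls described earlier (a step cap, a tool-failure circuit breaker, and a pause for approval when the plan shifts to a write action) can be sketched as a wrapper around an agent loop; every interface and constant here is a hypothetical example:

```python
# Illustrative runtime guardrails around an agent loop. The planner,
# executor, and approval callback are hypothetical interfaces; the limits
# are example values a publisher would tune per solution.

MAX_STEPS = 8
MAX_TOOL_FAILURES = 3
WRITE_ACTIONS = {"delete", "send", "transfer", "approve"}

def run_with_guardrails(plan_next_step, execute, require_approval):
    """Run an agent loop under explicit runtime limits."""
    failures = 0
    for _ in range(MAX_STEPS):                     # cap autonomy per request
        action = plan_next_step()
        if action is None:                         # plan is complete
            return "done"
        if action["name"] in WRITE_ACTIONS and not require_approval(action):
            # Escalate when the plan shifts from read to write.
            return "paused: write action awaiting human approval"
        try:
            execute(action)
        except Exception:
            failures += 1
            if failures >= MAX_TOOL_FAILURES:      # circuit breaker
                return "stopped: repeated tool failures"
    return "stopped: step limit reached"
```

A wrapper like this returns an explainable outcome for every request, which doubles as the kind of evidence reviewers and customers can inspect.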
Effective guardrails share three characteristics that help strike that balance:

Transparent — customers and operators understand what the system can and cannot do, and why those limits exist

Context-aware — boundaries tighten or relax based on identity, environment, and risk, without blocking safe use

Adjustable — guardrails evolve as models and integrations change, without compromising the protections that matter most

When these characteristics are present, guardrails naturally reinforce the foundational principles of information security — protecting confidentiality through scoped data access, preserving integrity by constraining actions to authorized paths, and supporting availability by preventing runaway execution and cascading failures.

How guardrails support Marketplace readiness

For AI apps and agents in Microsoft Marketplace, guardrails are a practical enabler — not just of security, but of the entire Marketplace journey. They make complex AI systems easier to evaluate, certify, and operate at scale. Guardrails simplify three critical aspects of that journey:

Security and compliance review — explicit, architectural guardrails give reviewers something concrete to assess. Rather than relying on documentation or promises, behavior is observable and boundaries are enforceable from day one.

Customer onboarding and trust — when customers can see what an AI system can and cannot do, and how those limits are enforced, adoption decisions become easier and time to value shortens. Clarity is a competitive advantage.

Long-term operation and scale — as AI apps evolve and integrate with more systems, guardrails keep the blast radius contained and prevent hidden privilege escalation paths from forming. They are what makes growth manageable.

Marketplace-ready AI systems don't describe their guardrails — they demonstrate them.
That shift, from assurance to evidence, is what accelerates approvals, builds lasting customer trust, and positions an AI app or agent to scale with confidence.

What’s next in the journey

Guardrails establish the foundation for safe, predictable AI behavior — but they are only the beginning. The next phase extends these boundaries into governance, compliance, and day‑to‑day operations through policy definition, auditing, and lifecycle controls. Together, these mechanisms ensure that guardrails remain effective as AI apps and agents evolve, scale, and operate within enterprise environments.

Key resources

See curated, step-by-step guidance to help you build, publish, or sell your app or agent (no matter where you start) in App Advisor

Quick-Start Development Toolkit can connect you with code templates for AI solution patterns

Microsoft AI Envisioning Day Events

How to build and publish AI apps and agents for Microsoft Marketplace

Get over $126K USD in benefits and technical consultations to help you replicate and publish your app with ISV Success

Reshaping enterprise go-to-market with Microsoft Marketplace and ecosystem partnerships
As the pace of enterprise transformation accelerates, we’re seeing a fundamental shift in how organizations go to market—and it’s being powered by ecosystems, not silos. Partner1 recently hosted two industry events where we explored how Microsoft Marketplace is becoming a central engine for this change, helping partners unlock new routes to growth while making it easier for customers to discover, buy, and deploy innovative solutions. From AI-driven offerings to multiparty private offers and deeper channel integrations, Marketplace is redefining how partnerships come together to deliver end-to-end value. It’s not just about listing solutions—it’s about creating scalable, repeatable growth through a connected ecosystem that meets customers where and how they want to buy. If you’re thinking about how to evolve your go-to-market strategy, scale with partners, or tap into new revenue opportunities, this is a conversation you won’t want to miss. Read the full article to see how Marketplace and ecosystem partnerships are reshaping enterprise go-to-market—and what it means for your business: How Microsoft Marketplace and ecosystem partnerships are reshaping enterprise go-to-market | Microsoft Community Hub

Using App Advisor to build, publish, and grow in Microsoft Marketplace
In a recent partner office hour webinar, experts walked through how to build, publish, and optimize Marketplace offers using App Advisor, complete with a live demo of the platform in action. This article summarizes the key insights from that session and highlights how you can apply them to your own Marketplace journey. Navigating the path from idea to a successful Microsoft Marketplace offer can feel overwhelming. With countless resources, technical requirements, and go-to-market decisions to make, many software companies struggle to stay focused on what matters most. That’s where App Advisor comes in. App Advisor is a self-service, end-to-end guidance experience designed to help you build, publish, and grow high-performing marketplace offers, all with tailored, actionable recommendations based on your specific scenario. In this article, we break down how App Advisor supports every stage of your offer journey and how you can use it to drive better outcomes faster.

What is App Advisor and why it matters

App Advisor brings together curated Microsoft guidance, interactive tools, and AI-powered recommendations into a single experience. Instead of searching across scattered documentation, you get step-by-step, personalized guidance aligned to:

Your app type and architecture

Your chosen marketplace offer model

Your current stage in the journey

The result? Less guesswork, faster execution, and higher-quality marketplace offers.

The four key stages of the Marketplace journey

1. Discover: Validate the opportunity

Before building, it’s critical to understand the business value of Microsoft Marketplace. App Advisor helps you:

Evaluate the business opportunity of building on Microsoft technology

Understand the value of selling through marketplace

Explore available partner benefits and incentives

A standout feature here is the Marketplace Value Calculator, which allows you to:

Estimate revenue potential

Compare transaction costs vs. benefits

Generate a business case to share with leadership

2. Build: Accelerate development with confidence

In the build phase, App Advisor connects you with:

Code templates and reference architectures

SDKs, APIs, and technical documentation

Free tools, cloud credits, and development resources

A newly integrated Quick Start Development Toolkit helps you:

Match your app idea to proven development patterns

Access deployable code repositories

Speed up time-to-market with ready-to-use solutions

You also gain guidance on building secure, compliant, and well-architected applications, ensuring your solution is marketplace-ready from the start.

3. Publish: Simplify offer creation and launch

Publishing in Microsoft Marketplace involves multiple steps—from selecting the right offer type to configuring listings and pricing. App Advisor simplifies this by:

Helping you choose the right marketplace offer type

Guiding you through pricing models and selling options

Providing step-by-step Partner Center instructions

Highlighting common pitfalls and certification requirements

The platform dynamically adapts guidance based on your selections, ensuring you only see what’s relevant to your scenario.

4. Grow: Optimize, sell, and scale

Publishing your offer is just the beginning. Growth requires continuous optimization and active selling.
App Advisor supports this with:

AI-powered listing optimization
• Receive a quality score for your marketplace listing
• Get actionable recommendations across key areas like value proposition, content clarity, and visual assets
• Improve discoverability and customer engagement by connecting to your offer so you can edit today

Go-to-market and sales guidance
• Learn how to promote your offer effectively
• Use tracking tools and analytics to refine performance
• Access co-branded marketing assets and toolkits

Advanced selling strategies
• Create private offers with custom pricing and terms
• Partner with resellers and channel partners
• Scale through CSP and multi-party private offers

Unlock benefits and incentives
• Earn cloud credits and marketplace rewards
• Leverage co-sell opportunities with Microsoft sellers
• Accelerate deals with incentives and partner programs

The benefits of using App Advisor throughout the journey

Using App Advisor consistently across all stages delivers significant advantages:
✔ Personalized guidance: No more generic documentation; get recommendations tailored to your app, technology, and goals.
✔ Faster time to market: Reduce delays with step-by-step instructions, pre-built templates, and streamlined workflows.
✔ Higher-quality listings: AI-driven insights help you create compelling, optimized marketplace listings that convert.
✔ Improved discoverability: By following best practices, you increase your chances of being found and chosen by customers.
✔ Better sales outcomes: Access tools and strategies to promote, track, and close deals more effectively.
✔ Continuous optimization: Return anytime to refine your listing, improve performance, and unlock new growth opportunities.

App Advisor is valuable for everyone

Whether you're:
• A first-time publisher exploring marketplace opportunities
• An experienced software development company scaling multiple offers
• A marketing or sales stakeholder optimizing performance

App Advisor provides value at every level.
Watch the full webinar

👉 Be sure to watch the complete recorded session to see a detailed demo of App Advisor, showcasing how to navigate each stage and apply best practices in real time to get the most out of your Microsoft Marketplace journey: Partner office hour: Build, publish, and optimize marketplace offers with App Advisor

By integrating App Advisor into your workflow, you’re not just building marketplace offers; you’re building better, faster, and more successful ones.

Get started with App Advisor

App Advisor is publicly available and free to use. Partners are encouraged to explore the experience firsthand and share feedback directly within App Advisor.
• App Advisor: https://aka.ms/appadvisor
• Marketplace Community: https://aka.ms/community/marketplace

Securing AI apps and agents on Microsoft Marketplace
Why security must be designed in—not validated later

AI apps and agents expand the security surface beyond that of traditional applications. Prompt inputs, agent reasoning, tool execution, and downstream integrations introduce opportunities for misuse or unintended behavior when security assumptions are implicit. These risks surface quickly in production environments where AI systems interact with real users and data.

Deferring security decisions until late in the lifecycle often exposes architectural limitations that restrict where controls can be enforced. Retrofitting security after deployment is costly and can force tradeoffs that affect reliability, performance, or customer trust. Designing security early establishes clear boundaries, enables consistent enforcement, and reduces friction during Marketplace review, onboarding, and long-term operation. In the Marketplace context, security is a foundational requirement for trust and scale.

This post is part of a series on building and publishing well-architected AI apps and agents on Microsoft Marketplace.

How AI apps and agents expand the attack surface

Without a clear view of where trust boundaries exist and how behavior propagates across systems, security controls risk being applied too narrowly or too late. AI apps and agents introduce security risks that extend beyond those of traditional applications. AI systems accept open-ended prompts, reason dynamically, and often act autonomously across systems and data sources.
These interaction patterns expand the attack surface in several important ways:
• New trust boundaries introduced by prompts and inputs, where unstructured user input can influence reasoning and downstream actions
• Autonomous behavior, which increases the blast radius when authentication or authorization gaps exist
• Tool and integration execution, where agents interact with external APIs, plugins, and services across security domains
• Dynamic model responses, which can unintentionally expose sensitive data or amplify errors if guardrails are incomplete

Each API, plugin, or external dependency becomes a security choke point where identity validation, audit logging, and data handling must be enforced consistently—especially when AI systems span tenants, subscriptions, or ownership boundaries.

Using OWASP GenAI Top 10 as a threat lens

The OWASP GenAI Top 10 provides a practical, industry-recognized lens for identifying and categorizing AI-specific security threats that extend beyond traditional application risks. Rather than serving as a checklist, the OWASP GenAI Top 10 helps teams ask the right questions early in the design process. It highlights where assumptions about trust, input handling, autonomy, and data access can break down in AI-driven systems—often in ways that are difficult to detect after deployment.

Common risk categories highlighted by OWASP include:
• Prompt injection and manipulation, where malicious input influences agent behavior or downstream actions
• Sensitive data exposure, including leakage through prompts, responses, logs, or tool outputs
• Excessive agency, where agents are granted broader permissions or action scope than intended
• Insecure integrations, where tools, plugins, or external systems become unintended attack paths

Highly regulated industries, sensitive data domains, or mission-critical workloads may require additional risk assessment and security considerations that extend beyond the OWASP categories.
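Prompt injection, the first category above, is a useful example of turning a threat category into a concrete guardrail. The sketch below is a lightweight input screen only; the pattern list is an assumption and deliberately incomplete, and a real deployment would layer it with model-side guardrails and allow-listed tools rather than rely on keyword matching alone.

```python
import re

# Hedged illustration only: these patterns are assumptions, not an
# exhaustive defense against prompt injection.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (the )?(system|developer) prompt",
    r"reveal (the )?(system|hidden) prompt",
    r"you are now (in )?developer mode",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True when the input matches a known injection pattern."""
    text = user_input.lower()
    return any(re.search(pattern, text) for pattern in INJECTION_PATTERNS)
```

A screen like this is best treated as one signal among many: a match can trigger logging and review rather than a hard block, depending on the solution's risk tolerance.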
The OWASP GenAI Top 10 allows teams to connect high-level risks to architectural decisions by creating a shared vocabulary that sets the foundation for designing guardrails that are enforceable both at design time and at runtime.

Designing security guardrails into the architecture

Security guardrails must be designed into the architecture, shaping where and how policies are enforced, evaluated, and monitored throughout the solution lifecycle. Guardrails operate at two complementary layers:
• Design time, where architectural decisions determine what is possible, permitted, or blocked by default
• Runtime, where controls actively govern behavior as the AI app or agent interacts with users, data, and systems

When architectural boundaries are not defined early, teams often discover that critical controls—such as input validation, authorization checks, or action constraints—cannot be applied consistently without redesign:
• Tenancy boundaries, defining how isolation is enforced between customers, environments, or subscriptions
• Identity boundaries, governing how users, agents, and services authenticate and what actions they can perform
• Environment separation, limiting the blast radius of experimentation, updates, or failures
• Control planes, where configuration, policy, and behavior can be adjusted without redeploying core logic
• Data planes, controlling how data is accessed, processed, and moved across trust boundaries

Designing security guardrails into the architecture transforms security from reactive to preventative, while also reducing friction later in the Marketplace journey. Clear enforcement boundaries simplify review, clarify risk ownership, and enable AI apps and agents to evolve safely as capabilities and integrations expand.

Identity as a security boundary for AI apps and agents

Identity defines who can access the system, what actions can be taken, and which resources an AI app or agent is permitted to interact with across tenants, subscriptions, and environments.
Agents often act on behalf of users, invoke tools, and access downstream systems autonomously. Without clear identity boundaries, these actions can unintentionally bypass least-privilege controls or expand access beyond what users or customers expect.

Strong identity design shapes security in several key ways:
• Authentication and authorization, which determine how users, agents, and services establish trust and what operations they are allowed to perform
• Delegated access, which constrains agents to act with permissions tied to user intent and context
• Service-to-service trust, which ensures that all interactions between components are explicitly authenticated and authorized
• Auditability, which traces actions taken by agents back to identities, roles, and decisions

A zero-trust approach is essential in this context. Every request—whether initiated by a user, an agent, or a backend service—should be treated as untrusted until proven otherwise. Identity becomes the primary control plane for enforcing least privilege, limiting blast radius, and reducing downstream integration risk. This foundation not only improves security posture, but also supports compliance, simplifies Marketplace review, and enables AI apps and agents to scale safely as integrations and capabilities evolve.

Protecting data across boundaries

Data may reside in customer-owned tenants, subscriptions, or external systems, while the AI app or agent runs in a publisher-managed environment or a separate customer environment. Protecting data across boundaries requires teams to reason about more than storage location.
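The least-privilege and delegated-access principles above can be reduced to an explicit, deny-by-default check before every agent tool call. This is a minimal sketch: the tool names, scope strings, and `ToolCallDenied` error are hypothetical, and in practice the granted scopes would come from a validated Entra ID token rather than a plain set.

```python
class ToolCallDenied(Exception):
    """Raised when an agent lacks the scopes a tool requires."""

# Hypothetical tool-to-scope mapping for illustration.
REQUIRED_SCOPES = {
    "read_customer_record": {"crm.read"},
    "send_invoice": {"crm.read", "billing.write"},
}

def authorize_tool_call(tool: str, granted: set) -> None:
    """Deny-by-default authorization gate run before every tool invocation."""
    required = REQUIRED_SCOPES.get(tool)
    if required is None:
        raise ToolCallDenied(f"unknown tool: {tool}")  # unregistered tools are refused
    missing = required - granted
    if missing:
        raise ToolCallDenied(f"{tool} missing scopes: {sorted(missing)}")
```

The deny-by-default posture means an unknown tool is refused rather than silently allowed, which matches the zero-trust framing above.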
Several factors shape the security posture:
• Data ownership, including whether data is owned and controlled by the customer, the publisher, or a third party
• Boundary crossings, such as cross-tenant, cross-subscription, or cross-environment access patterns
• Data sensitivity, particularly for regulated, proprietary, or personally identifiable information
• Access duration and scope, ensuring data access is limited to the minimum required context and time

When these factors are implicit, AI systems can unintentionally broaden access through prompts, retrieval-augmented generation, or agent-initiated actions. This risk increases when agents autonomously select data sources or chain actions across multiple systems.

To mitigate these risks, access patterns must be explicit, auditable, and revocable. Data access should be treated as a continuous security decision, evaluated on every interaction rather than trusted by default once a connection exists. This approach aligns with zero-trust principles, where no data access is implicitly trusted and every request is validated based on identity, context, and intent.

Runtime protections and monitoring

For AI apps and agents, security does not end at deployment. In customer environments, these systems interact continuously with users, data, and external services, making runtime visibility and control essential to a strong security posture. AI behavior is also dynamic: the same prompt, context, or integration can produce different outcomes over time as models, data sources, and agent logic evolve, so monitoring must extend beyond infrastructure health to include behavioral signals that indicate misuse, drift, or unintended actions.
Effective runtime protections focus on five core capabilities:
• Vulnerability management, including regular scanning of the full solution to identify missing patches, insecure interfaces, and exposure points
• Observability, so agent decisions, actions, and outcomes can be traced and understood in production
• Behavioral monitoring, to detect abnormal patterns such as unexpected tool usage, unusual access paths, or excessive action frequency
• Containment and response, enabling rapid intervention when risky or unauthorized behavior is detected
• Forensics readiness, ensuring system-state replicability and chain-of-custody are retained to investigate what happened, why it happened, and what was impacted

Monitoring that only tracks availability or performance is insufficient. Runtime signals must provide enough context to explain not just what happened, but why an AI app or agent behaved the way it did, and which identities, data sources, or integrations were involved.

Equally important is integration with broader security event and incident management workflows. Runtime insights should flow into existing security operations so AI-related incidents can be triaged, investigated, and resolved alongside other enterprise security events—otherwise AI solutions risk becoming blind spots in a customer’s operating environment.

Preparing for incidents and abuse scenarios

No AI app or agent operates in a perfectly controlled environment. Once deployed, these systems are exposed to real users, unpredictable inputs, evolving data, and changing integrations. Preparing for incidents and abuse scenarios is therefore a core security requirement, not a contingency plan.

AI apps and agents introduce unique incident patterns compared to traditional software. In addition to infrastructure failures, teams must be prepared for prompt abuse, unintended agent actions, data exposure, and misuse of delegated access.
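One behavioral signal named above, excessive action frequency, can be sketched as a simple sliding-window guard. The threshold, window, and in-memory store are assumptions for illustration; a production system would persist these signals and route them into security operations workflows rather than just returning a boolean.

```python
from collections import defaultdict, deque

class ActionRateGuard:
    """Flags an agent whose action rate exceeds a per-window threshold."""

    def __init__(self, max_actions: int, window_seconds: float):
        self.max_actions = max_actions
        self.window = window_seconds
        self._history = defaultdict(deque)  # agent_id -> recent action timestamps

    def allow(self, agent_id: str, now: float) -> bool:
        """Record an action; return False when the agent should be contained."""
        history = self._history[agent_id]
        while history and now - history[0] > self.window:
            history.popleft()  # drop actions that fell outside the window
        if len(history) >= self.max_actions:
            return False  # excessive frequency: trigger containment/alerting
        history.append(now)
        return True
```

A containment hook would typically pause or constrain the agent when `allow` returns False, aligning with the containment-and-response capability above.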
Because agents may act autonomously or continuously, incidents can propagate quickly if safeguards and response paths are unclear. Effective incident readiness starts with acknowledging that:
• Abuse is not always malicious; misuse can stem from ambiguous prompts, unexpected context, or misunderstood capabilities
• Agent autonomy may increase impact, especially when actions span multiple systems or data sources
• Security incidents may be behavioral, not just technical, requiring interpretation of intent and outcomes

Preparing for these scenarios requires clearly defined response strategies that account for how AI systems behave in production. AI solutions should be designed to support pausing, constraining, or revoking agent capabilities when risk is detected, and to do so without destabilizing the broader system or customer environment.

Incident response must also align with customer expectations and regulatory obligations. Customers need confidence that AI-related issues will be handled transparently, proportionately, and in accordance with applicable security and privacy standards. Clear boundaries around responsibility, communication, and remediation help preserve trust when issues arise.

How security decisions shape Marketplace readiness

From initial review to customer adoption and long-term operation, security posture is a visible and consequential signal of readiness. AI apps and agents with clear boundaries—around identity, data access, autonomy, and runtime behavior—are easier to evaluate, onboard, and trust. When security assumptions are explicit, Marketplace review becomes more predictable, customer expectations are clearer, and operational risk is reduced. Ambiguous trust boundaries, implicit data access, or uncontrolled agent actions can introduce friction during review, delay onboarding, or undermine customer confidence after deployment.

Marketplace-ready security is therefore not about meeting a minimum bar. It is about enabling scale.
Well-designed security allows AI apps and agents to integrate into enterprise environments, align with customer governance models, and evolve safely as capabilities expand. When security is treated as a first-class architectural concern, it becomes an enabler rather than a blocker—supporting faster time to market, stronger customer trust, and sustainable growth through Microsoft Marketplace.

What’s next in the journey

Security for AI apps and agents is not a one-time decision, but an ongoing design discipline that evolves as systems, data, and customer expectations change. By establishing clear boundaries, embedding guardrails into the architecture, and preparing for real-world operation, publishers create a foundation that supports safe iteration, predictable behavior, and long-term trust. This mindset enables AI apps and agents to scale confidently within enterprise environments while meeting the expectations of customers adopting solutions through Microsoft Marketplace.

Key resources
• See curated, step-by-step guidance to help you build, publish, or sell your app or agent (no matter where you start) in App Advisor
• Quick-Start Development Toolkit
• Microsoft AI Envisioning Day Events
• How to build and publish AI apps and agents for Microsoft Marketplace
• Get over $126K USD in benefits and technical consultations to help you replicate and publish your app with ISV Success

AI apps and agents: choosing your Marketplace offer type
Choosing your Marketplace offer type is one of the earliest—and most consequential—decisions you’ll make when preparing an AI app for Microsoft Marketplace. It’s also one of the hardest to change later.

This post is the second in our Marketplace-ready AI app series. Its goal is not to push you toward a specific option, but to help you understand how Marketplace offer types map to different AI delivery models—so you can make an informed decision before architecture, security, and publishing work begins. This post is part of a series on building and publishing well-architected AI apps and agents on Microsoft Marketplace.

Why offer type is an important Marketplace decision

Offer type is more than a packaging choice. It defines the operating model of your AI app on Marketplace:
• How customers acquire your solution
• Where the AI runtime executes
• Which security and business boundaries apply to the AI solution and its contextual data
• Who operates and updates the system
• How transactions and billing are handled

Once an offer type is selected, it cannot be changed without creating a new offer. Teams that choose too quickly often discover later that the decision creates friction across architecture, security boundaries, or publishing requirements. Microsoft’s Publishing guide by offer type explains the structural differences between offer types and why this decision must be made up front.
How Marketplace offer types map to AI delivery models

AI apps differ from traditional software in a few critical ways:
• Contextual data may need to remain in a specific tenant or geography
• Agents may operate autonomously and continuously
• Control over infrastructure often determines trust and compliance
• How the solution is charged and monetized matters, including whether pricing is usage-based, metered, or subscription-driven (for example, billing per inference, per workflow execution, or as a flat monthly fee)
• The buyer’s technical capability matters, including the level of engineering expertise required to deploy and operate the solution (for example, SaaS is generally easier to consume, while container-based and managed application offers often require stronger cloud engineering and DevOps skills)

Marketplace offer types don’t describe features. They define responsibility boundaries—who controls the AI runtime, who owns the infrastructure, and where customer data is processed. At a high level, Marketplace supports four primary delivery models for AI solutions:
• SaaS
• Azure Managed Application
• Azure Container
• Virtual Machine

Each represents a different balance between publisher control and customer control. The sections below unpack what each model means in practice. Check out the interactive offer selection wizard in App Advisor for decision support.

SaaS offers for AI apps

SaaS is the most common model for AI apps and agents on Marketplace—and often the default starting point. With a SaaS offer, the AI service runs in the publisher’s Azure environment and is accessed by customers through a centralized endpoint. This model works well for:
• Multi-tenant AI platforms and agents
• Continuous model and prompt updates
• Rapid experimentation and iteration
• Usage-based or subscription billing

Because the service is centrally hosted, publishers retain full control over deployment, updates, and operational behavior.
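Because a centrally hosted SaaS offer serves many customers from one endpoint, tenant isolation is the publisher's responsibility. As a hedged sketch, a request gate that rejects tokens from tenants that have not been onboarded might look like the following; the `tid` claim name follows Entra ID token conventions, the helper and exception names are hypothetical, and token signature validation is assumed to have already happened via a standard JWT library.

```python
class TenantRejected(Exception):
    """Raised when a request arrives from a tenant that is not onboarded."""

def check_tenant(claims: dict, onboarded_tenants: set) -> str:
    """Return the tenant ID from validated token claims, or reject the request.

    `claims` is the decoded payload of an already-signature-validated token;
    `onboarded_tenants` is the publisher's record of admitted customer tenants.
    """
    tid = claims.get("tid")  # Entra ID tenant identifier claim
    if not tid or tid not in onboarded_tenants:
        raise TenantRejected(f"tenant {tid!r} is not onboarded")
    return tid
```

Running a check like this on every request (rather than only at sign-in) supports the per-interaction access decisions discussed in the security post above.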
For multi-tenant AI apps, this also means making early decisions about Microsoft Entra ID configuration—such as how customers are onboarded, whether access is granted through tenant-level consent or external identities, and how user identities, roles, and data are isolated across tenants to prevent cross-tenant access or data leakage. For official guidance, see the SaaS section of the Marketplace publishing guide and the AI agent overview, which describes SaaS-based agent deployments, as well as Plan a SaaS offer for Microsoft Marketplace.

Azure Managed Applications for AI solutions

In this model, the solution is deployed into the customer’s Azure subscription, not the publisher’s. There are two variants:
• Managed applications, where the publisher retains permissions to operate and update the deployed resources
• Solution templates, where the customer fully manages the deployment after installation

This model is a strong fit when AI workloads must run inside customer-controlled environments, such as:
• Regulated or sensitive data scenarios
• Customer-owned networking and identity boundaries
• Infrastructure-heavy AI solutions that can’t be centralized
• Willingness or need on the part of the customer or IT team to tailor the app to the end customer’s specific environment

Managed Applications sit between SaaS and fully customer-run deployments. They offer more customer control than SaaS, while still allowing publishers to manage lifecycle aspects when appropriate. Marketplace guidance for Azure Applications is covered in the publishing guide. For more information, see Plan an Azure managed application for an Azure application offer.

Azure Container offers for AI workloads

With container offers, the customer runs the AI workload—typically on AKS—using container images supplied by the publisher.
This model is best suited for scenarios that require:
• Strict data residency
• Air-gapped or tightly controlled environments
• Customer-managed Kubernetes infrastructure

The publisher delivers the container artifacts, but deployment, scaling, and runtime operations occur in the customer’s environment. This shifts operational responsibility, risk, and compute costs away from the publisher and toward the customer. Container offer requirements are covered in the Marketplace publishing guide; see Plan a Microsoft Marketplace Container offer.

Virtual Machine offers for AI solutions

Virtual Machine offers still play a role, particularly for specialized or legacy AI solutions. VM offers package a pre-configured AI environment that customers deploy into their Azure subscription. Compared to other models:
• Updates and scaling are more tightly scoped
• Iteration cycles tend to be longer
• The solution is more closely aligned with specific OS or hardware requirements

They are most commonly used for:
• Legacy AI stacks
• Fixed-function AI appliances
• Solutions with specialized hardware or driver dependencies

VM publishing requirements are also documented in the Marketplace publishing guide; see Plan a virtual machine offer for Microsoft Marketplace.

Comparing offer types across AI-specific decision dimensions

Rather than asking “which offer type is best,” it’s more useful to ask what trade-offs you’re making.
Key lenses to consider include:
• Who operates the AI runtime day-to-day
• Where customer data and AI prompt inputs and outputs are processed and stored
• How quickly models, prompts, and logic can evolve
• The balance between publisher control and customer control
• How Marketplace transactions and billing align with runtime behavior

| | SaaS | Container (AKS / ACI) | Virtual Machine (VM) | Azure Managed Application |
| --- | --- | --- | --- | --- |
| What it is | Fully managed, externally hosted app integrated with Marketplace for billing and entitlement | Containerized app deployed into customer-managed Azure container environments | VM image deployed directly into the customer’s Azure subscription | Azure-native solution deployed into the customer’s subscription, managed by the publisher |
| Control plane | Publisher-owned | Customer-owned | Customer-owned | Customer-owned (with publisher access) |
| Operational model | Centralized operations, updates, and scaling | Customer operates infra; publisher provides containers | Customer operates infra; publisher provides VM image | Per-customer deployment and lifecycle |
| Good fit scenarios | Multi-tenant AI apps serving many customers; fast onboarding and trials; frequent model or feature updates; publisher has full runtime control | AI apps or agents built as microservices | Legacy or lift-and-shift AI workloads | Enterprise AI solutions requiring customer-owned infrastructure |
| Avoid when | Customers require deployment into their own subscription; strict data residency mandates customer control; offline or air-gapped environments are required | Custom OS or driver dependencies | Customers standardize on Kubernetes | Tight integration with customer Azure resources |
| Typical AI usage pattern | Centralized inference and orchestration across tenants | Portability across environments is important | Specialized runtime requirements | Strong compliance and governance needs |

Different AI solutions land in different places across these dimensions.
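As a rough distillation of these dimensions, the selection logic can be sketched as a small decision helper. The inputs and precedence order are simplifications for illustration, not official guidance; a real decision should also weigh pricing models, buyer skills, and certification requirements.

```python
def suggest_offer_type(
    customer_subscription_required: bool,  # workload must run in customer's Azure subscription
    kubernetes_native: bool,               # solution is containerized / customer standardizes on AKS
    custom_os_or_drivers: bool,            # needs a specific OS image or hardware drivers
) -> str:
    """Suggest a Marketplace offer type from simplified decision inputs."""
    if custom_os_or_drivers:
        return "Virtual Machine"
    if kubernetes_native:
        return "Azure Container"
    if customer_subscription_required:
        return "Azure Managed Application"
    return "SaaS"  # default: publisher-hosted, fastest iteration and onboarding
```

The precedence here (hardware constraints first, then runtime packaging, then deployment boundary) is one plausible ordering of the trade-offs discussed above, not a Marketplace rule.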
The right choice is the one that matches your operational reality—not just your product vision.

Note: If your solution primarily delivers virtual machines or containerized workloads, use a Virtual Machine or Container offer instead of an Azure Managed Application.

Supported sales models and pricing options by Marketplace offer type

Marketplace offer types don’t just define how an AI app or agent is deployed — they also determine how it can be sold, transacted, and expanded through Microsoft Marketplace. Understanding the supported sales models early helps avoid misalignment between architecture, pricing, and go-to-market strategy.

Supported sales models

| Offer type | Transactable listing | Public listing | Private offers | Resale enabled | Multiparty private offers | Azure IP co-sell eligible |
| --- | --- | --- | --- | --- | --- | --- |
| SaaS | Yes | Yes | Yes | Yes | Yes | Yes |
| Container | Yes | Yes | Yes | No | Yes | Yes |
| Virtual Machine | Yes | Yes | Yes | Yes | Yes | Yes |
| Azure Managed Application | Yes | Yes | Yes | No | Yes | Yes |

What these sales models mean
• Transactable listing: A Marketplace listing that allows customers to purchase the solution directly through Microsoft Marketplace, with billing handled through Microsoft.
• Public listing: A listing that is discoverable by any customer browsing Microsoft Marketplace and available for self-service acquisition.
• Private offers: Customer-specific offers created by the publisher with negotiated pricing, terms, or configurations, purchased through Marketplace.
• Resale enabled: Using resale-enabled offers, software companies can authorize their channel partners to sell their existing Marketplace offers on their behalf. After authorization, channel partners can independently create and sell private offers without direct involvement from the software company.
• Multiparty private offers: Private offers that involve one or more Microsoft partners (such as resellers or system integrators) as part of the transaction.
• Azure IP co-sell eligible: When all requirements are met, your offers can contribute toward customers’ Microsoft Azure Consumption Commitments (MACC).

Pricing options by offer type

Marketplace offer types determine the pricing models available, so make sure you build toward an offer type that aligns with how you want to deploy and price your solution.
• SaaS: Subscription or flat-rate pricing, per-user pricing, and usage-based (metered) pricing.
• Container: Kubernetes-based offers support multiple Marketplace-transactable pricing models aligned to how the workload runs in the customer’s environment, including per core, per core in cluster, per node, per node in cluster, per pod, or per cluster pricing, all billed on a usage basis. Container offers can also support custom metered dimensions for application-specific usage. Alternatively, publishers may offer Bring Your Own License (BYOL) plans, where customers deploy through Marketplace but bring an existing software license.
• Virtual Machine: Usage-based hourly pricing (flat rate, per vCPU, or per vCPU size), with optional 1-year or 3-year reservation discounts. Publishers may also offer Bring Your Own License (BYOL) plans, where customers bring an existing software license and are billed only for Azure infrastructure.
• Azure Managed Application: A monthly management or service fee charged by the publisher; Azure infrastructure consumption is billed separately to the customer.

Note: Azure Managed Applications are designed to charge for management and operational services, not for SaaS-style application usage or underlying infrastructure consumption.
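For usage-based (metered) pricing, the publisher's service periodically reports consumption against custom dimensions. The sketch below builds a usage-event payload whose field names follow the Marketplace metered billing API's `usageEvent` shape; treat the endpoint, API version, and field names as assumptions to verify against current documentation, and note that authentication and the HTTP send step are elided.

```python
from datetime import datetime, timezone

# Endpoint per the Marketplace metered billing API; verify against current docs.
USAGE_EVENT_URL = (
    "https://marketplaceapi.microsoft.com/api/usageEvent?api-version=2018-08-31"
)

def build_usage_event(resource_id: str, plan_id: str,
                      dimension: str, quantity: float) -> dict:
    """Assemble one metered-billing usage event for a purchased subscription."""
    return {
        "resourceId": resource_id,  # ID of the purchased SaaS subscription
        "planId": plan_id,          # plan the customer is subscribed to
        "dimension": dimension,     # custom meter, e.g. a hypothetical "inferences"
        "quantity": quantity,       # units consumed since the last report
        "effectiveStartTime": datetime.now(timezone.utc).isoformat(),
    }

# The event would then be POSTed to USAGE_EVENT_URL with a bearer token
# obtained via the publisher's Entra ID app registration.
```

Designing the service to aggregate usage before emitting events (rather than one event per inference) keeps reporting volume manageable; consult the metered billing documentation for batching and idempotency rules.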
Buyer-side assumptions to be aware of

For customers to purchase AI apps and agents through these sales models:
• The customer must be able to purchase through Microsoft Marketplace using their existing Microsoft procurement setup
• Marketplace purchases align with enterprise buying and governance controls, rather than ad-hoc vendor contracts
• For private and multiparty private offers, the customer must be willing to engage in a negotiated Marketplace transaction, rather than pure self-service checkout

Important clarification

Supported sales models are broadly consistent across Marketplace offer types. What varies by offer type is how the solution is provisioned, billed, operated, and updated. Sales flexibility alone should not drive offer-type selection — it must align with the architecture and operating model of the AI app and agent.

How this decision impacts everything that follows

Offer type decisions ripple through the rest of the Marketplace journey. They directly shape:
• Architecture design choices
• Security and compliance boundaries
• Fulfillment APIs and billing integration
• Publishing and certification requirements
• Cost, scalability, and operational responsibility

Follow the series for updates on new posts.

What’s next in the journey

With the offer type decision in place, the focus shifts to turning that choice into a production-ready solution. This includes designing an architecture that aligns with your delivery model, establishing clear security and compliance boundaries, and preparing the operational foundations required to run, update, and scale your AI app or agent confidently in customer environments. Getting these elements right early reduces rework and sets the stage for a smoother Marketplace journey.
Key resources

See curated, step-by-step guidance to help you build, publish, or sell your app or agent (no matter where you start) in App Advisor:

- Quick-Start Development Toolkit
- Microsoft AI Envisioning Day Events
- How to build and publish AI apps and agents for Microsoft Marketplace
- Get over $126K USD in benefits and technical consultations with ISV Success

No plans are available for the selected subscription
I've set up a SaaS offer in the Azure Marketplace and created a free plan for testing purposes. However, when attempting to subscribe to the offer, I encounter the error "No plans are available for the selected subscription," despite having configured both yearly and monthly plans. Additionally, a paid plan has been configured. Please see the screenshot below for reference.

Do you want to publish a transactable offer but are finding it difficult to do?
Many SaaS companies want to sell through Microsoft Marketplace, but surprisingly few actually launch transactable offers. Why?

Over the last few years, Microsoft has heavily invested in its commercial marketplace. For ISVs and SaaS companies, the opportunity is clear:

- Access Microsoft's enterprise customers
- Co-sell with Microsoft sellers
- Shorten procurement cycles
- Unlock Azure consumption commitments

But despite the upside, many companies still struggle to publish transactable offers. Not because they lack great products, but because marketplace readiness requires new operational muscle. From working with companies exploring the marketplace path, three challenges show up repeatedly.

1. Offer Architecture & Packaging

Most companies start with a product, but the marketplace requires a sellable offer structure. That means translating your product into:

- SaaS offers
- Managed apps
- Private plans
- Metered billing models
- Azure-backed services

Questions teams often wrestle with:

- Should this be SaaS, VM, or a managed app?
- What pricing model works in marketplace billing?
- How should enterprise customers purchase it?

Without clear packaging, the publishing process stalls quickly.

2. Technical & Operational Readiness

Publishing an offer is not just a marketing step. It touches multiple teams: engineering, product, finance, legal, and marketplace operations.

Some of the most common blockers include:

- Marketplace APIs and SaaS fulfillment integration
- Metering implementation
- Identity and tenant provisioning
- Azure resource deployment templates
- Testing and certification

For companies new to marketplace infrastructure, the learning curve can be steep.

3. Internal Alignment & Ownership

One of the biggest challenges isn't technical. It's organizational. Marketplace initiatives often sit between multiple teams:

- Partnerships
- Product
- Revenue operations
- Cloud alliances
- Sales leadership

Without a clear owner, progress slows.
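To illustrate why the "Marketplace APIs and SaaS fulfillment integration" blocker comes up so often, here is a minimal sketch of the first fulfillment step: resolving the purchase-identification token that Marketplace passes to the publisher's landing page after checkout. The endpoint and `x-ms-marketplace-token` header follow the documented SaaS Fulfillment API; the tokens are placeholders, and the request is built but deliberately not sent.

```python
from urllib.parse import unquote
import urllib.request

API_VERSION = "2018-08-31"  # commonly used fulfillment API version
RESOLVE_URL = (
    "https://marketplaceapi.microsoft.com/api/saas/subscriptions/resolve"
    f"?api-version={API_VERSION}"
)

def build_resolve_request(marketplace_token: str, bearer_token: str) -> urllib.request.Request:
    """Build (but do not send) the POST that resolves a landing-page token
    into a SaaS subscription the publisher can then activate."""
    return urllib.request.Request(
        RESOLVE_URL,
        method="POST",
        headers={
            # The token arrives URL-encoded on the landing-page query string.
            "x-ms-marketplace-token": unquote(marketplace_token),
            # Publisher's Microsoft Entra ID app token (placeholder here).
            "authorization": f"Bearer {bearer_token}",
            "content-type": "application/json",
        },
    )

req = build_resolve_request("<token-from-landing-page>", "<entra-id-token>")
print(req.get_method(), req.full_url)
```

The resolved subscription then has to be activated, kept in sync via webhook notifications, and eventually cancelled or suspended, which is exactly the integration surface that pulls engineering, product, and operations into the publishing effort.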
Successful marketplace companies usually have a dedicated marketplace strategy owner or partner GTM lead driving execution.

Why This Matters Now

Enterprise buyers increasingly prefer purchasing through marketplaces. Reasons include:

- Faster procurement
- Existing vendor relationships
- Budget alignment with cloud commitments
- Simpler contract management

Which means companies that enable marketplace transactions often see:

- Faster deal cycles
- Larger enterprise deals
- More co-sell opportunities with Microsoft

But getting there requires navigating the early friction.

The Question for the Ecosystem

If your company is exploring Microsoft Marketplace, or already trying to publish an offer: what has been your biggest challenge?

1️⃣ Offer packaging
2️⃣ Technical integration
3️⃣ Internal ownership / alignment
4️⃣ Something else?

Drop your experience in the comments. The more companies share what's blocking progress, the easier it becomes for the ecosystem to improve the process. Comment with your biggest blocker or lesson learned from publishing a marketplace offer.