Transitioning SaaS Offers with Multi-Year Pricing from AppSource to Azure Marketplace
When a SaaS transactable offer on Microsoft AppSource includes a pricing plan for more than 1 year, the offer is delisted from AppSource and becomes available on Azure Marketplace. This is due to the platform's structure: AppSource primarily supports monthly or annual subscription models for SaaS offers, so any pricing model that exceeds 1 year (e.g., 2-year or 3-year plans) is outside the scope of AppSource's transaction capabilities. When a SaaS solution introduces multi-year pricing, it is automatically transitioned to Azure Marketplace, which can accommodate longer-term contracts and subscription models (such as 2-year, 3-year, or longer terms). Azure Marketplace is designed for more complex transactions, including multi-year deals, and supports deeper infrastructure integration and contract management features compared to AppSource. Thus, any SaaS offer that requires multi-year pricing terms will shift from AppSource to Azure Marketplace, where such transactions can be handled effectively.

Azure Files provisioned v2 billing model for flexibility, cost savings, and predictability
We are excited to announce the general availability of the Azure Files provisioned v2 billing model for the HDD (standard) media tier. Provisioned v2 offers a provisioned billing model, meaning that you pay for what you provision, which enables you to flexibly provision storage, IOPS, and throughput. This allows you to migrate your general-purpose workloads to Azure at the best price and performance, without sacrificing price predictability.

With provisioned v2, you have granular control to scale your file share alongside your workload needs – whether you are connecting from a remote client, in hybrid mode with Azure File Sync, or running an application in Azure. The provisioned v2 model enables you to dynamically scale your application's performance up or down as needed, without downtime. Provisioned v2 file shares can span from 32 GiB to 256 TiB in size, with up to 50,000 IOPS and 5 GiB/sec throughput, providing the flexibility to handle both small and large workloads.

If you're an existing user of Azure Files, you may be familiar with the current "pay-as-you-go" model for the HDD (standard) media tier. While this model is conceptually simple – you pay for the storage and transactions used – usage-based pricing can be incredibly challenging to work with because it's very difficult or impossible to accurately predict the usage on a file share. Without knowing how much usage you will drive, especially in terms of transactions, you can't make accurate predictions about your Azure Files bill ahead of time, making planning and budgeting difficult. The provisioned v2 model solves all these problems – and more!
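The predictability argument can be made concrete with a short sketch. The unit prices below are placeholders for illustration only (current rates are on the Azure Files pricing page): a provisioned v2 bill is a function of quantities you choose up front, while a pay-as-you-go bill also depends on transaction counts you can only know after the month ends.

```python
# Sketch of the difference in bill predictability.
# NOTE: unit prices here are placeholders, not actual Azure Files rates.
PLACEHOLDER_PRICES = {"gib": 0.01, "iops": 0.04, "mibps": 0.06, "tx": 0.10}

def provisioned_v2_bill(gib: int, iops: int, mibps: int) -> float:
    """Every term is a quantity you chose when provisioning,
    so the monthly bill is known before the month starts."""
    p = PLACEHOLDER_PRICES
    return gib * p["gib"] + iops * p["iops"] + mibps * p["mibps"]

def pay_as_you_go_bill(gib: int, tx_buckets: int) -> float:
    """tx_buckets depends on how the workload behaves,
    so the bill is only known after the fact."""
    p = PLACEHOLDER_PRICES
    return gib * p["gib"] + tx_buckets * p["tx"]

# Provisioned: fully determined up front.
print(provisioned_v2_bill(gib=10_240, iops=3_000, mibps=130))

# Pay-as-you-go: same storage, but the transaction term is a guess,
# and different guesses swing the projected bill widely.
for guess in (1_000, 10_000):
    print(pay_as_you_go_bill(gib=10_240, tx_buckets=guess))
```

The point is structural, not numerical: the provisioned function has no unknown inputs, which is what makes budgeting possible.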
Increased scale and performance

In addition to the usability improvements of a provisioned model, we have significantly increased the limits over the current "pay-as-you-go" model:

| Quantity | HDD pay-as-you-go | HDD provisioned v2 |
| --- | --- | --- |
| Maximum share size | 100 TiB (102,400 GiB) | 256 TiB (262,144 GiB) |
| Maximum share IOPS | 40,000 IOPS (recently increased from 20,000 IOPS) | 50,000 IOPS |
| Maximum share throughput | Variable based on region, split between ingress/egress | 5 GiB/sec (symmetric throughput) |

The larger limits offered on the HDD media tier in the provisioned v2 model mean that as your storage requirements grow, your file share can keep pace without the need to resort to unnatural workarounds such as sharding, allowing you to keep your data in logical file shares that make sense for your organization.

Per share monitoring

Since provisioning decisions are made at the file share level, in the provisioned v2 model we've brought the granularity of monitoring down to the file share level. This is a significant improvement over pay-as-you-go file shares, which can only be monitored at the storage account level. To help you monitor the usage of storage, IOPS, and throughput against the provisioned limits of the file share, we've added the following new metrics:

- Transactions by Max IOPS, which provides the maximum IOPS used over the indicated time granularity.
- Bandwidth by Max MiB/sec, which provides the maximum throughput in MiB/sec used over the indicated time granularity.
- File Share Provisioned IOPS, which tracks the provisioned IOPS of the share on an hourly basis.
- File Share Provisioned Bandwidth MiB/s, which tracks the provisioned throughput of the share on an hourly basis.
- Burst Credits for IOPS, which helps you track your IOPS usage against bursting.

To use the metrics, navigate to the specific file share in the Portal, and select "Monitoring > Metrics".
Select the metric you want, in this case "Transactions by Max IOPS", and ensure that the usage is filtered to the specific file share you want to examine.

How to get access to the provisioned v2 billing model?

The provisioned v2 model is generally available now, at the time of writing, in a limited set of regions. When you create a storage account in a region that has been enabled for provisioned v2, you can create a provisioned v2 account by selecting "Standard" for Performance and "Provisioned v2" for File share billing. See how to create a file share for more information.

When creating a share in a provisioned v2 storage account, you can specify the capacity and use the recommended performance. The recommendations we provide for IOPS and throughput are based on common usage patterns. If you know your workload's performance needs, you can manually set the IOPS and throughput to further tune your share.

As you use your share, you may find that your usage pattern changes or that your usage is more or less active than your initial provisioning. You can always increase your storage, IOPS, and throughput provisioning to right-size for growth, and you can decrease any provisioned quantity after 24 hours have elapsed since your last increase. Storage, IOPS, and throughput changes are effective within a few minutes of a provisioning change.

In addition to your baseline provisioned IOPS, we provide credit-based IOPS bursting that enables you to burst up to 3X the amount of provisioned IOPS for up to 1 hour, or as long as credits remain. To learn more about credit-based IOPS bursting, see provisioned v2 bursting.
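As a way to reason about how credit-based bursting behaves, here is a toy simulation. The accrual and spend rules below are simplifying assumptions for illustration only (the authoritative behavior is described in the provisioned v2 bursting documentation): credits build while usage runs below the provisioned baseline, are spent to serve IOPS above it, and the pool is sized so a full 3X burst lasts one hour.

```python
# Toy simulation of credit-based IOPS bursting (a sketch; the real
# accrual/spend rules are in the provisioned v2 bursting docs).
# ASSUMPTIONS (not from the docs): credits accrue 1:1 when actual IOPS
# run below baseline, and are spent 1:1 when usage exceeds it.

PROVISIONED_IOPS = 2_100
BURST_MULTIPLIER = 3          # burst ceiling is 3x the provisioned baseline
MAX_BURST_SECONDS = 3_600     # a full burst can last up to 1 hour

# Credit pool sized so a full 3x burst can be sustained for one hour:
# (extra IOPS above baseline) * (seconds of burst).
max_credits = (BURST_MULTIPLIER - 1) * PROVISIONED_IOPS * MAX_BURST_SECONDS

def step(credits: float, actual_iops: float, seconds: float = 1.0) -> tuple[float, float]:
    """Advance the simulation, returning (served_iops, remaining_credits)."""
    if actual_iops <= PROVISIONED_IOPS:
        # Under baseline: accrue unused capacity as credits, up to the cap.
        credits = min(max_credits, credits + (PROVISIONED_IOPS - actual_iops) * seconds)
        return actual_iops, credits
    # Over baseline: spend credits, capped at the 3x ceiling.
    ceiling = BURST_MULTIPLIER * PROVISIONED_IOPS
    wanted_extra = (min(actual_iops, ceiling) - PROVISIONED_IOPS) * seconds
    extra = min(wanted_extra, credits)
    credits -= extra
    return PROVISIONED_IOPS + extra / seconds, credits

# One hour of full 3x burst exactly drains a full credit pool.
credits = max_credits
served, credits = step(credits, 6_300, seconds=3_600)
print(served, credits)   # 6300.0 IOPS served, 0 credits left
```

The "Burst Credits for IOPS" metric mentioned above is what lets you watch the real credit balance instead of modeling it.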
Pricing example

To see the new provisioned v2 model in action, let's compare the costs of the pay-as-you-go model versus the provisioned v2 model for the following Azure File Sync deployment:

Storage: 50 used TiB

For the pay-as-you-go model, we need usage as expressed in the total number of "transaction buckets" for the month:

- Write: 3,214
- List: 7,706
- Read: 7,242
- Other: 90

For the provisioned v2 model, we need usage as expressed as the maximum IOPS and throughput (in MiB/sec) hit over the course of an average time period to guide our provisioning decision:

- Maximum IOPS: 2,100 IOPS
- Maximum throughput: 85 MiB/sec

To deploy a file share using the pay-as-you-go model, you need to pick an access tier to store the data in from among transaction optimized, hot, and cool. The correct access tier depends on the activity level of your data: a very active share should use transaction optimized, while a comparatively inactive share should use cool. Based on the activity level of this share as described above, cool is the best choice.

When you deploy the share, you need to provision more than you use today to ensure the share can support your application as your data continues to grow. Ultimately, how much to provision is up to you, but a good rule of thumb is to start with 2X more than what you use today. There's no need to keep your share at a consistent provisioned-to-used ratio.
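With these inputs and the example's unit prices (illustrative rates used in this comparison; current regional prices are on the Azure Files pricing page), both monthly bills can be reproduced in a few lines, rounding each line item to cents:

```python
# Reproducing the example's bill comparison. Unit prices are the
# example's illustrative rates, not guaranteed current prices.

used_gib = 50 * 1024        # 50 used TiB = 51,200 GiB
factor = 2                  # rule of thumb: provision 2x today's usage

# Pay-as-you-go, cool access tier: used storage + per-bucket transactions.
paygo = sum([
    round(used_gib * 0.0150, 2),   # used storage
    round(3_214 * 0.1300, 2),      # write transaction buckets
    round(7_706 * 0.0650, 2),      # list transaction buckets
    round(7_242 * 0.0130, 2),      # read transaction buckets
    round(90 * 0.0052, 2),         # other transaction buckets
])

# Provisioned v2: provisioned storage, IOPS, and throughput only.
prov_v2 = sum([
    round(used_gib * factor * 0.0073, 2),  # provisioned storage (GiB)
    round(2_100 * factor * 0.0402, 2),     # provisioned IOPS
    round(85 * factor * 0.0599, 2),        # provisioned throughput (MiB/s)
])

print(f"pay-as-you-go:  ${paygo:,.2f}/month, ${paygo / used_gib:.4f}/used GiB")
print(f"provisioned v2: ${prov_v2:,.2f}/month, ${prov_v2 / used_gib:.4f}/used GiB")
```

Swapping in your own transaction counts, IOPS, and throughput makes the same arithmetic a quick what-if tool for your own shares.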
Now we have all the necessary inputs to compare cost:

HDD pay-as-you-go (cool access tier):
- Used storage: 51,200 GiB * $0.015 / GiB = $768.00
- Write TX: 3,214 buckets * $0.1300 / bucket = $417.82
- List TX: 7,706 buckets * $0.0650 / bucket = $500.89
- Read TX: 7,242 buckets * $0.0130 / bucket = $94.15
- Other TX: 90 buckets * $0.0052 / bucket = $0.47
- Total cost: $1,781.33 / month
- Effective price: $0.0348 / used GiB

HDD provisioned v2:
- Provisioned storage: 51,200 used GiB * 2 * $0.0073 / GiB = $747.52
- Provisioned IOPS: 2,100 IOPS * 2 * $0.0402 / IOPS = $168.84
- Provisioned throughput: 85 MiB/sec * 2 * $0.0599 / MiB/sec = $10.18
- Total cost: $926.54 / month
- Effective price: $0.0181 / used GiB

In this example, the pay-as-you-go file share costs $0.0348 / used GiB while the provisioned v2 file share costs $0.0181 / used GiB, a ~2X cost improvement for provisioned v2 over pay-as-you-go. Shares with different levels of activity will have different results – your mileage may vary.

Typically, when deploying a file share for the first time, you would not know what the transaction usage would be, making cost projections for the pay-as-you-go model quite difficult, but it would still be straightforward to compute the provisioned v2 costs. If you don't know specifically what your IOPS and throughput utilization would be, you can use the built-in recommendations as a starting point.

Resources

Here are some additional resources on how to get started:
- Azure Files pricing page
- Understanding the Azure Files provisioned v2 model | Microsoft Docs
- How to create an Azure file share | Microsoft Docs (follow the steps for creating a provisioned v2 storage account/file share)

Service Trust Portal no longer support Microsoft Account (MSA) access
Dear all,

We need to access certain documents (i.e., SOC 2 or ISO 27xxx) on the Service Trust Portal. To download documents, you need to be signed in first. However, when I click on "sign in" (using the same email/account as for our Azure account), I get the error message "Service Trust Portal no longer support Microsoft Account (MSA) access" (see screenshot below). It seems that I am not the only one, since other users had similar issues but also could not find a solution (or at least it was not mentioned in their post): https://techcommunity.microsoft.com/t5/security-compliance-and-identity/cannot-login-to-service-trust-portal/m-p/3632978

I have been trying this for more than a week now and have also created a support ticket (which has not been assigned to a support agent yet). It is quite cumbersome, and I hope some of you might have an idea, since getting these documents is quite crucial for us.

Cannot login to Service Trust Portal
Hi, I'm trying to get some certificates from the Service Trust Portal, but I keep getting "Service Trust Portal no longer support Microsoft Account (MSA) access." I'm using an account registered on Azure, and I checked Azure Active Directory, and the user exists (it's the owner of the account). What am I missing here?

Control geo failover for ADLS and SFTP with unplanned failover
We are excited to announce the General Availability of customer managed unplanned failover for Azure Data Lake Storage and storage accounts with SSH File Transfer Protocol (SFTP) enabled.

What is Unplanned Failover?

With customer managed unplanned failover, you are in control of initiating your failover. Unplanned failover allows you to switch your storage endpoints from the primary region to the secondary region. During an unplanned failover, write requests are redirected to the secondary region, which then becomes the new primary region.

Because an unplanned failover is designed for scenarios where the primary region is experiencing an availability issue, unplanned failover happens without the primary region fully completing replication to the secondary region. As a result, during an unplanned failover there is a possibility of data loss. This loss depends on the amount of data that has yet to be replicated from the primary region to the secondary region. Each storage account has a 'last sync time' property, which indicates the last time a full synchronization between the primary and the secondary region was completed. Any data written between the last sync time and the current time may only be partially replicated to the secondary region, which is why unplanned failover may incur data loss.

Unplanned failover is intended to be utilized during a true disaster where the primary region is unavailable. Therefore, once completed, the data in the original primary region is erased, the account is changed to locally redundant storage (LRS), and your applications can resume writing data to the storage account. If the previous primary region becomes available again, you can convert your account back to geo-redundant storage (GRS). Migrating your account from LRS to GRS will initiate a full data replication from the new primary region to the secondary, which has geo-bandwidth costs.
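Since the possible data loss is bounded by the last sync time, you can estimate the at-risk window before initiating an unplanned failover. A minimal sketch (in practice you would read the account's actual last sync time from its geo-replication stats, for example in the Azure portal; the timestamps below are made up for illustration):

```python
# Estimating the unplanned-failover data-loss window from last sync time.
from datetime import datetime, timezone

def at_risk_window(last_sync_time, now=None):
    """Return the window of writes that may be lost in an unplanned failover.

    Anything written after last_sync_time may be only partially
    replicated to the secondary region.
    """
    now = now or datetime.now(timezone.utc)
    return now - last_sync_time

# Hypothetical example: last full sync completed 45 minutes before the outage.
last_sync = datetime(2025, 1, 10, 12, 0, tzinfo=timezone.utc)
outage_start = datetime(2025, 1, 10, 12, 45, tzinfo=timezone.utc)
window = at_risk_window(last_sync, outage_start)
print(f"Writes from the last {window} may be lost if you fail over now")
```

Checking this window first helps you weigh waiting for the region to recover against failing over and accepting the potential loss.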
If your scenario involves failing over while the primary region is still available, consider planned failover. Planned failover can be utilized in scenarios including planned disaster recovery testing or recovering from non-storage-related outages. Unlike unplanned failover, the storage service endpoints must be available in both the primary and secondary regions before a planned failover can be initiated. This is because planned failover is a 3-step process that includes: (1) making the current primary read-only, (2) syncing all the data to the secondary (ensuring no data loss), and (3) swapping the primary and secondary regions so that writes now land in the new primary. In contrast with unplanned failover, planned failover maintains the geo-redundancy of the account, so planned failback does not require a full data copy.

To learn more about planned failover and how it works, view Public Preview: Customer Managed Planned Failover for Azure Storage | Microsoft Community Hub

To learn more about each failover option and the primary use case for each, view Azure storage disaster recovery planning and failover - Azure Storage | Microsoft Learn

How to get started?

Getting started is simple. To learn more about the step-by-step process to initiate an unplanned failover, review the documentation: Initiate a storage account failover - Azure Storage | Microsoft Learn

Feedback

If you have questions or feedback, reach out at storagefailover@service.microsoft.com

Announcing the Powerful Devs Conference + Hack Together 2025
Discover the potential of Microsoft Power Platform at this global event starting Feb 12, 2025! Learn from experts, explore tools like Power Apps, AI Builder, and Copilot Studio, and create innovative solutions during the two-week hackathon. Prizes await the best projects across 8 categories. 🌟 Build. Innovate. Hack Together. 👉 Register now: aka.ms/powerfuldevs Your future in enterprise app development starts here!

Azure AI Model Inference API
The Azure AI Model Inference API provides a unified interface for developers to interact with various foundational models deployed in Azure AI Studio. This API allows developers to generate predictions from multiple models without changing their underlying code. By providing a consistent set of capabilities, the API simplifies the process of integrating and switching between different models, enabling seamless model selection based on task requirements.

Transactable vs. non-Transactable marketplace offer?
Peer ISV experiences appreciated. Throughout this fiscal year, Microsoft has pushed hard toward marketplace transactability with various initiatives, and this was also strongly presented at Ignite. The list of arguments from Microsoft is extensive (buyer behavior, co-sell, multi-party offers, MACCs, etc.). However, I would like to hear real-life experiences from peer ISVs who have both transactable and non-transactable offers / have migrated an existing non-transactable offer to a transactable offer:

- Impact on lead generation and closed deals?
- Impact on co-sell among Microsoft and/or CSPs?
- Does MACC eligibility matter?
- Any other pros / cons?

Bonus question: If you have the same offer on both Azure Marketplace and AppSource, which one has been the place to be? Or both? Thanks a million in advance! 🙏

Migrating from NinjaOne, BitDefender, and Phish Titan to a Unified Microsoft Intune and Defender So
I'm currently in the process of evaluating a major migration strategy for the MSP I work for, and I wanted to share my thought process and get some advice on potential gaps I might be overlooking. Any input or suggestions would be greatly appreciated, as this is something I want to get right!

Current Setup:

We currently manage around 300 Microsoft 365 tenants. Each client typically pays for Microsoft 365 licenses per user (usually Business Basic or Standard), along with NinjaOne RMM for device management, BitDefender for endpoint protection, and some opt for Phish Titan for email filtering. Our current setup involves:

- NinjaOne RMM: Used for remote device management and client support.
- BitDefender: For antivirus/endpoint protection.
- Phish Titan: For email filtering, spam protection, and phishing simulation.

The Plan: Migrate to Microsoft Intune and Defender

The strategy I am considering involves transitioning our clients' devices to Microsoft Intune for device management and Defender for Endpoint for security. Many of the devices we manage are already Azure AD-joined. Currently, we Azure AD-join all the devices in the tenant to the 365 admin account, which we control.

- Intune: Will allow us to manage all devices from a single platform, with granular policies for compliance, software updates, and app management.
- Defender for Endpoint: Threat protection, antivirus, and EDR features that can replace BitDefender. Also, for those clients who currently opt for email filtering, its email protection features could potentially replace Phish Titan's filtering and simulation with the addition of Defender for Office 365.

Licensing Concerns and Confusion:

This is where I've run into several licensing questions and concerns:

365 Admin with E5 License: In my current plan, each client tenant would have a single 365 admin account with an E5 license to manage the devices and benefit from Defender's full suite of features (including threat intelligence, EDR, attack surface reduction, etc.).
All devices in the tenant would be Azure AD-joined to this E5-admin account. My assumption is that since the devices are Azure AD-joined to an account with E5, they would benefit from the full capabilities of Defender for Endpoint, regardless of the license assigned to the end user (who might only have a Microsoft 365 Business Basic or Standard license). However, I'm not 100% certain whether the user logged into the device would be limited in any way (e.g., does Defender's full suite apply only to the device, or does the end user's license also need to include premium features like Defender for Endpoint?).

Entra ID Premium (P1 or P2): My goal is to also enforce MFA across all tenants automatically for new users. I understand that for this, we would need Entra ID Premium P1 or P2. The challenge is whether I can apply a tenant-wide P1/P2 license or if I need to assign the P1/P2 license to each individual user. If I assign the P1 license to the 365 admin, will I be able to enforce MFA for all new users in the tenant, or do I need to assign P1 licenses to each user to make this work?

BitDefender Replacement: My understanding is that Defender for Endpoint (through the 365 E5 license) provides advanced features that can completely replace BitDefender in terms of security, threat protection, and response. Does anyone have feedback on how Defender compares to BitDefender, particularly around ease of management, efficacy, and any potential gaps in coverage?

Email Filtering and Phishing Simulation: Defender for Office 365 (included with 365 E5) offers email protection, phishing simulation, and spam filtering. If we switch from Phish Titan to Defender, will we be missing any significant functionality, or is this a strong enough alternative?

Windows Autopilot Considerations: I also want to incorporate Windows Autopilot into our deployment strategy.
While we're not overly concerned about achieving zero-touch deployment, I believe we can still leverage Autopilot to streamline the device provisioning process and ensure that devices are correctly configured for our clients from the outset.

- Azure AD Join: My assumption is that for devices to fully utilize Autopilot features, they would need to be Azure AD-joined to the end user. I'm considering how to implement this for end-user devices and whether we can still maintain efficiency if users log into the devices with different Microsoft 365 licenses (Basic or Standard).
- End-User Experience: I want to ensure that even if users are logging in with lower-tier licenses, they still have a seamless onboarding experience, with essential policies and security measures applied from the get-go (installed apps, networking settings, etc.).

Has anyone here gone through a similar migration, or do you have any insights into the potential pitfalls of this approach? Am I missing any important considerations? Any advice would be appreciated.