Forum Discussion
Sourav666
Mar 30, 2023 · Copper Contributor
Azure Cosmos DB storage limit
Is there a storage limit on Azure Cosmos DB when the throughput is set to Autoscale? I couldn't find any definitive answer in Microsoft's documentation. However, I've come across this bluematador do...
josequintino
Apr 03, 2023 · Iron Contributor
Hi @Sourav666,
Azure Cosmos DB does not impose a practical storage limit on a single database or container. Storage capacity grows elastically with your data, so a container can scale to virtually unlimited size.
Microsoft's documentation states the following regarding Cosmos DB storage:
- "Azure Cosmos DB automatically manages the partitioning, and you don't have to deal with any limits on the amount of storage or throughput that a container can use." (Source: https://docs.microsoft.com/en-us/azure/cosmos-db/partitioning-overview#scalability-of-storage-and-throughput)
- "Azure Cosmos DB allows you to store schema-free JSON documents in containers. Containers are horizontally scalable and grow automatically as you store more documents." (Source: https://docs.microsoft.com/en-us/azure/cosmos-db/create-sql-api-dotnet#azure-cosmos-db-account)
Although there is no practical storage limit at the account or container level, you should be aware of the per-partition constraints documented on the service quotas page (https://docs.microsoft.com/en-us/azure/cosmos-db/concepts-limits):
- Maximum storage across all items sharing a single partition key value (one logical partition): 20 GB. Choosing a high-cardinality partition key keeps you well clear of this cap; see the sketch after this list.
- Maximum throughput for a single logical partition: 10,000 RU/s, regardless of whether the container uses dedicated or shared throughput.
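To make the 20 GB per-logical-partition limit concrete, here is a minimal sketch using the azure-cosmos Python SDK. The endpoint, key, database/container names, and the /userId partition key path are placeholder assumptions for illustration, not values from this thread:

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key -- substitute your own account values.
client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    credential="<your-key>",
)
database = client.create_database_if_not_exists("appdb")

# A high-cardinality partition key such as /userId spreads items across many
# logical partitions, so no single partition key value approaches the 20 GB cap.
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/userId"),
)
```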
Regarding Autoscale throughput: it automatically scales the provisioned RU/s between 10% and 100% of the maximum you set, based on the actual usage pattern. Autoscale has no effect on storage limits; it only adjusts throughput within that predefined range, which makes unpredictable workloads easier to handle. A hedged configuration example follows.
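As an illustration only (not a snippet from the Microsoft docs quoted above), here is a minimal sketch of provisioning and later adjusting Autoscale with the azure-cosmos Python SDK (a recent version, 4.3+, assumed). The endpoint, key, names, and the 4,000/8,000 RU/s maximums are placeholder assumptions:

```python
from azure.cosmos import CosmosClient, PartitionKey, ThroughputProperties

# Placeholder endpoint and key -- substitute your own account values.
client = CosmosClient(
    "https://<your-account>.documents.azure.com:443/",
    credential="<your-key>",
)
database = client.create_database_if_not_exists("appdb")

# Provision Autoscale: throughput floats between 10% (here 400 RU/s) and the
# 4,000 RU/s maximum set below. This setting has no effect on storage.
container = database.create_container_if_not_exists(
    id="events",
    partition_key=PartitionKey(path="/deviceId"),
    offer_throughput=ThroughputProperties(auto_scale_max_throughput=4000),
)

# Inspect the current settings, then raise the Autoscale maximum later on.
current = container.get_throughput()
print(current.auto_scale_max_throughput)  # -> 4000
container.replace_throughput(ThroughputProperties(auto_scale_max_throughput=8000))
```

Note that replace_throughput changes only the throughput setting; the data already stored and the storage capacity are untouched, which is exactly the point above.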
In conclusion, the information provided by the bluematador documentation is accurate. There is no practical storage limit on Azure Cosmos DB, with or without Autoscale throughput; only the per-logical-partition constraints listed above apply.