This series of blogs targets AlwaysOn Readable Secondary. Please refer to http://blogs.msdn.com/b/sqlalwayson/archive/tags/availability+group/ for other blogs in related areas.
Readable Secondary is part of the AlwaysOn suite of functionality available in the SQL Server 2012 release. It allows customers to leverage their investment in high availability hardware by offloading read workloads, such as reporting, to one or more secondary replicas. Offloading the reporting workload to a secondary replica frees up resources on the primary replica, so the primary application workload can achieve higher throughput, while at the same time dedicating resources on the secondary replica to the reporting workload for better performance. It is a win-win situation for both the primary and the reporting workload.
Before we proceed further, it is useful to look at the technology choices available to SQL Server customers today for offloading read workloads. This provides good insight into the unique value proposition of Readable Secondary.
SQL Server 2008 offers the following four high availability choices to customers:
Readable Secondary addresses the challenges outlined for the previous high availability solutions, with the exception of the ability to create reporting-specific indexes similar to the ones allowed by transactional replication. Readable Secondary allows the read workload to run concurrently with the recovery (REDO) thread that applies the transaction log from the primary. This lets the reporting workload access data changes "live" as they occur on the primary replica and are subsequently applied by the REDO thread on the secondary replica. The reporting workload runs without any changes, because the database on the secondary replica is the same as it was on the primary.
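As a quick preview of how little configuration this requires, the sketch below marks a secondary replica as readable with a single T-SQL statement run on the primary. The availability group name [MyAG] and the replica server name SQLNODE2 are placeholders; substitute your own.

```sql
-- Hypothetical names: availability group [MyAG], secondary replica SQLNODE2.
-- Run this on the current primary replica.
ALTER AVAILABILITY GROUP [MyAG]
MODIFY REPLICA ON N'SQLNODE2'
WITH (SECONDARY_ROLE (ALLOW_CONNECTIONS = READ_ONLY));
```

With ALLOW_CONNECTIONS = READ_ONLY, only connections that declare a read-only intent (for example, via the ApplicationIntent=ReadOnly connection string property) are accepted on the secondary; ALLOW_CONNECTIONS = ALL would accept any connection for read access.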
In subsequent blogs I will describe how to set up a readable secondary, configure application connectivity, and ensure predictable performance for both primary and secondary workloads.