NVIDIA GPU Acceleration for Apache Spark™ in Azure Synapse Analytics

Published 05-25-2021 08:28 AM

Azure recently announced support for NVIDIA’s T4 Tensor Core Graphics Processing Units (GPUs), which are ideal for deploying machine learning inferencing or analytical workloads in a cost-effective manner. With Apache Spark™ deployments tuned for NVIDIA GPUs, plus pre-installed libraries, Azure Synapse Analytics offers a simple way to leverage GPUs to power a variety of data processing and machine learning tasks. With built-in support for NVIDIA’s RAPIDS acceleration, the Azure Synapse version of GPU-accelerated Spark delivers 2x gains on standard analytical benchmarks compared to running on CPUs, all without any code changes. Additionally, for machine learning workloads, Azure Synapse offers Microsoft's Hummingbird out of the box, which can leverage these GPUs to deliver significant acceleration on traditional ML workloads.

 

Beginning today, this GPU acceleration feature in Azure Synapse is available for private preview by request.

 

The benefits of GPU Acceleration

GPUs offer high compute performance at an extraordinarily low price-to-performance ratio by adding massive parallelism to multi-core servers. While a CPU consists of a few cores optimized for sequential serial processing, a GPU has a massively parallel architecture consisting of thousands of smaller, more efficient cores designed to handle many tasks simultaneously. Considering that data scientists spend up to 80% of their time on data pre-processing, GPUs are an asset in one’s data processing pipelines compared to relying on CPUs alone.

 

The benefits of GPU acceleration in Apache Spark™ include:

  • Data processing, queries, and model training complete faster, accelerating time to insight.
  • The same GPU-accelerated infrastructure can be used for both Spark and ML/DL frameworks, eliminating the need for complex decision making and tuning.
  • Fewer compute nodes are required, reducing infrastructure cost and potentially helping avoid scale-related problems.

 

Collaboration with NVIDIA

NVIDIA and Azure Synapse have teamed up to bring GPU acceleration to data scientists and data engineers. This collaboration is primarily focused on integrating RAPIDS Accelerator for Apache Spark™ into Azure Synapse. This integration will allow customers to use NVIDIA GPUs for Apache Spark™ applications with no-code change and with an experience identical to a CPU cluster. In addition, this collaboration will continue to add support for the latest NVIDIA GPUs and networking products and provide continuous enhancements for big data customers who are looking to improve productivity and save costs with a single pipeline for data engineering, data preparation, and machine learning.

 

When asked about the collaboration and the importance of having GPUs in Azure Synapse, Scott McClellan, Senior Director, Data Science at NVIDIA said, “The synergy between Azure Synapse and NVIDIA is critical to democratize AI for citizen data scientists on Azure as businesses look to gain competitive advantage with advanced analytics, artificial intelligence (AI), and machine learning (ML). Azure Synapse is transforming siloed enterprise analytics into an integrated platform to accelerate time to insights across data warehouses and big data systems. The ongoing collaboration will seamlessly integrate RAPIDS Accelerator for Apache Spark, accelerate the Azure Synapse platform, and fast track new feature development for Accelerated Data Engineering and Data Science applications.”

 

To learn more about this collaboration, check out our presentation at NVIDIA’s GTC 2021 Conference.

 

Apache Spark™ 3.0 GPU Acceleration in Azure Synapse

While Apache Spark™ provides GPU support out of the box, configuring the required hardware and installing the low-level libraries can take significant effort. When you use GPU-enabled Apache Spark™ pools in Azure Synapse, you will immediately notice a surprisingly simple user experience:

 

Behind the scenes heavy lifting: To run GPU libraries, low-level components such as NVIDIA CUDA are required to communicate with the GPU on the host machine. Downloading and installing these libraries takes both time and effort. Through integration with Azure, Azure Synapse takes care of pre-installing these libraries and setting up all the complex networking amongst compute nodes to offer you GPU Apache Spark™ pools within just a few minutes, so you can stop worrying about setup and focus instead on solving your business problems.

 

Optimized Spark configuration: By collaborating with NVIDIA, we have arrived at optimized default configurations for your GPU-enabled Apache Spark™ pools, so your workloads run efficiently, saving you both time and operational costs.
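Synapse applies this tuning for you, but for context, a self-managed Spark 3.x cluster typically enables the open-source RAPIDS Accelerator through settings along these lines. The property names below come from the spark-rapids project; the specific values Synapse ships are internal, so treat these as an illustrative sketch, not Synapse's actual configuration:

```python
# Illustrative Spark 3.x settings for the open-source RAPIDS Accelerator.
# Azure Synapse GPU pools ship with tuned values; these are representative only.
rapids_conf = {
    "spark.plugins": "com.nvidia.spark.SQLPlugin",  # load the RAPIDS SQL plugin
    "spark.rapids.sql.enabled": "true",             # route supported SQL/DataFrame ops to the GPU
    "spark.executor.resource.gpu.amount": "1",      # one GPU per executor
    "spark.task.resource.gpu.amount": "0.25",       # four concurrent tasks share each GPU
    "spark.rapids.memory.pinnedPool.size": "2g",    # pinned host memory for faster transfers
}

# On plain open-source Spark these would be passed when building the session, e.g.:
#   builder = SparkSession.builder
#   for key, value in rapids_conf.items():
#       builder = builder.config(key, value)
```

The point of a managed offering is that none of this is your problem: the plugin, the GPU resource scheduling, and the memory settings are pre-wired in the pool definition.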

 

Packed with Data Prep and ML Libraries: The GPU-enabled Apache Spark™ pools in Azure Synapse come with two popular libraries built in, and support for more is on the way:

  1. RAPIDS for Data Prep: RAPIDS is a suite of open-source software libraries and APIs for executing end-to-end data science and analytics pipelines entirely on GPUs, allowing for a substantial speed-up, particularly on large data sets. Built on top of NVIDIA CUDA and UCX, the RAPIDS Accelerator for Apache Spark™ enables GPU-accelerated SQL and DataFrame operations and Spark shuffles. Since no code changes are required to leverage these accelerations, you can also accelerate your data pipelines that rely on Linux Foundation's Delta Lake or Microsoft's Hyperspace indexing (both of which are available on Synapse out of the box).
  2. Hummingbird for accelerating scoring and inference over your traditional ML models. Hummingbird is a library for converting traditional ML operators to tensors, with the goal of accelerating inference (scoring/prediction) for traditional machine learning models.
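To make the "ML operators to tensors" idea concrete, here is a toy numpy sketch of the GEMM compilation strategy described in the Hummingbird paper: a small decision tree is rewritten as three matrix operations, so a whole batch of rows is scored with dense linear algebra instead of per-row branching. This is an illustration of the concept only, not Hummingbird's API; the tree and all matrices below are made up for the example.

```python
import numpy as np

# Toy tree: if x0 < 2 -> 10; elif x1 < 5 -> 20; else -> 30.
# Internal nodes: n0 tests feature 0 (threshold 2), n1 tests feature 1 (threshold 5).
A = np.array([[1, 0],              # one-hot: which feature each internal node reads
              [0, 1]])
B = np.array([2.0, 5.0])           # split thresholds for n0, n1
C = np.array([[ 1, -1, -1],        # +1: leaf is in node's left subtree, -1: right, 0: off path
              [ 0,  1, -1]])
D = np.array([1, 1, 0])            # number of "go left" decisions on each leaf's path
E = np.array([10.0, 20.0, 30.0])   # value stored at each leaf

def predict(X):
    T = (X @ A < B).astype(np.int64)        # which internal tests evaluate "go left"
    leaf = (T @ C == D).astype(np.float64)  # one-hot selection of the reached leaf
    return leaf @ E                         # gather the leaf values

X = np.array([[1.0, 0.0],   # takes the left branch at n0      -> 10
              [3.0, 4.0],   # right at n0, left at n1          -> 20
              [3.0, 7.0]])  # right at n0, right at n1         -> 30
print(predict(X))  # -> [10. 20. 30.]
```

Because every step is a matrix multiply or elementwise comparison, the same computation maps directly onto GPU tensor runtimes, which is how Hummingbird accelerates inference for traditional models.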

 


 

When running NVIDIA Decision Support (NDS) test queries, derived from industry-known benchmarks, over 1 TB of Parquet data, our early results indicate that GPUs can deliver up to 2x acceleration in overall query performance, without any code changes.

 


 

Getting started

  • Contact us if you are interested in being added to the private preview list.
  • Use the limited-time free quantities available in Azure Synapse to try new features.
