Diary of an Engineer: Delivering 45x faster percentiles using Postgres, Citus, & t-digest

When working on the internals of Citus, an open source extension that transforms Postgres into a distributed database, we often get to talk with customers that have interesting challenges you won’t find everywhere. Just a few months back, I encountered an analytics workload that was a really good fit for Citus.

But we had one problem: the percentile calculations on their data (over 300 TB of data) could not meet their SLA of 30 seconds.

To make things worse, the query performance was not even close to the target: the percentile calculations were taking about 6 minutes instead of the required 30 second SLA.

Figuring out how to meet the 30 second Postgres query SLA was a challenge because we didn’t have access to our customer’s data—and also because my customer didn’t have the cycles to compare the performance of the different approaches I was considering. So we had to find ways to estimate which types of percentile calculations would meet their SLA, without having to spend the engineering cycles to implement the different approaches.

This post explores how—with the help of the Postgres open source community—I was able to reduce the time to calculate percentiles by 45x by using the t-digest extension to Postgres.

Importance of calculating percentiles in analytics workloads

My customer operates a multi-datacenter web application with a real-time analytics dashboard that displays statistics about a variety of signals—and they store the analytics data in Hyperscale (Citus) on our Azure Database for PostgreSQL managed service. They ingest over 2 TB of data per hour and needed to get < 30 second performance for their queries over a 7-day period. This analytics dashboard is used by their engineers to debug and root-cause customer-reported issues. So they query metrics like latency, status codes, and error codes based on dimensions such as region, browser, data center, and the like.
Latency is of course an important metric for understanding these types of issues. However, average latency can be very misleading—which is where percentiles come in. If 1% of your users are experiencing super slow response times, the average query response time may not change much, leading you to (incorrectly) think that nothing is wrong. However, you would see a notable difference in P99, allowing you to isolate issues much faster.

That is why metrics like P99 are so important when monitoring analytics workloads. A P99 query response time of 500 ms means that 99% of your queries complete in under 500 ms.

Native percentile functions in Postgres didn’t do the trick for this use case

Postgres provides native support for selecting the value of a column at a certain percentile with the ordered-set aggregate functions:

- percentile_cont
- percentile_disc

Having native support for percentile calculations in Postgres is super powerful: all you have to do is specify the percentile you want, and then you can let Postgres figure out how to get it.

And yet... percentiles can’t be sped up by indexes in Postgres.

When diving into the implementation of percentile_cont in Postgres, you will find that the transition functions for these aggregates collect, store, and sort the data internally. This means Postgres cannot rely on the known sort order of an index to significantly speed up the calculation.

On top of that, when Postgres is sharded across multiple nodes by Citus and we want the percentile over a sharded dataset, we cannot combine the percentiles we get back from the distributed shards and expect a correct result. Instead, Citus requires all rows to be sent to one location and sorted locally, so it can find the value at the requested percentile. Copying data is what takes the most time in a distributed Citus cluster, and this was in fact the problem for our customer: out of the 6 minutes for the query, 4 ½ minutes were spent just pulling data to the coordinator.

To speed up the percentile calculations with Citus, we needed to reduce the amount of data required at the Citus coordinator to get to a reasonable percentile.
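For concreteness, here is what the exact calculation looks like with percentile_cont. The table and column names are hypothetical, purely for illustration:

```sql
-- Exact P99 latency per minute, using the built-in ordered-set aggregate.
-- The "responses" table and its columns are illustrative, not the customer's schema.
SELECT
  date_trunc('minute', created_at) AS minute,
  percentile_cont(0.99) WITHIN GROUP (ORDER BY latency) AS p99_latency
FROM responses
GROUP BY 1
ORDER BY 1;
```

Every row in each group has to be collected and sorted before the 0.99 position can be read off, which is exactly the behavior described above.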
Lucky for us, this was not the first time we had needed to find an effective way to run complex SQL calculations—at scale. In the past we have written about similar challenges with COUNT(DISTINCT) queries—which we solved with the HyperLogLog approximation algorithm, also known as HLL.

Hence this epiphany: using a trusted, mathematically robust approximation algorithm (sometimes called a sketch algorithm) might help us speed up performance.

Which percentile approximation technique to use with Postgres?

A quick exploration of the space of percentile approximation pinpointed 3 different Postgres extensions we could consider. Two of these Postgres extensions had been created by Citus engineering interns in years past—and while not yet finished and not yet open source, these two prototype extensions deserved consideration:

- High Dynamic Range Histograms, HDR for short
- t-digest (the one created by our Citus intern)

And one open source t-digest extension contributed by Postgres committer Tomas Vondra:

- github.com/tvondra/tdigest

With 3 competing Postgres extensions that were all suitable for the approximation, the question changed from how to approximate percentiles into which extension to use. To decide between the 3 different extensions, we needed to answer these 2 questions for each option:

- How fast would our customer’s queries execute?
- How production-ready is the extension?

And we wanted to be able to answer these questions without having to integrate each of the approximation algorithms with Citus—and without having to ask our customer to measure how fast each performed.
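To show what a sketch algorithm buys you in a distributed setting, here is a rough sketch of the HLL approach to COUNT(DISTINCT). It assumes the postgresql-hll extension is installed, and the table name is illustrative:

```sql
-- Each shard folds its rows into a small, fixed-size HLL summary...
SELECT
  minute,
  hll_add_agg(hll_hash_integer(latency)) AS latency_hll
FROM responses_by_minute
GROUP BY minute;

-- ...and the coordinator only has to merge the summaries, never the rows:
-- SELECT minute, hll_cardinality(hll_union_agg(latency_hll)) FROM ... GROUP BY minute;
```

The summaries are mergeable, so only kilobytes per group travel to the coordinator instead of every raw row—the same property we were after for percentiles.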
The challenge: Figuring out if approximation algorithms would meet the SLA, without having to implement & measure all the approaches

Getting to our customer’s SLA of 30 seconds from over 6 minutes was a hard task. And before doing too much engineering work, we needed to assess whether calculating the percentiles using one of these Postgres extensions, like t-digest or HDR, was going to hit the 30 second SLA.

Given that COUNT DISTINCT had already been solved with HyperLogLog, we realized we could use a query that triggers the HLL code path to establish a ballpark figure for the time required for the calculation, omitting some computational details.

To get this information, we asked our customer for the Postgres EXPLAIN plans of two queries: the original percentile calculation, and a count of the distinct latencies (with HLL). Both were grouped by the desired time interval—minutes of a day—which resulted in 1440 groups.

| Type of calculation with customer data (based on our customer’s Postgres EXPLAIN plans) | Execution Time |
|---|---|
| percentile_cont | 361 seconds (~6 minutes) |
| COUNT DISTINCT – via HyperLogLog approximation | 4 seconds |

Even though the COUNT DISTINCT via HLL above gives an answer that tells us nothing about percentiles, the execution characteristics of approximating counts are very similar to how we would approximate percentiles.

The shared execution characteristics between COUNT DISTINCT via HLL and approximating percentiles are:

- iterate over all values,
- update a summary we keep,
- send the summaries to the Citus coordinator, and finally
- combine the summaries to come to an answer.

From the COUNT DISTINCT via HLL row in the table above, you can see that using an approximation algorithm (and maintaining the summary) seemed to get us in the ballpark of the desired execution times.

But it was unclear how the time it would take to do percentile approximations compares to the time it takes to approximate COUNT DISTINCT.

We needed to figure out: would it be 2x or 20x more expensive to approximate percentiles than to approximate COUNT DISTINCT?

The good news: to get a multiplier for the compute required, we did not have to rely on the environment of our customer.

Instead, we set a baseline by running roughly the same approximation ourselves: matching the number of rows and the number of expected groups gives a good indication. By running both a count distinct approximation and a percentile approximation, we were able to measure how much more work one requires over the other on the same data. This multiplier also shed light on whether we would be able to meet the target execution time of 30 seconds.
We used these queries to estimate the performance:

```sql
WITH data AS (
  SELECT
    (random()*60*24)::integer AS minute,
    (random()*60000)::integer AS latency
  FROM generate_series(1,1000000)
)
SELECT
  minute,
  aggregate_to_create_summary(latency) AS summary
FROM data
GROUP BY minute;
```

Executing these experiments gave us a workload multiplier, which we applied to the runtime as measured on the target infrastructure. We ran these experiments for the 3 extensions we had identified earlier (the HDR prototype, the t-digest prototype, and the open source t-digest) and for the HyperLogLog (HLL) extension used for count approximations. The multipliers in the table below are against the baseline of the HLL extension.

| Approximation algorithm | Runtime | Multiplier vs. HLL (lower means faster) |
|---|---|---|
| HDR – prototype | 2023 ms | ~2x |
| t-digest – prototype | 4089 ms | ~4x |
| t-digest – open source | 1590 ms | ~1.5x |
| HyperLogLog (baseline) | 1026 ms | 1 |
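In the benchmark query, aggregate_to_create_summary is a placeholder for whichever extension is being measured. For the open source t-digest extension, for example, the summary-building step could look roughly like this (the compression value of 100 is an illustrative choice, not a measured setting):

```sql
-- t-digest variant of the benchmark query, assuming the tdigest extension.
-- The second argument of tdigest() is the compression (accuracy) parameter;
-- 100 here is illustrative.
WITH data AS (
  SELECT
    (random()*60*24)::integer AS minute,
    (random()*60000)::integer AS latency
  FROM generate_series(1,1000000)
)
SELECT
  minute,
  tdigest(latency, 100) AS summary
FROM data
GROUP BY minute;
```

Swapping in the HLL or HDR aggregates in the same position gives the comparable runtimes for the other rows of the table.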
Data sizes matter

Besides compute time, the size of the data being transferred from the shards to the coordinator has a big influence on the total execution time. That is why we started this investigation in the first place: transferring all rows to the coordinator had been taking the majority of the 6 minutes. For simplicity and portability, Citus uses the text-based protocol for transferring rows from the workers to the coordinator. To get an idea of the data sizes involved, we can cast the summaries created above to text and sum their lengths. This is not an exact size, but rather an order-of-magnitude check.

To estimate how long the final execution would take using t-digest, we compared the transfer sizes and then applied that multiplier to the transfer times (measured in the target environment).

```sql
WITH data AS (
  SELECT
    (random()*60*24)::integer AS minute,
    (random()*60000)::integer AS latency
  FROM generate_series(1,1000000)
),
summary_sizes AS (
  SELECT
    minute,
    octet_length(aggregate_to_create_summary(latency)::text) AS percentile_size
  FROM data
  GROUP BY minute
)
SELECT sum(percentile_size)*200 FROM summary_sizes;
```

| Postgres Extension | Amount of data transferred to Citus coordinator | Multiplier (lower is better) |
|---|---|---|
| HDR prototype | 3.8 GB | ~6x |
| t-digest prototype | 1.7 GB | ~2.7x |
| t-digest (open source) | 218 MB | ~0.34x |
| HyperLogLog (baseline) | 646 MB | 1 |

Based on both the compute time measurements and the expected data transfer sizes, it became clear that the open source t-digest extension created by Postgres committer Tomas Vondra would yield the best performance for Citus, and specifically for the analytics use case we were targeting.

If you can’t beat them, join them

At this point, we had a good understanding of the speeds that could be achieved for percentile approximation with the different Postgres extensions we were considering. And the good news was, the projected speeds were well within our customer’s SLA.

The hard question we had to answer: should we spend time to productize our own t-digest or HDR prototype extensions—or should we adopt (and try to improve) the existing open source t-digest extension?
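Whichever extension won, the end state for a dashboard query would be roughly the same shape. As a sketch, assuming the open source t-digest API and a hypothetical table, an approximate P99 query looks like:

```sql
-- Approximate P99 latency per minute via t-digest.
-- Table and columns are illustrative; 100 is the compression parameter.
SELECT
  date_trunc('minute', created_at) AS minute,
  tdigest_percentile(latency, 100, 0.99) AS approx_p99
FROM responses
GROUP BY 1
ORDER BY 1;
```

Because t-digest summaries can be merged, a distributed planner can build one digest per shard and combine them at the coordinator instead of shipping every row.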
A quick chat with the author of the open source github.com/tvondra/tdigest extension, Tomas Vondra, revealed that he uses the extension to great satisfaction. Tomas was also open to contributions to make the extension work well for Citus. The documented use of the open source t-digest extension by at least 1 user was already infinitely more than our internal prototypes had. And fixing a bug we had encountered was also straightforward. One pull request later, we had an open source t-digest extension that would work with Citus.

The next step was to integrate t-digest with Citus. Given our experience with HLL in the past, the integration of t-digest with the Citus distributed Postgres planner was straightforward. It was time to go back to our customer to validate that we could meet their SLA. And we did meet our customer’s SLA: with t-digest and Hyperscale (Citus), our customer’s Postgres queries to approximate percentiles now ran in 7.5 seconds, 45 times faster than their initial 6 minute Postgres query.

More open source adoption of t-digest (with Citus)

One positive side effect of embracing the existing open source t-digest and contributing back to the t-digest project: we found more users that were interested in using t-digest with Citus!

Min Wei from the Windows Data & Intelligence team—who uses Citus with Postgres on Azure to support over 6M queries per day—also needs to do percentile approximation at scale. And while working on this project, we discovered that Min is looking to use the exact same t-digest extension to Postgres, too.

Matt Watson from Stackify recently published a blog about calculating percentiles with t-digest in Citus. Matt even helped improve the extension by documenting an edge case where the calculations were off, making t-digest work better for Stackify and pretty much all users—including the original author and all of our Citus users. Had we selected a closed source extension, our efforts would not have helped other customers like Stackify and Min Wei of Microsoft, and we would have run into similar bugs that we would have had to fix ourselves. By adopting an open source solution and improving it collectively, we make it work better for all.
When working on the internals of Citus, an open source extension that transforms Postgres into a distributed database, we often get to talk with customers that have interesting challenges you won’t find everywhere. Just a few months back, I encountered an analytics workload that was a really good fit for Citus.

 

But we had one problem: the percentile calculations on their data (over 300 TB of data) could not meet their SLA of 30 seconds.

 

To make things worse, the query performance was not even close to the target: the percentile calculations were taking about 6 minutes instead of the required 30 second SLA.

Figuring out how to meet the 30 second Postgres query SLA was a challenge because we didn’t have access to our customer’s data—and also because my customer didn’t have the cycles to compare the performance for different approaches I was considering. So we had to find ways to estimate which types of percentile calculations would meet their SLA, without having to spend the engineering cycles to implement different approaches.

 

This post explores how—with the help of the Postgres open source community—I was able to reduce the time to calculate percentiles by 45x by using the t-digest extension to Postgres.

Importance of calculating percentiles in analytics workloads

 

My customer operates a multi-datacenter web application with a real-time analytics dashboard that displays statistics about a variety of signals—and they store the analytics data in Hyperscale (Citus) on our Azure Database for PostgreSQL managed service. They ingest over 2 TB of data per hour and needed < 30 second performance for their queries over a 7-day period. This analytics dashboard is used by their engineers to debug and root cause customer-reported issues. So they query metrics like latency, status codes, and error codes based on dimensions such as region, browser, data center, and the like.

 

Latency is of course an important metric for understanding these types of issues. However, average latency can be very misleading—which is where percentiles come in. If 1% of your users are experiencing super slow response times, the average query response time may not change much, leading you to (incorrectly) think that nothing is wrong. However, you would see a notable difference in P99, allowing you to isolate issues much faster.

Which is why metrics like P99 are so important when monitoring analytics workloads. A P99 query response time of 500ms means that 99% of your queries complete in under 500ms.

 

Native percentile functions in Postgres didn’t do the trick for this use case

 

Postgres provides native support for selecting the value of a column at a certain percentile with the ordered-set aggregate functions:

  • percentile_cont
  • percentile_disc

 

Having native support for percentile calculations in Postgres is super powerful: all you have to do is specify you want the percentile and then you can let Postgres figure out how to get it.
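As a quick sketch of what that looks like (the events table and its observed_at and latency columns here are hypothetical, chosen to match the dashboard use case described above):

```sql
-- 99th-percentile latency per minute with the native ordered-set aggregate
SELECT
    date_trunc('minute', observed_at) AS minute,
    percentile_cont(0.99) WITHIN GROUP (ORDER BY latency) AS p99_latency
FROM events
GROUP BY 1;
```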

And yet... percentiles can’t be sped up by indexes in Postgres.

 

When diving into the implementation of percentile_cont in Postgres, you will find that the transition functions for these aggregates collect, store, and sort the data internally. This means Postgres cannot rely on the known sort order of an index to significantly speed up the calculation.

 

Add on top of that: when Postgres is sharded across multiple nodes by Citus and we want a percentile over the sharded dataset, we cannot simply combine the percentiles we get back from the individual shards and expect a correct result. Instead, Citus must send all rows to one location, sort them locally, and find the value at the requested percentile. Copying the data takes the most time in a distributed Citus cluster, and this was in fact the problem for our customer: out of the 6 minutes for the query, 4 ½ minutes were spent just pulling data to the coordinator.
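A tiny illustration of why shard-local percentiles cannot simply be combined: the medians of two "shards" do not determine the median of their union.

```sql
-- median of each "shard" in isolation
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY v)
FROM unnest(ARRAY[1,2,3]) AS v;              -- 2

SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY v)
FROM unnest(ARRAY[4,5,6,1000]) AS v;         -- 5.5

-- median of the combined data: no function of 2 and 5.5 alone recovers it
SELECT percentile_cont(0.5) WITHIN GROUP (ORDER BY v)
FROM unnest(ARRAY[1,2,3,4,5,6,1000]) AS v;   -- 4
```

This is why an exact percentile needs all the raw values in one place, whereas approximation summaries like t-digest are designed to be mergeable across shards.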

 

To speed up the percentile calculations with Citus, we needed to reduce the amount of data required at the Citus coordinator to get to a reasonable percentile.

Lucky for us this was not the first time we had needed to find an effective way to run complex SQL calculations—at scale. In the past we have written about similar challenges with COUNT(DISTINCT) queries—which we solved with the HyperLogLog approximation algorithm, also known as HLL.

 

Hence this epiphany: Using a trusted, mathematically-robust approximation algorithm (sometimes called a sketch algorithm) might help us speed up performance.

 

Which Percentile approximation technique to use with Postgres?

 

A quick exploration in the space of percentile approximation pinpointed 3 different Postgres extensions we could consider. Two of these Postgres extensions had been created by Citus engineering interns in years past—and while not yet finished and not yet open source, these two prototype extensions deserved consideration:

  • High Dynamic Range Histograms, HDR for short
  • t-digest (the one created by our Citus intern)


And one open source t-digest extension contributed by Postgres committer Tomas Vondra.

 

With 3 competing Postgres extensions that were all suitable for the approximation, the question changed from how to approximate percentiles into which extension to use. To decide between the 3 options, we needed to answer these 2 questions for each:

 

  • How fast would our customer's queries execute with it?
  • How production-ready is the extension?

 

And we wanted to be able to answer these questions without having to integrate each of the approximation algorithms with Citus—and without having to ask our customer to measure how fast each performed.

 

The challenge: Figuring out if approximation algorithms would meet the SLA, without having to implement & measure all the approaches

 

Getting to our customer SLA of 30 seconds from over 6 minutes was a hard task. And before doing too much engineering work, we needed to assess if calculating the percentiles using one of these Postgres extensions like t-digest or HDR was going to hit the 30 second SLA.

 

Given that COUNT DISTINCT had already been solved with HyperLogLog, we realized we could use a query that triggers the HLL code path to establish a ballpark figure on time required for the calculation, omitting some computational details.
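A sketch of such a query, using the postgresql-hll extension's aggregates (the events table and integer latency column are hypothetical, matching the earlier examples):

```sql
-- approximate number of distinct latencies per minute via HLL;
-- the point is not the answer, but exercising the same code path:
-- scan all rows, maintain a small summary per group
SELECT
    date_trunc('minute', observed_at) AS minute,
    hll_cardinality(hll_add_agg(hll_hash_integer(latency))) AS distinct_latencies
FROM events
GROUP BY 1;
```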

 

To get this information, we asked our customer for the Postgres EXPLAIN plans of two queries: the original percentile query, and a count of the distinct latencies (with HLL). Both were grouped by the desired time interval (minutes of a day), which resulted in 1440 groups.

Type of calculation with customer data (based on our customer’s Postgres EXPLAIN plans) | Execution time
percentile_cont                                | 361 seconds (~6 minutes)
COUNT DISTINCT – via HyperLogLog approximation | 4 seconds

 

Even though the COUNT DISTINCT via HLL above answers an entirely different question, its execution characteristics are very similar to how we would approximate the percentiles.

 

The shared execution characteristics between COUNT DISTINCT via HLL and approximating percentiles are:

  • to iterate over all values,
  • change a summary we keep,
  • send the summaries to the Citus coordinator, and finally
  • combine the summaries to come to an answer.

 

From the COUNT DISTINCT via HLL row in the table above, you can see that using an approximation algorithm (and maintaining the summary) seemed to get us in the ballpark of the desired execution times.

 

But it was unclear how the time it would take to do percentile approximations compares to the time it takes to approximate COUNT DISTINCT.

We needed to figure out, would it be 2X or 20X more expensive to approximate percentiles vs. to approximate COUNT DISTINCTs?

The good news: to get a multiplier for the compute required, we did not have to rely on the environment of our customer.

 

Instead, we set a baseline by running roughly the same approximations ourselves, on synthetic data with a similar number of rows and expected groups. By running both a COUNT DISTINCT approximation and a percentile approximation on the same data, we could measure how much more work one requires than the other. This multiplier also shed light on whether we would be able to meet the target execution time of 30 seconds.

We used these queries to estimate the performance:

 

 


-- generate 1,000,000 random (minute, latency) pairs and build
-- one summary per minute; aggregate_to_create_summary stands in
-- for the aggregate under test
WITH data AS (
	SELECT
		(random()*60*24)::integer AS minute,
		(random()*60000)::integer AS latency
	FROM generate_series(1,1000000)
)
SELECT
	minute,
	aggregate_to_create_summary(latency) AS summary
FROM data
GROUP BY minute;
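For the open source extensions, the concrete aggregate calls substituted for aggregate_to_create_summary would look roughly like this (100 is a typical t-digest compression value, not a tuned setting; data refers to the CTE above):

```sql
-- open source tdigest extension (tvondra/tdigest):
SELECT minute, tdigest(latency, 100) AS summary
FROM data
GROUP BY minute;

-- postgresql-hll extension (the COUNT DISTINCT baseline):
SELECT minute, hll_add_agg(hll_hash_integer(latency)) AS summary
FROM data
GROUP BY minute;
```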

 

 

Executing these experiments gave us a workload multiplier which we applied to the runtime measured on the target infrastructure. We ran these experiments for the 3 extensions we had identified earlier (the HDR prototype, the t-digest prototype, and the open source t-digest) plus the HyperLogLog (HLL) extension used for count approximations. The multipliers in the table below are relative to the HLL baseline.

Approximation algorithm | Runtime | Multiplier vs. HLL (lower means faster)
HDR – prototype         | 2023 ms | ~2x
t-digest – prototype    | 4089 ms | ~4x
t-digest – open source  | 1590 ms | ~1.5x
HyperLogLog (baseline)  | 1026 ms | 1

 

Data sizes matter

 

Besides compute time, the size of the data being transferred from the shards to the coordinator has a big influence on the total execution time. That is why we started this investigation in the first place: transferring all rows to the coordinator had been taking the majority of the 6 minutes. For simplicity and portability, Citus uses the text-based protocol for transferring rows from the workers to the coordinator. To get an idea of the data sizes involved, we can cast the summaries created above to text and sum their lengths. This is again not an exact size but an order-of-magnitude check.

 

To estimate how long the final execution would take using t-digest, we compared the transfer sizes and then applied a multiplier to the transfer times (measured in the target environment).

 

 

-- build summaries per minute as before, then sum the sizes of their
-- text representations to estimate the bytes sent to the coordinator
WITH data AS (
	SELECT
		(random()*60*24)::integer AS minute,
		(random()*60000)::integer AS latency
	FROM generate_series(1,1000000)
),
summary_sizes AS (
	SELECT
		minute,
		octet_length(aggregate_to_create_summary(latency)::text) AS percentile_size
	FROM data
	GROUP BY minute
)
SELECT sum(percentile_size)*200 FROM summary_sizes;

 

 

Postgres Extension     | Amount of data transferred to Citus coordinator | Multiplier (lower is better)
HDR prototype          | 3.8 GB | ~6x
t-digest prototype     | 1.7 GB | ~2.7x
t-digest (open source) | 218 MB | ~0.34x
HyperLogLog (baseline) | 646 MB | 1

 

Based on both the measurements of compute time as well as the expected data transfer sizes it became clear the open source t-digest extension created by Postgres committer Tomas Vondra would yield the best performance for Citus and specifically for the analytics use case we were targeting.

 

If you can’t beat them, join them

 

At this point, we had a good understanding of the speeds that could be achieved for percentile approximation with the different Postgres extensions we were considering. And the good news was, the projected speeds were well within our customer’s SLA.

 

The hard question we had to answer: should we spend time to productize our own t-digest or HDR prototype extensions—or should we adopt (and try to improve) the existing open source t-digest extension?

 

A quick chat with the author of the open source github.com/tvondra/tdigest extension, Tomas Vondra, revealed that he uses the extension to great satisfaction. Also, Tomas was open to contributions to the extension to make it work well for Citus. The documented use of the open source t-digest extension by at least 1 user was already infinitely more than our internal prototypes. And fixing a bug we had encountered was also straightforward. One pull request later and we had an open source t-digest extension that would work with Citus.

 

The next step was to integrate t-digest with Citus. Given our experience with HLL in the past, integrating t-digest with the Citus distributed Postgres planner was straightforward. It was time to go back to our customer to validate that we could meet their SLA. And we did: with t-digest and Hyperscale (Citus), our customer's Postgres queries to approximate percentiles now ran in 7.5 seconds, 45 times faster than their initial 6-minute Postgres query.
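For reference, a percentile approximation with the tdigest extension looks roughly like this (100 is the t-digest compression parameter; the events table and its columns are hypothetical, as in the earlier examples):

```sql
-- approximate p99 latency per minute; the per-shard t-digest
-- summaries are mergeable, so Citus can push this down to the workers
SELECT
    date_trunc('minute', observed_at) AS minute,
    tdigest_percentile(latency, 100, 0.99) AS p99_latency
FROM events
GROUP BY 1;
```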

More open source adoption of t-digest (with Citus)

 

One positive side-effect of embracing the existing open source t-digest and contributing back to the t-digest project: we found more users that were interested in using t-digest with Citus!

Min Wei from the Windows Data & Intelligence team—who uses Citus with Postgres on Azure to support over 6M queries per day—also needs to do percentile approximation at scale. While working on this project, we discovered that Min is looking to use the exact same t-digest extension to Postgres, too.

 

Matt Watson from Stackify recently published a blog about calculating percentiles with t-digest in Citus. Matt even helped improve the extension by documenting an edge case where the calculations were off, making t-digest work better for Stackify and pretty much all users—including the original author and all of our Citus users. Had we selected a closed source extension, our efforts would not have helped other customers like Stackify and Min Wei of Microsoft, and we would have run into similar bugs that we would have had to fix ourselves. By adopting an open source solution and improving it collectively, we make it work better for everyone.