Partition Stream Analytics output and query it with serverless Synapse SQL

Published Jul 19 2021 10:18 AM 3,553 Views
Microsoft

The use case is as follows: I have water meter telemetry I would like to run analytics on. 
Events are ingested from water meters and collected into a data lake in parquet format. The data is partitioned by Year, Month and Day based on the timestamp contained in the events themselves, not on the time of event processing in ASA, as this is a frequent requirement.

 

Events are sent from the on-premises SCADA systems to Event Hub, then processed by Stream Analytics, which can easily:

  1. Convert events sent in JSON format into partitioned parquet.
  2. Partition by Year/Month/Day.
  3. Derive the partition date from within the event itself.

The result can immediately be queried with serverless Synapse SQL pool.
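As a quick illustration of the event-time partitioning described above, here is a minimal sketch, assuming a sample event whose field names mirror those used in the ASA query later in this article (the values themselves are made up):

```python
# Hypothetical sample event; the createdAt field name matches the ASA query
# shown later in this article, the values are invented for illustration.
event = {
    "eventId": "evt-001",
    "deviceId": "watermeter-042",
    "createdAt": "2021-06-23T07:15:00Z",  # event time, not processing time
    "Value": 12.7,
}

# Year/Month/Day come from the event's own timestamp (ISO 8601: YYYY-MM-DDThh:mm:ssZ).
created = event["createdAt"]
partition_path = "year=%s/month=%s/day=%s" % (created[0:4], created[5:7], created[8:10])
print(partition_path)  # year=2021/month=06/day=23
```

A late-arriving event therefore always lands in the folder matching its own timestamp, not in the folder for the day it happened to be processed.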

Input Stream

My ASA input stream, named inputEventHub, reads JSON events from an Event Hub.

Output Stream

The output stream is the interesting part and will define the partition scheme:

[Screenshot: output configuration with a path pattern built on the time_struct pseudo column]

We see that its path pattern is based on a pseudo column named "time_struct", and all the partitioning logic lives in the construction of this pseudo column.

 

Let's have a look at the ASA query:

[Screenshots: the ASA query and its test output showing the time_struct column]

 

We can now see that the pseudo column time_struct contains the path: ASA understands it and processes it literally, including the "/" signs.
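As a sketch of what happens at write time, the configured path pattern and the query-produced time_struct combine into the final output folder. The expansion below is illustrative (the deplasa1 folder name is taken from the OPENROWSET URL used later in this article; the `{time_struct}` token follows the custom blob output partitioning syntax):

```python
# Illustrative expansion of the output path pattern; ASA substitutes the
# {time_struct} token with the column value, slashes included.
path_pattern = "deplasa1/{time_struct}"    # path pattern configured on the output
time_struct = "year=2021/month=06/day=23"  # value produced by the query's concat/substring expression
output_folder = path_pattern.replace("{time_struct}", time_struct)
print(output_folder)  # deplasa1/year=2021/month=06/day=23
```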

 

Here is the query code:

select 
    concat('year=',substring(createdAt,1,4),'/month=',substring(createdAt,6,2),'/day=',substring(createdAt,9,2)) as time_struct,
    eventId,
    [type],
    deviceId,
    deviceSequenceNumber,
    createdAt,
    Value,
    complexData,
    EventEnqueuedUtcTime AS enqueuedAt,
    EventProcessedUtcTime AS processedAt,
    cast(UDF.GetCurrentDateTime('') as datetime) AS storedAt,
    PartitionId
into
    [lionelpdl]
from 
    [inputEventHub]

 

 

 

 

After a few days of processing, the output folder looks like this:

[Screenshots: output folder hierarchy with year=/month=/day= subfolders containing parquet files]

Query results with serverless SQL and take advantage of partitioning

Now I can directly query my Output Stream with serverless SQL:

 

[Screenshot: serverless SQL query results over the partitioned parquet files]

 

We can also notice that the metadata functions are fully functional without any additional work. For example, I can run the following query using the filepath metadata function:

 

 

 

SELECT TOP 100
    [result].filepath(1) AS [year]
    ,[result].filepath(2) AS [month]
    ,[result].filepath(3) AS [day]
    ,*
FROM
    OPENROWSET(
        BULK 'https://lionelpdl.dfs.core.windows.net/parquetzone/deplasa1/year=*/month=*/day=*/*.parquet',
        FORMAT='PARQUET'
    ) AS [result]
WHERE [result].filepath(2)=6
  AND [result].filepath(3)=23

 

 

 

Spark post processing

Finally, to optimize query performance I can schedule a daily Spark job that processes all events from the previous day and compacts them into fewer, larger parquet files.

As an example, I've decided to rebuild the partitions with files containing at most 2 million rows each.

 

Here are two versions of the same code:

PySpark notebook (for interactive testing, for instance)

 

 

 

import datetime
from pyspark.sql import SparkSession  # `spark` is already available in a Synapse notebook

account_name = "storage_account_name"
container_name = "container_name"
source_root = "source_directory_name"
target_root = "target_directory_name"
days_backwards = 4  # number of days back from today; as a daily job it would typically be set to 1

adls_path = 'abfss://%s@%s.dfs.core.windows.net/%s' % (container_name, account_name, source_root)

hier = datetime.date.today() - datetime.timedelta(days=days_backwards)
day_to_process = '/year=%04d/month=%02d/day=%02d/' % (hier.year, hier.month, hier.day)
file_pattern = '*.parquet'

print(adls_path + day_to_process + file_pattern)

df = spark.read.parquet(adls_path + day_to_process + file_pattern)

adls_result = 'abfss://%s@%s.dfs.core.windows.net/%s' % (container_name, account_name, target_root)

print(adls_result + day_to_process + file_pattern)

# coalesce(1) writes with a single task; maxRecordsPerFile still splits the
# output into multiple files of at most 2 million rows each.
df.coalesce(1).write \
        .mode("overwrite") \
        .option("maxRecordsPerFile", 2000000) \
        .parquet(adls_result + day_to_process)

 

 

 

 

Spark job (with input parameters, scheduled daily)

[Screenshot: Synapse Spark job definition with its command line arguments]

 

 

 

 

import sys
import datetime
from pyspark import SparkContext, SparkConf
from pyspark.sql import SparkSession

if __name__ == "__main__":

	# create the Spark context with the necessary configuration
	conf = SparkConf().setAppName("dailyconversion").set("spark.hadoop.validateOutputSpecs", "false")
	sc = SparkContext(conf=conf)
	spark = SparkSession(sc)

	account_name = sys.argv[1]    # storage account name
	container_name = sys.argv[2]  # container name
	source_root = sys.argv[3]     # source directory name
	target_root = sys.argv[4]     # target directory name
	days_backwards = sys.argv[5]  # number of days back to reprocess the parquet files, typically 1

	hier = datetime.date.today() - datetime.timedelta(days=int(days_backwards))

	day_to_process = '/year=%04d/month=%02d/day=%02d/' % (hier.year, hier.month, hier.day)
	file_pattern = '*.parquet'

	adls_path = 'abfss://%s@%s.dfs.core.windows.net/%s' % (container_name, account_name, source_root)

	print(adls_path + day_to_process + file_pattern)

	df = spark.read.parquet(adls_path + day_to_process + file_pattern)

	adls_result = 'abfss://%s@%s.dfs.core.windows.net/%s' % (container_name, account_name, target_root)

	print(adls_result + day_to_process + file_pattern)

	# coalesce(1) writes with a single task; maxRecordsPerFile still splits the
	# output into multiple files of at most 2 million rows each.
	df.coalesce(1).write \
		.mode("overwrite") \
		.option("maxRecordsPerFile", 2000000) \
		.parquet(adls_result + day_to_process)

 

 

 

Conclusion

In this article we have covered:

  • How to use Stream Analytics to easily write partitioned parquet output.
  • How to use serverless Synapse SQL pool to query the Stream Analytics output.
  • How to reduce the number of parquet files using a Synapse Spark pool.

Additional resources:

  • Partitioning of the output stream is built based on Azure Stream Analytics custom blob output partitioning: https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-custom-path-patterns-blob-storage-output
  • Querying the data using file metadata with serverless SQL is based on Using file metadata in queries - Azure Synapse Analytics: https://docs.microsoft.com/en-us/azure/synapse-analytics/sql/query-specific-files
  • Additional Apache Spark post processing of parquet files is based on Tutorial: Create Apache Spark job definition in Synapse Studio - Azure Synapse Analytics: https://docs.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-job-definitions