Partition strategy to improve performance

I am querying a Log Analytics workspace from a workbook. By the nature of the solution, the workspace holds a large number of records, and the queries are fairly complex, with several joins and summarize operations.

Most of my queries work fine, but a few of them hit a memory peak and the query execution is aborted.

Hence I looked at improving performance with the hint.shufflekey hint. I could see the difference: the query that used to get aborted now runs (though it occasionally still fails).
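For context, this is roughly the shape of what I am doing (a minimal sketch, not my actual query; the Heartbeat table and its columns stand in for my real data, and the join side is hypothetical):

```kusto
// Shuffle the summarize on a high-cardinality key so the aggregation
// is distributed across nodes instead of accumulating on a single one.
Heartbeat
| summarize hint.shufflekey = SourceComputerId arg_max(TimeGenerated, *) by SourceComputerId

// The same hint can be applied to a join on a high-cardinality key:
// | join hint.shufflekey = SourceComputerId (SomeOtherTable) on SourceComputerId
```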

The documentation linked above also mentions hint.num_partitions, with which we can specify the number of partitions per cluster used to execute the query in parallel.
1. How many clusters will be allocated for a Log Analytics workspace? Is this configurable?
2. In some log queries I noticed "hint.strategy=partitioned", but I couldn't find any details about it. Could you please explain or provide pointers?
(E.g. - | summarize hint.strategy=partitioned arg_max(TimeGenerated, UpdateState) by SourceComputerId, UpdateID)
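From what I understand of the docs, hint.num_partitions is combined with the shuffle key like this (a sketch only; the partition count of 8 is an arbitrary value I picked for illustration, and Heartbeat/SourceComputerId stand in for my real table and key):

```kusto
// Request a specific number of partitions for the shuffled aggregation.
// Whether this helps depends on key cardinality and cluster resources.
Heartbeat
| summarize hint.shufflekey = SourceComputerId hint.num_partitions = 8
    arg_max(TimeGenerated, *) by SourceComputerId
```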

1 Reply
I think this has been answered offline, @Vino55?