Introduction:
In this article we are going to use YAML pipelines to deploy Synapse code, along with customization of input parameters, which can help you create the deployment d...
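For orientation, the deployment step of such a YAML pipeline might look roughly like the sketch below. It assumes the Synapse workspace deployment marketplace task; the subscription, resource group, workspace name, and file paths are placeholders, composing the storage account name as st92$(env) is only an assumption, and the exact input names should be checked against the version of the task you have installed.

```yaml
# Sketch only: deploy the published workspace templates to a target
# environment and override selected ARM template parameters.
steps:
  - task: Synapse workspace deployment@2          # marketplace extension task
    displayName: 'Deploy Synapse workspace artifacts'
    inputs:
      operation: 'deploy'
      TemplateFile: '$(System.DefaultWorkingDirectory)/<path-to>/TemplateForWorkspace.json'
      ParametersFile: '$(System.DefaultWorkingDirectory)/<path-to>/TemplateParametersForWorkspace.json'
      azureSubscription: '<service-connection-name>'     # placeholder
      ResourceGroupName: '<target-resource-group>'       # placeholder
      TargetWorkspaceName: '<target-synapse-workspace>'  # placeholder
      # Dash-prefixed name/value pairs; the name must match the generated template.
      OverrideArmParameters: '-account_name st92$(env)'
```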
Unfortunately, it didn't work. I followed the same steps as you did; the cells in my notebook are marked as parameters, and so on.
I created two storage accounts, one called st92dev and another called st92uat.
I also created two Synapse workspaces.
In my notebook in the dev workspace, I toggled the cell as a Parameters cell, as you did, with my parameter account_name.
(As you can see, my dev environment is linked to my ADO Git repo, the cell is marked as Parameters, and my variable is set to st92dev.)
And in my notebook I created the parameters just as you did.
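To be concrete, the parameters cell itself holds nothing more than the default value, along these lines (assuming a PySpark notebook):

```python
# Cell toggled as "Parameters" in the Synapse notebook.
# The default points at the dev storage account; the ARM override
# is supposed to swap it to st92uat when deploying to uat.
account_name = "st92dev"
```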
I published, and the parameters appeared in my ARM template.
Then I created a variable group called uat and added a variable called env with the value uat.
Later, I created the release pipeline, with the artifact pointing to my workspace_publish folder.
I added these variables to my pipeline.
Then I replaced them in the Override parameters field, as depicted below.
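My understanding is that this field takes dash-prefixed name/value pairs, with the parameter name copied exactly from the generated TemplateParametersForWorkspace.json; something like the line below, where building the account name from the env variable is just my guess at what the article intended:

```
-account_name st92$(env)
```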
It works well and the deployment happens, but I don't know what is going on in my uat environment: my parameter does not show up there with the correct name.
And honestly, I didn't really understand how to add the template_parameters.json that you mentioned.
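My best guess is that it is just a standard ARM parameter file handed to the deployment task, something like the sketch below, where the parameter name is my assumption and would have to match the one in the generated template; is that what you meant?

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "account_name": {
      "value": "st92uat"
    }
  }
}
```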
And another thing I noticed: my uat environment was not running the notebook; it throws the error "AVAILABLE_WORKSPACE_CAPACITY_EXCEEDED: Livy session has failed. Session state: Error. Error code: AVAILABLE_WORKSPACE_CAPACITY_EXCEEDED. Your job requested 40 vcores. However, the workspace only has 12 vcores available out of quota of 12 vcores for node size family [MemoryOptimized]. Try ending the running job(s), reducing the numbers of vcores requested or increasing your vcore quota. Source: User."
My Spark pools across my environments have the same name.
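The only workaround I could think of was to shrink the session request at the top of the notebook with a %%configure cell, roughly like the sketch below; the numbers are just an example sized to fit a 12-vcore quota (driver 4 + 2 executors x 4 = 12), and maybe the real fix is a smaller node size or a higher vcore quota on the uat pool?

```
%%configure -f
{
    "driverCores": 4,
    "driverMemory": "28g",
    "executorCores": 4,
    "executorMemory": "28g",
    "numExecutors": 2
}
```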
My service principal from ADO already has the Synapse Artifact Publisher role on the uat workspace.
I appreciate your patience and kindness in helping.