Running a Python script in an Azure Pipeline and storing its output


I have a Python script in an Azure repository, and a job runs in Azure Pipelines whenever a change is detected in the .py code. The script's output currently only appears in the terminal (cmd or bash, I am not sure what the black screen is called) within the pipeline's job space. How do I store this output in a suitable format, like JSON, .csv, or .xlsx?

 

Since my output is a dataframe called df, I tried storing it with df.to_csv("output_file_name.csv"), but I don't know the path where this file is even getting created within Azure DevOps.

 

@David Pazdera @nguyenphupn and other members, your help is greatly appreciated. I am very new to Azure DevOps.

1 Reply

Hi @rekha7,

There is a set of predefined variables in Azure Pipelines, one being System.DefaultWorkingDirectory. This is the directory on the agent where the code from your repository is checked out. You can find more info about those variables and how to use them in the predefined variables documentation (https://learn.microsoft.com/azure/devops/pipelines/build/variables).
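For example, predefined variables are exposed to your script as upper-cased environment variables with dots replaced by underscores, so the Python code can build an absolute output path from them. Here is a minimal sketch, assuming pandas and a stand-in dataframe (the file name output.csv and the fallback logic are just illustrations, not required names):

```python
import os
import pandas as pd

# Predefined pipeline variables are surfaced to scripts as environment
# variables: dots become underscores and the name is upper-cased, e.g.
# System.DefaultWorkingDirectory -> SYSTEM_DEFAULTWORKINGDIRECTORY.
out_dir = os.environ.get(
    "BUILD_ARTIFACTSTAGINGDIRECTORY",  # conventional folder for files to publish
    os.environ.get("SYSTEM_DEFAULTWORKINGDIRECTORY", "."),  # fallback for local runs
)

df = pd.DataFrame({"example": [1, 2, 3]})  # stand-in for your real dataframe

out_path = os.path.join(out_dir, "output.csv")
df.to_csv(out_path, index=False)
print(f"CSV written to {out_path}")
```

Writing to Build.ArtifactStagingDirectory is convenient because that folder is the conventional source location when you publish artifacts in a later step.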

 

In general, there are several ways to persist output from a pipeline run:

  • save it as a pipeline artifact, so you can retrieve it at a later time (see the sketch after this list)
  • use a cloud storage service and its CLI commands to persist it there, e.g. Azure Blob Storage; there is a useful blog post on how to do that here.
  • store the file back in the source git repository (the pipeline would have to commit and push it).
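For the first option, you don't even need extra YAML: the agent scans stdout for "##vso[...]" logging commands, and artifact.upload publishes a file as a pipeline artifact. A minimal sketch, continuing from the CSV written above (the containerfolder and artifactname values are just example names):

```python
import os

# The agent interprets logging commands printed to stdout;
# artifact.upload attaches the named file to the run as a pipeline artifact.
out_path = os.path.join(
    os.environ.get("BUILD_ARTIFACTSTAGINGDIRECTORY", "."), "output.csv"
)

print(
    "##vso[artifact.upload containerfolder=results;"
    f"artifactname=script-output]{out_path}"
)
```

Equivalently, a PublishPipelineArtifact@1 task placed after your script step in the pipeline YAML can publish the same file. Either way, the artifact shows up on the run's summary page, where you can download it.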

 

Keep in mind that when the pipeline run is over, the agent's workspace is cleaned up (and Microsoft-hosted agents are discarded entirely), so just writing the file to the agent's filesystem will not persist the data.