Question
Coforge
US
Last activity: 14 Apr 2021 3:45 EDT
Dataflow target data set of type AWS S3 Repo
I have a requirement to export data in CSV format and save it to an Amazon S3 bucket. We read data from a Postgres table, apply transformations and rules, and enrich the data with other datasets on Postgres. After this processing we want to save the files to an Amazon S3 bucket.
We run this dataflow multiple times, and a new file is generated on each run.
What I want to find out is whether there is a way to determine the name under which the file was saved in the S3 bucket, so that I can map it to my case and give the user an option to download the file from S3.
Is there any property on the dataflow run case where this information is stored?
We also tried the following approach, which did not meet our requirement: https://collaborate.pega.com/question/parameterize-file-name-and-path-file-dataset
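One possible workaround (an assumption on my part, not a documented Pega feature): after the dataflow run completes, list the target S3 prefix and pick the most recently modified key, then record that key on the case. The selection logic is sketched below; the object list is hard-coded for illustration, but in practice it would come from the S3 `ListObjectsV2` API (e.g. via boto3 or the AWS SDK), whose entries carry `Key` and `LastModified` fields.

```python
from datetime import datetime, timezone

def newest_key(objects):
    """Return the key of the most recently modified object in the listing."""
    return max(objects, key=lambda o: o["LastModified"])["Key"]

# Hard-coded stand-in for an S3 ListObjectsV2 response's Contents list
objects = [
    {"Key": "exports/run-001.csv",
     "LastModified": datetime(2021, 4, 13, tzinfo=timezone.utc)},
    {"Key": "exports/run-002.csv",
     "LastModified": datetime(2021, 4, 14, tzinfo=timezone.utc)},
]

print(newest_key(objects))  # exports/run-002.csv
```

Note this only works reliably if runs do not overlap; with concurrent runs, two dataflows could write between the run finishing and the listing call, so a deterministic naming scheme would still be safer if one can be made to work.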