@ClaudeY7 ingesting a large file such as 13 MB is well within normal limits in Pega Platform 8.53, because the default file size limit is 1024 MB.
However, for efficient processing of large files, you can use Pega's queue processing feature, which automatically handles messages that exceed the Stream Service file size threshold. For on-premises or client-managed cloud implementations, you need to set up a large message repository and enable the feature through dynamic system settings. For very large files, also consider other file-based platform integration capabilities to avoid memory-related performance problems.
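The mechanism described above, where oversized messages are stored in a separate repository and only a reference travels through the stream, is commonly known as the claim-check pattern. The following is a minimal, generic Python sketch of that idea; the threshold value, function names, and in-memory "repository" are all illustrative assumptions, not Pega's actual APIs or dynamic system settings.

```python
import uuid

# Illustrative threshold; Pega's real threshold is configured via
# dynamic system settings and is not reproduced here.
SIZE_THRESHOLD_BYTES = 5 * 1024 * 1024  # 5 MB, for demonstration only

# Stand-in for an external large message repository.
large_message_repo = {}

def enqueue(payload: bytes) -> dict:
    """Return the message actually placed on the stream.

    Payloads over the threshold are stored in the repository and
    replaced by a lightweight reference (the claim-check pattern),
    so the stream itself only carries small messages.
    """
    if len(payload) > SIZE_THRESHOLD_BYTES:
        ref = str(uuid.uuid4())
        large_message_repo[ref] = payload
        return {"type": "reference", "ref": ref, "size": len(payload)}
    return {"type": "inline", "data": payload}

def dequeue(message: dict) -> bytes:
    """Resolve a stream message back to its full payload."""
    if message["type"] == "reference":
        return large_message_repo[message["ref"]]
    return message["data"]
```

The benefit of this design is that the queue or stream never holds the full payload of a large file, which avoids the memory-related performance problems mentioned above.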
This is a GenAI-powered tool. All generated answers require validation against the provided references.
The File data set is a tool for reading and writing data from and to files. You can use this data set type for the following use cases:
To read a file in CSV or JSON format that you upload and store its content in compressed form in the pyFileSourcePreview clipboard property. You can use this data set as a source in Data Flow rule instances to test data flows and strategies. For configuration details, see Creating a File data set record for embedded files.
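To make the "store the content in a compressed form" step concrete, here is a generic Python sketch of reading the first rows of an uploaded CSV and keeping them as a compressed, text-safe preview string. This is loosely analogous to what a property like pyFileSourcePreview holds; the function names, row limit, and gzip/base64 encoding are illustrative assumptions, not Pega's internal implementation.

```python
import base64
import csv
import gzip
import io

def build_file_source_preview(csv_text: str, max_rows: int = 10) -> str:
    """Compress the first rows of a CSV into a text-safe preview string.

    The row limit keeps the preview small; gzip plus base64 makes it
    compact and safe to store in a string-valued property.
    """
    reader = csv.reader(io.StringIO(csv_text))
    preview_rows = [",".join(row) for _, row in zip(range(max_rows), reader)]
    preview = "\n".join(preview_rows)
    compressed = gzip.compress(preview.encode("utf-8"))
    return base64.b64encode(compressed).decode("ascii")

def read_file_source_preview(encoded: str) -> str:
    """Decode a compressed preview string back to plain CSV text."""
    return gzip.decompress(base64.b64decode(encoded)).decode("utf-8")
```

Storing only a bounded, compressed preview rather than the whole file is what makes it cheap to test data flows and strategies against the uploaded data.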