Question


Charles Schwab
US
Last activity: 19 Jan 2025 16:26 EST
File transmission to NAS drive using specific account
We have a requirement to send files to a NAS location that is mounted on the application server. Currently our application (Tomcat + Linux + Pega 23) runs using a service account.
Is there a way, within Pega or on the server, to place files using a separate account instead of the default application service account?


Pegasystems Inc.
US
@VINAYDURGAM Are these known files that need to be copied on a regular basis?
The reason I ask is that we do that here, but it's not done within Pega. We use scheduled scripts on the servers via the OS: they back up important DBs and copy them to the SAN location(s). In some cases we do something similar with a central Jenkins server, which pulls the files off the remote servers at scheduled intervals and copies them to the remote network location(s) for archiving.


Charles Schwab
US
@PhilipShannon Thank you.
Yes, we do this as part of our daily batch job, which runs at end of day.
Can you please share more insights on the server-level scripts?


Pegasystems Inc.
US
@VINAYDURGAM Yes, I can probably share some example code from our scripts. Are your servers running Windows or Linux?


Charles Schwab
US
@PhilipShannon Linux servers.
Updated: 14 Jan 2025 14:05 EST


Pegasystems Inc.
US
@VINAYDURGAM This is from Enterprise Linux; it might work the same on Debian, but I am not 100% sure as I have not tried it. We copy the data to a CIFS share. Here is the information, somewhat anonymized. I hope that helps out some.
Server Setup
# This step is only for PostgreSQL, so that the pg_dump command works in the scripts
vi ~/.pgpass
localhost:5432:dbname:postgres:password
chmod 600 ~/.pgpass
# Required to connect to CIFS server shares
yum install cifs-utils
# Create a credentials file for the network connection
vi ~/.smb-credentials (adding 3 lines):
username=<user-name>
password=<password>
domain=<domain-name>
# hide this credentials file from other users:
chmod 600 ~/.smb-credentials
# create mount point location
mkdir /mnt/backupSAN
# Add the share to fstab: vi /etc/fstab and add one line at the end looking like this
//servername/team1-backup /mnt/backupSAN cifs credentials=/root/.smb-credentials,rw,vers=3.0,cache=strict,file_mode=0755,dir_mode=0755,soft,nounix,serverino,mapposix,rsize=1048576,wsize=1048576,echo_interval=60,actimeo=1 0 0
# Mount it (since no mount point is given on the command line, it is taken from /etc/fstab)
mount -t cifs -o credentials=/root/.smb-credentials //servername/team1-backup
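One note relating back to the original question about accounts: as far as I understand CIFS mounts, the account that actually writes the files on the NAS side is the one in /root/.smb-credentials, regardless of which local account copies into the mount. If the application's service account (rather than root) needs to write through the mount, uid=/gid= options can be added so the mount point is owned by that account locally. This is only a sketch we have not tested, and the user name pega-svc below is a placeholder:
# Hypothetical fstab variant; uid/gid make files on the mount appear owned by the local service account
//servername/team1-backup /mnt/backupSAN cifs credentials=/root/.smb-credentials,uid=pega-svc,gid=pega-svc,rw,vers=3.0,cache=strict,file_mode=0755,dir_mode=0755,soft,nounix,serverino 0 0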
Script
#!/bin/bash
now=$(date +"%m_%d_%Y")
cd /data
# Delete yesterday's backup files (anything more than 10 hours old)
find /data -name "*.dmp" -type f -mmin +$((60*10)) -exec rm -f {} \;
# create backup file
pg_dump -h localhost -U postgres -F c -b -v -f filename-$now.dmp dbname
chmod a+r *
# Create the backup directory on the SAN
mkdir -p /mnt/backupSAN/dbname/$now
# Copy the backup file to the SAN
rsync -av *.dmp /mnt/backupSAN/dbname/$now
exit
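To schedule the script via the OS, as mentioned earlier, a cron entry along these lines works; the path and time below are placeholders, not from our actual setup:
# Example crontab entry (edit with: crontab -e); runs the backup/copy script nightly at 23:30
30 23 * * * /root/scripts/db-backup-to-san.sh >> /var/log/db-backup-to-san.log 2>&1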


HCA Healthcare
US
@PhilipShannon To send files to a NAS location using a different account, you can create a script on your Linux server to handle the file transfer. First, install cifs-utils to enable NAS access and create a .smb-credentials file with the NAS account's username, password, and domain. Set proper permissions on this file using chmod 600 to secure it. Mount the NAS location to a directory (e.g., /mnt/backupSAN) by adding its configuration to /etc/fstab and running the mount command.
Next, write a shell script to handle the file operations: delete old files, create new backups (e.g., using pg_dump for PostgreSQL), and transfer them to the NAS using rsync. Schedule this script with cron for automation. Alternatively, call the script from Pega using a Connect-File or ShellCommand in an Activity. Ensure all credentials are securely stored, and test the setup to confirm file transfers work as expected.
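If the requirement is specifically that the copy runs under a dedicated OS account rather than the Tomcat/Pega service account, one possible approach (the account and script names below are placeholders, not from the posts above) is a narrow sudoers rule that lets the service account run only the transfer script as that user:
# Hypothetical sudoers entry (edit with visudo): allow the service account to run one script as "filetransfer"
pega-svc ALL=(filetransfer) NOPASSWD: /opt/scripts/copy-to-nas.sh
# The nightly batch (or a script invoked from Pega) then calls:
sudo -u filetransfer /opt/scripts/copy-to-nas.sh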