Below is a snippet of my automation script. Ownership of the directory (mount point) /deploy/umbro/$Client gets changed to ind$Client:ind as expected, but ownership of the directories (mount points) created under the case statements does not get changed; they still remain root:root.
I'm not exactly sure where I have gone wrong.
#!/bin/bash
Client=$1
Region=$2

sudo mkfs -t xfs /dev/nvme1n1
sudo mkfs -t xfs /dev/nvme2n1

#Mount point creation for nvme2n1
mkdir -p /deploy/umbro/$Client
mount -t xfs /dev/nvme2n1 /deploy/umbro/$Client
sudo echo UUID=$(sudo blkid | grep /dev/nvme2n1 | grep -Eo [\"].*[\"] | awk '{print $1}' | tr -d '"') /deploy/umbro/$Client xfs defaults,nofail 0 2 >> /etc/fstab
perm=ind$Client:ind
chown -R $perm /deploy/umbro/$Client

#Mount point creation for nvme1n1, based on region
case $Region in
AUS)
    mkdir -p /deploy/umbro/$Client/checkpoint/default/logs
    chown -R ind$Client:ind /deploy/umbro/$Client/checkpoint/default/logs
    mount -t xfs /dev/nvme1n1 /deploy/umbro/$Client/checkpoint/default/logs
    sudo echo UUID=$(sudo blkid | grep /dev/nvme1n1 | grep -Eo [\"].*[\"] | awk '{print $1}' | tr -d '"') /deploy/umbro/$Client/checkpoint/default/logs xfs defaults,nofail 0 2 >> /etc/fstab
    ;;
EUR)
    mkdir -p /deploy/umbro/$Client/checkpoint/arm/logs
    chown -R ind$Client:ind /deploy/umbro/$Client/checkpoint/arm/logs
    mount -t xfs /dev/nvme1n1 /deploy/umbro/$Client/checkpoint/arm/logs
    sudo echo UUID=$(sudo blkid | grep /dev/nvme1n1 | grep -Eo [\"].*[\"] | awk '{print $1}' | tr -d '"') /deploy/umbro/$Client/checkpoint/arm/logs xfs defaults,nofail 0 2 >> /etc/fstab
    ;;
......
......
esac
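For reference, this is roughly how I inspect the state after the script runs (an illustrative check only, assuming the AUS paths; findmnt and stat are not part of the script):

findmnt /deploy/umbro/$Client/checkpoint/default/logs               # confirms /dev/nvme1n1 is mounted there
stat -c '%U:%G %n' /deploy/umbro/$Client                            # shows ind$Client:ind as expected
stat -c '%U:%G %n' /deploy/umbro/$Client/checkpoint/default/logs    # still shows root:root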
AWS EC2 - Red Hat Enterprise Linux Server release 7.7, user - root
A strange observation: if I do the steps below manually, the ownership gets changed recursively all the way down to the logs folder.
cd /deploy/umbro/$Client
chown -R ind$Client:ind checkpoint/
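Checking afterwards confirms the manual run took effect (again an illustrative check, assuming the AUS path):

ls -ld checkpoint/ checkpoint/default/logs    # both now report ind$Client:ind instead of root:root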