Channel: Active questions tagged amazon-ec2 - Stack Overflow

Scylla fails to mount RAID volume after restarting EC2 instance


I am new to Scylla. I followed the installation steps on the Scylla website to set up a small 4-node Scylla cluster in my AWS account, using the Scylla AMI on my EC2 instances.

If I stop one of the EC2 instances and then start it up again, I get the message "Failed mounting RAID volume!" when I try to restart Scylla.

I believe I have to remount the RAID volume by running this:

scylla_raid_setup --raiddev /dev/md0 --disks /dev/nvme1n1,/dev/nvme2n1 --update-fstab --root /var/lib/scylla --volume-role all
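For context, here is how I have been checking whether the array actually came back after the restart. The helper name is mine, just a small sketch over the text format of /proc/mdstat (the array and disk names are the ones from my setup):

```shell
# Hypothetical helper: does /proc/mdstat text show the array as assembled?
is_assembled() {
    # $1 = md array name; mdstat text is read from stdin
    grep -q "^$1 : active"
}
# On a live system (commented out; needs the real /proc/mdstat):
# is_assembled md0 < /proc/mdstat && echo "md0 is assembled" || echo "md0 is missing"
```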

However, when I then try to start Scylla I get the following error message:

A dependency job for scylla-server.service failed. See 'journalctl -xe' for details.

It seems that the mount failed; here are the logs:

-- Subject: Unit var-lib-scylla.mount has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit var-lib-scylla.mount has failed.
--
-- The result is dependency.
Dependency failed for Scylla Server.
-- Subject: Unit scylla-server.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit scylla-server.service has failed.
--
-- The result is dependency.
May 05 13:23:56 systemd[1]: Dependency failed for Scylla JMX.
-- Subject: Unit scylla-jmx.service has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit scylla-jmx.service has failed.
--
-- The result is dependency.
May 05 13:23:56 systemd[1]: Job scylla-jmx.service/start failed with result 'dependency'.
May 05 13:23:56 systemd[1]: Dependency failed for Run Scylla Housekeeping daily mode.
-- Subject: Unit scylla-housekeeping-daily.timer has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit scylla-housekeeping-daily.timer has failed.
--
-- The result is dependency.
May 05 13:23:56 polkitd[4226]: Unregistered Authentication Agent for unix-process:7668:53288 (system bus name :1.20, object path /org/freedesktop/PolicyKit1/AuthenticationAge
May 05 13:23:56 systemd[1]: Job scylla-housekeeping-daily.timer/start failed with result 'dependency'.
May 05 13:23:56 sudo[7666]: pam_unix(sudo:session): session closed for user root
May 05 13:23:56 systemd[1]: Dependency failed for Run Scylla Housekeeping restart mode.
-- Subject: Unit scylla-housekeeping-restart.timer has failed
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
--
-- Unit scylla-housekeeping-restart.timer has failed.
--
-- The result is dependency.
May 05 13:23:56 systemd[1]: Job scylla-housekeeping-restart.timer/start failed with result 'dependency'.
May 05 13:23:56 systemd[1]: Job scylla-server.service/start failed with result 'dependency'.
May 05 13:23:56 systemd[1]: Job var-lib-scylla.mount/start failed with result 'dependency'.
May 05 13:23:56 systemd[1]: Job dev-disk-by\x2duuid-67fde517\x2d892a\x2d4a3f\x2d9e19\x2dac71c9bdd533.device/start failed with result 'timeout'.
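If it helps to read the log: as I understand it, systemd names a mount unit after its mount point, so var-lib-scylla.mount above is just /var/lib/scylla. A rough sketch of the naming rule (the real rule also \x2d-escapes special characters, which is why the .device unit at the end looks the way it does):

```shell
# Sketch: systemd derives a mount unit name from the mount point path by
# dropping the leading slash and turning the remaining slashes into dashes.
path=/var/lib/scylla
unit="$(printf '%s' "$path" | sed 's|^/||; s|/|-|g').mount"
echo "$unit"   # var-lib-scylla.mount
```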

What should my next step be?

Here are the disks:

Disk /dev/nvme1n1: 7500.0 GB, 7500000000000 bytes, 14648437500 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/nvme2n1: 7500.0 GB, 7500000000000 bytes, 14648437500 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/nvme0n1: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000b0301

If I include nvme0n1 in the disks list for scylla_raid_setup, it returns: /dev/nvme0n1 is busy.

Otherwise, this is what scylla_raid_setup outputs:

Creating RAID0 for scylla using 2 disk(s): /dev/nvme2n1,/dev/nvme1n1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
meta-data=/dev/md0               isize=512    agcount=32, agsize=114438912 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=3662043136, imaxpct=5
         =                       sunit=256    swidth=512 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=521728, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

My /etc/fstab file looks like this:

UUID=0a84de8e-5bfe-43e7-992b-5bfff8cdce43 /                       xfs     defaults        0 0
UUID="67fde517-892a-4a3f-9e19-ac71c9bdd533" /var/lib/scylla xfs noatime,nofail 0 0
UUID="24aab0fc-dc32-48de-bf6b-5a3d5bcd1f00" /var/lib/scylla xfs noatime,nofail 0 0

I removed one of the entries and tried restarting Scylla, but it still failed to start. :(
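Since the two entries point at the same mount point, only one of those UUIDs can still exist on disk. This is the sketch I used to work out which one is stale (the helper name fstab_scylla_uuids is mine, and blkid is assumed to be available):

```shell
# Sketch: print the bare UUIDs that fstab mounts on /var/lib/scylla,
# stripping optional quotes and the UUID= prefix.
fstab_scylla_uuids() {
    # stdin = fstab text
    awk '$2 == "/var/lib/scylla" {print $1}' | tr -d '"' | sed 's/^UUID=//'
}
# On the live system (commented out; needs the real /etc/fstab and blkid):
# for u in $(fstab_scylla_uuids < /etc/fstab); do
#     blkid -U "$u" >/dev/null 2>&1 || echo "stale fstab entry: UUID=$u"
# done
```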

