I have an AWS EC2 instance that I created to run Jupyter Notebook for testing. Suddenly the disk ran out of space; see the output below:
root@ip-172-31-14-181:/# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            992M     0  992M   0% /dev
tmpfs           200M   21M  180M  11% /run
/dev/xvda1       15G   15G   55M 100% /
tmpfs          1000M  2.5M  997M   1% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs          1000M     0 1000M   0% /sys/fs/cgroup
none             15G   15G   55M 100% /var/lib/docker/aufs/mnt/3a5473a...
none             15G   15G   55M 100% /var/lib/docker/aufs/mnt/d0f29d3...
shm              64M     0   64M   0% /var/lib/docker/containers/c501847.../shm
shm              64M     0   64M   0% /var/lib/docker/containers/851f091.../shm
tmpfs           200M     0  200M   0% /run/user/1000
How can I solve this kind of problem?
I was not very clear in the question. The disk is full, but I can't find out why. Through research I have only managed to account for about 10 GB of the used space. The instance runs Docker to host Jupyter remotely, and there isn't even 500 MB of Jupyter files on it. Is there any way to find out what consumed the missing 5 GB, or to review the sizes of folders and files?
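To see what I have tried so far: a rough sketch of how one might walk the filesystem with `du` and then query Docker's own usage accounting (the `/var/lib` starting point is just an example directory, chosen because the `df` output above shows the space sitting under Docker's aufs mounts):

```shell
# Show the largest top-level directories on the root filesystem.
# -x keeps du on a single filesystem, so /proc, /sys and the
# Docker aufs/shm mounts are not double-counted.
du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -15

# Drill down into whichever directory dominates, e.g. /var/lib:
du -xh --max-depth=2 /var/lib 2>/dev/null | sort -rh | head -15

# If Docker is installed, ask it what its images, containers and
# volumes occupy, and optionally reclaim unused data:
if command -v docker >/dev/null 2>&1; then
  docker system df
  # docker system prune   # removes stopped containers and dangling images
fi
```

Repeating the `du ... --max-depth` step one level deeper at a time usually narrows the missing gigabytes down to a single directory.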