Following the AWS Instance Scheduler documentation, I've been able to set up a scheduler that starts my instances at the beginning of the day and stops them at the end of it.
However, the instances keep being terminated and reinstalled.
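For context, the Instance Scheduler only acts on resources that carry its tag (the default tag key is Schedule). A minimal sketch of the tagging via boto3, with a placeholder instance ID and schedule name:

import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Placeholder instance ID and schedule name; "Schedule" is the
# Instance Scheduler's default tag key.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "Schedule", "Value": "office-hours"}],
)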
I have an Amazon Elastic Kubernetes Service (EKS) cluster, and I discovered the following log in CloudWatch:
2019-11-21 - 13:05:30.251 - INFO : Handler SchedulerRequestHandler scheduling request for service(s) rds, account(s) 612681954602, region(s) eu-central-1 at 2019-11-21 13:05:30.251936
2019-11-21 - 13:05:30.433 - INFO : Running RDS scheduler for account 612681954602 in region(s) eu-central-1
2019-11-21 - 13:05:31.128 - INFO : Fetching rds Instances for account 612681954602 in region eu-central-1
2019-11-21 - 13:05:31.553 - INFO : Number of fetched rds Instances is 2, number of schedulable resources is 0
2019-11-21 - 13:05:31.553 - INFO : Scheduler result {'612681954602': {'started': {}, 'stopped': {}}}
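What stands out to me is "number of schedulable resources is 0": the scheduler fetches both RDS instances but treats neither as schedulable. Listing the tags should show whether the Schedule tag is actually attached; a sketch with boto3:

import boto3

rds = boto3.client("rds", region_name="eu-central-1")

# Print each RDS instance together with its tags, to verify that
# the scheduler's Schedule tag is actually present.
for db in rds.describe_db_instances()["DBInstances"]:
    tags = rds.list_tags_for_resource(ResourceName=db["DBInstanceArn"])["TagList"]
    print(db["DBInstanceIdentifier"], tags)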
I don't know whether it is my EKS cluster that keeps restarting my instances, but I would really love to keep them stopped until the next day.
How can I prevent my EC2 instances from automatically restarting every time one is stopped? Or, even better, how can I deactivate my EKS stack automatically?
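What I imagine, assuming the worker nodes sit in an Auto Scaling group, is scaling that group to zero in the evening instead of stopping the instances; a rough sketch, where "eks-workers" is a placeholder for the group name:

import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-central-1")

# Scale the worker group to zero so the nodes are removed instead
# of stopped; "eks-workers" is a placeholder for the real group name.
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="eks-workers",
    MinSize=0,
    DesiredCapacity=0,
)

(and the reverse in the morning). Is that the right approach here?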
Update:
I discovered that EKS has a Cluster Autoscaler. Maybe this could be where the problem lies? https://docs.aws.amazon.com/eks/latest/userguide/cluster-autoscaler.html
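My current theory: the worker nodes belong to an Auto Scaling group, and an ASG treats a stopped instance as unhealthy and launches a replacement, which would match the terminate-and-reinstall behavior I'm seeing. A quick way to check the group's minimum size (group name again a placeholder):

import boto3

autoscaling = boto3.client("autoscaling", region_name="eu-central-1")

# Placeholder name; the real group is created by the EKS worker stack.
group = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["eks-workers"]
)["AutoScalingGroups"][0]

# If MinSize > 0, the group keeps launching replacements for any
# instance that is stopped or terminated.
print(group["MinSize"], group["DesiredCapacity"], group["MaxSize"])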