
Correct design pattern for single server in AWS


I have custom cluster software that runs in a single AZ (subnet). One of the servers is the "controller"; only one of these can be running at a time, and it needs a record in the local DNS. It must automatically rebuild itself if it fails for any reason. I do not believe I need an ELB/ALB/NLB for this. However, when I set the system up with an Auto Scaling group, I cannot get at the instance's private IP address to update the Route 53 record. Is there a correct design pattern for this in AWS?
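
For illustration, this is roughly the record I want to keep current (the zone and names here are placeholders, not real resources in my config). With a plain aws_instance it is trivial to wire up; the problem is that an ASG exposes no attribute for the private IP of the instance it launches:

resource "aws_route53_record" "controller" {
  zone_id = aws_route53_zone.internal.zone_id # hypothetical private zone
  name    = "controller.internal.example"
  type    = "A"
  ttl     = 60
  # This reference only works for a standalone instance; with an ASG
  # there is nothing equivalent to point at.
  records = [aws_instance.controller.private_ip]
}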

Here is the stub code, which does work: it rebuilds the server from scratch if it is stopped or becomes unhealthy.

resource "aws_launch_configuration""example" {
  image_id  = "${lookup(var.AmiLinux, var.region)}"
  instance_type   = "t2.micro"
  security_groups = [aws_security_group.ingress-all-test.id]
  key_name = "akeyname"

  lifecycle {
    create_before_destroy = true
  }
}

data "aws_availability_zones""all" {}

resource "aws_autoscaling_group""example" {
  launch_configuration = aws_launch_configuration.example.id
  min_size = 1
  max_size = 1
  health_check_grace_period = 60
 vpc_zone_identifier       = ["${aws_subnet.subnetTest.id}"]
  tag {
    key                 = "Name"
    value               = "tf-asg-example"
    propagate_at_launch = true
  }
}

I do like the above because it maintains a single server in an AZ. However, the ASG makes it rather hard to get at the instance's IP. I am not looking for user-data to "hack" the change on boot. Since the controller can only run in a single subnet (AZ), I cannot use an ELB. Thanks in advance for any design pattern for this type of setup.
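
For concreteness, one direction I have looked at (without committing to it) is reacting to the ASG's launch event with a Lambda that looks up the new instance's private IP and upserts the Route 53 record. Below is a minimal sketch of just the event wiring; the Lambda itself (route53_updater) is hypothetical and not shown:

resource "aws_cloudwatch_event_rule" "controller_launch" {
  name = "controller-launch"

  # Fire whenever the ASG successfully launches a replacement instance.
  event_pattern = jsonencode({
    source        = ["aws.autoscaling"]
    "detail-type" = ["EC2 Instance Launch Successful"]
    detail = {
      AutoScalingGroupName = [aws_autoscaling_group.example.name]
    }
  })
}

resource "aws_cloudwatch_event_target" "controller_launch" {
  rule = aws_cloudwatch_event_rule.controller_launch.name
  arn  = aws_lambda_function.route53_updater.arn # hypothetical function
}

# Allow EventBridge to invoke the hypothetical updater function.
resource "aws_lambda_permission" "allow_events" {
  statement_id  = "AllowEventBridgeInvoke"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.route53_updater.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.controller_launch.arn
}

If something like this is the accepted answer, I would still like confirmation that it is the idiomatic approach rather than a workaround.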

