Problem
I often use SSH to manage that older service, connecting with its Elastic IP (which is effectively a static IP) as the hostname. The same approach worked in development (our office has a static IP, so the connection behaves just like it does from my terminal), but it failed once deployed to Elastic Beanstalk.
Allowing the connection meant adding a new inbound rule for port 22 to the Security Group (SG) of the older service:
Using the SG of the Elastic Beanstalk environment as the source failed to connect.
Using the private IP range of our VPC (e.g. 172.31.0.0) failed to connect.
Using the public IP of the Elastic Beanstalk instance worked!
However, this was problematic because the public IP of my Elastic Beanstalk instances could change (our staging environment is a single instance, but production is a rolling cluster). Editing the SG by hand would be a bad idea, and writing a script to look up the Beanstalk instances' public IPs after every deploy sounded like a lot of work.
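For reference, the variants I tried correspond roughly to ingress permissions like the ones below. This is a minimal boto3 sketch rather than the exact commands I ran, and the group IDs and CIDR are hypothetical placeholders:

import boto3

ec2 = boto3.client("ec2")

OLD_SERVICE_SG = "sg-0aaaaaaaaaaaaaaaa"  # hypothetical: SG on the older EC2 instance
BEANSTALK_SG = "sg-0bbbbbbbbbbbbbbbb"    # hypothetical: SG attached to the Beanstalk instances

# Attempt: allow SSH from the Beanstalk security group. SG-to-SG rules only match
# traffic that arrives over private IPs inside the VPC, which is why this failed
# while I was connecting to the public IP.
ec2.authorize_security_group_ingress(
    GroupId=OLD_SERVICE_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "UserIdGroupPairs": [{"GroupId": BEANSTALK_SG}],
    }],
)

# Attempt: allow SSH from the Beanstalk instance's current public IP. This worked,
# but the address changes whenever instances are replaced.
ec2.authorize_security_group_ingress(
    GroupId=OLD_SERVICE_SG,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.10/32"}],  # hypothetical public IP
    }],
)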
Easy Solution
Instead of referencing the static IP address XXX.XX.XXX.XXX when connecting via SSH, I used the public DNS name, which embeds the elastic/static IP address anyway (e.g. ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com), so it shouldn't change on me. From inside EC2 that name resolves to the instance's internal (private) IP address, so the traffic stays within the VPC and the Security Group rule on the older EC2 instance could reference another security group instead of a public IP. I had panicked and asked on the AWS Forums and Stack Overflow as well, and ended up answering my own question in both places.
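The mechanism is easy to check: resolve the public DNS name from inside EC2 and you get the private address; resolve it from outside AWS and you get the public one. A tiny sketch, using the placeholder hostname from above:

import socket

# Placeholder public DNS name of the older EC2 instance (from the post above).
host = "ec2-XXX-XX-XXX-XXX.compute-1.amazonaws.com"

# Run from the Beanstalk instance, Amazon's DNS answers with the private IP, so the
# connection stays inside the VPC and the SG-to-SG rule on port 22 applies.
# Run from a laptop outside AWS, the same name resolves to the public/Elastic IP.
print(socket.gethostbyname(host))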