Best Practice: Handling SSL Certificates Within A Multi-Server Deployment?

IndigoIdentity

Expert Member
Joined
May 10, 2010
Messages
1,964
As the title implies, how does one correctly maintain an SSL certificate across multiple web servers?

Use case? Say you have an app and it's growing, to the point where you're serving it off of more than one web server.

Web server 1 is configured with LetsEncrypt, and a cron task tries to renew the certificate every two weeks. To my understanding the certificate is only valid for 90 days, so when it gets renewed, the actual certificate file referenced by Nginx/Apache changes?
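The renewal cron on web server 1 looks something like this (paths assumed, and note certbot only actually replaces the cert when it's close to expiry):

```shell
# Attempt renewal twice a day; certbot is a no-op until <30 days remain,
# then swaps the files under /etc/letsencrypt/live/ and reloads nginx
# so the new certificate is picked up.
0 */12 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```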

So in comes web server 2, which is also running Nginx and wants to host the app, but Nginx wants a certificate path, and the certificate currently lives on / is being renewed by web server 1.

In reality, how are we supposed to ensure that web servers 1-10 can all make use of the same SSL certificate? Sure, we could rsync the certificate around, but that hardly seems like an ideal solution...

Any advice / input would be appreciated, tyia! :)
 
Last edited:

Sinbad

Honorary Master
Joined
Jun 5, 2006
Messages
81,150
Put a loadbalancer cluster in front of the web servers and offload the SSL there.
 

IndigoIdentity

Expert Member
Joined
May 10, 2010
Messages
1,964
Put a loadbalancer cluster in front of the web servers and offload the SSL there.

Hmm, thanks.

Was thinking more like HAProxy, but I don't really know what it's capable of. So attach the SSL to the load balancer and let it talk to the relevant web servers over a local network?

(Facepalm) So I went and looked at ELB in AWS and can see that it handles the SSL, so yeah, I guess that is how it works in reality. Woke up with a weird idea today :)
 
Last edited:

semaphore

Honorary Master
Joined
Nov 13, 2007
Messages
15,194
You can run nginx as a reverse proxy to act as your "load balancer" and do your SSL termination. https://www.google.co.za/search?q=nginx+reverse+proxy+ssl

If you run on AWS, you can use Certificate Manager to do your certs, bind them to your ELB and automatically renew. Fire and forget solution. https://aws.amazon.com/certificate-manager/pricing/

That's pretty much how I handle it. Servers behind the nginx proxy are plain HTTP but not exposed at all to the outside world.
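Roughly like this, where the hostnames, paths and internal IPs are just placeholders for my setup:

```nginx
# SSL terminates here; backends are plain HTTP on the internal network.
upstream app_backend {
    server 10.0.1.10:80;
    server 10.0.1.11:80;
}

server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;

    location / {
        proxy_pass http://app_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```

Only this box needs the certificate, so the renewal problem goes away for the backends.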
 

IndigoIdentity

Expert Member
Joined
May 10, 2010
Messages
1,964
You can run nginx as a reverse proxy to act as your "load balancer" and do your SSL termination. https://www.google.co.za/search?q=nginx+reverse+proxy+ssl

If you run on AWS, you can use Certificate Manager to do your certs, bind them to your ELB and automatically renew. Fire and forget solution. https://aws.amazon.com/certificate-manager/pricing/

Thanks, will take a look. Didn't see ACM until now either, which is awesome if you are using ELB. The certs are limited but free?
 

IndigoIdentity

Expert Member
Joined
May 10, 2010
Messages
1,964
That's pretty much how I handle it. Servers behind the nginx proxy are plain HTTP but not exposed at all to the outside world.

So if they are not exposed to the outside world, how are they updated, and how do you manage integration and deployment? o_O
 

semaphore

Honorary Master
Joined
Nov 13, 2007
Messages
15,194
So if they are not exposed to the outside world, how are they updated, and how do you manage integration and deployment? o_O

I have multiple servers, with a single entry point. The servers that host the APIs are only allowed to access each other via explicit routing and firewall rules, so only the entry point has SSL on it. Whether this is best practice I don't know, but it works for my use case right now.
 

gkm

Expert Member
Joined
May 10, 2005
Messages
1,519
Thanks, will take a look. Didn't see ACM until now either, which is awesome if you are using ELB. The certs are limited but free?

Yes, I checked the FAQ on the link posted earlier and it says it works for ELB and CloudFront. The certs are free and auto-renew, so you can avoid all the LetsEncrypt contortions. I have only used it with ELB so far and it was easy to set up.

I have multiple servers, with a single entry point. The servers that host the APIs are only allowed to access each other via explicit routing and firewall rules, so only the entry point has SSL on it. Whether this is best practice I don't know, but it works for my use case right now.

Yes, in general you should only allow your load balancer to access your web servers, and not allow direct access to the web servers from the internet.

If you are using AWS, you can set up two security groups, one for the ELB and one for your web servers. Only allow the ELB security group to talk to your web server security group. On the web server security group you can also allow access to whatever port you use for deployments, but only from your office. And obviously allow the internet to talk to your ELB security group over 80 and/or 443.

If you have backend servers or databases, put each of those sets of servers in their own security group as well, and only allow access from the relevant other security group, on the relevant ports. That way everything is protected from the internet as much as possible, with least privilege between the layers.

Sorry, not sure if this makes sense. Guess you can maybe Google for better explanations on how to layer your VPC security groups.
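Sketched with the AWS CLI it would look something like this, where the VPC ID, group IDs and the office IP are all placeholders:

```shell
# Two security groups: one for the ELB, one for the web servers.
aws ec2 create-security-group --group-name elb-sg --description "ELB" --vpc-id vpc-1234
aws ec2 create-security-group --group-name web-sg --description "Web servers" --vpc-id vpc-1234

# Internet -> ELB on 80 and 443.
aws ec2 authorize-security-group-ingress --group-id sg-elb --protocol tcp --port 80  --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-elb --protocol tcp --port 443 --cidr 0.0.0.0/0

# Only members of the ELB group -> web servers on 80.
aws ec2 authorize-security-group-ingress --group-id sg-web --protocol tcp --port 80 --source-group sg-elb

# Deployments (SSH) only from the office IP.
aws ec2 authorize-security-group-ingress --group-id sg-web --protocol tcp --port 22 --cidr 203.0.113.10/32
```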
 

IndigoIdentity

Expert Member
Joined
May 10, 2010
Messages
1,964
I have multiple servers, with a single entry point. The servers that host the APIs are only allowed to access eachother with explicit routing and firewall rules. So only the entry point has got the SSL on it. Whether this is best practices I don't know, but it works for my use case right now.

That makes sense. I guess I was just confused when you said "exposed", because in the AWS sense I thought that might mean the instances have no public IP, and then how would they reach, say, the Ubuntu update servers during a system update. But I guess you mean there is no way into these instances publicly, so the instances only allow HTTP-type traffic from certain security groups or IPs.
 

IndigoIdentity

Expert Member
Joined
May 10, 2010
Messages
1,964
LetsEncrypt contortions

It seems that there are some pros and cons involved: https://community.letsencrypt.org/t/aws-announces-certificate-manager-similar-to-le/9289


Sorry, not sure if this makes sense.

So would this sound sane in terms of security groups:

Web Servers: allow 80/sg-oftheloadbalancer
Load Balancer: allow 80&443/0.0.0.0/0

And we would only use port 443 in the target and security groups of the Web Servers if the instances were outside of the VPC or region of the Load Balancer?

Sorry, not sure if this makes sense.

No, that was most helpful thank you!
 
Last edited:

gkm

Expert Member
Joined
May 10, 2005
Messages
1,519
So would this sound sane in terms of security groups:

Web Servers: allow 80/sg-oftheloadbalancer
Load Balancer: allow 80&443/0.0.0.0/0

And we would only use port 443 in the target and security groups of the Web Servers if the instances were outside of the VPC or region of the Load Balancer?

No, that was most helpful thank you!

Yes, sounds right.

If you traverse between regions, then doing it over SSL/TLS (port 443) would be best, but I suspect the complications of doing that are usually not worth it unless you have a fairly big and complex system. For complex multi-region systems, ensure all your comms are secured/encrypted. Even then you would probably talk from the one region to an ELB in the other region, in front of the systems in that other region.

Glad to hear it helps.
 
