nginx reverse proxy SSL passthrough fun!
February 4, 2022 7:09 PM Subscribe
I'm probably trying to do this a really dumb way, but: I have two web servers in my house. Currently, port 80 and port 443 are forwarded on my router to one of them. That machine uses nginx as a reverse proxy to pass traffic intended for the other one through to it on the LAN by looking at the hostname. I would like this second machine to receive SSL traffic as well (not just have the SSL stripped by the first machine).
INTERNET
|
\/
webserver1
|
\/
webserver2
The above works beautifully with port 80 HTTP for the webserver2 traffic (webserver1 uses SSL no problem; port 443 from the internet just hits that site directly). I would like to do the same with port 443 SSL, but I gather I need to use a stream{} block rather than a server{} block for it? Every example solution I've seen assumes you're passing SSL through to one of two/several other hosts. I want to pass SSL through to one host, and just point it at the locally installed webserver for the other one, and I'm not quite clear how I can do that. How do I mix stream and server blocks? Can I use a stream block to conditionally, but not necessarily, forward traffic?
Help?
(I know I could strip/terminate the SSL at webserver1, but I would much rather just pass the SSL stream through unmangled for reasons, not least because it gives me scope to add non-webserver SSL services in future more easily)
(And before anyone asks, no none of this is important production stuff, just crossword websites for a few friends running on a raspberry pi)
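For illustration, the working port-80 arrangement described above looks roughly like this (hostnames and the LAN address are placeholders, not taken from the post):

http {
    # webserver2's traffic, picked out by hostname and proxied across the LAN
    server {
        listen 80;
        server_name site2.example.com;
        location / {
            proxy_pass http://192.168.1.20;
            proxy_set_header Host $host;
        }
    }

    # webserver1's own site, served locally
    server {
        listen 80;
        server_name site1.example.com;
        root /var/www/site1;
    }
}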
Response by poster: Ooh, that might work, but I think there should be a way to do it with just nginx?
posted by Dysk at 8:27 PM on February 4, 2022
Best answer: I've never done it, but just reading your example / thinking off the top of my head, I'd suppose a worst-case scenario would be to add another listener on webserver1 on port 8080, outside the stream declaration, and inside the stream declaration make that an upstream for webserver1's traffic.
posted by Wobbuffet at 8:30 PM on February 4, 2022
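A rough sketch of what Wobbuffet is describing, with guessed port numbers and hostnames: webserver1's own HTTPS site moves to a local high port, and the stream layer treats it as just another upstream.

stream {
    upstream local_site {
        server 127.0.0.1:8080;      # webserver1's own nginx, moved off :443
    }
    # ...an SNI-based rule (see the next answer) picks local_site or webserver2...
}

http {
    server {
        listen 127.0.0.1:8080 ssl;  # the extra listener, outside the stream block
        server_name site1.example.com;
        # ...existing certificate and site config unchanged...
    }
}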
Best answer: It looks like ngx_stream_ssl_preread_module allows you to achieve the same result as sniproxy using only nginx.
I'm not familiar enough with nginx to know if it would be OK to have webserver1 forward some requests to webserver2 and others to itself (presumably on a different port), or if you would need to put a separate nginx instance in front of both webserver1 and webserver2.
posted by teraflop at 8:32 PM on February 4, 2022
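A minimal sketch of that approach, assuming made-up hostnames and a made-up LAN address: the stream server reads the SNI with ssl_preread and hands the still-encrypted connection to whichever backend the map picks.

stream {
    map $ssl_preread_server_name $backend {
        site2.example.com   192.168.1.20:443;   # pass through to webserver2 on the LAN
        default             127.0.0.1:8443;     # webserver1's own https listener
    }

    server {
        listen 443;
        ssl_preread on;            # peek at the ClientHello without terminating TLS
        proxy_pass $backend;
    }
}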
Response by poster: Oh, I probably could just treat webserver1 as a separately defined stream forward target, and just change the port number in the server{} block. I'll try that after breakfast!
posted by Dysk at 9:42 PM on February 4, 2022 [1 favorite]
Best answer: Given that the only way to extract the hostname from an SSL stream has to involve parsing it to some extent, and that SNI makes such parsing possible without needing to do a little certificate validation dance, it makes sense to me that the forwarding and the actual connection establishment are best handled separately. And there should be no reason why doing both of those things with separate processes on the same box ends up looking notably different from doing them with processes on separate boxes, even if nginx is ultimately what does all of them.
posted by flabdablet at 9:59 PM on February 4, 2022 [1 favorite]
Consider setting up haproxy (or something like it) to split traffic. haproxy can also do SSL termination (see the "Frontend" section of https://www.haproxy.com/blog/the-four-essential-sections-of-an-haproxy-configuration/ for an example). Using a proxy removes the dependency that your webserver2 has on webserver1 being up and happy.
But if you're already using nginx to front-end something like gunicorn or uwsgi, nothing says they have to be running on the same Pi.
posted by dws at 9:54 AM on February 5, 2022
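For comparison, a hedged sketch of the same hostname split done in haproxy: TCP mode with SNI inspection, passing the TLS through untouched rather than terminating it as in the linked article. Names and addresses are invented.

frontend https_in
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend webserver2_tls if { req_ssl_sni -i site2.example.com }
    default_backend webserver1_tls

backend webserver1_tls
    mode tcp
    server web1 127.0.0.1:8443

backend webserver2_tls
    mode tcp
    server web2 192.168.1.20:443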
Response by poster: I set nginx up to read the SNI and forward in a stream{} block, targeting webserver1 (itself) on a different port or webserver2 on 443, depending on hostname. Modified the server{} block on webserver1 to listen on the new port. Totally works! And it leaves me set up to forward any SNI-snoopable traffic in future.
Thanks for the help, all. I know this isn't the best way to do this in a lot of ways, but it was easy, and none of this is important anyway. webserver1 just needs nginx not to have crashed for everything to work, and that's easily good enough for me.
posted by Dysk at 11:20 AM on February 5, 2022
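Putting the pieces together, the change on webserver1's own site is roughly this (port and certificate paths are guesses); the stream block on :443 then forwards by SNI as in the earlier sketch.

server {
    listen 127.0.0.1:8443 ssl;    # previously "listen 443 ssl;"; the stream block owns :443 now
    server_name site1.example.com;
    ssl_certificate     /etc/ssl/site1/fullchain.pem;   # hypothetical paths
    ssl_certificate_key /etc/ssl/site1/privkey.pem;
    # ...rest of the existing site config unchanged...
}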
I set nginx up to do SNI and forward in a stream{} targeting webserver1 (itself) on a different port
One of the patterns I like to use with my own little network of Pi-class boxes is sticking with the standard port numbers for LAN-accessible services, but also adding extra IP addresses to the network adapters so that each such service can be bound to its own dedicated IP address.
That lets me give each service its own DNS hostname on the LAN, rather than needing to refer to them via the hostnames belonging to whatever box they're running on. Which, in turn, lets me move a service from one box to another without making any changes at all to the kind of inter-service glue config you're setting up here or to config on clients.
I also find that doing things this way tends to make all of that config a bit more self-explanatory and comprehensible.
posted by flabdablet at 6:40 PM on February 5, 2022
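As a hypothetical example of that pattern (address, interface, and hostname are invented): give the box a second address, then bind the service to that address rather than to the box's primary one.

# add a second address to the Pi's interface (add it to your network
# config as well, or it won't survive a reboot)
ip addr add 192.168.1.51/24 dev eth0

# then in nginx, tie the service to that address and its own DNS name
server {
    listen 192.168.1.51:443 ssl;
    server_name crosswords.home.lan;
    # ...
}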
Oh, and services even get to keep the same IP address across box migrations, not just stable DNS names. Keeps all the associated config nicely stable, making migration feel much less fraught with peril.
posted by flabdablet at 6:49 PM on February 5, 2022
This thread is closed to new comments.