Nginx, Websockets, SSL and Socket.IO deployment

I’ve spent some time recently figuring out the options for deploying Websockets with SSL and load balancing – and more specifically, Socket.IO – while allowing for dual stacks (e.g. Node.js and another dev platform). Since there seems to be very little concrete guidance on this topic, here are my notes – I’d love to hear about your implementation (leave a comment or write about it and link back)…

The goal here is to:

  1. Expose Socket.io and your main application from a single port — avoiding cross-domain communication
  2. Support HTTPS for both connections — enabling secure messaging
  3. Support the Websockets and Flashsockets transports from Socket.io — for performance
  4. Perform load balancing for both the backends somewhere — for performance

Socket.io’s various transports

Socket.io supports several different transports:

  • WebSockets — essentially long-lived HTTP/1.1 requests which, after a handshake, upgrade to the WebSockets protocol
  • Flash sockets — which are plain TCP sockets with optional SSL support (but Flash seems to use some older SSL encryption method)
  • various kinds of polling — which work over long lived HTTP 1.0 requests

Starting point: Nginx and Websockets

Nginx is generally the first recommendation for Node.js deployments. It’s a high-performance server and even includes support for proxying requests via the HttpProxyModule.

However — and this should be made much more obvious to people starting out with Socket.io — the problem is that while Nginx can talk HTTP/1.1 to the client (browser), it talks HTTP/1.0 to the backend server. Nginx’s default HttpProxyModule does not support HTTP/1.1, which is needed for Websockets.

WebSockets (draft 76) requires HTTP/1.1 support, as the handshake mechanism is not compatible with HTTP/1.0. What this means is that if Nginx is used to reverse proxy a Websockets server (like Socket.io), then the WS connections will fail. So no Websockets for you if you’re behind Nginx.

There is a workaround, but I don’t see the benefit: use a TCP proxy (there is a custom nginx module for this by Weibin Yao). However, you cannot run another service on the same port (e.g. your main app and Socket.io on port 80), as the TCP proxy does not support routing based on the URL (e.g. /socket.io/ to Socket.io and the rest to the main app), only simple load balancing.
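For reference, that module’s configuration looks roughly like the sketch below. The directive names are from my reading of the module’s README and should be checked against its documentation; the ports are illustrative. Note that everything arriving on the listen port is blindly balanced — no URL-based routing is possible.

```nginx
# Hedged sketch: blind TCP load balancing with nginx_tcp_proxy_module
tcp {
    upstream websockets {
        server 127.0.0.1:8001;
        server 127.0.0.1:8002;
    }
    server {
        listen 8000;
        proxy_pass websockets;
    }
}
```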

So the benefit gained from doing this is quite marginal: sure, you can use Nginx for load balancing, but you will still be working with alternative ports for your main app and Socket.io.

Alternatives to Nginx

Since you can’t use Nginx and still support Websockets, you’ll need to deal with two separate problems:

  1. How to terminate SSL connections and
  2. How to route HTTP traffic to the right backend based on the URL / load balance

If you want to run two services on the same port, then you will have to terminate SSL connections before doing anything else. There are several alternatives for SSL termination:

  • Stunnel. Supports multiple SSL certificates per process, does simple SSL termination to another port.
  • Stud. Only supports one SSL certificate per invocation, does simple SSL termination to another port.
  • Pound. An SSL-termination-capable reverse proxy and load balancer.
  • Node’s https. Can be made to do anything, but you’ll have to write it yourself.

If you choose Stunnel or Stud, then you need a load balancer as well if you plan on having more than one Node instance in the backend.

HAProxy is not generally compatible with Websockets, but Socket.IO contains code which works around this issue and allows you to use HAProxy. This means that the alternatives are:

  • Stunnel for SSL termination + HAProxy for routing/load balancing
  • Stud for SSL termination + HAProxy for routing/load balancing
  • Pound (SSL and routing/load balancing)

I haven’t looked into Pound more – mainly because I could not find info on its TCP reverse proxying capabilities (see the section on Flash sockets below) – but it seems to work for these guys.

Setting up Stunnel

The Stunnel part is quite simple:

cert = /path/to/certfile.pem
; Service-level configuration
[https]
accept  = 443
connect = 8443

If you only have one Node instance, you can skip setting up HAProxy, since you don’t need load balancing.
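For that single-instance case, the same Stunnel setup can terminate SSL straight to the Node process (assuming it listens on port 8000, as in the HAProxy backends later in this post):

```ini
; Hedged sketch: stunnel terminating SSL directly to a single Node instance
cert = /path/to/certfile.pem
[https]
accept  = 443
connect = 8000
```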

Setting up HAProxy

Would you like Flash Sockets with that?

Note that we need TCP mode in order to support Flash sockets, which do not speak HTTP.

Flash sockets are just plain and simple TCP sockets, which will start by sending the following payload: ‘<policy-file-request/>’. They expect to receive a Flash cross domain policy as a response.

Since Flash sockets don’t use HTTP, we need a load balancer which is capable of detecting the protocol of the request, and of forwarding non-HTTP requests to Socket.io.

HAProxy can do that, as it has two different modes of operation:

  • HTTP mode – which allows you to specify the backend based on the URI
  • TCP mode – which can be used to load balance non-HTTP transports.

Main frontend

We accept connections on two ports: 80 (HTTP) and 8443 (Stunnel-terminated HTTPS connections).

By default, everything goes to the backend app at port 3000. Some HTTP paths are selectively routed to Socket.io.

TCP mode is needed so that Flash socket connections can be passed through, and all non HTTP connections are sent to the TCP mode socket.io backend.

# Main frontend
frontend app
  bind 0.0.0.0:80
  bind 0.0.0.0:8443
  # Mode is TCP
  mode tcp
  # allow for many connections, with long timeout
  maxconn 200000
  timeout client 86400000

  # default to webapp backend
  default_backend webapp

  # two URLs need to go to the node pubsub backend
  acl is_socket_io path_beg /node
  acl is_socket_io path_beg /socket.io
  use_backend socket_io if is_socket_io

  tcp-request inspect-delay 500ms
  tcp-request content accept if HTTP
  use_backend sio_tcp if !HTTP
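The frontend and backend fragments in this post assume minimal global and defaults sections around them; something like the following (the values here are illustrative) makes the file loadable on its own:

```
# Hedged sketch: minimal surrounding sections for the fragments in this post
global
  maxconn 200000

defaults
  timeout connect 5000
  timeout client  86400000
  timeout server  86400000
```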

Port 843: Flash policy

Flash policy should be made available on 843.

# Flash policy frontend
frontend flashpolicy 0.0.0.0:843
   mode tcp
   default_backend sio_tcp

Default backend

This is just for your main application.

backend webapp
   mode http
   option httplog
   option httpclose
   server nginx1s localhost:3000 check

Socket.io backend

Here, we have a bunch of settings in order to allow Websockets connections through HAProxy.

backend socket_io
  mode http
  option httplog
  # long timeout
  timeout server 86400000
  # check frequently to allow restarting
  # the node backend
  timeout check 1s
  # add X-Forwarded-For
  option forwardfor
  # Do not use httpclose (= client and server
  # connections get closed), since it will close
  # Websockets connections
  no option httpclose
  # Use "option http-server-close" to preserve
  # client persistent connections while handling
  # every incoming request individually, dispatching
  # them one after another to servers, in HTTP close mode
  option http-server-close
  option forceclose
  # just one node server at :8000
  server node1 localhost:8000 maxconn 2000 check

Socket.io backend in TCP mode

This is the same server as above, but accessed in TCP mode.

backend sio_tcp
  mode tcp
  server node2 localhost:8000 maxconn 2000 check

Conclusion

The configs above allow you to serve Websockets, Flash and polling from a single port.

However, I am dissatisfied with the complexity of this configuration. In particular, the Flash sockets’ TCP requirements are rather painful, since they require protocol detection in order to work from a single port.

The alternative is of course to run Socket.io on a different port than your main app. This would mean that you configure HAProxy to just do TCP mode load balancing at that port, with SSL termination in front of HAProxy.

If you do that, you might want to configure a fallback from Nginx at port 80 to Socket.io for those clients who are behind draconian corporate firewalls which disallow ports other than 80 and 443. The fallback will only support long polling. I don’t think Socket.io itself supports automatically switching ports during transport negotiation, but you can detect a failure in Socket.io and re-initialize manually with a different port and a polling-only transport.
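As a sketch of that manual fallback: fallbackTarget is a hypothetical helper (not part of Socket.IO), and the io.connect usage and the ‘connect_failed’ event are assumed from the Socket.IO 0.x client API of this era, so verify them against your client version.

```javascript
// Hedged sketch: pick a polling-only fallback target for clients behind
// strict firewalls. fallbackTarget is a hypothetical helper.
function fallbackTarget(host) {
  // Only ports 80/443 tend to be open, and only long polling
  // survives some intermediaries
  return {
    url: 'http://' + host + ':80',
    options: { transports: ['xhr-polling'] }
  };
}

// Usage in the browser (socket.io client loaded; API assumed from 0.x):
// var socket = io.connect('https://example.com:8443');
// socket.on('connect_failed', function () {
//   var t = fallbackTarget('example.com');
//   socket = io.connect(t.url, t.options);
// });
```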

Do you have a better way? How do you deploy Socket.io? Let me know in the comments below.

26 comments


  1. How are you handling HTTP requests then?

    • As configured above, HAProxy listens at two ports:

      80 (e.g. for direct HTTP requests)
      8443 (e.g. for SSL-terminated HTTP requests)

      And the default HAProxy backend is the main app.

      HTTP comes in at port 80 to HAproxy.
      HTTPS comes in at port 443 -> Stunnel terminates SSL -> 8443

  2. Hi, we’ve been integrating support for http/1.1 to backends for nginx lately. Would you like to test your setup with http/1.1 development code? Would be much appreciated.

  3. Hi Andrew,

    We’re working on NodeJS apps that needs Websockets, and thanks to your comment, we’ll be taking a look at the development version of Nginx. We’ll send you feedback for any problems.

  4. Hi there,

    I’m trying to set up something similar. I have a Rails app on port 3000, and a nodejs app on port 3001. The nodejs app doesn’t only do sockets though… it also has a little HTTP API using Express.

    I wanted to have a different subdomain for each app, and use haproxy to serve the right thing, but I can’t seem to see how to do that… it seems like when I use TCP mode I can’t really do anything based on the domain that was requested.

  5. This configuration does not work for me. According to the HAProxy docs you cannot use layer 7 filtering with TCP mode. On my setup this caused HAProxy to pick a seemingly random backend.

    I managed to fix this configuration in HTTP mode. The main trick was flash-policy-file serving – the policy is not requested over the data connection if the socket.io client manages to get it from the same server on port 843.

    If you provide the policy file on that port, then Flash will not request the policy file inline and will use standard HTTP headers for communication. This makes HAProxy happy.

    My config is here:
    http://pastebin.com/g8KQSWTW

    Many thanks for your post – it was the best starter for making websockets, xhr-polling and flashsockets work with HAProxy.

  6. With newer versions of nginx you can use 1.1 for reverse proxying.

    This seems to work for reverse proxying websocket connections:

    location / {
    chunked_transfer_encoding off;
    proxy_http_version 1.1;
    proxy_pass http://localhost:9001;
    proxy_buffering off;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host:9001;
    proxy_set_header Connection "Upgrade";
    proxy_set_header Upgrade websocket;
    }

    • I did a bunch of experimenting and some digging, and it appears there is not currently any way to get nginx to work with websockets using a method like that. WebSockets needs more than just forwarding a request, and the nginx developers say more work would need to be done to get it proxying websockets. You can, however, use nginx with the tcp proxying module to blindly load-balance, and then put stunnel on each of your application servers, though that removes most of the benefit of nginx.

  7. Why not keep it simple and use two servers? Then configure DNS to point socket.mydomain.com to the websocket server and mydomain.com to the nginx server.
    You use different IPs but the same domain name; so far I haven’t seen any issues with that.

  8. Trying to load balance several node.js servers but can’t make it work.
    When using more than one server, the browser seems to keep reconnecting.
    Everything is great when using just one node.

    The haproxy conf I’m using is the following.
    Am I missing anything?

    backend socket_io
    balance roundrobin

    mode http
    option httplog

    # long timeout
    timeout queue 5000
    timeout server 86400000
    timeout connect 86400000

    # check frequently to allow restarting
    # the node backend
    timeout check 1s

    # add X-Forwarded-For
    option forwardfor

    # Do not use httpclose (= client and server
    # connections get closed), since it will close
    # Websockets connections
    no option httpclose

    # Use "option http-server-close" to preserve
    # client persistent connections while handling
    # every incoming request individually, dispatching
    # them one after another to servers, in HTTP close mode
    option http-server-close
    option forceclose

    # balance
    server node1 127.0.0.1:8001 weight 1 maxconn 2000 check
    server node2 127.0.0.1:8002 weight 1 maxconn 2000 check

  9. Great post! I set up the same deployment for https://ratchet.io/ earlier this year. We are, however, running into some trouble with stunnel not setting the X-Forwarded-For header. It looks like stunnel supports something like this via the transparent=source config option, but our server’s kernel doesn’t have the necessary flags enabled.

    Are you able to get the incoming client IP from your node.js or ruby servers?

  10. Another way to deploy Socket.IO is to use Kaazing’s HTML5 Gateway. Kaazing will proxy the Socket.IO traffic and it takes care of the scaling, emulation, clustering, security and even provides an SSO integration. The Gateway is basically like a network router, except it’s protocol aware. Kaazing does all the emulation if WS is not available in a particular browser so you don’t have to worry about getting polling and Flash-Socket to work. In fact, the polling fall back from Socket.IO is really slow, and also seriously increases the battery consumption on mobile apps. And Flash Socket is notoriously insecure. Kaazing will emulate WS all the way back to an old IE6 browser as well, and doesn’t have the firewall traversal issues that Socket.IO’s alternate transport layers have.

    All you have to do is to put Node.JS behind the Gateway and use only the WS transport. You just use the Kaazing client library below the Socket.IO client library and you make sure the client also uses the WS transport only. The Gateway will readily take care of all production needs from there on. Their Cloud solution is relatively cheap, you can just rent an EC2 instance, the cost is a tiny fraction on top of what EC2 charges, and it adds up to much less than some of the basic publish/subscribe service providers out there. And there is no lock in because the entire Kaazing stack is based on the WS standard, so you just program against the WS APIs.

    Now a disclaimer: I work for Kaazing, but I am not posting this on behalf of Kaazing. I happen to be a major fan and enthusiast of Node.JS, and I would like to see more Node.JS adoption in large enterprises. Kaazing unfortunately doesn’t currently market this solution at all. Using Kaazing to deploy Node.JS just makes Node.JS so much more attractive to enterprise shops because it overcomes any and all objections anyone would have against using Node.JS in an enterprise.

  11. Pound doesn’t work.
    Nginx should have websocket compatibility by Jan 2012 – http://trac.nginx.org/nginx/roadmap

  12. HAProxy from version 1.5dev12 (currently 1.5dev16) supports SSL.
