When working with sails.js, or indeed any software that accepts websocket connections, you have to be careful you don’t start rejecting requests. It’s all about finding the perfect balance between CPU, memory and file descriptors. Unfortunately, there is no such thing as a “one size fits all” solution in this case, because it all depends on your application.
I’ll highlight the importance of certain aspects with examples, explaining why, when and how they matter, and suggesting possible solutions along the way. For this article I’ll assume you’re using sails.js, but the suggestions apply to any application. Let’s get to it.
Scalability is something to always keep in mind. You can read more about scaling sails.js here, and more on the topic in this article. To me, this is one of the most important things to consider while setting up your server architecture.
The most obvious way to make scaling possible, to me, is by using redis as the session store and pubsub adapter. That way, you’re not memory-bound, and you can simply add instances and VMs behind a load balancer.
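As a rough sketch of what that looks like, here is a redis-backed session and socket configuration in the style of Sails 0.12; adapter names, option keys and the redis host are assumptions and may differ between Sails versions:

```javascript
// config/session.js — store sessions in redis instead of process memory,
// so any instance behind the load balancer can serve any user.
// Host/port are assumed values for a local redis.
module.exports.session = {
  adapter: 'redis',
  host: '127.0.0.1',
  port: 6379,
  db: 0
};

// config/sockets.js — share socket.io pubsub state across instances
// through the same redis server.
module.exports.sockets = {
  adapter: 'socket.io-redis',
  host: '127.0.0.1',
  port: 6379
};
```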
This is a pretty simple thing to set up. Using PM2 you’ll be able to scale your application over multiple cores on a single machine. Sometimes, an extra core is cheaper than an extra VM. This all depends on how much memory you’ll need to add, and how much CPU power is actually being used.
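A minimal sketch of a PM2 cluster-mode config, assuming your app boots from app.js (the app name and memory threshold are example values):

```javascript
// ecosystem.config.js — run one worker per core in PM2 cluster mode.
module.exports = {
  apps: [{
    name: 'my-sails-app',        // hypothetical app name
    script: 'app.js',
    exec_mode: 'cluster',        // cluster mode: PM2 forks workers sharing one port
    instances: 'max',            // one worker per available core (or a fixed number)
    max_memory_restart: '500M'   // restart a worker if it grows past this
  }]
};
```

Start it with pm2 start ecosystem.config.js, and pm2 scale lets you change the worker count later without downtime.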
It’s crucial to know exactly how much memory you need, and to keep an eye on how much you’re using. If you have a memory leak, or a sudden increase in activity, your application might crash (which can obviously be recovered using forever.js), and we don’t want that to happen. In general, you should be careful with what you store in memory. It’s equally important to keep an eye on your CPU usage.
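Keeping an eye on memory can be as simple as polling Node’s built-in process.memoryUsage(); a minimal sketch, where the 50 MB threshold is an arbitrary example value:

```javascript
// Log a warning when heap usage crosses a threshold.
// process.memoryUsage() is a Node.js core API; numbers are in bytes.
function checkMemory(thresholdBytes) {
  const usage = process.memoryUsage();
  if (usage.heapUsed > thresholdBytes) {
    console.warn('Heap usage high:', usage.heapUsed, 'bytes');
  }
  return usage.heapUsed;
}

// Check every 30 seconds; unref() so the timer alone
// doesn't keep the process alive.
setInterval(() => checkMemory(50 * 1024 * 1024), 30000).unref();
```

In production you’d typically ship these numbers to a monitoring system rather than the console, but the principle is the same.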
File descriptors, as taken from wikipedia:
In computer programming, a file descriptor (FD) is an abstract indicator for accessing a file.
Everything, even network connections, uses FDs. By default, the limits set for them are quite low, making it quite probable you’ll find yourself looking at errors that only show up in your /var/log/messages log and don’t make a whole lot of sense. Hitting this limit will cause connections to drop, communications to fail and, basically, chaos.
Viewing the limits
There are two types of FD limits: system limits and user limits. The system limits can be viewed by running:
$ cat /proc/sys/fs/file-max
The user limits can be viewed by signing in as the user (su username) and running, for the hard limit:
$ ulimit -Hn
And for the soft limit:
$ ulimit -Sn
Changing the limits
To change the system limit on the running system, run:
$ sysctl -w fs.file-max=200000
fs.file-max = 200000
To make the change survive a reboot, open /etc/sysctl.conf (vi /etc/sysctl.conf) and add the following at the end of the file:
fs.file-max = 200000
For the user limits, open /etc/security/limits.conf (vi /etc/security/limits.conf) and add the following lines (replacing username with the username you’re using):
username soft nofile 11095
username hard nofile 16000
Save the file, and done.
The right limits
Figuring out the right number for your server depends on your application. If you notice you’re hitting the file descriptor limit with plenty of bandwidth, memory and CPU left, it’s probably safe to increase the number by quite a bit.