High-Level Guide to Scaling Meteor
The first and highest-impact place to optimize is in softwareland:
- Mongo schema design, indexing, and oplog tailing
- Subscription and query design (avoid query features like skip, $where, and $near, which force Meteor off the oplog and onto the polling driver; see the sketch after this list)
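To make that concrete, here is a minimal sketch of a publication designed to stay on the oplog driver, backed by an index that matches its selector and sort. The collection, field, and publication names are assumptions for illustration:

```js
// Shared code: define the collection (names are hypothetical).
Messages = new Mongo.Collection('messages');

if (Meteor.isServer) {
  // Index matching the selector and sort below, so Mongo doesn't
  // scan the collection on the initial fetch.
  Messages._ensureIndex({ roomId: 1, createdAt: -1 });

  Meteor.publish('messages.recent', function (roomId, before) {
    check(roomId, String);
    check(before, Date);
    // Range-based pagination: using skip here would silently push
    // this query onto the polling driver.
    return Messages.find(
      { roomId: roomId, createdAt: { $lt: before } },
      { sort: { createdAt: -1 }, limit: 50 }
    );
  });
}
```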
For the server architecture, put a sticky-session, SockJS-compatible load balancer in front that distributes connections to the app servers by least connections.
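One way to get both sticky sessions and least-connections balancing is HAProxy. The sketch below is illustrative only (hostnames, ports, and timeouts are assumptions); the long tunnel timeout keeps SockJS websocket connections open:

```
defaults
  mode http
  timeout connect 5s
  timeout client  60s
  timeout server  60s
  timeout tunnel  1h    # keep long-lived websocket/SockJS connections open

frontend meteor_front
  bind *:80
  default_backend meteor_apps

backend meteor_apps
  balance leastconn                        # distribute by least connections
  cookie SERVERID insert indirect nocache  # sticky sessions via cookie
  server app1 10.0.0.1:3000 check cookie app1
  server app2 10.0.0.2:3000 check cookie app2
```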
On the application servers:
- CPU: Run one Meteor instance per CPU core, since Node is single-threaded.
- Memory: Meteor’s MergeBox keeps a server-side copy of each client’s subscription data, so make as many publications as you can general-purpose: have them return the same data for all users, or for a group of users, rather than different data for each user (see the sketch after this list).
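As a hedged illustration (all names are hypothetical), here is a per-user publication contrasted with a general-purpose one; the latter returns the same data to every client and whitelists fields, so each client’s MergeBox copy stays small:

```js
News = new Mongo.Collection('news');

// Per-user publication: every client gets different documents.
Meteor.publish('news.forMe', function () {
  return News.find({ recipientId: this.userId });
});

// General-purpose publication: identical data for all clients, with a
// field whitelist so MergeBox tracks as little per client as possible.
Meteor.publish('news.public', function () {
  return News.find(
    { isPublic: true },
    { fields: { title: 1, body: 1, postedAt: 1 }, limit: 100 }
  );
});
```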
A floor for your per-client memory usage is the additional memory your publications take up when one more user connects. Looking at Kadira for my app, I average 8MB per client, so a server with 80GB of RAM would max out around 10k connected clients. As the number of connected clients on each app server approaches that number, I need to add more app servers to lighten the load. For instance, at 50k connected clients I might have six app servers, each with around 8.3k clients using up over 83% of the server’s memory. Note: by default, Node has a heap memory limit, which you can raise (--max-old-space-size) to whatever you want on a 64-bit machine and to 2GB on a 32-bit machine (so the above example would require a 64-bit machine).
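Putting the CPU and memory points together, an illustrative launch of a bundled Meteor app might run one Node process per core, each with a raised heap limit. The paths, ports, and URLs here are assumptions:

```sh
# Assumes a 64-bit machine and an app bundled to bundle/main.js.
export MONGO_URL='mongodb://localhost:27017/app'
export ROOT_URL='http://app.example.com'

# One process per core (two cores shown); the load balancer above
# spreads connections across these ports. 8192 is in MB (8GB heap).
PORT=3000 node --max-old-space-size=8192 bundle/main.js &
PORT=3001 node --max-old-space-size=8192 bundle/main.js &
```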
However, in Arunoda’s experience, CPU is more often the bottleneck than memory. So instead of adding servers based on memory load as in the example above, you would add them when the current servers approach 100% CPU usage. One example of this is Josh Owens maxing out a Modulus servo’s CPU with 700 connected users (a servo has 396MB of RAM and an undisclosed CPU limit).
Every app server connects to the same Mongo replica set. On the Mongo servers, keep the working set (indexes + active documents) in memory. The size of the working set depends on your application; from what I read about the Foursquare Mongo outage, for instance, it appears that all of their data was active. If you need to scale reads and can tolerate occasionally reading slightly stale data, you can read from the secondaries in your replica set.
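For example, a Meteor app can be pointed at a replica set with connection strings like the following (hostnames and database names are assumptions); readPreference=secondaryPreferred routes eligible reads to secondaries, while oplog tailing reads from the local database:

```sh
MONGO_URL='mongodb://db1.example.com,db2.example.com,db3.example.com/app?replicaSet=rs0&readPreference=secondaryPreferred'
MONGO_OPLOG_URL='mongodb://db1.example.com,db2.example.com,db3.example.com/local?replicaSet=rs0'
```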
If you need to scale writes, or once your working set grows past the limit of affordable RAM in one machine, you will need to shard. Sharding with the oplog isn’t currently supported; when I spoke with core dev David Glasser on 1/29/15, he said it was something they knew they needed to do, but that it wasn’t going to happen in February. Mongo also recommends checking your file system and disks before sharding. Here are some good guidelines on scaling Mongo. For an example sharding server calculation, as well as an example whole-infrastructure setup, see this mailing list post.
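For reference, once a sharded cluster (config servers plus mongos routers) is in place, enabling sharding from the mongo shell looks roughly like this; the database, collection, and shard key are hypothetical:

```js
// Run against a mongos router.
sh.enableSharding("app");

// Pick a shard key that matches your dominant query pattern;
// it can't be changed later without migrating the data.
sh.shardCollection("app.messages", { roomId: 1 });

// Inspect shard and chunk distribution.
sh.status();
```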