Max connections per cache node

Hi,

I have a client that runs quite a large event - about 40,000 visitors. They offer free WiFi to the visitors, so there are quite a few connections from that one location to our servers.

My question is whether 1,000 connections from a cache node to the origin server will be an issue. We know that 2,000-3,000 visitors will be using the site over the WiFi during peak hours, and they will all hit the same cache node. There will also be 3,000-4,000 visitors using their mobile network, and they will most likely hit the same cache node as well.

Are there several cache nodes at each CDN location? And can I expect the visitors to hit different nodes at the CDN location, when they all appear to come from either the same IP or IPs very close to each other?


Those are some great questions. There are a lot of factors to consider when assessing scale; I'll try to explain the biggest ones so you can start to weigh them. If you'd like to discuss in more detail, reach out to support, who can make more specific recommendations.

Are there several cache nodes at each CDN location?

Yes. There are many nodes at each Point of Presence (PoP). We spread network connections across all the nodes in a PoP as they are received, so depending on the architecture and how the WiFi network is set up, your users will likely be spread across several nodes at the PoP.

Will 1000 connections from a cache node to the origin server be an issue?

In my experience, this is more likely to cause issues at the origin than on our network. The default number of connections is 200 per cache node. This equates to 200 open TCP connections per node * the number of nodes in use, which can quickly reach hundreds of thousands of connections. I'd recommend testing at a scale as close to real-world usage as possible to assess any bottlenecks.
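
To get a feel for the multiplication above, here is a quick back-of-envelope sketch. The 200-connection default is from the reply; the node and PoP counts are made-up illustrative numbers, not real figures for any network.

```python
# Back-of-envelope worst-case origin connection estimate.
# MAX_ORIGIN_CONNS_PER_NODE is the stated per-node default; the node and
# PoP counts below are hypothetical examples, not real deployment numbers.
MAX_ORIGIN_CONNS_PER_NODE = 200

def worst_case_origin_connections(nodes_per_pop: int, pops_in_use: int) -> int:
    """Upper bound on simultaneous origin connections if every node
    in every PoP opened its full allowance at once."""
    return MAX_ORIGIN_CONNS_PER_NODE * nodes_per_pop * pops_in_use

# e.g. 20 nodes per PoP across 50 PoPs:
print(worst_case_origin_connections(20, 50))  # 200000
```

Real usage will normally be far below this bound (connections are reused and opened on demand), but it shows why the origin side is usually the first thing to check.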

Fastly also re-uses connections where possible, so the number of connections does not line up with the number of requests per second at the origin. Fastly coalesces duplicate simultaneous requests for cacheable content into a single request per PoP.

If the content is not cacheable, then each request will need its own connection. This can quickly lead to socket starvation at the origin (depending on the scale of the origin), so it may be worth checking and tuning the origin. Nginx has a blog post that includes some Linux kernel tuning for this kind of scale, which might be a good place to start.
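
As a rough illustration of the kind of kernel tuning that blog post covers, the settings below are commonly adjusted on Linux origins handling many concurrent connections. The values here are examples only - benchmark against your own workload before applying anything in production.

```shell
# Example Linux tuning for many concurrent inbound connections.
# Values are illustrative, not recommendations; test before deploying.
sysctl -w net.core.somaxconn=4096                     # larger accept() backlog
sysctl -w net.ipv4.ip_local_port_range="1024 65535"   # widen ephemeral port range
sysctl -w net.ipv4.tcp_fin_timeout=15                 # recycle FIN_WAIT sockets sooner
ulimit -n 65536                                       # raise open-file (socket) limit
```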

We know that there will be 2-3000 visitors using the site in the peak hours from the Wifi

Fastly is tuned for high throughput, so this is unlikely to be a problem at our Points of Presence. I'd suggest testing this and ensuring the WiFi hardware can maintain its NAT tables at the capacity needed.


Hi,

Thanks for the answer! Most of the content - images and JSON requests - is cached very aggressively, and we use LiteSpeed Cache on the server, so most traffic should be cached.

We have a few endpoints that handle stats and favorites; they are not cached, but the server can handle that traffic without issues.

So our main concern is making sure that the traffic does not reach our server - hence the question.

We will run some tests that simulate both the traffic to the CDN from many users and the calls to the server, and see whether we hit a limit.

Regards

Johannes