Will Concurrent ESI be possible once HTTP/2 is implemented?


#1

Scratching a curiosity itch

I’m curious about experimenting with ESI in our API.

For example, instead of our origin returning this:

[
    {
        "ID": 1111,
        "Title": "Some title",
        "LotsMoreProperties": "(about 12KB of properties follow)"
    },
    {
        "ID": 2222,
        "Title": "Some other title",
        "LotsMoreProperties": "(about 12KB of properties follow)"
    }
]

…our origin could return this:

[
    <esi:include src="https://api.example.com/products/1111" />,
    <esi:include src="https://api.example.com/products/2222" />
]
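To make it concrete, here's roughly what our origin handler might look like (a sketch in TypeScript/Express; the route shape and the lookupProductIds helper are placeholders, not our actual code, and the edge would still need ESI processing enabled for the includes to get resolved):

import express from "express";

const app = express();

// Instead of inlining each ~12KB product body, emit one <esi:include>
// per matching product and let the edge assemble the response from
// its cache of /products/:id.
app.get("/products", async (req, res) => {
  const ids = await lookupProductIds(String(req.query.q ?? ""));
  const includes = ids
    .map((id) => `    <esi:include src="https://api.example.com/products/${id}" />`)
    .join(",\n");
  res.type("application/json").send(`[\n${includes}\n]`);
});

// Placeholder for our real search: returns only the matching IDs.
async function lookupProductIds(q: string): Promise<number[]> {
  return [1111, 2222];
}

app.listen(3000);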

Will this even help?

One might ask, “How much time will this really shave off, especially if you’ve already got these individual products in memcache?”

It’s true that if even one of the 25 products is a miss, this architecture will be slightly slower, since it needs a round trip back to the origin. But I’m pretty sure that round-trip time would be greater than the transmit time of even 25x more gzipped bytes. (And our colo is just 2.9 miles from your nearest POP, the Infomart in Dallas, TX.)
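Back-of-envelope, with every number a guess:

// All numbers are assumptions, just to size the trade-off.
const products = 25;
const gzippedBytes = products * 3 * 1024;   // ~12KB raw ≈ ~3KB gzipped each
const bytesPerMs = (100e6 / 8) / 1000;      // ~100 Mbps effective throughput
const missCostMs = 15;                      // one edge→origin miss: RTT + origin work

console.log(`extra transmit time: ~${(gzippedBytes / bytesPerMs).toFixed(1)}ms`); // ~6ms
console.log(`one cache miss:      ~${missCostMs}ms`);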

However, I do think it’s worth testing beyond the theoretical to see how it performs in a real-world setting. Some endpoints, like keyword search, might end up imperceptibly slower for our users. But other endpoints that only change 20x per day would be screamingly fast.

But Fastly makes ESI requests serially, not concurrently

If a search query returns 25 products as cache misses, resolving those from the origin serially would be a deal breaker.
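To put rough numbers on why (same assumed ~15ms per miss as above):

// Serial resolution stacks miss latency linearly; concurrent overlaps it.
const misses = 25;
const missCostMs = 15; // assumed edge→origin miss cost, as above

console.log(`serial:     ~${misses * missCostMs}ms`); // ~375ms of added latency
console.log(`concurrent: ~${missCostMs}ms`);          // all misses in flight together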

I can understand your hesitation to implement concurrent ESI resolution to the origin, so I had mostly written the idea off. I thought maybe I’d be saved by HTTP/2’s multiplexing, but then saw that Varnish’s Poul-Henning Kamp isn’t keen on it ( https://www.varnish-cache.org/docs/trunk/phk/http20.html ), so I wrote that idea off too.

Yesterday I saw HTTP/2 is on the roadmap! Rejoice! So I guess my question is this… once HTTP/2 is implemented, will I be able to resolve ESI cache misses to the origin over HTTP/2 using multiplexing, thereby avoiding the serial fetches of doom?

I love Fastly!
Rylan


#2

Hey Rylan,

First of all, excellent post! Your questions make a lot of sense.

Unfortunately, I’m going to have to disappoint you. HTTP/2 is going to be front-side only for a while. And even if we did H2 to the backend, parallel ESI includes would still require a lot of additional work to add as a feature.

What about 25 simultaneous XHR requests over H2?
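Something like this on the client (just a sketch; the fields=id parameter is made up, and it assumes the browser speaks H2 to us so all of these multiplex over one connection):

// Fetch the ID list, then request every product at once. Over HTTP/2
// these share a single connection instead of queueing behind the
// browser's ~6-connections-per-host HTTP/1.1 limit.
async function loadProducts(query: string): Promise<unknown[]> {
  const idsResp = await fetch(
    `https://api.example.com/products?q=${encodeURIComponent(query)}&fields=id`
  );
  const ids: number[] = await idsResp.json();

  // Each product is an independent, individually cacheable request.
  return Promise.all(
    ids.map((id) =>
      fetch(`https://api.example.com/products/${id}`).then((r) => r.json())
    )
  );
}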

Cheers,

Doc


#3

Thanks for the info, Doc. That helps. Unfortunately, XHR won’t cut it for us. I was looking for these changes in our API to be transparent to our clients; some are third parties, some are native mobile clients.

No worries, though. I’m always on the lookout for when new technologies turn today’s best practices into tomorrow’s anti-patterns. (Not that ESI is new by any stretch.) This was an exploration of memcache on the edge: a fun thought experiment but, perhaps, before its time. I’ve seen other (lesser) CDNs offer concurrent ESI, but instant purge is the other critical component, and they don’t have it.

If we were to put a CDN in front of our API, I think the best way to go would be surrogate keys. That way we could still insta-purge any response that includes a stale product.

Example for https://api.example.com/products?q=blue+suede+shoes

HTTP/1.1 200 OK
Surrogate-Key: products/1111 products/2222 products/3333 ...
Content-Type: application/json
...

And then purge any response that might contain a newly stale product with:

PURGE /service/id/purge/products/2222
…
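Sketched out on the origin side (the purge-by-key endpoint and Fastly-Key header are from Fastly’s API docs, but the service ID, token handling, and helpers here are placeholders):

import express from "express";

const app = express();

app.get("/products", async (req, res) => {
  const ids = await lookupProductIds(String(req.query.q ?? ""));

  // Tag the search response with one key per product it contains, so a
  // change to any product can purge every cached search that includes it.
  res.set("Surrogate-Key", ids.map((id) => `products/${id}`).join(" "));
  res.json(await loadProducts(ids));
});

// Purge everything tagged with this product's key. Fastly's API is
// POST /service/{service_id}/purge/{key}; keys containing "/" need
// URL-encoding in the path.
async function purgeProduct(id: number): Promise<void> {
  await fetch(
    `https://api.fastly.com/service/SERVICE_ID/purge/${encodeURIComponent(`products/${id}`)}`,
    { method: "POST", headers: { "Fastly-Key": process.env.FASTLY_API_TOKEN ?? "" } }
  );
}

// Placeholders for our real data layer.
async function lookupProductIds(q: string): Promise<number[]> {
  return [1111, 2222, 3333];
}
async function loadProducts(ids: number[]): Promise<object[]> {
  return ids.map((id) => ({ ID: id }));
}

app.listen(3000);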

Rylan