I’m trying to deploy “fastly/compute-starter-kit-javascript-queue” to Fastly Compute with the Upstash service. However, I’m getting confused about these steps:
Fastly services consist of application code bundled into a binary, and a set of resources on the platform that your application code is connected to and can reference by name. For example, your application can forward requests to backend servers, emit data to log endpoints, or read and write data to stores.
This starter kit is designed to work with a backend server that tracks queue positions, and has been built to work with Upstash. To use it, you either need an Upstash account, or to adjust the code to use a different mechanism to persist queue counters.
In the screenshot you’ve posted, the four boxes are for you to enter the hostname or IP address of the backends that this starter kit requires: the boxes on the left are for the hostname and the boxes on the right are for the port number, e.g. my-account.upstash.com and 443 for the Upstash backend. This starter kit also defines a second backend, which is where content will be served from once the user reaches the front of the queue; this is normally the server you intend the queue to protect.
The tool you’re using here, cloud deploy, creates these resources for you on the live platform as part of creating a service for the first time. If you prefer you can also create the service manually, and then use the main control panel UI to create the associated resources.
To poke into this in more detail, you can clone the starter kit to your local machine and run it locally, defining simulated versions of the resources in your fastly.toml file.
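For example, a minimal `[local_server]` section might look like the following (the backend names `upstash` and `protected_content` and both URLs are illustrative; check the names the starter kit’s own fastly.toml actually declares):

```toml
# Simulated backends for local testing with `fastly compute serve`.
[local_server]

  [local_server.backends]

    # Stand-in for the Upstash REST endpoint; hostname is a placeholder.
    [local_server.backends.upstash]
      url = "https://my-account.upstash.com"

    # Stand-in for the origin server the queue protects.
    [local_server.backends.protected_content]
      url = "https://origin.example.com"
```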
I appreciate your fast response and the valuable information you provided. I will definitely take the information and try the starter kit on my local machine. If I have further questions, I won’t hesitate to reach out.
By the way, I was wondering what happens if a user goes directly to a page instead of going through the queue. Has this also been considered in the system?
Thank you for your assistance, and I look forward to continuing to explore Fastly!
The Upstash hostname you’ve got there looks like it contains an authentication component (the bit before the @), and we don’t support that syntax in backend hostnames. You would instead need to set an Authorization header in your Compute code before sending the request to the backend.
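As a rough sketch of what that looks like in JavaScript (the URL, token, and `splitCredential` helper here are made up for illustration; Upstash’s REST API expects a bearer token, but double-check their current docs):

```javascript
// Fastly backends don't accept user:password@host syntax, so strip the
// credential out of the URL and carry it as an Authorization header instead.
function splitCredential(urlWithUserinfo) {
  const u = new URL(urlWithUserinfo);
  // Upstash-style URLs put the token in the password slot of the userinfo.
  const token = u.password || u.username;
  u.username = "";
  u.password = "";
  return { url: u.toString(), authHeader: token ? `Bearer ${token}` : null };
}

// In a Compute handler you would then do something like:
//   const { url, authHeader } = splitCredential(UPSTASH_URL);
//   const req = new Request(url, { headers: { Authorization: authHeader } });
//   const resp = await fetch(req, { backend: "upstash" });
```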
If that was necessary for Upstash, I would have thought we’d prompt for it in the service config, so it’s odd that we don’t. Maybe the way Upstash authenticates access to their API has changed. I’m checking with the person who wrote that starter kit.
Meanwhile, the protected content hostname is the server you want the queue to protect, so it’s normally your main application server. Say you were running a single-server app on Heroku and too much traffic would overwhelm the instance: you might use a queue service like this to make sure your server instance never receives more traffic than it can handle. So basically that hostname is your own server, and the port is likely to be 443, as you already noted.
Thank you for the clarification. Instead of using the starter kit, I am considering starting from scratch to gain a better understanding. Additionally, if I understand correctly, the intention is to implement a server-side check on the protected server. This check would verify whether the incoming request contains a valid queue cookie; if not, the user would be redirected to the waiting room. Is my understanding accurate, in case a user goes directly to the protected server instead of via the queue / waiting room?
The way that the starter kit is implemented, all traffic goes through the Compute service. It is responsible for validating that a user has a valid queue cookie before passing traffic on to the backend. This means that no changes are required on your backend server at all, apart from validating that the request is coming from your Fastly service rather than the end user directly (see the mutual TLS example).
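To make that concrete, here is a rough sketch of the gate logic in a Compute handler. This is not the starter kit’s actual code: the cookie name `queue`, the backend name `protected_content`, and the `isValid` check are all placeholders.

```javascript
// Parse a single cookie value out of a Cookie request header.
function parseCookie(cookieHeader, name) {
  const match = (cookieHeader || "").match(
    new RegExp(`(?:^|;\\s*)${name}=([^;]*)`)
  );
  return match ? decodeURIComponent(match[1]) : null;
}

// Placeholder validity check; the real starter kit verifies a signed token
// against the queue state held in Upstash.
function isValid(token) {
  return token.length > 0;
}

// Sketch of the per-request decision made by the Compute service:
// a valid queue cookie lets the user through to the origin;
// otherwise they are held in the waiting room.
async function handleRequest(event) {
  const req = event.request;
  const token = parseCookie(req.headers.get("Cookie"), "queue");
  if (token && isValid(token)) {
    // Placeholder backend name; must match one declared in fastly.toml.
    return fetch(req, { backend: "protected_content" });
  }
  // Placeholder: the real service renders a waiting-room page here.
  return new Response("You are in the queue…", { status: 200 });
}
```

Because all traffic flows through this handler, the origin itself never needs to know about the cookie; it only needs to reject requests that didn’t come via Fastly.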