Anyone find Fastly Compute to be missing a certain development speed? And a sort of “develop from any computer” element?
For a while I’ve been fiddling with a Compute app with their JS SDK that would do, in short, eval(kv_store_get(url)). Super fast iteration time. Update KV store entry, change deployed in milliseconds.
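For anyone curious what the eval-from-KV idea looks like mechanically, here's a minimal sketch. This is not the app's actual code: the function names are mine, and on Compute the source would come from a KV store read (e.g. via `fastly:kv-store`); it's inlined here so the mechanics are visible on their own.

```javascript
// Sketch of the "interpreter" core, assuming the stored script is the body
// of an async function that returns a Response. On Compute, `source` would
// be fetched from a KV store; here it's inlined for illustration.
const AsyncFunction = Object.getPrototypeOf(async function () {}).constructor;

async function runStoredScript(source, fetchEvent) {
  // the stored script sees `fetchEvent` in scope and should return a Response
  const run = new AsyncFunction("fetchEvent", source);
  return run(fetchEvent);
}

// a tiny stored "page" for illustration:
const pageSource = `return new Response("hello", { status: 200 });`;
```

Updating the KV entry swaps out `pageSource` without redeploying the Compute app, which is where the millisecond iteration time comes from.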
I now have a web-compatible Visual Studio Code extension for managing those KV store entries that’s shaping up:
Gradual start
Have a proper enough development computer, one with npm etc., for running the Fastly CLI. This is only for the initial setup; you don't need to be at this computer to develop stuff later. I'm curious whether this is possible without a full development environment, but I haven't looked into it.
Run npm run deploy and follow interactive instructions. It’ll ask you about several files to use in the KV store, but they should already be set, so just press enter on those.
When it’s deployed, it’ll be at a domain. If you didn’t set up a custom domain, it’ll be (something).edgecompute.app. Remember this or save it.
Press F1 or Ctrl+Shift+P or similar and use “Drop by Squigil’s House.”
Select Allow. In the Installation Alias chooser, select “Other” and then enter your installation’s domain from the deployment earlier. In the Admin Secret input, enter your secret from the admin setup earlier.
If there’s any interest, I’ll see if I can write some more getting started guides.
This is a super cool project, but Fastly Compute is a little different from Glitch’s experience in that it requires an explicit compiler build before pushing the code up to Fastly. Continuously changing the code on the platform this way will likely blow out trial resource allocations quickly.
If the goal is to have an in-browser editing experience, I’d recommend using GitHub Codespaces paired with a GitHub Action that can automate the build/deploy step every time your main branch changes. Codespaces isn’t a perfect Glitch editor replacement by any stretch, but it’s a supported path that won’t introduce any surprise bugs.
hi @aspires, thanks for taking a look at this project
you’re referring to continuously rebuilding and deploying new versions of the Compute app, right? that indeed is costly.
avoiding that is sort of the charter of this project. so the editing workflow doesn’t change the “code” on the platform. it updates a KV entry. the “code” of the compute app serving this is an interpreter, which incidentally already existed, as the JS SDK supports eval.
updating KV entries isn’t free either, but I’m under the impression that updates produced by a solo dev typing occasionally is a modest level of use.
here’s my usage over the past ~month of me building the included tooling (and precursors):
I don’t know how to look at KV usage, but each of these would have taken 1-2 reads and 0-1 writes.
this all is true, and something like this is probably how normal work will have to be done, when it involves using dependencies from npm, codebases that need a build tool, etc. I do appreciate Fastly’s tutorials on using Compute in this way.
There’s still a big difference from the way that platforms like Glitch worked.
In Glitch, after you edit the code, a container is launched running a Node server which parses your JavaScript code and prepares to receive requests. When requests arrive, they are handled by the code already loaded into the interpreter.
In Fastly Compute, there are no long-lived instances; every request which lands on the Fastly edge network causes a new instance to be launched. In your case, that instance will then retrieve some JavaScript code from KV Store and run it through eval before executing it to handle the request. When the handling of that request is complete, all the parsing/evaluation results will be thrown away, to be recreated when the next request arrives.
that is consistent with my understanding of Fastly Compute.
this project isn’t meant to turn Fastly Compute into a Glitch-compatible Ubuntu server. when I build on this, I’m still writing Fastly Compute code that runs in a short-lived per-request-isolated environment.
I guess that’s worth clarifying for people coming to this thread.
if you don’t actually want to learn Fastly Compute, or if you want to build something that doesn’t fit Fastly Compute’s capabilities in the first place (e.g. long-running requests), this project doesn’t help you get away from that.
and that all is fine, right? anyone who’s already made their way here is here because they believe in this architecture.
Hey wh0, this is so so cool, thank you for sharing it with us and the rest of the community! It’s really inspiring to see that you thought outside the box to make it easier to develop on Compute. The Squigil character is very cute too.
It makes sense from a performance and cost management perspective that the normal Compute deployment flow has the build phase where we “warm up” the JavaScript engine so that it doesn’t have to happen on every request, but you raise a very good point that this is probably an unnecessary step for prototyping that only slows things down from the perspective of a developer.
I’m imagining an A+ development experience that uses the technique you’ve shared here, along with what I think is missing – a final “publish” step so that once you’re done fiddling with the code, it does go through that build stage which would keep things fast and costs low. You could of course continue hacking away in this prototyping environment after that and re-publish as many times as you like.
Perhaps this functionality would be a useful improvement to Fastly Fiddle?
Either way, I appreciate you sharing. I’ll have to test it out next week. I look forward to seeing what you can create with this!
“publish” step sounds good. I’m somewhat doing a publish step already, although only to publish reasonably stabilized scripts on github rather than to compile them into optimized wasm. what I’m reading above though is that there are CI setups that put the build+deploy automation within reach. I’ll probably look into that if I start hitting the 50 ms CPU limit with longer scripts/scripts that do more work.
not sure if this would be a net benefit for Fastly Fiddle. it’s specific to JavaScript and it’s really hacky, e.g. with imports not being supported.
When you get an opportunity, please also compare it to our normal local development workflow, which uses `fastly compute serve --watch` to build and run the service in Viceroy, and automatically rebuilds/relaunches when the source code is changed. We’d be curious to see your thoughts on that workflow too!
I used viceroy a lot in getting the really early versions of this set up. Here’s that comparison:
| quality | CLI+Viceroy | Squigil's House |
|---|---|---|
| time overhead to run updated code | a few seconds, I can handle it. the js sdk doesn't do much compilation. there's only wizer and whatever viceroy's startup delay is | feels like less than a second. note that I've only tried editing and viewing in the same region |
| where development happens | on a certain computer. may be a remote computer that you connect to from multiple computers, possibly with a subscription fee | any computer, in web browser. have to log in though |
| auto rebuild | available with `--watch`. restarts when files referenced in `[local_server]` change too. ironically I didn't know about this while bootstrapping this project. the starter had a non-watch "start" script provided, so I never looked at the manual | more or less equivalent to auto save in editor |
| state across rebuilds | reinitialized to `[local_server]` spec. this behavior can be helpful in some use cases | persists. this behavior can be helpful in other use cases |
| language support | any sdk's wasm output | js only. theoretically possible to extend to other languages that have a wasm interpreter |
| js feature support | starlingmonkey/spidermonkey, wintertc stuff | same, but imports are broken. need app rebuild to export new modules in the future |
| dev env separation | local server is intrinsically separate from production site | |
Basically the content of this file gets used as the body of an async function with some arguments given, and it should return a Response.
Currently the arguments are:

```js
fetchEvent: FetchEvent
entranceKey: string // e.g. `a/a/index.js` for request URL path `/a/a/~/b/b`
innerPath: string // e.g. `/b/b` for request URL path `/a/a/~/b/b`
builtinModules: {
  // see docs https://js-compute-reference-docs.edgecompute.app/docs/
  fastlyAcl: import('fastly:acl'),
  ...,
}
```
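To make that concrete, here's a hedged sketch of what such a KV entry might contain. The path, greeting, and response bodies are invented for illustration; the entry's text becomes the body of an async function, so a bare `return` is allowed there (it's held in a string here so it can be exercised outside Compute).

```javascript
// Hypothetical contents of a KV entry like `a/a/index.js`. The entry text
// runs as an async function body with the documented arguments in scope.
const pageSource = `
  // in scope: fetchEvent, entranceKey, innerPath, builtinModules
  if (innerPath === "/hello") {
    return new Response("hello from " + entranceKey, {
      headers: { "content-type": "text/plain" },
    });
  }
  return new Response("not found", { status: 404 });
`;
```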
That, similar to the dynamic page, runs some code as the body of an async function with some arguments given, and it should return a Response.
Currently the arguments are: