HTTP vals are far too slow #113
Replies: 7 comments 9 replies
-
This is even worse with large response sizes. Not the biggest deal for hobby projects, but it's a noticeable slowdown, especially if a val makes a request to another val.
-
It occurs to me that this might be something to consider in the context of the Pro tier; maybe there's a nice congruence between user desire/demand and infrastructural overhead. TurboVals!
-
I have some pretty big vals and performance is starting to become an issue. I think for me most of the overhead is that everything is created from scratch every time. I'm timing my handlers, and for this specific val the handler responds in about 200ms, but the full request takes 1-2 seconds. If I also time the imports (by turning them all into

If I understand correctly, vals are loaded and instantiated for each request, so each request is effectively a "cold start", doing all the Val Town-side loading as well as imports/initialization. I think if vals could be cached in memory (or something) and respond to multiple HTTP requests, it would speed things up a lot. Obviously that's way more complicated, but I'm really just here to +1 this discussion, and I'm excited to see what improvements you all make.
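The "cached in memory" idea above can be sketched like this: module-level state only survives across requests when the isolate itself is reused, so expensive setup can move out of the handler. This is an illustrative sketch of the suggestion, not Val Town's actual mechanism; all names here are hypothetical.

```typescript
// Illustrative sketch (hypothetical, not Val Town's actual API):
// expensive setup runs once per warm isolate instead of once per request.
let heavyInitDone = false;

function initOnce(): void {
  if (!heavyInitDone) {
    // imagine heavy imports / client construction here -- the work the
    // commenter measures as the gap between 200ms and 1-2 seconds
    heavyInitDone = true;
  }
}

export default async function handler(req: Request): Promise<Response> {
  initOnce(); // no-op after the first request in a warm isolate
  return Response.json({ ok: true, warm: heavyInitDone });
}
```

If each request gets a fresh isolate, `initOnce` runs every time and nothing is gained — which is exactly the cold-start behavior being described.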
-
We (mostly @maxmcd) are working on this! We are currently working on figuring out where things are slow. We are hopeful that there will be some quick wins 🤞. We'll report back soon!
-
Hello! I'm working on performance improvements of vals and wanted to provide a quick update on the work so far. The use case I'm focusing on at the moment is a dead-simple HTTP val:

```ts
export default async function (req: Request): Promise<Response> {
  return Response.json({ ok: true });
}
```

If I publish this val and make a request to https://maxm-examplehttp.web.val.run/ this request will complete in, at best, around ~190ms. Here's a breakdown of where that time is spent:
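A client-side measurement along these lines (a sketch, not the internal instrumentation behind the breakdown) reproduces the end-to-end number; it takes the best of several tries, since individual samples vary quite a bit:

```typescript
// Sketch: measure end-to-end latency of an HTTP endpoint from the
// client. Reports the fastest of several tries, since single samples
// vary a lot.
async function bestOfMs(url: string, tries = 5): Promise<number> {
  let best = Infinity;
  for (let i = 0; i < tries; i++) {
    const t0 = performance.now();
    const res = await fetch(url);
    await res.arrayBuffer(); // drain the body so the timing is complete
    best = Math.min(best, performance.now() - t0);
  }
  return best;
}

// e.g. console.log(await bestOfMs("https://maxm-examplehttp.web.val.run/"));
```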
These numbers are rough estimates because they can vary quite a bit, but I think this is representative of where time is typically spent. A few takeaways from this breakdown:
Notably, when we fetch the val source code (the 72ms step above) we make a request to "https://esm.town/v/maxm/examplehttp", which makes a full round-trip out to the public internet. We can fix this directly and avoid the networking round-trip entirely, but for the moment we're working on figuring out why that amount of network overhead is present at all. This overhead is present in all

Happy to answer any questions or take suggestions on how to proceed with these improvements 🙂
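One shape a direct fix could take (a sketch only; the real fix would presumably live inside Val Town's runtime) is memoizing the fetched source in memory, so only the first request pays the round-trip:

```typescript
// Sketch (hypothetical, not the actual Val Town fix): memoize fetched
// val source so only the first request pays the esm.town round-trip.
// Caching the promise also deduplicates concurrent first requests.
const sourceCache = new Map<string, Promise<string>>();

function getSource(url: string): Promise<string> {
  let cached = sourceCache.get(url);
  if (!cached) {
    cached = fetch(url).then((res) => res.text());
    sourceCache.set(url, cached);
  }
  return cached;
}

// e.g. await getSource("https://esm.town/v/maxm/examplehttp")
```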
-
Hello! There is finally a path to very fast HTTP vals. Wanted to give a quick update about how we're going to do that.
Today, a val spends most of its execution time starting up and loading dependencies, so these two features should make things very speedy. This doesn't solve everything; certain patterns will still be slow. Cold starts will still be a bottleneck, we still have some network latency added by our current hosting provider, and requests will be slow if you are far away from Ohio. We'll keep improving things in these areas and push for things to be faster and faster, but we're optimistic that these two features will lead to a large speedup for common HTTP val request patterns.
-
HTTP vals are very fast! ~50ms to get a reply from a basic val here in NYC. We can (and will) make them faster, but they are no longer "far too slow", so closing this ticket.
-
I'm not sure how you can solve this at the moment, given these numbers:

- GET / takes ~210ms+ (sometimes 500-900ms)
- ping takes ~10ms

The GET / HTTP handler:
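(The handler's source didn't survive in this thread; a hypothetical reconstruction of a minimal htmx-style handler, in the spirit of the examples linked below, makes the point that the handler itself does almost no work:)

```typescript
// Hypothetical reconstruction of a minimal htmx-style handler; the
// original code was lost. The point stands either way: the handler does
// almost no work, so nearly all of the ~210ms is platform overhead.
export default async function handler(req: Request): Promise<Response> {
  return new Response(
    `<!doctype html>
<html>
  <head><script src="https://unpkg.com/htmx.org"></script></head>
  <body><button hx-get="/" hx-swap="outerHTML">Click me</button></body>
</html>`,
    { headers: { "content-type": "text/html; charset=utf-8" } },
  );
}
```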
Having 200ms of overhead for every request is just too much! That's before I do any processing of my own. It's a poor user experience.
Example 1: https://www.val.town/v/greatcode/htmx
Example 2: https://www.val.town/v/neverstew/htmxExample (Requests here are ~350ms with very little extra added to my above example)
P.S. Also, "cold starts" appear to take 500ms+. I had one cold start take nearly 1s with the above code.