
> Any solution towards the thread side of the spectrum will yield significantly larger memory footprints than solutions towards the CPS side of the spectrum.

The community-at-large decided that hand-tuning memory management was too finicky and not worth it, so garbage collection won out, even though it obviously 'costs' memory.

I'm frankly at a loss as to why so many blog posts and tech experts are all-in on the CPS side of this argument; it seems quite obvious to me that in the vast majority of cases, the considerably simpler* model of (green) threads means you're making the exact same trade-off: code that is simpler to write and debug, at the cost of needing more memory when the app runs.

*) For sequential/imperative-style languages, that is. If you're writing in a language that is clearly intended to be used in a functional style, I can see how the gap between CPS-style and thread-style code is far narrower. However, Java, Python, JavaScript - these are languages where the significant majority of lines of code written are sequential and imperative in nature.

Also note that in e.g. Java you can configure stack sizes as you create threads, so the claim of a "significantly" larger memory footprint is debatable.
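Concretely, Java's four-argument Thread constructor takes a requested stack size; per the javadoc the JVM is free to round it or ignore it entirely, so treat this as a hint. A minimal sketch (the 256 KiB figure is just an illustrative value):

```java
public class SmallStackDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> System.out.println("ran on a small-stack thread");
        // Thread(ThreadGroup, Runnable, String, long stackSize):
        // request a 256 KiB stack instead of the platform default (often 1 MiB+).
        Thread t = new Thread(null, task, "small-stack", 256 * 1024);
        t.start();
        t.join();
    }
}
```

Whether this meaningfully closes the gap with CPS depends on how aggressively the JVM honors the hint on your platform.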



Very thought-provoking. A few reactions:

A GC does not obviously cost memory. It might, or it might not: both a GC and a traditional memory allocator have hidden costs, and a GC with movable objects can sometimes do better, because it can compact the heap and manage fragmentation.

I prefer a message-passing style; on which side of the spectrum would this fall?

My experience with green threads is libraries that make I/O operations look like regular function calls. This is similar to RPC, where a remote call and a local one can look the same even though the remote call is much slower. That can result in surprising performance characteristics. Even worse, a remote call can time out or hang indefinitely; the same is not true of local calls. Message passing is more onerous but makes those surprises explicit.
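To make that concrete, here is a minimal sketch of the message-passing style using `java.util.concurrent.BlockingQueue` (the "server" thread and queue names are hypothetical): the request/reply hand-off is visible at the call site, so waiting - and any timeout policy you'd bolt on - can't hide behind a plain-looking function call.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessagePassingDemo {
    public static void main(String[] args) throws InterruptedException {
        // Requests and replies travel over explicit queues.
        BlockingQueue<String> requests = new ArrayBlockingQueue<>(16);
        BlockingQueue<String> replies = new ArrayBlockingQueue<>(16);

        Thread server = new Thread(() -> {
            try {
                String req = requests.take();   // blocks until a request arrives
                replies.put("echo: " + req);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        server.start();

        requests.put("ping");
        // The blocking point is explicit here; replies.poll(timeout, unit)
        // would make a timeout policy equally explicit.
        System.out.println(replies.take());
        server.join();
    }
}
```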

I've often fancied writing for an architecture where main memory is treated as fast remote storage, accessible with message passing. I know such architectures exist but I've never had the opportunity to write for one. I wonder if the change in style would have a positive or negative effect on performance.


In CPS the continuations (closures) have to be allocated on the heap, but they are generally one-shot: once invoked they can be freed immediately, so no GC is needed for them. Hello Rust (and its FnOnce closures).


> it seems quite obvious to me that in the vast majority of cases, the considerably simpler* model of (green) threads means you're making the exact same trade-off: Simpler to write and debug code at the cost of needing more memory when running the app you write.

I don't find it's quite that simple.

My experience is that the complexity of CPS tends to scale linearly with use, whereas threads scale exponentially. For small uses threads are easier, but CPS quickly catches up.

CPS forces you to actually declare a dependency tree for your data. Things depend on other things, and that exists in your code. It's very easy for threads to end up a mess, where it's not clear how data is passing through the code, which causes bugs like deadlocks and race conditions.
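The "declared dependency tree" point can be sketched with Java's CompletableFuture (one possible promise-style API; the values here are arbitrary): `sum` textually cannot exist until both of its inputs are declared, so the data flow is readable straight off the code.

```java
import java.util.concurrent.CompletableFuture;

public class DependencyTreeDemo {
    public static void main(String[] args) {
        // Each value's dependencies are explicit in the call graph.
        CompletableFuture<Integer> a = CompletableFuture.supplyAsync(() -> 2);
        CompletableFuture<Integer> b = CompletableFuture.supplyAsync(() -> 3);
        // `sum` depends on both `a` and `b`, and the code says so.
        CompletableFuture<Integer> sum = a.thenCombine(b, Integer::sum);
        System.out.println(sum.join()); // prints 5
    }
}
```

With raw threads, that same dependency would typically live implicitly in shared mutable state plus a lock or latch.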

It's deceptively easy to write code where thread A tries to lock mutexes X then Y, while thread B tries to lock Y then X, and the program deadlocks because neither thread can get both locks.
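The classic defense against that deadlock is a global lock order - every thread acquires X before Y, full stop - but nothing in a threading API enforces it; it lives in convention and code review. A sketch with `ReentrantLock`:

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockOrderDemo {
    static final ReentrantLock x = new ReentrantLock();
    static final ReentrantLock y = new ReentrantLock();

    // Both threads acquire X then Y; the consistent ordering is what
    // rules out the X->Y vs Y->X deadlock described above.
    static void doWork(String name) {
        x.lock();
        try {
            y.lock();
            try {
                System.out.println(name + " got both locks");
            } finally { y.unlock(); }
        } finally { x.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> doWork("A"));
        Thread b = new Thread(() -> doWork("B"));
        a.start(); b.start();
        a.join(); b.join();
    }
}
```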

It would be much harder and more arcane to do that in JavaScript or with Python's async. I'm not saying it's impossible, but I don't think I've ever accidentally created a race condition or deadlock in their CPS-style runtimes.

TL;DR: if your functions are only marked async so you can await something, threading probably is simpler. If you're actually passing promises around, things become much more favorable to CPS.


Interesting take!



