Feb 5 · Liked by Dylan Patel

As a traditional, last-century Comp. Sci. major (MS/MPhil 1984), the thing I look at is single-thread performance, and that stalled in 2005 and doesn't seem to be going much of anywhere. This is because most algorithms I'm interested in (the stuff in Knuth's The Art of Computer Programming) are largely of the sort in which the next step depends on what was computed in the prior step, so parallelism (multi-core, GPU (SIMD)) doesn't help. Moore's Law really is dead, I tell you.
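To make the point concrete, here is a toy illustration (mine, not the commenter's) of the kind of loop-carried dependency being described: iterating the logistic map, where each step needs the previous step's result, so no amount of extra cores or SIMD lanes can split the chain.

```python
# Toy example of an inherently serial computation: iterating the
# logistic map. Step n+1 consumes the output of step n, so the loop
# cannot be parallelized across cores or GPU lanes -- only a faster
# single thread shortens the wall-clock time.
def logistic_map(x0, r=3.9, steps=1_000_000):
    x = x0
    for _ in range(steps):
        x = r * x * (1.0 - x)  # depends on the value just computed
    return x
```

For `0 < x0 < 1` and `0 < r <= 4`, the iterates stay in the unit interval; the dependency chain, not arithmetic throughput, is the bottleneck.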

In real life, people play games, edit photos and videos, and do things (or become interested in things that tech types, who get to play with the parallel hardware first, persuade them to be interested in!) that are amenable to parallelism. Even here, the KataGo Go program (a second-generation(!) open-source reimplementation of AlphaZero, with crowd-sourced computation for training) needs about a factor of 3 improvement in GPU performance to play noticeably better, which is roughly what I'd see if I upgraded my RTX 3080 to the latest, the RTX 4090. So core counts and cache sizes really are relevant in real life, despite the squawking of us dinosaurs.

(Translation: Hey! Nice article! Thanks!)


Hi Dylan, want to collaborate?


Why would you link to a paywalled WSJ article for "Huang's law"? There's a perfectly fine Wikipedia article that I'm sure all of your readers have access to. Or is there some implicit assumption that all your readers here would, or need to, subscribe to the WSJ as well?

I must also say, that is the first time I have heard of it. Things don't become a so-called "law" just because some in the press call them one, and I doubt even Jen-Hsun is that megalomaniacal!
