16 Comments

On a slightly unrelated note: NVIDIA recently published a blog post about using AI to design parts of the H100 (https://twitter.com/rjrshr/status/1545446397759016962). They're suggesting that AI can beat out EDA tools in many cases. Do you think that's plausible? Do you see EDA firms starting to integrate AI capabilities into their software, or is there space for disruption here?

author

EDA companies are definitely adding some of these capabilities, but they are a few years behind Nvidia's custom in-house flow, which combines external EDA tools with internal capabilities.

author

Yeah, I saw. Nvidia has been putting out research on this for a long time. The circuits the AI is designing, and the scale at which it does so, are the important part. The architecture is still defined by people.


Yeah, it's interesting to see how many tech companies are shifting to in-house application-specific chip design (https://digitstodollars.com/2021/05/13/who-wants-to-build-a-chip/) and, as a by-product, leveraging their advantages in AI capability to do some next-level AI-assisted EDA; Google's use of AI to design its TPU AI accelerator is the other prime example.

I'm guessing that they're probably going to keep this kind of stuff in-house, so they can build better services on top of these chips, but it sounds like the traditional EDA folks are slowly moving up in this area too?


Hey Dylan

Thanks for the comparison.

Curious about some points that occurred to me after reading the article:

(a) Is it the case that Google's TPUs are useful only for a select class of AI models? Is that why Google still buys thousands of Nvidia GPUs?

(b) From the perspective of a GCP customer: Google recently claimed on their blog that TPUs would reduce costs by 35-40% (they compared TPUs on GCP to A100s on Azure). So is Nvidia's good software stack the only reason that GCP customers are not adopting TPUs en masse?

(c) I'm surprised that, apart from Google, no one thinks it is worth the effort to create their own silicon for training and inference. I keep hearing about AWS's Graviton but nothing more, and it seems to be a CPU, not a GPU.

(d) How is Nvidia so confident that GPUs will be the dominant hardware form factor going forward? I understand models are changing very fast at the cutting edge, but wouldn't some kind of ASIC or FPGA hardware give lower TCO?

Sorry for so many queries but I am just so surprised at how dominant Nvidia has stayed over the last 5-6 years, and how no one seems to be posing any effective threat.

author

Those are good questions. I have a lot to say about them, though; a comment isn't really the right format. Maybe I'll write a post about it.


Curious where Apple stands in all this. While they don't sell AI components or even AI itself, they clearly invest in it. Do you see the possibility that third parties will use Apple's M-series systems for AI applications such as research, or even sell AI application software packages on M-series systems?

author

Inference, sure, but not training.

I don't even know what support for frameworks like PyTorch and TensorFlow is like.
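For what it's worth, here is a minimal sketch (not from the post, and assuming a recent PyTorch build, 1.12+, that ships the MPS backend) of how one could check whether PyTorch can actually use the M-series GPU; TensorFlow on macOS relies on the separate tensorflow-metal plugin.

```python
# Minimal sketch: check for PyTorch's Apple-Silicon (MPS) backend and run a
# small op on it. Assumes PyTorch 1.12+; falls back to CPU otherwise.
import torch

use_mps = hasattr(torch.backends, "mps") and torch.backends.mps.is_available()
device = torch.device("mps" if use_mps else "cpu")

x = torch.randn(1024, 1024, device=device)
y = x @ x  # the matmul runs on the M-series GPU when MPS is available
print(device, y.shape)
```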


Enjoyed the read; up-to-date numbers, and something people interested in the genomics/bioinformatics field should keep an eye on.

author

I'm curious: I see Nvidia has nice SDKs for those fields. Are those used much, or are they just for show?


I expected to see the market much more open this time. But Nvidia still keeping up with a two-year-old piece of silicon is very impressive. The next submission, with H100, will be a bloodbath for the competition...

author

Definitely. The more important thing is the volume: it's not people buying 1 or 8 accelerators for training, it's people buying hundreds or thousands. Nvidia's NVSwitch expansion on Hopper will be huge for scalability.


I didn't know about Betteridge's law of headlines. But, yes, I clicked and read the article because of the fancy headline. ^^

---

Apologies for Betteridge's law of headlines to the title of this post, but it got you to click and read it didn’t it!

author

Hopefully you enjoyed it :)


Were any results submitted for AMD?

Would've been interesting to understand where they stand in this space (software-wise).

author

Nope. They did not submit, nor did any of their partners.
