This claim feels truthy, but is it true? Most of the big models were being trained on specialised hardware like tensor chips, right?
And the biggest drop isn't because crypto is worth less, it's because Ethereum hasn't used proof of work since last September's Merge (Bitcoin hasn't been mined on GPUs for years). The explosion in novel AI models we saw last year predates this change.
Most big AI models are trained on Nvidia GPUs, but usually not the standard consumer ones found in the GeForce line-up. Instead it's usually their data centre GPUs, like the current A100 or the H100 that's just hitting the market.
Google does have its TPUs (Tensor Processing Units), but they're not cost-efficient, so unless you have some kind of deal with Google or compute credits they don't make much sense. Google does have pods upon pods of TPU clusters though, so the main selling point of TPU training is that you can get your training done really fast simply by scaling your workload out to more TPUs.
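To make the "just throw more TPUs at it" point concrete, here's a minimal JAX sketch of the usual data-parallel pattern: pmap your step over however many cores the host sees, and the same code runs on 8 cores or a whole pod slice. The matmul and shapes are toy placeholders I made up, not anything from a real training run:

```python
import jax
import jax.numpy as jnp

# How many accelerator cores this host can see (e.g. 8 on a single TPU host).
n_devices = jax.device_count()

# Toy "model step": a single matmul, replicated across every core with pmap.
@jax.pmap
def forward(w, x):
    return jnp.dot(x, w)

# Shard the batch across cores: the leading axis must equal the device count.
w = jnp.ones((n_devices, 128, 128))
x = jnp.ones((n_devices, 32, 128))
y = forward(w, x)  # runs on all cores in parallel; y.shape == (n_devices, 32, 128)
```

Scaling to a bigger pod slice mostly just means n_devices goes up and each step chews through a bigger global batch.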
So if you needed a big model like GPT-3 trained in a single day, you could spend an ungodly amount of money and get it done on Google TPUs. Otherwise, if you can wait weeks or months, you can go with the standard Nvidia data centre solution and it'll end up significantly cheaper.
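The back-of-envelope version of that trade-off: the bill is roughly (accelerator-hours needed) × (hourly rate per chip), so finishing faster mainly changes how many chips you rent at once, and the price gap comes from the per-hour rate. All the numbers below are made-up placeholders, not real Google Cloud or Nvidia pricing:

```python
# Toy cost comparison: same total accelerator-hours, different hourly rates.
# Every number here is a hypothetical placeholder, not a quoted price.
ACCELERATOR_HOURS_NEEDED = 100_000   # pretend total compute for the model

tpu_rate_per_hour = 8.0   # placeholder $/chip-hour for a big on-demand TPU pod
gpu_rate_per_hour = 2.5   # placeholder $/chip-hour for reserved data centre GPUs

tpu_chips = ACCELERATOR_HOURS_NEEDED // 24          # enough chips to finish in ~1 day
gpu_chips = ACCELERATOR_HOURS_NEEDED // (24 * 30)   # far fewer chips, ~1 month

for name, chips, rate, days in [
    ("TPU pod, ~1 day", tpu_chips, tpu_rate_per_hour, 1),
    ("GPU cluster, ~30 days", gpu_chips, gpu_rate_per_hour, 30),
]:
    cost = chips * rate * 24 * days
    print(f"{name}: {chips} chips, ~${cost:,.0f}")
```

With these placeholder rates the one-day TPU run comes out several times more expensive, which is the whole point: you're paying a premium per chip-hour for the speed, not for extra compute.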