• blashork [she/her]@hexbear.net
    11 months ago

    I’m well aware. While I don’t personally work on the gpu related stuff, I do work at a company that has to do a lot of gpu computing. My opinions on this topic are mostly informed by coworkers who do write gpu code, specifically a lot of opencl kernels. opencl has a lot of shortcomings and issues, and the kernel model is a bitch to work with. However, that isn’t the point. The point is compatibility. You can have a really good gpu, but if you don’t have adversarial compatibility with your competitors, it will just die. Specifically, amd have done a shit job at making cuda run on amd gpus. rocm is a disjointed mess; it sucked when I had to work with it in uni, and it still sucks now. cuda is bad and proprietary, but any modern gpu should still be able to run cuda crap simply because it’s useful to be able to do so, and there are a lot of things already built with it that should remain accessible.
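
    For context on what that compatibility work looks like: amd’s main answer (HIP/hipify) is largely a source-to-source rename of cuda APIs into HIP equivalents. Here’s a toy sketch of that idea — the mapping table is a tiny illustrative subset I picked, not the real hipify tables, and the real tools handle far more than identifier renames:

    ```python
    # Toy sketch of source-to-source CUDA -> HIP translation, in the spirit
    # of AMD's hipify tools. The mapping below is a small illustrative
    # subset, not the actual hipify database.
    import re

    CUDA_TO_HIP = {
        "cudaMalloc": "hipMalloc",
        "cudaMemcpy": "hipMemcpy",
        "cudaFree": "hipFree",
        "cudaDeviceSynchronize": "hipDeviceSynchronize",
        "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    }

    def hipify(source: str) -> str:
        # Match whole identifiers only (word boundaries), longest names
        # first, so "cudaMallocHost" isn't mangled by the "cudaMalloc" entry.
        names = sorted(CUDA_TO_HIP, key=len, reverse=True)
        pattern = re.compile(r"\b(" + "|".join(names) + r")\b")
        return pattern.sub(lambda m: CUDA_TO_HIP[m.group(1)], source)

    cuda_src = "cudaMalloc(&d_a, n); cudaMemcpy(d_a, a, n, cudaMemcpyHostToDevice);"
    print(hipify(cuda_src))
    # -> hipMalloc(&d_a, n); hipMemcpy(d_a, a, n, hipMemcpyHostToDevice);
    ```

    The hard part isn’t this renaming, it’s everything underneath: matching cuda’s runtime semantics, libraries, and driver behavior closely enough that real codebases actually run. That’s the layer amd has struggled to deliver.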

    The fact that amd has not been able to get a component as critical as its adversarial compatibility layer working, while Moore Threads has already implemented one for its early generations of cards, shows:

    • how profound the failures of amd and western computing companies are
    • the breakneck pace and incredible technical achievements of chinese gpu development
    • possibly that Moore Threads, being Chinese, has more legal freedom to implement CUDA compatibility even if it infringes on Nvidia patents. Sometimes the limitation is legal rather than technical; either way, AMD is in serious trouble if Moore Threads achieves sufficient CUDA compatibility