Afaik, GPU cores are very stripped-down CPU cores that do basic math, but what sort of math can CPUs do that GPUs can’t?

I’m talking CPU cores vs. Tensor/CUDA cores.

  • corruptboomerang@alien.top · 10 months ago

    I mean if you’re creative enough, probably nothing.

    This is kinda like asking what you can do on a lathe that you can’t do on a mill. It’s more about what’s better suited to one or the other.

    CPUs are more generalised: they have a deep and complex instruction set and feature list. GPUs are shallower and far more specialised, but they do tasks that parallelise more readily… like calculating a metric shitload of triangles.
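
    A minimal sketch in C of the kind of loop that parallelises readily (the function and names are made up for illustration): every iteration is independent, so a GPU can hand each one to its own thread, while a single CPU core just runs them in order.

    ```c
    #include <stddef.h>

    /* Scale one coordinate of a pile of vertices. No iteration reads
     * anything a previous iteration wrote, which is exactly what makes
     * this kind of work GPU-friendly. */
    void scale_vertices(float *xs, size_t n, float factor)
    {
        for (size_t i = 0; i < n; i++)
            xs[i] *= factor; /* independent work per element */
    }
    ```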

    You can see CPUs being used to ‘push pixels’ in older computers, since that’s all they had.

    • Wfing@alien.top · 10 months ago

      Follow-up question: what allowed CPUs to output video before the advent of eGPUs/dGPUs, and what prevents them from doing so now?

      • KTTalksTech@alien.top · 10 months ago

        You’ve probably seen this in older games already: hardware vs. software rendering. Software rendering just asks your CPU to perform the same calculations your GPU would normally take care of. It still works today, but games have such astronomically high performance requirements that most don’t even give you the option.
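
        A rough sketch of what “software rendering” means in practice (the buffer size and the gradient pattern are made up): the CPU computes every pixel of the frame itself, the same per-pixel work a GPU would otherwise do.

        ```c
        #include <stdint.h>

        #define W 320
        #define H 240

        static uint32_t framebuffer[W * H]; /* 0xAARRGGBB pixels */

        /* The CPU fills the whole frame, pixel by pixel. A real software
         * renderer would rasterise triangles here instead of a gradient. */
        void render_frame(void)
        {
            for (int y = 0; y < H; y++)
                for (int x = 0; x < W; x++)
                    framebuffer[y * W + x] = 0xFF000000u
                        | ((uint32_t)(x * 255 / (W - 1)) << 16)  /* red   */
                        | ((uint32_t)(y * 255 / (H - 1)) << 8);  /* green */
        }
        ```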

      • monocasa@alien.top · 10 months ago

        Video cards that didn’t have a GPU but were just a RAMDAC or “scan-out” engine.

      • GomaEspumaRegional@alien.top · 10 months ago

        Frame Buffers have been a thing since the 60s at least ;-)

        Basically it is a piece of memory that contains the color information for a set of pixels. The simplest would be a black-and-white frame buffer, where the color of each pixel is defined by it being 1 (black) or 0 (white).

        Let’s assume you want to drive a monitor that is 1024x1024 pixels in resolution, so you need 1024x1024 bits (~1 Mbit) of information to store the color of each pixel.

        So in the simplest case, you had a CPU writing the 1 Mbit black-and-white image it just generated (by whatever means) into the region of memory that the video hardware is aware of. Then the display generator would go ahead and read each of the pixels and generate the color based on the bit information it reads.

        Rinse and repeat this process around 30 times per second and you can display video.
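
        Here’s a sketch of that arrangement in C, using the 1024x1024 1-bit example above (function names are mine, and in real hardware the reading side is the display generator, not code):

        ```c
        #include <stdint.h>
        #include <stddef.h>

        #define WIDTH  1024
        #define HEIGHT 1024

        /* 1 bit per pixel: 1024 * 1024 bits = ~1 Mbit = 128 KiB. */
        static uint8_t fb[WIDTH * HEIGHT / 8];

        /* CPU side: write one pixel (1 = black, 0 = white, as above). */
        void set_pixel(int x, int y, int black)
        {
            size_t bit = (size_t)y * WIDTH + x;
            if (black) fb[bit / 8] |= (uint8_t)(1u << (bit % 8));
            else       fb[bit / 8] &= (uint8_t)~(1u << (bit % 8));
        }

        /* Display side: the scan-out engine walks every pixel ~30 times
         * a second and emits black or white for each bit it reads. */
        int read_pixel(int x, int y)
        {
            size_t bit = (size_t)y * WIDTH + x;
            return (fb[bit / 8] >> (bit % 8)) & 1;
        }
        ```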

        If you want to display color, you increase the number of bits per pixel to whatever color depth you want. The process is basically the same, except the display generator is a bit more complex, as it has to generate the proper shade of color by mixing the Red/Green/Blue/etc. values.
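
        Same idea with more bits per pixel, sketched as a little helper (the helper name is mine) that packs 8 bits each of red, green, and blue into one 32-bit word; any other depth works the same way, just with different widths:

        ```c
        #include <stdint.h>

        /* 24-bit color: the display generator mixes the shade directly
         * from the red/green/blue values stored for each pixel. */
        uint32_t pack_rgb(uint8_t r, uint8_t g, uint8_t b)
        {
            return ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
        }
        ```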

        That is the most basic frame buffer: unaccelerated, meaning that the CPU does most of the work in generating the image data to be displayed.

        So assuming you had a CPU that was incredibly fast, you could technically do just about the same things a modern GPU can do. It would just need to be thousands of times faster than the fastest modern CPU to match a modern GPU. ;-)

        Hope this makes sense.