• 0 Posts
• 5 Comments
Joined 1 year ago · Cake day: October 24th, 2023

  • The most important thing about GPU cores is that they are parallel in nature. Many GPUs use 1024-bit arithmetic units that process 32 numbers at the same time. That is, if you do something like a + b, both a and b are “vectors” consisting of 32 numbers. Since a GPU is built to process large amounts of data simultaneously, for example shading all the pixels in a triangle, this is an optimal design with a good balance between cost, performance, and power consumption.
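    A minimal sketch of the idea (in Python, purely illustrative; real GPUs do this in hardware): one “a + b” instruction operates on 32 lanes at once.

```python
WIDTH = 32  # lanes per execution unit: one instruction touches 32 values

def vadd(a, b):
    """One SIMD-style 'a + b': a and b are 32-element vectors, and all
    32 lane-wise additions conceptually happen in a single step."""
    assert len(a) == len(b) == WIDTH
    return [x + y for x, y in zip(a, b)]

a = list(range(WIDTH))   # lane i holds i
b = [10] * WIDTH         # lane i holds 10
print(vadd(a, b)[:4])    # → [10, 11, 12, 13]
```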

    But the parallel design of GPU units also means that they struggle when you need finer execution granularity, for example in common control logic like “if condition is true do x, otherwise do y”, especially if both x and y are complex. Remember that GPUs really want to do the same thing for 32 items at a time; if you don’t have that many things to work on, their efficiency suffers. So many problem solutions formulated with a “one value at a time” approach in mind won’t translate directly to a GPU. Sorting is a good example. On a CPU it’s easy to compare numbers and put them in sorted order. On a GPU you want to compare and order hundreds or even thousands of numbers simultaneously to get good performance, and it’s much more difficult to design a program that does that.
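    A sketch of why divergent branches hurt, again just illustrative: on a 32-wide unit the hardware effectively computes both sides of the if/else for every lane and then keeps the right result per lane, so you pay for x and y even when each lane needs only one of them.

```python
WIDTH = 32

def masked_select(cond, x_vals, y_vals):
    """Per-lane select: keep x where the condition is true, y where it
    is false. Both x_vals and y_vals were computed for ALL lanes."""
    return [x if c else y for c, x, y in zip(cond, x_vals, y_vals)]

vals = list(range(WIDTH))
cond = [v % 2 == 0 for v in vals]   # per-lane condition
x = [v * 10 for v in vals]          # "then" side, computed for every lane
y = [v + 100 for v in vals]         # "else" side, also computed for every lane
out = masked_select(cond, x, y)
print(out[:4])                      # → [0, 101, 20, 103]
```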

    If you are talking about math specifically, it depends on the GPU. Modern GPUs are very well optimised for many operations and have native instructions to compute trigonometric functions (sin, cos), exponentials, and logarithms, as well as to do complex bit manipulation. They also natively support a range of data types such as 32- and 16-bit floating point. But 64-bit floating point (double) support is usually lacking: either low performance or missing entirely.
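    To see why the lack of fp64 can matter: fp32 keeps only about 7 significant decimal digits, while fp64 keeps about 16. A quick demonstration using only Python’s standard library (packing a double into a 32-bit float and back):

```python
import struct

def to_fp32(x):
    """Round a Python double (64-bit) to 32-bit float precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1.0 / 3.0
print(f"fp64: {x:.17f}")           # ~16 significant digits survive
print(f"fp32: {to_fp32(x):.17f}")  # only ~7 digits are exact
print(to_fp32(x) == x)             # → False: precision was lost
```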


  • This video entirely misses the point. The x86 emulator (technically a transpiler) is just part of the equation. The important thing is that Apple emulates some x86 behavior in hardware, and that’s not something a software emulator can do, at least not without significant performance problems. As far as I know, current Windows on ARM “cheats” by pretending memory ordering is not a problem. This works until it doesn’t, as some software will crash or produce incorrect results. Microsoft offers different emulation levels, as described in their support docs, each of which comes with increasing performance cost. Since Qualcomm did not announce technology to safely emulate x86, I assume they don’t have it. Which would mean the same clusterfuck of crashing and incompatible software that Windows on ARM has to deal with now.

    https://learn.microsoft.com/en-us/windows/arm/apps-on-arm-program-compat-troubleshooter


  • MrMobster@alien.top to Hardware: mac Mx number of GPU cores · 1 year ago
    Simplified version: an Apple GPU core contains four execution units, each of which is 32-wide (it performs an operation on 32 data values in parallel). An instruction in a shader program is executed on one of these units. In other words, there are 128 scalar arithmetic units in an Apple GPU core, capable of executing up to four different 32-wide instructions per cycle.

    More complicated, but more accurate version: an Apple GPU core contains multiple execution units of different types. There are also four instruction schedulers, each of which selects a shader instruction and sends it to an execution unit. Each scheduler controls one 32-wide FP32 unit, one 32-wide FP16 unit, and (presumably, not quite sure) one 16-wide INT32 unit, so in total you have four of each unit type in a core. On M1 and M2 a scheduler can dispatch one instruction to a suitable execution unit per cycle, which means the other units idle (it can do either an FP32, an FP16, or a half-rate INT32 operation per cycle). On Apple M3 the schedulers are capable of dual issue and can dispatch two instructions per cycle (e.g. one FP32 and one FP16 or INT), assuming appropriate instructions can be found in the instruction stream. This is why M3 can be much faster on complex shaders even though the nominal spec of the GPU didn’t change much.
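    A back-of-the-envelope sketch of what dual issue buys per core. The four schedulers and 32-wide units are from the description above; counting an FMA as 2 FLOPs is the usual convention. This is an illustration of the arithmetic, not an Apple spec sheet.

```python
SCHEDULERS = 4   # instruction schedulers per core (from the description)
WIDTH = 32       # lanes per execution unit

def flops_per_cycle(issue_width):
    """Peak FP operations per cycle per core, counting an FMA as 2 FLOPs."""
    return SCHEDULERS * issue_width * WIDTH * 2

single = flops_per_cycle(1)  # M1/M2-style: one instruction per scheduler
dual = flops_per_cycle(2)    # M3-style: up to two, given a suitable FP32/FP16 mix
print(single, dual)          # → 256 512
```

    In practice the M3 gain depends entirely on whether the shader’s instruction stream actually contains pairable FP32/FP16/INT work, which is the caveat in the paragraph above.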

    Each GPU core executes a large number of shader programs in parallel and switches between shaders every cycle in order to make as much progress as possible. If it can’t find an instruction to execute (for example because all shaders are currently waiting for a texture load), the units have to go idle and your performance potential decreases. This is why it’s important to give the GPU as much work as possible: it helps fill those gaps, since the hardware can run some shaders while others are waiting.
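    A toy simulation of this latency-hiding effect (all numbers are made up for illustration): each cycle the scheduler issues work from any resident “shader” that is not stalled on a simulated texture load. With more shaders resident, there is almost always someone ready, so utilization rises.

```python
import random

random.seed(0)  # make the toy model deterministic

def run(num_shaders, cycles=1000, stall_chance=0.4, stall_len=5):
    """Return the fraction of cycles where the unit did useful work."""
    stalled_until = [0] * num_shaders  # cycle at which each shader is ready again
    busy = 0
    for cyc in range(cycles):
        ready = [s for s in range(num_shaders) if stalled_until[s] <= cyc]
        if ready:
            busy += 1                   # issued one instruction this cycle
            s = random.choice(ready)
            if random.random() < stall_chance:
                stalled_until[s] = cyc + stall_len  # shader waits on memory
    return busy / cycles

print(f"2 shaders:  {run(2):.2f} utilization")
print(f"16 shaders: {run(16):.2f} utilization")
```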


  • Apple’s laptop chips use the same architecture their phones do, which are within the ballpark of the latest Snapdragon chips as it is.

    Hardly. Comparing cores to cores, the latest Cortex-X4 in the Snapdragon 8 Gen 3 is slightly slower than the two-year-old A14. In multi-core performance Snapdragon is slightly ahead, sure, mostly because it packs a full six performance cores against Apple’s two. Different design priorities. Not to mention that it’s much easier to pack together multiple slow cores than it is to design fast ones.


  • So, the story goes like this. A team of senior Apple CPU designers (who are pretty much behind the M1 architecture) wanted to build server CPUs, but Apple wasn’t interested. So they quit and created their own startup, Nuvia. Qualcomm bought Nuvia so that it could use their designs in PC laptops. In phones, Qualcomm uses CPU cores designed by ARM, which are slower.

    It is kind of difficult to get all the details from Qualcomm’s presentations, but it seems that the Nuvia Oryon core offers similar performance to Apple’s M1/M2 while clocking higher (there is a two-core turbo boost up to 4.3 GHz). Multi-core performance also appears to be very good, but power consumption goes up accordingly. I doubt we will get a better understanding of how these chips actually perform before the launch sometime in mid-2024.
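    A rough way to frame the clock advantage: if per-clock performance (IPC) were identical, single-core performance would scale with frequency. The 4.3 GHz figure is from Qualcomm’s announcement; the baseline clock below is a hypothetical M2-class comparison point, not a measured spec.

```python
oryon_boost_ghz = 4.3   # Qualcomm's announced two-core turbo boost
baseline_ghz = 3.5      # assumed clock of an M2-class core (illustrative)

# At equal IPC, speedup is just the frequency ratio.
speedup = oryon_boost_ghz / baseline_ghz
print(f"~{speedup:.2f}x at equal IPC")  # → ~1.23x at equal IPC
```

    Of course, power rises much faster than linearly with clock, which is consistent with the multi-core power consumption caveat above.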

    If everything goes according to Qualcomm’s plan, you should be able to buy a business/creative type laptop sometime mid-to-fall next year. Gaming laptops, probably not for a while. A desktop PC tower is out of the question. Maybe Qualcomm will eventually sell mini-PCs or something like that, but those CPUs are not made to be replaceable.

    P.S. The funny bit is that the original CPU design team is back to making laptop CPUs and not the server CPUs they wanted. But I imagine all of them are at least a few million dollars richer.