And that's relying on things being 'just benchmarks'.
In real-world usage on Apple devices, the average dev typically gets far more performant libraries and far superior ML, especially on M-series chips, where things like the AMX make a lot of ML operations literally twice as fast as the competition in many tasks. Fat memory bandwidth helps too, of course.
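To make the 'better libraries for the average dev' point concrete, here's a minimal Swift sketch. Whether Accelerate routes a given call through the AMX blocks is an Apple implementation detail (inferred from public reporting, not something the API guarantees), but the point stands: you get the tuned path just by calling the stock framework.

```swift
import Accelerate

// Multiply two 512x512 single-precision matrices with vDSP_mmul from Accelerate.
// On Apple silicon, Accelerate is tuned for the hardware and (reportedly) routes
// many dense routines through blocks like the AMX coprocessor -- an implementation
// detail, not part of the public API -- which is the kind of "free" speedup you
// get just by using the platform libraries instead of rolling your own loops.
let n = 512
let a = [Float](repeating: 1.0, count: n * n)
let b = [Float](repeating: 2.0, count: n * n)
var c = [Float](repeating: 0.0, count: n * n)

vDSP_mmul(a, 1, b, 1, &c, 1,
          vDSP_Length(n), vDSP_Length(n), vDSP_Length(n))

print(c[0])  // 1024.0 -- each entry is a 512-term dot product of 1.0s and 2.0s
```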
There's a lot up Apple's sleeve from being so vertically integrated, and I think, as usual, it will continue to show in the real-world experience of users and developers.
fixed that for them
Google might be tinkering with AI, but if you've used it at all you know it's a joke. If anything, Google is the one caught with their pants down here: they showed their cards, and their cards are awful.