For all we know, they could be. People have gotten Meta’s Llama running on small devices. It’s definitely not ideal, but it’s a neat proof of concept. I’d love to see how far a company could push it with dedicated mobile hardware and software choices. My gut feeling is that Apple’s little experiment with learning and mimicking your voice using on-device ML is an indication of where they’re going to take things.
Why not? Especially if they can make the models run locally on device?