norcalnatv@alien.top to Hardware · 1 year ago
Nvidia, Intel claim new LLM training speed records in new MLPerf 3.1 benchmark (venturebeat.com)
norcalnatv (OP) · 1 year ago:
Among many new records and milestones, one in generative AI stands out: NVIDIA Eos — an AI supercomputer powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking — completed a training benchmark based on a GPT-3 model with 175 billion parameters trained on one billion tokens in just 3.9 minutes.
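Back-of-the-envelope, assuming the common ~6 × params × tokens FLOPs heuristic for transformer training (an assumption on my part, not an MLPerf-published figure), that run works out to roughly 40% utilization per H100:

```python
# Rough throughput implied by the Eos GPT-3 175B result.
# Assumes the standard ~6 * params * tokens FLOPs estimate for
# transformer training; the utilization figure is a sketch, not
# an official MLPerf or NVIDIA number.

params = 175e9          # GPT-3 parameter count
tokens = 1e9            # tokens processed in the benchmark
seconds = 3.9 * 60      # reported wall-clock time
gpus = 10_752           # H100s in NVIDIA Eos

total_flops = 6 * params * tokens              # ~1.05e21 FLOPs
flops_per_gpu = total_flops / seconds / gpus   # ~4.2e14 FLOP/s

h100_peak_bf16 = 989e12  # dense BF16 peak per NVIDIA's spec sheet
print(f"tokens/sec (cluster): {tokens / seconds:,.0f}")
print(f"TFLOP/s per GPU:      {flops_per_gpu / 1e12:,.0f}")
print(f"implied utilization:  {flops_per_gpu / h100_peak_bf16:.0%}")
```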
Flowerstar1 · 1 year ago:
Wow, I wonder how long that would have taken on Turing/Volta-era hardware.
iDontSeedMyTorrents · 1 year ago:
What does it mean to train on such-and-such a number of tokens?
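For what it's worth, a token is a sub-word chunk produced by the model's tokenizer, so "one billion tokens" means the model processed that many chunks of text, not words. A quick sketch using the tiktoken library (my choice for illustration; r50k_base is the BPE encoding associated with GPT-3, where English text averages roughly 3-4 tokens per 3 words):

```python
# Illustration only: counting tokens the way a GPT-3-style BPE
# tokenizer would. Requires `pip install tiktoken`.
import tiktoken

enc = tiktoken.get_encoding("r50k_base")  # GPT-3's BPE encoding
text = "Nvidia claims new LLM training speed records."
ids = enc.encode(text)  # list of integer token IDs

print(len(text.split()), "words ->", len(ids), "tokens")
```

"Training on one billion tokens" just means one billion such integer IDs flowed through the model during the benchmark run.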