Assuming the training software could be run on the hardware and that we could distribute the load as if it were 2023, would it be possible to train a modern LLM on hardware from 1985?
The growth rate in computing has been exponential. If you had run the fastest available computer from 1985 continuously for the 38 years since, a modern quad-GPU server would overtake its entire accumulated output in a matter of hours.
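To see why, here is a rough back-of-the-envelope calculation in Python. The specific figures are illustrative assumptions, not from the original claim: a Cray-2 (the fastest supercomputer of 1985) at roughly 1.9 GFLOPS peak, and a quad-GPU server built from A100-class GPUs at roughly 19.5 TFLOPS FP32 each.

```python
# Rough estimate: total FLOPs a Cray-2 could accumulate over 38 years,
# and how long a quad-GPU server would need to match that total.
# All hardware figures below are approximate assumptions for illustration.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

cray2_flops = 1.9e9            # Cray-2 peak, ~1.9 GFLOPS (1985, assumed)
years = 38                     # 1985 -> 2023
cray2_total = cray2_flops * years * SECONDS_PER_YEAR

a100_fp32 = 19.5e12            # one A100-class GPU, ~19.5 TFLOPS FP32 (assumed)
quad_server = 4 * a100_fp32    # four GPUs in one server

catch_up_hours = cray2_total / quad_server / 3600
print(f"Cray-2 over {years} years: {cray2_total:.2e} FLOPs")
print(f"Quad-GPU server matches that in about {catch_up_hours:.1f} hours")
```

Under these assumptions the Cray-2 accumulates on the order of 2e18 FLOPs over 38 years, which the quad-GPU server matches in roughly 8 hours of FP32 compute (and well under an hour if you count tensor-core FP16 throughput instead).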