I was recently reading Tracy Kidder’s excellent book The Soul of a New Machine.

The author pointed out what a big deal the transition to 32-bit computing was.

However, in the last 20 years, I don’t really remember a big fuss being made of most computers going to 64-bit as a de facto standard. Why is this?

  • bg370@alien.topB

    It was noticeable for us because Exchange 2007 was the first M$FT product that only ran 64-bit. That was our first 64-bit box.

  • jedrider@alien.topB

    And to think that processors started out at 4 bits with the Intel 4004, the first general-purpose microprocessor. That didn’t last long: it quickly transitioned to 8 bits so it could go from being a calculator to a text processor, since conventional text is represented as an 8-bit value even though only 7 bits are technically required.

    After that, the processor word length depended more upon addressing limitations, as it went from 16 bits to 32 bits to, finally, 64 bits. Then it stopped. However, GPUs have taken it to 128 bits and multiples thereof (no longer strictly powers of 2), such as 384 bits for the GeForce 4090, just for sheer data bandwidth.

    I’m not a processor developer, so maybe I got some things wrong.

    • noiserr@alien.topB

      384 bits for the GeForce 4090

      384-bit is the memory bus width. AMD’s Hawaii (R9 290X) had a 512-bit bus in 2013. Not to be confused with the data types used for calculations.

    • bankkopf@alien.topB

      Memory bus width != CPU Bitness

      Those two numbers describe totally different things.

      For GPUs, the bitness number usually describes the width of the memory bus, i.e. how much data can be “transported” concurrently.

      For CPUs, the bitness describes the width of the general-purpose registers and addresses, not the widest operand the chip can touch: with AVX-512, a 64-bit CPU can handle data vectors that are up to 512 bits long.

      GPUs are in fact 64-bit processing units; the largest data type they are designed to handle is the 64-bit double-precision floating-point number.
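
      To make the register-width vs. vector-width distinction concrete, here is a minimal C sketch (my own illustration, not from the thread, assuming an x86-64 compiler and a CPU with AVX-512F, e.g. built with gcc -mavx512f): the pointer/GPR width is 64 bits, yet one instruction operates on a 512-bit vector.

      ```c
      #include <stdio.h>
      #include <immintrin.h>   /* AVX-512 intrinsics */

      int main(void) {
          /* "64-bit CPU" refers to pointer / general-purpose register width */
          printf("pointer width: %zu bits\n", sizeof(void *) * 8);

          /* ...yet a single AVX-512 register holds eight 64-bit doubles,
             and one instruction adds all eight lanes at once */
          __m512d a = _mm512_set1_pd(1.0);
          __m512d b = _mm512_set1_pd(2.0);
          __m512d c = _mm512_add_pd(a, b);
          printf("vector width: %zu bits\n", sizeof(c) * 8);

          double out[8];
          _mm512_storeu_pd(out, c);      /* every lane now holds 3.0 */
          printf("lane 0: %f\n", out[0]);
          return 0;
      }
      ```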

  • AcanthisittaFlaky385@alien.topB

    Well, we are still kind of transitioning to 64-bit. Chip makers/designers still have to include 16/32-bit instruction support, which does limit, in some respects, how big the transition really is.

    • Nicholas-Steel@alien.topB

      Removal of those would break a lot of software, especially removal of 32-bit support. Bye-bye, thousands (if not millions) of Windows 95/98/XP games & programs!

      One of the big features of Windows is its backwards compatibility.

      • AcanthisittaFlaky385@alien.topB

        Gee really? Here’s me thinking 32-bit instruction sets were cosmetic. Thank you for ignoring the part where I said we’re still in a transition phase.

        Also, with a bit of tinkering, you can run 16-bit applications. It’s just recommended to use virtualisation, because Microsoft doesn’t ensure quality updates for 16-bit applications.

  • advester@alien.topB

    With 16-bit, programmers had to deal with far pointers, near pointers, and segmentation. That was a lot harder than flat, linear 32-bit pointers. Also, the switch to 32-bit was largely simultaneous with a switch to protected-mode virtual memory, another huge quality-of-life improvement. The switch from 32-bit to 64-bit on x86, by contrast, changed very little about how programmers had to write their code.
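
    For flavour, a small runnable C sketch (my own illustration, not from the thread) of the real-mode segment:offset arithmetic that made 16-bit x86 addressing painful; the 0xB800 value is just the classic text-mode video segment used as an example:

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Real-mode x86 built a 20-bit physical address from a 16-bit segment
       and a 16-bit offset: physical = segment * 16 + offset. A "far" pointer
       carried both halves; a "near" pointer was only the 16-bit offset and
       could not reach outside its 64 KB segment. */
    static uint32_t real_mode_physical(uint16_t segment, uint16_t offset) {
        return ((uint32_t)segment << 4) + offset;
    }

    int main(void) {
        printf("B800:0000 -> 0x%05X\n", (unsigned)real_mode_physical(0xB800, 0x0000));
        /* Many different segment:offset pairs alias the same physical byte,
           one reason 16-bit pointer arithmetic and comparisons were so awkward. */
        printf("B000:8000 -> 0x%05X\n", (unsigned)real_mode_physical(0xB000, 0x8000));
        return 0;
    }
    ```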

  • huuaaang@alien.topB

    32-bit was big because, for personal computers, it brought true memory protection and allowed much more complex and stable computing. 64-bit was mainly just more memory, which is certainly useful but not a game changer.

  • GomaEspumaRegional@alien.topB

    It depends on what you mean by 64-bit computing, which is not the same thing as x86 becoming a 64-bit architecture.

    FWIW, 64-bit computing had been a thing for a very long time: in the supercomputer/mainframe space since the 70s, and high-end microprocessors had supported 64-bit since the early 90s.

    So by the time AMD introduced x86_64 there had been about a quarter century of 64bit computing ;-)

    It was a big deal for x86 vendors, though, as that is when x86 took over most of the datacenter and workstation markets.

  • Lardzor@alien.topB

    Originally, 64-bit operating systems were popular in the enterprise space; businesses were running 64-bit builds of Windows NT-based systems. I think the first version of Windows aimed at consumers that got a 64-bit edition was Windows XP Professional x64, but it was uncommon. 64-bit didn’t go mainstream until Windows 7; before that, most PC hardware didn’t have 64-bit device drivers available. Even now, 64-bit isn’t a requirement most of the time: you can get 64-bit versions of a lot of applications, but a lot of applications still come as 32-bit only.

    • Nicholas-Steel@alien.topB

      Windows XP 64-bit was an… oddball operating system. It wasn’t just Windows XP but 64-bit: there were notable technical differences between 32-bit and 64-bit Windows XP that can hinder software compatibility (plus driver support wasn’t particularly good for the 64-bit version either).

      Windows Vista was when the 64-bit version was essentially just the 32-bit version built for 64-bit, i.e. they were no longer significantly different.

      • lordofthedrones@alien.topB

        XP 64 was a Server 2003 64bit edition for workstations. They had the same kernel as well. Oddball, but it did work well if you could find your drivers. I went straight to 7 64 after that.

        • Nicholas-Steel@alien.topB

          Right,

          • Windows XP Professional x64 Edition is the Windows Server 2003 kernel plus something like the XP UI. This is why you can run into software compatibility issues.
          • Windows XP 64-Bit Edition (non-Professional) was only ever available for Intel Itanium and Itanium 2 CPUs.

  • triemdedwiat@alien.topB

    As someone who started on/with 8-bit computers, it was all just another round of ho-hum. All the other jumps came with major hardware changes, whereas 32->64 was just a gradual slide and a smaller-than-usual price increase.

  • JaggedMetalOs@alien.topB

    I think it’s more that in the PC world, early x86 really wasn’t very good, and the 386 brought not just 32 bits but also a lot of other improvements that made more advanced OSs like Windows 95, NT and Linux possible. So you had this major step up in capability, not specifically because of the move from 16 to 32 bits, but happening at the same time.

    IIRC, for platforms that used the Motorola 68k, the move from the 16-bit 68000 to the 32-bit 68020 wasn’t nearly as big, because the chips were more similar (the 68000 kind of being a 16/32-bit hybrid anyway; the Atari ST was even named after Sixteen/Thirty-two).

    And the move from 32-bit to 64-bit CPUs in modern times is the same: there wasn’t any major step up in capability other than a much larger RAM address space.

    And obviously, for consoles, “bits” was still a big marketing gimmick at the time, so calling the newer console generation “32-bit” was a big thing, even though it doesn’t really mean anything (e.g. bits dropped from the 64-bit N64 to the 32-bit GameCube, because no one cared about bits anymore).

  • theholylancer@alien.topB

    any time you use ram more than 4 GB that is part of the 64 bit change

    or having a file bigger than 4 GB, or a disk partition that is bigger than 32 GB

    but yeah, because they quickly became the norm, not a whole lot of noise was made about them after the transition.

    • 3G6A5W338E@alien.topB

      any time you use ram more than 4 GB that is part of the 64 bit change

      A 64-bit CPU isn’t needed for that; see PAE (Physical Address Extension).

      The actual limiting factor (in x86 specifically) is that a single process’s view of memory is 32-bit, thus 4 GB. This is specific to the design of the CPU; it’s entirely possible to get around that with techniques such as overlays or segmentation, as 16-bit x86 demonstrated very well.
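
      A hedged C sketch of that distinction (my own example, assuming a 32-bit Linux userland built with gcc -m32, where a process typically gets only ~2-3 GB of usable address space): PAE raises the physical limit, not the per-process virtual one.

      ```c
      #include <stdio.h>
      #include <stdlib.h>

      int main(void) {
          /* PAE widens *physical* addresses to 36 bits: 2^36 bytes = 64 GB of RAM */
          printf("PAE physical limit: %llu GB\n", (1ULL << 36) >> 30);

          /* ...but a single 32-bit process still has at most a 4 GB *virtual*
             address space (usually ~2-3 GB usable), so this typically fails */
          void *p = malloc((size_t)3500u * 1024u * 1024u);   /* ~3.5 GB */
          printf("3.5 GB malloc in a 32-bit process: %s\n", p ? "ok" : "failed");
          free(p);
          return 0;
      }
      ```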

      Then there are processors like the 68000, which offered a 32-bit ISA with direct 32-bit addressing (although only 24 address lines were exposed on the physical bus, until 68010 variants added more address lines and the 68020 went full 32-bit), despite a 16-bit ALU.

      Similarly, SERV implements a compliant RISC-V core in a bit-serial manner.

      Of course, having 64bit GPRs specifically is very convenient past 4GB.

      or having a file bigger than 4 GB

      Large offsets are possible on 32-bit too. In e.g. Debian Linux, it is common on all architectures other than x86.
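
      As a concrete, glibc-flavoured illustration of large offsets on 32-bit (my own sketch, not the commenter’s, assuming a build with gcc -m32): defining _FILE_OFFSET_BITS=64 gives a 64-bit off_t, so a 32-bit program can seek well past 4 GB.

      ```c
      /* build: gcc -m32 bigseek.c -o bigseek */
      #define _FILE_OFFSET_BITS 64   /* glibc: 64-bit off_t even on 32-bit targets */
      #include <stdio.h>
      #include <fcntl.h>
      #include <unistd.h>
      #include <sys/types.h>

      int main(void) {
          printf("sizeof(off_t) = %zu bytes\n", sizeof(off_t));   /* 8, not 4 */

          int fd = open("big.bin", O_CREAT | O_WRONLY, 0644);
          if (fd < 0) return 1;

          /* seek to 5 GiB, far beyond the old 32-bit limit (sparse, nothing written) */
          off_t pos = lseek(fd, (off_t)5 * 1024 * 1024 * 1024, SEEK_SET);
          printf("file offset now: %lld\n", (long long)pos);
          close(fd);
          return 0;
      }
      ```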

      or a disk partition that is bigger than 32 GB

      32-bit block addressing with 512-byte blocks yields 2^32 × 512 B = 2 TiB.

      And again, software can handle 64-bit values on 32-bit (even 16- and 8-bit) architectures no problem. It’s just slower and more cumbersome, but the compiler will abstract this away. For disk I/O addressing it is a non-issue, as the latency of the disk makes the cost of these calculations irrelevant.
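
      And a last small sketch (mine, not the commenter’s) of the compiler doing that abstraction: ordinary 64-bit arithmetic in C that a 32-bit target simply compiles into a few extra instructions, here reproducing the 2 TiB figure from above.

      ```c
      #include <inttypes.h>
      #include <stdint.h>
      #include <stdio.h>

      int main(void) {
          /* 32-bit LBA, 512-byte sectors: the byte offset needs 64 bits... */
          uint32_t last_sector = 0xFFFFFFFFu;
          uint64_t byte_offset = (uint64_t)last_sector * 512;   /* widen, then multiply */

          /* ~2 TiB; on a 32-bit CPU the compiler just emits a 32x32->64 multiply
             plus carry handling, no 64-bit registers required */
          printf("addressable: %" PRIu64 " bytes (~%" PRIu64 " GiB)\n",
                 byte_offset, byte_offset >> 30);
          return 0;
      }
      ```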