It started with notebooks, but that wasn’t the master plan.

  • Dudewitbow@lemmy.zip · 8 months ago

    It's not that simple. High-performance parts are high performance because the devices that need the fastest speeds get the shortest traces to the CPU. That's why, for instance, the RAM slots, the fastest M.2 slot, and the primary PCIe lanes sit nearest to the CPU; otherwise you'd have to resort to adding a south bridge.

    The Pi Compute Module works that way because the RAM is already on board, which sidesteps the trace-length problem, and latency to whatever carrier it gets mounted on isn't the highest priority for performance.

    It's why SO-DIMM, for instance, has hit a peak speed limit while LPDDR hasn't, and why Dell pitched the CAMM form factor for RAM. The distance of components from the CPU, and the signal stability over that distance, is crucial for performance (see the rough timing sketch below).
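
    A rough back-of-envelope sketch in Python of how much timing budget extra trace length eats; the ~6.6 ps/mm FR4 propagation delay is a typical assumed figure, not something from this thread:

    ```python
    # Why trace length matters: compare the bit time (unit interval) at a
    # given transfer rate against the flight time added by extra trace.
    PROP_DELAY_PS_PER_MM = 6.6  # assumed FR4 stripline propagation delay

    def unit_interval_ps(rate_mtps: float) -> float:
        """Bit time in picoseconds for a transfer rate given in MT/s."""
        return 1e6 / rate_mtps

    EXTRA_TRACE_MM = 50  # five extra centimetres of routing
    extra_ps = EXTRA_TRACE_MM * PROP_DELAY_PS_PER_MM

    for name, rate in [("DDR4-3200", 3200), ("DDR5-6400", 6400), ("LPDDR5X-8533", 8533)]:
        ui = unit_interval_ps(rate)
        print(f"{name}: UI = {ui:.0f} ps; +{EXTRA_TRACE_MM} mm of trace adds "
              f"{extra_ps:.0f} ps, i.e. {extra_ps / ui:.1f} bit times of flight")
    ```

    At DDR5 rates, five centimetres of extra routing costs a couple of bit times, which is why length matching and placing memory right next to the CPU get harder as speeds climb.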

    • bitfucker@programming.dev · 8 months ago

      What I am saying is that the current soldered-CPU approach in the laptop space is not that different from swapping an ARM SoC on a daughter board. The only difference is that you cannot change the RAM, and maybe that too will change, as you said, with the CAMM standard. Beyond that, some SBCs already support PCIe for external M.2 storage, so you could theoretically hook up a removable GPU there.

      Now, what to do with the old SoC daughter board? The same as with an old Framework motherboard: repurpose it as another computer.

      The point is, Framework's repairability comes not only from part swapping but also from the promise of providing schematics for board-level repair. They could totally make ARM-based laptops with a repairable SoC if they wanted to. But I suspect they will not (at least in the near future), since there is a lot on their plate.

      • Dudewitbow@lemmy.zip · 8 months ago

        M.2-to-GPU isn't completely foreign or new, but it's less practical than more recent standards like OCuLink. The problem, specifically with the lower-end model, is that dedicating 4 or 8 PCIe lanes to a port not everyone is going to use wastes the already limited number of lanes the CPU choice leaves available. Hence it makes sense to leave users the side option of USB4/Thunderbolt GPU docks instead (a rough lane-budget sketch follows below).
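
        A small lane-budget sketch of the trade-off described above; the lane counts are hypothetical, assumed for illustration rather than taken from any specific Framework model:

        ```python
        # Hypothetical PCIe lane budget for a low-end mobile CPU (all
        # totals and allocations below are assumed for illustration).
        CPU_LANES = 20

        allocations = {
            "primary M.2 NVMe slot (x4)": 4,
            "chipset / south-bridge link (x4)": 4,
            "dedicated GPU port (x8)": 8,
        }

        used = sum(allocations.values())
        for port, lanes in allocations.items():
            print(f"{port}: {lanes} lanes")
        print(f"used {used}/{CPU_LANES} lanes; {CPU_LANES - used} left for everything else")
        ```

        Under those assumed numbers, a dedicated GPU port alone eats nearly half the budget whether or not anyone plugs a GPU in, whereas a USB4/Thunderbolt port exists regardless and can tunnel PCIe to a dock on demand.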