"These price increases have multiple intertwining causes, some direct and some less so: inflation, pandemic-era supply crunches, the unpredictable trade policies of the Trump administration, and a gradual shift among console makers away from selling hardware at a loss or breaking even in the hopes that game sales will subsidize the hardware. And you never want to rule out good old shareholder-prioritizing corporate greed.

But one major factor, both in the price increases and in the reduction in drastic “slim”-style redesigns, is technical: the death of Moore’s Law and a noticeable slowdown in the rate at which processors and graphics chips can improve."

  • I Cast Fist@programming.dev · 1 day ago

    Moore’s law started faltering in the mid-2000s, when single-core clock speeds plateaued, pushing the industry toward multi-core processors. Memory and storage still had room to improve. Now, current 5nm-class processes are very close to the limits imposed by the laws of physics, both in how finely light can be focused for lithography and in how small a controlled chemical reaction can be made. Unless someone figures out a way to do the whole chip fabrication process in fewer steps, or with higher yield, or with cheaper machines or materials (even at 50nm or larger), don’t expect prices to drop.
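    Moore’s original observation, transistor counts doubling roughly every two years, can be sketched numerically. The baseline figure below is a rough, illustrative assumption, not an exact die count:

    ```python
    def projected_transistors(base_count, base_year, year, doubling_years=2.0):
        """Project a transistor count assuming a doubling every `doubling_years`."""
        return base_count * 2 ** ((year - base_year) / doubling_years)

    # Assuming roughly 42 million transistors around 2000 (Pentium 4 era):
    proj = projected_transistors(42e6, 2000, 2020)
    print(f"{proj:.2e}")  # ~4.3e10, a thousandfold increase in two decades
    ```

    Sustaining that curve is exactly what gets harder as feature sizes approach physical limits: each doubling now demands more fabrication steps and more expensive machines, not just smaller transistors.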

    Granted, if TSMC stopped producing in Taiwan, roughly 70% of all advanced chip production would go poof, so the company can fairly be considered a near-monopoly. (Its fabs are also Taiwan’s main defense against China, the so-called “Silicon Shield”, so there’s more than just capitalistic greed at play for them.)

    https://www.youtube.com/watch?v=po-nlRUQkbI - How are Microchips Made? 🖥️🛠️ CPU Manufacturing Process Steps | Branch Education

    • thanks AV@lemmy.world · 17 hours ago

      Very interesting! I was aware of the 5nm advancements and of feature sizes approaching the physical limits of the material, but I had assumed that, since the industry worked around the single-core bottleneck, a similar innovation would appear for this one. Instead, the focus seems to have turned toward integrating AI into GPU architectures and cranking up power consumption for marginal performance gains, rather than working toward a paradigm shift. Thanks for the in-depth explanation, though; I always appreciate an opportunity to learn more about this type of stuff!