• Faceman🇦🇺@discuss.tchncs.de · 9 months ago

    I mean, we know the absolute limits of computational efficiency thanks to the Landauer limit and the Margolus–Levitin theorem, and from those we know that we are so far from those limits that the remaining headroom is practically unfathomable.
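
    To put rough numbers on that gap, here's a quick back-of-the-envelope calculation. The Landauer limit (k·T·ln 2 per bit erased) is a physical constant at a given temperature; the ~10 pJ per operation figure for a modern chip is an assumed ballpark for illustration, not a spec for any particular processor.

    ```python
    import math

    K_B = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0            # roughly room temperature, K

    # Landauer limit: minimum energy to erase one bit of information
    landauer_j_per_bit = K_B * T * math.log(2)   # ~2.9e-21 J

    # Assumed ballpark for a current CPU/GPU: on the order of 10 pJ per
    # arithmetic operation (an illustrative assumption, not a measured spec).
    assumed_j_per_op = 10e-12

    print(f"Landauer limit at {T:.0f} K: {landauer_j_per_bit:.2e} J per bit erased")
    print(f"Assumed energy per op:      {assumed_j_per_op:.2e} J")
    print(f"Ratio: ~{assumed_j_per_op / landauer_j_per_bit:.1e}x above the limit")
    ```

    Even if you generously count thousands of bit erasures per operation, today's hardware still sits many orders of magnitude above the theoretical floor.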

    If they can show some evidence that they can perform useful calculations 100x more efficiently than whatever they chose to compare against (almost certainly a cherry-picked comparison), then I'll give them my attention. Others have made similar claims in the past, though, and those turned out to apply only to extremely specific algorithms built around quantum calculations, which are of course slower and less efficient on any traditional computer.

    • ☆ Yσɠƚԋσʂ ☆@lemmy.ml (OP) · 9 months ago

      I’d like to see these chips benchmarked in the wild as well before getting too excited, but the claims aren’t that implausible. Incidentally, this approach is why M series chips are so much faster than x86 ones. Apple uses an SoC architecture that eliminates the need for an external bus, and the chips process independent instructions in parallel across multiple cores. And they’re building that on top of the existing ARM architecture. So it’s not implausible that a chip and a compiler designed for this sort of parallelism from the ground up could see a huge performance boost.
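
      As a very loose sketch of the serial-vs-parallel point: the toy below uses process-level Python as an analogy, not anything the chip or its compiler actually does, and the work function and step counts are made up. The idea is just that steps which feed into each other must run one after another, while steps with no dependencies can be spread across cores.

      ```python
      from concurrent.futures import ProcessPoolExecutor

      def work(x: int) -> int:
          # Stand-in for a chunk of computation; the details don't matter here.
          total = x
          for i in range(1_000_000):
              total = (total + i * i) % 1_000_003
          return total

      def dependency_chain(n: int) -> int:
          # Each step consumes the previous result, so the steps must run serially.
          acc = 1
          for _ in range(n):
              acc = work(acc)
          return acc

      def independent_steps(n: int) -> list[int]:
          # The steps don't feed into each other, so they can run on separate cores at once.
          with ProcessPoolExecutor() as pool:
              return list(pool.map(work, range(n)))

      if __name__ == "__main__":
          print("serial chain:", dependency_chain(8))
          print("parallel    :", independent_steps(8))
      ```

      The hardware and compiler argument is the same shape at a much finer grain: the more independent work you can expose, the more of the chip you can keep busy at once.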