Didn’t @Jack quit Twitter right after Elon bought it?
My Grandmother won’t eat them until they’re like this.
I’m a ‘just a hint of green’ girl.
The article literally says they’re doing FSD and cheap, together.
What they aren’t doing is cheap and human-operated.
It’s a choice of Cheap+FSD or Expensive+Human-Controlled; there won’t be a cheap, human-operated model.
It probably won’t, but the article doesn’t say they’ve dropped FSD like the parent poster implied.
Actually read the article: It’s the exact opposite, they’re still making the FSD version.
They’re not making the version that people can drive.
If the store only exists to make iPhones worse, then yes, that’s the point.
EGS will be full of Chinese malware in seconds; Epic is owned by Tencent, and China is in a war with the West.
No, I don’t think I will.
That’s how it started, as more of a replacement for Corel Painter, but today it’s a very competent photo editor too; personally, I find it much better than GIMP.
It’s not free of pain points, though: text editing sucks compared to Photoshop, with no WYSIWYG on the canvas. (It’s still better than GIMP; both make you type text into a dialog prompt and then render it, but GIMP is one-and-done, so you have to redo it if you want changes, while Krita lets you edit the text afterwards.)
Getting used to the UI also takes a bit if you’re coming from PS.
The only way to do that is to completely disable out-of-order execution and any shared caches to begin with, which would completely neuter modern CPUs. Not a little bit: you’d be left with around ~30% of the prior performance. Not a 30% loss, a 70% loss…
From ChatGPT (query: How much performance would a modern Zen 5 or Intel Alder Lake CPU lose if you completely stripped out/disabled SMT, out-of-order execution, and shared caches, operating in-order and only using dedicated (non-shared) caches?):
Stripping out or disabling key performance-enhancing features like Simultaneous Multithreading (SMT), Out-of-Order Execution (OoOE), and shared caches from a modern CPU based on architectures like AMD’s Zen 5 or Intel’s Alder Lake would result in a significant performance loss. Here’s an overview of the potential impact from disabling each feature:
Simultaneous Multithreading (SMT)
Impact: SMT allows a single core to execute multiple threads simultaneously, improving CPU throughput, especially in multi-threaded applications. Disabling SMT would reduce the ability to handle multiple threads per core, decreasing performance for multi-threaded workloads.
Expected Loss: Performance drop can be around 20-30% in workloads like video encoding, rendering, and heavily threaded applications. However, single-threaded performance would remain relatively unaffected.
Out-of-Order Execution (OoOE)
Impact: OoOE allows the CPU to execute instructions as resources become available, rather than in strict program order, maximizing utilization of execution units. Disabling OoOE forces the CPU to operate in-order, meaning that it would stall frequently when waiting for data dependencies or slower operations, like memory access.
Expected Loss: This could lead to performance drops of 50% or more in general-purpose workloads because modern software is optimized for OoOE processors. Tasks like complex branching, memory latency hiding, and speculative execution would suffer greatly.
Shared Caches (L2, L3)
Impact: Shared caches (particularly L3 caches) help reduce memory latency by sharing frequently accessed data among multiple cores. Disabling shared caches would increase memory access latency, causing more frequent trips to slower main memory.
Expected Loss: Performance could drop by 15-30% depending on the workload, especially for applications that benefit from high cache locality, such as database operations, scientific simulations, and gaming.
Operating In-Order Only with Dedicated Caches
Overall Impact: Without OoOE and SMT, and with only in-order execution and dedicated caches, the CPU would be much less efficient at handling multiple tasks and hiding latency. Modern CPUs rely heavily on OoOE to keep execution units busy while waiting for slow memory operations, so forcing in-order execution would significantly stall the CPU.
Expected Loss: Depending on the workload, the overall performance degradation could be upwards of 70-80%. Some specialized applications that rely on high parallelism and efficient cache usage might perform even worse.
Summary of Overall Performance Impact:
Single-threaded tasks: May see performance drop by 50-70% depending on reliance on OoOE and cache efficiency.
Multi-threaded tasks: Could experience a combined drop of 70-80%, as the lack of SMT, OoOE, and shared caches compound the inefficiencies.
This hypothetical CPU configuration would essentially mimic designs seen in early microprocessors or microcontrollers, sacrificing the massive parallelism, latency hiding, and overall efficiency that modern architectures provide. The performance would be more in line with processors from a couple of decades ago, despite the higher clock speeds and core counts.
Case in point: it’s not feasible. If that’s what you want in your own computer, you can already do it yourself; I doubt anyone will follow you, though.
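To make the compounding concrete, here’s a rough back-of-the-envelope sketch in Python. The per-feature loss figures are midpoints picked from the ranges quoted above, illustrative assumptions rather than benchmarks, and treating them as independent multiplicative factors is itself a simplification:

    # Back-of-the-envelope: multiply the retained-performance fractions.
    # Loss figures are illustrative midpoints from the ranges above,
    # not measurements.
    losses = {
        "SMT disabled": 0.25,             # ~20-30% loss (multi-threaded)
        "in-order only (no OoOE)": 0.50,  # ~50%+ loss
        "no shared caches": 0.20,         # ~15-30% loss
    }

    retained = 1.0
    for feature, loss in losses.items():
        retained *= 1.0 - loss
        print(f"after '{feature}': {retained:.0%} of original performance")

    print(f"combined loss: {1.0 - retained:.0%}")  # ~70% loss, ~30% retained

Even with generous midpoints, the compounded result lands right around the “left with ~30%” figure above.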
Not sure why Elon didn’t just call them ‘Drones’ instead of ‘Robots’, since it’s now a given that drones are human-controlled with varying levels of process automation.
“Freedom of Speech”
Could you be any more openly racist?
Yeah, instead of 0% fixed-payment loans, people should take on 18-60% APR credit cards! That’ll help people who can’t afford large purchases in a single go!
Even better, they could just buy cheaper stuff! Poor people deserve to buy leaky boots every year! /s
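To put rough numbers on that: the $1,000 principal and the APR figures below are illustrative assumptions, not anyone’s actual card terms, and the sketch assumes the card balance is paid down on a fixed 12-month schedule rather than revolving:

    # Illustrative: $1,000 paid off over 12 months at various APRs.
    # 0% models a BNPL-style fixed-payment plan; the rest model credit cards.
    principal = 1_000.00
    months = 12

    def amortized_payment(principal, annual_rate, months):
        """Fixed monthly payment from the standard amortization formula."""
        r = annual_rate / 12
        if r == 0:
            return principal / months
        return principal * r / (1 - (1 + r) ** -months)

    for apr in (0.00, 0.18, 0.30, 0.60):
        payment = amortized_payment(principal, apr, months)
        interest = payment * months - principal
        print(f"APR {apr:>4.0%}: ${payment:6.2f}/month, ${interest:6.2f} interest")

That works out to $0 of interest at 0% versus roughly $100-$350 extra on the same $1,000 purchase across the 18-60% range.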
Seems little has changed since 2011, when Apple cancelled plans for a Llano-based MacBook Air because AMD couldn’t guarantee stable supply.
Additionally, both Microsoft and Sony secured their own contracts with TSMC for fabrication of their console APUs, since they couldn’t trust supply from AMD to be stable.
And AI power use will eventually go down (it will go up a lot before that, though) as hardware becomes fast enough.
Crypto by design will never decrease.
That already happened with Crypto.
AI will use less power over time, as hardware gets faster and we approach a ‘good enough’ level of computational power, similar to desktops/laptops: outside gaming, electrical power for the average desktop has only decreased since Sandy Bridge. A 2600K is still good enough for the average desktop, and it can even still hold its own in gaming.
Crypto, by design, will never decrease in power use; it will only ever increase.
Servo: The dead software that is trying to invent new reasons to exist after it was excised from Firefox!
It’s OK for Google to massively overcharge for phones; it’s only bad when Apple does it.
What does that even mean?