• 0 Posts
  • 39 Comments
Joined 1 year ago
Cake day: June 20th, 2023


  • Dran@lemmy.world to Linux@lemmy.ml · Which distro? · 2 points · 2 months ago

    I run Ubuntu Server’s headless base install with a self-curated minimal set of GUI packages on top (X11, Awesome, PulseAudio, Thunar), but there’s no reason you couldn’t install KDE with Wayland instead. Building the system up yourself gets you really far in the anti-bloatware department, and the breadth of wiki/Google/GPT knowledge around Debian/Ubuntu means you can figure out just about any issue. I do this on a random old ~$200 Dell from eBay plus an RTX 3050 6GB (slot power only); a minimal sketch of the package setup is below.

    For lighter gaming I’ll use the Ubuntu PC directly, but for anything heavier I have a Win11 PC in the basement whose only job is to pipe Steam over Sunshine/Moonlight.

    It is the best of both worlds.
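
    For anyone curious, here’s a minimal sketch of that headless-plus-GUI build; these are stock Ubuntu package names, adjust to taste:

        # Start from Ubuntu Server (headless), then layer on only the GUI bits:
        # X11, the awesome window manager, PulseAudio, and the Thunar file manager.
        sudo apt update
        sudo apt install --no-install-recommends xorg xinit awesome pulseaudio thunar

        # No display manager needed; start the session by hand.
        echo 'exec awesome' > ~/.xinitrc
        startx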



  • I’m sorry, but this is just a fundamentally incorrect take on the physics at play here.

    You unfortunately can’t ever prevent further breakdown. Every time you run any voltage through any CPU, you slowly break down its gate oxides. This is a normal, non-thermal failure mode of consumer CPUs. The issue is that the breakdown is non-linear: as it progresses, it increases resistance inside the die, which in turn raises the minimum voltage needed to remain stable. That higher voltage accelerates the rate of damage even at idle, so the more degraded a chip is, the faster it degrades further.
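
    For a feel of why this is so non-linear, the classic empirical “E-model” for time-dependent dielectric breakdown (TDDB) puts the oxide field in an exponent. The symbols below are the standard ones; any numbers you’d plug in are illustrative, not measured:

        % Thermochemical E-model for gate-oxide TDDB (empirical form):
        %   t_BD  : time to breakdown
        %   A     : process-dependent prefactor
        %   gamma : field-acceleration parameter
        %   E_ox  : oxide electric field, roughly V_gate / t_ox
        %   E_a   : activation energy; k_B : Boltzmann constant; T : temperature
        t_{BD} = A \, e^{-\gamma E_{ox}} \, e^{E_a / (k_B T)}

    Because E_ox sits in an exponent, even a small bump in the minimum stable voltage multiplies the degradation rate rather than adding to it, which is exactly the feedback loop described above.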

    If you want to read more on these failure modes, I’d recommend the following papers:

    L. Shi et al., “Effects of Oxide Electric Field Stress on the Gate Oxide Reliability of Commercial SiC Power MOSFETs,” 2022 IEEE 9th Workshop on Wide Bandgap Power Devices & Applications

    Y. Qian et al., “Modeling of Hot Carrier Injection on Gate-Induced Drain Leakage in PDSOI nMOSFET,” 2021 IEEE International Conference on Integrated Circuits, Technologies and Applications


  • The “problem” is that the more you understand the engineering, the less you believe Intel when they say they can fix it in microcode. Without writing an entire essay, the TL;DR is that the instability gets worse over time, and the only way that happens is if applied voltages are breaking down dielectric barriers within the chip. That damage is irreparable: 100% of affected chips in the wild are irreversibly degrading over time.

    Even if Intel can slow the bleeding with microcode, they can’t repair the damage, and every chip that has ever run under the bad code will have a measurably shorter lifespan. For the average gamer, that lifespan has sometimes turned out to be shorter than the warranty period itself.


  • That is usually more incompetence than malice. They write a game that requires different operation on AMD vs. Nvidia devices and basically write:

    If Nvidia: do X; Else if AMD: do Y; Else: crash;

    The idea being that if the check for AMD/Nvidia fails, there must be an issue with the check function itself. The developers simply didn’t consider the possibility of a non-AMD/Nvidia card. This was especially true of old games: there are a lot of 1990s–2000s titles that won’t run on modern cards or modern Windows because the developers never programmed a failure mode of “just try it anyway.”
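
    In shell terms, the anti-pattern looks something like this hypothetical sketch (the vendor strings and the lspci parsing are illustrative):

        #!/usr/bin/env bash
        # The anti-pattern: branch on known vendors and treat "unknown" as an
        # error instead of just attempting a generic path.
        gpu=$(lspci | grep -Ei 'vga|3d controller' | head -n 1)

        case "$gpu" in
          *NVIDIA*)    echo "using the NVIDIA code path" ;;
          *AMD*|*ATI*) echo "using the AMD code path" ;;
          *)           echo "unrecognized GPU, refusing to start" >&2
                       exit 1 ;;  # the "must be a bug in the check" assumption
        esac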


  • I actually had one of these myself. I worked at a college help desk as a student, and I got a call where the guy said, “Every time I flush the toilet, Xbox Live disconnects.”

    My first thought was that it had to be a joke, the absurdity of the thing, right? I unironically asked if I was being pranked, and he said he knew we wouldn’t believe him, so he’d made a video. Sure enough, he walks into the bathroom, flushes the toilet, and about 5 seconds later his Xbox shows a disconnection message on the TV.

    Absolutely dumbfounded, I sent the networking guys up to his room, and like all of these stories, it does have a reasonable explanation. They had run the Xbox’s Ethernet cable under a rug in front of the bathroom. Every time someone went to the bathroom, they would step on the cable and the Xbox would disconnect. The timeout was 30 seconds or so, just long enough that they’d flush and walk away before noticing the disconnection.


  • I have condensed almost all of my workflows into pure bash scripts that will run on anything from bare metal to a VM to a Docker container (to set up and/or run an environment). My Dockerfiles mostly just run bash scripts to set up environments, then call functions within those same scripts to do whatever they need to do. That whole process is automated by the bash scripts that built my main host.

    For the very few workflows that aren’t quite appropriate for straight Docker (WireGuard, for example), I use libvirt to automate building and running virtual machines as if they were ephemeral containers. Once the abstraction between container and VM is standardized in bash, the automation doesn’t need to care which is which; it just calls start/stop functions whose behavior changes with the underlying tech (sketched below). Because of that, the canary system can build and run containers/VMs in a sandbox, run unit tests, and report whether they passed. It does that via cron once a week, then supplants all the running containers with the canary versions once the unit tests pass.
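
    None of this is my literal code, but a hypothetical skeleton of that abstraction could look like the following (the workload names, image tag, and the run_unit_tests helper are all invented for illustration):

        #!/usr/bin/env bash
        # Dispatch start/stop on the underlying tech so callers never care
        # whether a workload is a container or a VM.
        start_workload() {
          local name="$1" backend="$2"
          case "$backend" in
            docker)  docker run -d --name "$name" "local/${name}:latest" ;;
            libvirt) virsh start "$name" ;;
          esac
        }

        stop_workload() {
          local name="$1" backend="$2"
          case "$backend" in
            docker)  docker rm -f "$name" ;;
            libvirt) virsh shutdown "$name" ;;
          esac
        }

        # Weekly canary (cron): build/run in a sandbox, promote only on success.
        canary_and_promote() {
          local name="$1" backend="$2"
          start_workload "${name}-canary" "$backend"
          if run_unit_tests "${name}-canary"; then  # assumed helper
            stop_workload  "$name" "$backend"
            start_workload "$name" "$backend"       # now the tested build
          fi
          stop_workload "${name}-canary" "$backend"
        }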

    Basically, I got sick of reinventing the wheel every time a new technology came out, and eventually boiled everything down into bash so that it’ll run on anything it needs to. Maybe Podman in userland becomes the new hotness next year, or maybe I run full-fat k8s like I do at work. Pure bash lets me keep control over everything, see how everything fits together, and make minor modifications to accommodate whatever I need.

    It sounds more complicated than it really is; it took me about a week of evenings to write, and it has worked flawlessly for almost a year now. I also really, really, really hate clicking things by hand lol, so I automate anything I can. Since switching off Proxmox, this is the first environment I have entirely automated from bare metal to fully running in a single command.

    I’m incredibly lazy; it’s one of my best qualities.


  • Virtual machines also exist. I once got bitten by a Proxmox upgrade, so I built a Proxmox VM on that Proxmox host, mirroring my physical setup, which ran a Debian VM inside the paravirtualized Proxmox instance. It was set to canary-upgrade a day before my bare-metal host. If the canary Debian VM didn’t ping back to my update script, the script would exit and email me that something was about to break in the real upgrade. Since then, even though I’m no longer on Proxmox, basically all of my infrastructure mirrors the same philosophy: all of my containers/pods/workflows canary-build and test themselves before the ones I actually use in my homelab “production” get upgraded. You don’t always need a second physical copy of your hardware to have an appropriate testing/canary system.
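
    The gate itself can be as simple as this hypothetical pre-upgrade check (the hostname and email address are placeholders):

        #!/usr/bin/env bash
        # Abort the bare-metal upgrade unless the canary VM, which upgraded
        # yesterday, still answers pings.
        CANARY="canary-proxmox.lab.internal"

        if ! ping -c 3 -W 2 "$CANARY" >/dev/null 2>&1; then
          echo "Canary $CANARY unreachable; skipping host upgrade." \
            | mail -s "homelab: upgrade aborted" admin@example.com
          exit 1
        fi

        apt-get update && apt-get dist-upgrade -y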


    End-user applications like Firefox will generally be the same latest version across distros, but system libraries might be a few versions apart. Security patches are typically written for several major versions of a library or daemon at the same time, so features may differ while the security coverage stays largely the same.

    That’s the major difference from one distro to another: they have different philosophies about what to include and which major versions to ship. Debian, for example, is much more reluctant to upgrade a package unless there’s large demand for a new feature; the theory is that the system is more stable and consistent to use that way.

    Ubuntu, on the other hand, ships much more modern versions of libraries because it wants to be more hip and current, expecting users to learn new things more often; the maintainers think the new features are worth it and want to support all the things.
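
    An easy way to see that difference in practice, on any Debian or Ubuntu box (package names here are just examples):

        # Which version of a library does this release ship, and from where?
        apt-cache policy libssl3

        # Compare a package across Debian releases without leaving the shell
        # (rmadison ships in the devscripts package):
        rmadison openssl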