Point being that OP must’ve installed Windows before and therefore should be able to build a computer hardware-wise?
Absolutely. Anything can be learned, and unless skills build on top of each other you can’t really compare their difficulty.
What downsides, though, right? “We” object to Ubuntu over matters like Canonical being a for-profit company or their choice of desktop environment. At the end of the day, who cares? If it works, it works, right?
It’s an interesting angle, the hostility thing. People in the know have largely fallen out of love with Ubuntu, but imho that’s not necessarily because Ubuntu declined in quality, just that so many “better” things have come up since Ubuntu 4.10. It is definitely a sound choice for non-techy people, maybe more than ever. Personally I’d prefer (almost) any contemporary desktop over GNOME these days, but I can definitely see the appeal for others in terms of its simple design language.
Basically you can turn any old laptop into a Chromebook these days using Linux, and most people, just like your parents, most definitely do not need more than a functional web browser. Basically a smartphone with a larger screen and a physical keyboard. Even if you don’t care about your privacy (or freedom from notification spam), why still pay the Microsoft tax?
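If you want to try that, the setup really can be minimal. A rough sketch, assuming a Debian-based distro (package names will differ elsewhere, and firefox-esr is just one browser choice):

```bash
# Bare-bones "browser appliance": a display server, a minimal window manager, and a browser
sudo apt install --no-install-recommends xorg openbox firefox-esr

# Have the browser start automatically with every Openbox session
mkdir -p ~/.config/openbox
echo "firefox-esr &" >> ~/.config/openbox/autostart
```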
Which are about as related as the knowledge required to mount drywall and the knowledge required to run a ham radio station. You tell me which is more complicated, but either way there are most certainly radio amateurs out there who don’t know the first thing about handiwork, and handymen who could barely find the on-off switch on a broadcast rig.
Yes, @upriver4458@sh.itjust.works, please save yourself dozens of future headaches by listening to this person.
Oftentimes (and this is more of a general statement) throwing into Google exactly what you would otherwise type into your shell of choice should get you on the right track, i.e. searching for “man systemctl”.
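And once you know the command’s name, the manual is right there locally too. A small sketch of what I mean (the “sshd” unit is just a stand-in for whatever service you’re poking at):

```bash
# Read the full manual page for systemctl without leaving the terminal
man systemctl

# Two common, read-only queries; "sshd" is only an example unit name
systemctl status sshd          # is the service running, what did it log recently?
systemctl list-units --failed  # did anything fail to start on this boot?
```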
As far as the inability to reboot goes, if a regular sudo reboot can’t bring the machine back up either, then this is probably a hardware issue outside the sphere of the operating system’s influence. I can’t say I’ve experienced something like that myself. I guess the closest I witnessed would be a computer that, when rebooted with an old USB keyboard plugged in, just refused to get past the POST screen. The keyboard worked fine if plugged in later, but the computer couldn’t reliably get through the boot process with the thing present. Maybe there’s a similar variable in your setup.
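If you want to rule out the software side before blaming hardware, the logs from the previous boot are usually the first place I’d look. A minimal sketch, assuming a systemd-based distro with persistent journald storage:

```bash
# List the boots the journal knows about; a reboot that never came back
# tends to show up as a short or missing entry
journalctl --list-boots

# Jump to the end of the previous boot's log to see how far the shutdown got
journalctl -b -1 -e
```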
Found the sister.
They often say it’s the oldest business in the world. Which might not be relevant to how we should treat it as a society today, but what seems obvious to me is that when you de facto criminalize and discourage something, the working conditions are going to suffer.
There probably isn’t a place in the world where it isn’t practiced, yet we love to pretend like we’re somehow past that. Not sure how much of that is based in religion and how much is just us being in denial of our own biology-based desires in a secular modern society. Either way, it is hurting people who are just as entitled to making a living as anybody else.
Not the guy you’re asking, but I agree. There would be no need for Falcon Sensor on every Windows machine deployed inside an enterprise (assuming that Falcon Sensor serves a purpose worth fulfilling in the first place) if the critical devices on their network were sufficiently hardened. The main problem (presumably the basis for such a solution existing) is that as soon as you have a human factor, people who must be able to access critical infrastructure as part of their job, there will be breakages of some kind. Not all of those must be malicious or grow into an external threat. They still need to be averted, of course.
I feel that CrowdStrike is an idea that seems appealing to those making technological decisions because it promises something that cannot be done by conventional means as we have known and deployed them before. I can’t say whether or how often this promise has ever enabled companies to thwart attacks at their inception, but again, I feel that in a sufficiently hardened environment, even with compromisable human actors in play, you do not need self-surveillance (at the deepest level of an OS) to this extent.
And to also address OP’s question: of course there is no need for this in a *NIX environment. There hasn’t been any significant need for antivirus of any kind anywhere in the UNIX-based world, including macOS. So really this isn’t about whether an anti-malware solution in itself can satisfy the needs of a company per se; the requirements very much follow the potential attack vectors that are opened up by the existing infrastructure. In other words, when your environment is Windows-based, you are bound to deploy more extensive security countermeasures. Because they are necessary.
Some may say that this is due to market share, but to those I say: has the risk profile of running a Linux-based server changed over the last 20 years? They certainly have become a lot more common in that timeframe. One example I can think of was a ransomware exploit on a Linux-based NAS brand, I think it was QNAP. This isn’t a holier-than-thou argument. Any system can be compromised. Period. The only thing you can ensure is that the necessary investment to break your system will always be higher than the potential gain. So I guess another way to put this is that in a Windows-based environment your own investment into ensuring that will always be higher.
But don’t get me wrong, I don’t mean to say Windows needs to be removed from the desks of office workers. Really, this failure and all these photographs of publicly visible bluescreens (and all the ones in datacenters and server rooms that we didn’t see) show that Windows has way too strong a foothold in places where plenty of smart people are employed to find solutions that best serve the interests of their employers, including interests (i.e. security and privacy) that they are unaware of because they can’t be printed on a balance sheet.
Not having DOS-Mode anymore must’ve been a bummer though.
Serious question: how do you get bored of Windows during its heyday?
My first experience with Linux was Ubuntu 4.10, and it seemed super cool and all, but I could’ve never switched fully in those days. And if we’re honest, most legit Linux users up until not too long ago were forced to have a dual-boot setup because so many things just hadn’t been universalized yet.
So just to illustrate where I’m coming from asking that question: my first personal computer (as opposed to the family PC) ran XP, and that was a pretty exciting time when it comes to market dominance and all the advantages that came with being a user of the biggest platform. Looking back, I just don’t see how I could’ve ever made that switch in the noughties, let alone the ’90s. The adoption just wasn’t there yet.
I can’t chime in on that specific angle, but I can on exactly the opposite one. I’d call myself an Arch guy, or Manjaro and Endeavour more specifically. But recently I started hearing more and more about Nobara. I own a Steam Deck and use GE Proton on there, which is from the same guy, so I said I wanna try Nobara, and I immediately felt at home. I’m not a big KDE fan, but really, the out-of-the-box Nobara experience when it comes to gaming felt, and still feels, so complete to me that I really couldn’t complain about a single thing.
It obviously won’t replace Arch in my homelab, but I don’t think I’ll ever consider anything else besides Nobara for my desktop again. Point being, I had next to zero practical Fedora experience up to that point. I tried Garuda before, which is also Arch-based and supposed to cater to gaming needs, but with that direct comparison I now feel like Nobara is the only distro that truly gets gaming. It’s SteamOS for the KBM-based desktop.
yea imagine if 0 was worth 1 all of a sudden
If Mullvad is not available as a Snap or Flatpak (two ways of installing self-sufficient, auto-updatable packages without dependencies on other packages), then you’re probably stuck with either adding this third-party repository (something which isn’t always recommendable either), which gives you automatic updates, or using a .deb installation file like you would probably prefer and then manually retrieving updates when needed.
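To make those two routes a bit more concrete, here’s a rough sketch for a Debian/Ubuntu-based system; the .deb filename below is just a placeholder for whatever you actually downloaded from Mullvad’s site:

```bash
# Route 1: check whether a Flatpak or Snap package even exists
flatpak search mullvad
snap find mullvad

# Route 2: install a downloaded .deb directly; apt pulls in its dependencies
# (replace the filename with the file you actually downloaded)
sudo apt install ./MullvadVPN-x.y_amd64.deb

# Updating later means downloading the newer .deb and running the same command again
```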
Anyways, others have told you as much already. What I’d like to add is that it is definitely worth it to learn to work the terminal. I get that there are many people looking for an alternative to Windows, or just an open approach to computing in general, without looking for added complexity. Who wants complexity, right? Whether such an experience exists in the Linux world is probably subjective. Ubuntu has definitely been a safe bet for the flattest learning curve since its inception in 2004. But it’s still a niche thing that won’t get user-friendly support from everyone (i.e. Mullvad).
So one could conclude that in order to truly be “free” (as in Free Software freedom) one needs to claim that freedom. You will fuck things up. You will learn from your mistakes. You will regroup and you will grow as a user and dare I say PC-curious person.
Does that mean you weren’t able to implement those changes, or that you didn’t want to go back from Wayland?