The disk in my first computer also had to be partitioned, because MS-DOS 3.3 couldn’t address my 40 MB hard drive as a single partition… it only supported a maximum of 32 MB per partition.
I was answering under the assumption/in the context of “Amazon wants to release an Android-based OS that doesn’t contact any of Google’s services”.
So, when I said “easy enough to remove” that was relative to releasing any commercial OS based on AOSP, as in: this will be one of the smallest tasks involved in this whole venture.
They will need an (at least semi-automated) way to keep up with changes from upstream and still apply their own code changes on top of that anyway, and once that is set up, a small set of 10-ish 3-line patches is not a lot of effort. For an individual getting started and trying to keep all of that up to date on their own it’s a bit more of an effort, granted.
The list you linked is very interesting, but I suspect that much of it isn’t in AOSP; my guess is that at most the items up to (and excluding) the Updater even exist in AOSP.
A cop-out or a coping mechanism. Employers steal so much from employees: time, wages, sense of purpose, sometimes even health. And most of us don’t have good ways to stop them (because society). So stealing a bit back might actually help you feel less hopeless.
Yes, but those minor traces are easy enough to remove, especially if you don’t care about being “certified” by Google (i.e. are not planning to run the Google services).
Well, to me it looks pretty much exactly like feddit.de, except that a few menu items are named differently and there are a few more of them. But luckily everything is federated anyway, so everyone can use whatever suits them best.
Kbin is going more in that direction at the moment. In fact, I’m posting from https://kbin.social/ right now and can also access Mastodon content.
Indeed. I don’t know what I would do without subsurface…
I just checked it out. That licensing documentation is a mess. They say it’s released under the AGPL, but not all of it? So what they are effectively saying is that the product as a whole is not actually under the AGPL. I wonder if their “freeware” part can actually be removed without major loss of functionality. Because if that’s possible, then you could simply rebundle the AGPL part.
But I suspect it exists exactly to “taint” the open source nature of the product.
Note that they said “not intended” and not “not allowed”. You are perfectly within your rights to use the program under the GPL without licensing it otherwise.
But the company would prefer if you paid for a license (and support). If your kind of use weren’t allowed, they would have said as much, but they didn’t.
This is a common business practice with open source software and I don’t particularly think it’s “wrong”, but the fact that they are apparently trying to use confusion to make it look like you have to buy a license for commercial use is very icky in my opinion (but unfortunately also very common).
Not to diminish what Valve has achieved there (it’s an amazing PC/console hybrid, love mine).
But a smooth experience without any hitches is much easier to achieve when your hardware variation basically boils down to “how big is the SSD”. The fact that all Steam Decks run the same hardware helps keep things simple.
I guess that’s also the reason why they are not (yet?) pushing the new SteamOS as a general-purpose distribution for everyone to use. Doing that would/will require much more manpower.
Not OP, but as someone using Ubuntu LTS releases on several systems, I can give my reason: having the latest & greatest release of all software available is neat, but sometimes the stability of knowing “nothing on my system changes in any significant way until I ask it to upgrade to the next LTS” is just more valuable.
My primary example is my work laptop: I use a fairly fixed set of tools and for the few places where I need up-to-date ones I can install them manually (they are often proprietary and/or not-quite established tools that aren’t available in most distros anyway).
A similar situation exists on my primary homelab server: it’s running Debian because all the “services” are running in docker containers anyway, so the OS just needs to do its job and stay out of my way. Upgrading various system components at essentially random times runs counter to that goal.
It’s a product that Atlassian is selling: https://www.atlassian.com/software/statuspage
Not to be confused with their statuspage for their services: https://status.atlassian.com/
Or the status page for their status page system (which apparently has an ongoing incident): https://metastatuspage.com/
I fully appreciate the desire for more civil discussion.
But please be aware that tone policing has been used as an offensive weapon against many marginalized groups: “We get that you want to fight for your rights, but could you please do that in the form of civil discourse?” That phrase is almost always heard when years of civil discourse have led nowhere.
Yes, I agree about the trust and that it’s effectively equivalent for most users. But I read OP’s reply as asking specifically for “a more automatic way of checking (that it was built from source cleanly)”.
I think expecting that this has already been cleanly solved years ago is not unreasonable (until you realize how many rabbit holes this whole topic has to go down to work well).
AFAIK most Linux distributions don’t (yet) have fully reproducible builds (simply because it’s a HUGE ask). For Debian, for example, it’s still ongoing work: https://wiki.debian.org/ReproducibleBuilds
As the article/SO answer posted by cwagner tells you, you effectively can’t, because a “trojan” could be injected at many different levels, and even self-compiling the source code depends on some compiler binary that you have to get from somewhere (build your own compiler, you say, but what do you use to compile THAT?).
In practice, for most people the correct answer is “get the binary from your distribution’s normal repository”. By using a given distribution you already implicitly trust that distribution (because if you don’t, why use it?), so non-core software from their repository should also be considered trustworthy (at least in the sense that no additional trojans were introduced that aren’t in the source).
That doesn’t really help with Windows, though. There your best bet is to get a binary from as close to the original authors as possible, ideally from their project home page.
Fundamentally, there’s no need for the user/account that saves the backup somewhere to be able to read it, let alone change or delete it.
So ideally you have “write-only” credentials that can only append/add new files.
How exactly that is implemented depends on the tech. S3 and S3-compatible systems can often be configured so that data straight up can’t be deleted from a bucket at all.
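As a rough illustration, here is a minimal sketch of such “write-only” credentials on S3 using boto3; the bucket name, IAM user and policy name are all hypothetical, and real setups will differ:

```python
import json

import boto3

BACKUP_BUCKET = "example-backup-bucket"  # hypothetical bucket name

# "Write-only" policy: the backup user may upload new objects, but gets
# no s3:GetObject and no s3:DeleteObject, so leaked credentials can't be
# used to read or destroy existing backups.
write_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUploadOnly",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BACKUP_BUCKET}/*",
        }
    ],
}

# Attach the policy to the (hypothetical) IAM user the backup job runs as.
iam = boto3.client("iam")
iam.put_user_policy(
    UserName="backup-writer",
    PolicyName="backup-write-only",
    PolicyDocument=json.dumps(write_only_policy),
)
```

Note that s3:PutObject alone still allows overwriting an existing key, so for true append-only behaviour you’d combine this with bucket versioning or Object Lock, which is the “can’t be deleted at all” configuration mentioned above.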
Just a little addition: the majority of things that people associate with Linux as per your first item are actually shared by many/most Unix-like OSes and are defined via the various POSIX standards.
That’s not to say that Linux doesn’t have its own peculiarities, but they are fewer than many people think.
You know that you too are writing in a script, right?