I’m surprised no one’s mentioned the security implications. Mounting with the nosuid and nodev options can blunt rootkit or privilege escalation exploits.
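For anyone wondering what that looks like in practice, a sketch of /etc/fstab entries (the devices and mount points here are made up; adapt them to your own layout before using):

```
/dev/sdb1  /home  ext4   defaults,nosuid,nodev         0  2
tmpfs      /tmp   tmpfs  defaults,nosuid,nodev,noexec  0  0
```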
Flatpak is itself a package manager.
That duplicate of your folder under /run is due to filesystem links (or more likely a FUSE mount; I’ve never actually looked into how Flatpak works). Either way, they aren’t copies of the data.
The free tier is super limited and super easy to accidentally break out of. I had a single file in S3, but because my logging settings were wrong, I blew past the free tier with junk logs.
The t2.micro EC2 instances are fine, but you need to be very careful about their storage and network egress.
Best use I’ve had for AWS that has managed to stay within the free limits has been Lambda. I converted a couple of self-hosted Discord bots to a few Lambda functions, and they work great (sketch below). Plugging it into CloudFormation and tying up CI/CD with CodePipeline and the like were overkill, but good learning experience.
I don’t think there’s any ECS free tier, but you can fit a private container repository in the free S3 limits as well.
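For the curious, the Lambda side of a Discord bot really is tiny. A rough sketch, assuming an API Gateway proxy integration (a real deployment must also verify Discord’s Ed25519 request signature, e.g. with PyNaCl, or Discord will reject the endpoint):

```python
import json

def lambda_handler(event, context):
    interaction = json.loads(event["body"])

    if interaction.get("type") == 1:  # PING: Discord checking the endpoint is alive
        return {"statusCode": 200, "body": json.dumps({"type": 1})}  # PONG

    if interaction.get("type") == 2:  # APPLICATION_COMMAND: a slash command
        return {
            "statusCode": 200,
            "body": json.dumps({
                "type": 4,  # CHANNEL_MESSAGE_WITH_SOURCE: reply with a message
                "data": {"content": "hello from Lambda"},
            }),
        }

    return {"statusCode": 400, "body": "unhandled interaction type"}
```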
Don’t “declutter” manually. Use your package manager.
You’re going to want to look up things like symlinks, hard links, FUSE filesystems, and bind mounts, among other concepts. Your “whole directory” and other duplicates are artifacts of how the filesystem and process management work, and simply running fsearch or find over them is going to be confusing if you don’t know what you’re looking at.
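To make that concrete, here’s a rough sketch of how one blob of data can show up under several paths (the path is made up; point it at anything on your system):

```python
import os

p = "/some/indexed/path"  # hypothetical path

print(os.path.islink(p))       # True if this entry is just a symlink to somewhere else
st = os.stat(p)
print(st.st_nlink)             # >1 means other hard links share this same inode
print((st.st_dev, st.st_ino))  # two paths with the same (device, inode) are one file
print(os.path.ismount(p))      # True if a filesystem (e.g. a bind or FUSE mount) sits here
```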
One Unix concept that carries over to Linux is that everything is a file. Your shared memory space, process data, device driver interfaces, etc.: all of it is accessible somewhere in the same virtual filesystem tree as the actual files.
Because of this, there’s very little reason to have the whole filesystem indexed from root. If you’re worried about space usage, you want to work with packages through the package manager. If you’re worried about system integrity, you’ll want package validators.
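A quick sketch of what “everything is a file” means in practice; none of these are regular files on disk, yet they all read like files:

```python
with open("/proc/self/status") as f:   # this very process, exposed as a file
    print(f.readline().strip())        # e.g. "Name:   python3"

with open("/proc/uptime") as f:        # live kernel state, stored nowhere on disk
    print(f.read().strip())

with open("/dev/urandom", "rb") as f:  # a device driver behind a file node
    print(f.read(4))
```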
The above is accurate, and can be considered accurate for any directory at or below it as well.
As for /run, it’s also mounted in memory, so trying to “declutter” it won’t get you anywhere; everything just comes back on reboot.
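You can confirm that straight from the mount table (itself a virtual file); minimal sketch:

```python
# /proc/mounts is the kernel's live mount table. On a typical systemd
# distro this prints "tmpfs": /run lives in RAM and is recreated at boot.
with open("/proc/mounts") as f:
    for line in f:
        device, mountpoint, fstype = line.split()[:3]
        if mountpoint == "/run":
            print(fstype)
```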
Man, I use my Switch all the time. But I love little metroidvanias and smaller indie and single-player games. Any time I see something interesting on Steam, I’ll buy it on the Switch if it’s available.
I’ve also been using it to replay older stuff. The first Red Dead, the Arkham trilogy; I’m currently going through Nier: Automata again.
I actually want to learn enough code to contribute, but there’s this gap between “how to code” and “how to participate in a modern software project”.
Like, I’ve created plenty of little things. Discord bots, automation scripts, plenty of sysadmin stuff for work, etc. But then I clone a git repo because there’s a Home Assistant bug I’d like to fix, for example, and I’m immediately lost on where to start.
Still less than an equivalent RAID array, particularly if you consider that archives are rarely extracted in bulk; you usually pull just the specific records you need.
Guess it depends on how much you trust that Amazon is going to steal your data instead of doing the thing you’re paying them for, vs a house fire or media failure or whatever.
There are also pretty clear rules about unpaid bills; the data doesn’t just vaporize.
This is what we call a “risk assessment”, and imo if I must have that data available long-term, then a single copy on DVDs in a closet isn’t good enough.
I think you can technically do it, but retrieval is expensive. That isn’t the point of an archive, though.
OP said “archive”, not “backup”. Glacier is for data you need to keep but rarely touch.
us-east. Look specifically at Glacier, which is for long-term storage: near free to store, expensive to retrieve.
The data remains yours if you encrypt it. Someone else’s computer saves you all the time and effort of maintaining and monitoring hardware.
You want to use the actual services meant for this, S3 or Glacier or something, not just consumer cloud storage like Google Drive or Dropbox.
Self-hosting principles aside, is this data actually important? If so, then don’t fuck around with self-hosting it. Are you looking for the lowest cost? Then don’t waste a bunch of money spinning your own disks.
Amazon Glacier to guarantee availability and your own encryption to guarantee privacy; rough sketch below.
It’s currently running me about $4/month for around 10 TB that I don’t want to lose but just don’t want to deal with. An equivalent HDD solution would be around $500 up front; at roughly $48/year, that’s 10 years to break even, assuming zero disk failures and zero personal maintenance time.
Plus it’s guaranteed: inherent multiple copies, an SLA, and no worry about the service just disappearing. If they decide to shut down or raise prices or whatever, you can reevaluate and move.
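Rough sketch of the whole idea; the bucket and file names are placeholders, boto3 and the cryptography package are assumed, and a real multi-TB archive would be chunked and streamed rather than read into memory:

```python
import boto3
from cryptography.fernet import Fernet

# Encrypt locally first so the data stays yours. Keep the key offline;
# lose it and the archive is unrecoverable.
key = Fernet.generate_key()
with open("archive.tar", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())
with open("archive.tar.enc", "wb") as f:
    f.write(ciphertext)

# DEEP_ARCHIVE is the near-free-to-store, expensive-to-retrieve class.
s3 = boto3.client("s3")
s3.upload_file(
    "archive.tar.enc", "my-archive-bucket", "archive.tar.enc",
    ExtraArgs={"StorageClass": "DEEP_ARCHIVE"},
)
```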
Lol, imagine ever having considered Megaupload as your backup solution.
So… you’re afraid of the command that does the thing you’re trying to do?