Stuttering and texture pop-in make me immediately wonder if your SSD shit itself.
Maybe see if there’s anything in the system logs and/or SMART data that indicates that might be a problem?
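If you want something concrete to poke at (assuming smartmontools is installed; swap in your actual drive for /dev/nvme0n1):
sudo smartctl -a /dev/nvme0n1                               # health status, error counts, wear level
sudo journalctl -k -b | grep -iE 'i/o error|nvme|ata'       # kernel messages from this boot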
Cloudflare tunnels are the thing you’re looking for, if you’re not opposed to cloudflare.
You run the daemon on your local system, it connects to cloudflare, and presto, you’ve bypassed this entire mess.
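Rough sketch, assuming cloudflared is installed and you’re exposing a web app on local port 8080 (the tunnel name and hostname below are placeholders):
cloudflared tunnel --url http://localhost:8080    # throwaway quick tunnel, prints a trycloudflare.com URL
For something permanent:
cloudflared tunnel login
cloudflared tunnel create mytunnel
cloudflared tunnel route dns mytunnel app.example.com
cloudflared tunnel run mytunnel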
I think the thing a LOT of people forget is that the majority of steam users aren’t hardcore do-nothing-but-gaming-on-their-pc types.
If you do things that aren’t gaming, your linux experience is still going to be mixed and maybe not good enough to justify the switch: wine is good, and most things have alternatives, but not every windows app runs, and not every app alternative is good enough.
Windows is going to be sticky for a lot longer because of things other than games for a lot of people.
It is mostly professional/office use where this makes sense. I’ve implemented this (well, a similar thing that does the same job) for clients that want versioning and compliance.
I’ve worked with/for a lot of places that keep everything because disks are cheap enough that they’ve decided it’s better to have a copy of every git version than not have one and need it some day.
Or places that have compliance reasons to have to keep copies of every email, document, spreadsheet, picture and so on. You’ll almost never touch “old” data, but you have to hold on to it for a decade somewhere.
It’s basically cold storage that can immediately pull the data into a fast cache if/when someone needs the older data, but otherwise it just sits there forever on a slow drive.
…depends what your use pattern is, but I doubt you’d enjoy it.
The problem is the cached data will be fast, but the uncached will, well, be on a hard drive.
If your cache drive is big enough to hold your OS and the data you actually use, it’s great - but if your SSD is already that big, why bother with the caching layer in the first place?
If the cache drive isn’t big enough to hold your commonly used data, then it’s absolutely going to perform worse than just buying another SSD.
So I guess if this is an ‘I keep my whole Steam library installed, but only play 3 games at a time’ kind of use case, it’ll probably work fine.
For everything else, eh, I probably wouldn’t.
Edit: a good use case for this is more the ‘I have 800TB of data, but 99% of it is historical and the daily working set is just a couple hundred gigs’ on a NAS type thing.
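If you do want that kind of SSD-in-front-of-a-big-slow-disk layout on Linux, lvmcache is one way to do it. Very rough sketch, assuming a volume group called vg that already holds the HDD-backed LV bulk, plus an SSD partition to donate (all names here are placeholders):
sudo vgextend vg /dev/nvme0n1p1                         # add the SSD to the same VG
sudo lvcreate -n cache0 -L 200G vg /dev/nvme0n1p1       # carve a cache LV out of it
sudo lvconvert --type cache --cachevol cache0 vg/bulk   # attach it as a cache to the slow LV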
I’ll admit to having no opinion on windowing systems.
If the distro ships with X, I use X, and if it ships with Wayland, I use Wayland.
I’d honestly probably not be able to tell you which of the systems I’ve been using run which, and that’s a good thing: if you can’t tell, then it probably doesn’t matter anymore.
One thing you probably need to figure out first: how are the dgpu and igpu connected to each other, and then which ports are connected to which gpu.
Everyone does funky shit with this, and you’ll sometimes have dgpus that require the igpu to do anything, or cases where the internal panel is only hooked up to the igpu (or only the dgpu), and the hdmi and display port and so on can be any damn thing.
So uh, before you get too deep in planning what gets which gpu, you probably need to see if the outputs you need support what you want to do.
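A few generic ways to poke at it (the xrandr one assumes you’re on X):
lspci -nn | grep -iE 'vga|3d'              # confirm both GPUs show up, note which is which
xrandr --listproviders                     # which provider (GPU) owns which outputs under X
grep -H . /sys/class/drm/card*-*/status    # which physical connectors are live, and on which card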
If you don’t read the man pages, we’re going to ban you from the chat.
Better performance, but only for gay games, such as uh, well uh and um…
generally replaced by systemd’s journald service
Basically this, and quite a long time ago. Anything even remotely modern (and by that I mean like, the last decade or so) is either using systemd, or in the case of debuntu, rsyslog.
Wonder what kind of funky environment is using syslog-ng, and to what scale so that there’s literally a ‘syslog-ng engineer’ job posting.
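If anyone wants to check what their own box is actually running (assuming systemd):
systemctl status systemd-journald rsyslog syslog-ng    # see which of these is actually active
journalctl -b -p err                                   # and pull this boot’s errors straight from journald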
Yep, and there’s a shocking number of them to pick from: https://www.qemu.org/docs/master/system/qemu-cpu-models.html
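You can also browse them from the command line (disk.img is just a placeholder):
qemu-system-x86_64 -cpu help                        # list every model this QEMU build knows about
qemu-system-x86_64 -enable-kvm -cpu host disk.img   # or skip the list and pass the host CPU straight through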
So, this is a ~15 year old laptop?
The first two things that immediately come to mind when you’re kernel panicking are bad RAM and bad CPU temperatures.
Thermal paste doesn’t last forever, and it’s worth checking if your CPU or GPU are overheating, and repasting if so.
And, as always, a memtest is a quick and easy step to rule that out - I’d say half the “weird crashes” I’ve ever seen end up being bad RAM, and well, at least it’s cheap and easy to replace?
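Quick checks before tearing anything apart (lm-sensors and memtester assumed installed; a full memtest86+ run from the boot menu is the more thorough option):
sensors                        # current CPU/GPU temps
journalctl -k -b -1 -p err     # kernel errors from the boot that panicked (needs persistent journaling)
sudo memtester 2048M 2         # quick userspace RAM test: 2GB, two passes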
Honestly, I would have assumed 1080p was an acceptable default assumption.
Is this just a case of older hardware, or are there still laptops that don’t have 1080p panels at this point?
A quick review of stuff on BestBuy indicates that $150 laptops have 1080p displays now, and anything more than that does as well, so uh, what devices are still using these?
Does !12345:p do what you want?
Edit: that also makes hitting the up arrow result in whatever command that was, so if you wanted to edit the line or whatever, you could !12345:p, up, then edit and execute.
Uh, are you sure the shell you’re using is bash and not zsh or something else?
Bash is indeed just !12345.
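To spell it out (bash-specific; the number is whatever history shows you):
history | tail      # find the number of the command you want
!12345              # run history entry 12345 again
!12345:p            # print it (and push it onto the history) instead of running it, then up-arrow to edit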
Why not save time and do it the other way?
Install the minimal/netinstall image, and then add what you need.
You’ll probably spend less time adding than trying to figure out what’s installed that you do or don’t need and trying to remove random packages without breaking anything.
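e.g. on a Debian netinstall it’s basically just (package picks here are arbitrary examples):
sudo apt update
sudo apt install --no-install-recommends xorg openbox firefox-esr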
two commands: dd and resize2fs, assuming you’re using ext4 and not something more exotic.
one makes a block-level copy of one device to another like so: dd if=/dev/source-drive of=/dev/destination-drive
the other is used to resize the filesystem from whatever size it was to whatever size you tell it (or, if you don’t give it a size, the whole partition)
the dd is safe as long as you get if= and of= the right way around, and the resize2fs step is the one that can break things - but you’d still have the data on the original drive, so you could always start over if it does. i’d unplug the source drive before you start doing any of the expansion stuff.
dd then resize the fs?
Edit: one caveat here I forgot: a block-level dd copy clones the filesystem UUIDs along with everything else, so an fstab that uses UUIDs will still match - but that also means you shouldn’t leave both drives plugged in at once, or you’ll have two filesystems claiming the same UUID and it’s ambiguous which one actually gets mounted.
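Putting it all together (sdX/sdY are placeholders - triple-check them with lsblk, because swapping if= and of= will wipe the wrong drive):
lsblk                                                              # confirm which disk is which
sudo dd if=/dev/sdX of=/dev/sdY bs=4M status=progress conv=fsync   # clone old drive onto new drive
# grow the partition on /dev/sdY to fill the disk (gparted, parted resizepart, or growpart), then:
sudo e2fsck -f /dev/sdY1       # resize2fs wants a clean fsck first
sudo resize2fs /dev/sdY1       # no size argument = grow to fill the partition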
Also, if you like htop, you’re going to love btop.
One thing I ran into, though it was a while ago, was that disk caching being on would trash performance for writes on removable media for me.
The issue ended up being that the kernel would keep flushing the cache to disk, and while it was doing that, none of your transfers were happening. So it’d end up doubling (or more) the copy time, because the write cache wasn’t actually helping on removable drives.
It might be worth remounting without any caching, if it’s on, and seeing if that fixes the mess.
But, as I said, this was a few years ago, so it may no longer be the case.
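If you want to try it, something like (mount point and sizes are just examples):
sudo mount -o remount,sync /media/usb                                    # write straight through instead of caching
sudo sysctl vm.dirty_bytes=67108864 vm.dirty_background_bytes=16777216   # or keep the cache but cap it so flushes stay short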