• 1 Post
  • 82 Comments
Joined 2 years ago
Cake day: June 15th, 2023

  • The latter is I think aiming for Linux ABI compatibility.

    I had never heard of Asterinas, but this sounds like the best approach to me. I believe alternative OSes need to act as (near) drop-in replacements if they want to be used as daily drivers. ABI-incompatible alternatives might be fine for narrower use cases, but most people wouldn't even try out a desktop OS that doesn't support most of the hardware and software they already use.


  • I’m not sure why they feel it’s Linus’ responsibility to make Rust happen in the kernel.

    That’s not what’s being said here, as far as I can tell. Linus is not expected to somehow “make Rust happen”. But as a leader, he is expected to call out maintainers who block the R4L project and harass its members just because they feel like it. Christoph Hellwig’s behavior should not be allowed.

    I’m not saying Marcan is necessarily correct, to be clear. It might well be that Linus chose to handle the issue in a quieter way. We can’t know whether Linus was planning on some kind of action that didn’t involve him jumping into the middle of the mailing list fight, eg contacting Christoph Hellwig privately. I’m merely pointing out that maybe you misunderstood what Marcan is saying.

    Or fork it and make a Rust Linux with blackjack and hookers, and boy, will everyone left behind feel silly that they didn’t jump on the bandwagon.

    That's what they're doing. But if you read the entire post carefully, he explains why maintaining a fork without eventually upstreaming it is problematic. And it's not like they're forcing their dream on the Linux project, because the discussions have already been had and Rust has officially been accepted into the kernel. So in the wider context, this is about individual maintainers causing friction against an agreed-upon project they don't like.



  • Do you have access to Signal servers to verify your claims by any chance?

    That's not how it works. The Signal protocol is designed in a way that the server can't have access to your message contents if the client encrypts them properly. You're supposed to assume the server might be compromised at any time. The parts you actually need to verify for safe communication are:

    • the code running on your device
    • the public key of your intended recipient
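
    As a toy illustration of why the server doesn't need to be trusted (this is generic public-key encryption using the PyNaCl library, not the actual Signal protocol, which adds X3DH and the Double Ratchet on top): the relay only ever handles ciphertext, which is why the client code and the recipient's key are what actually need verifying.

```python
# Toy end-to-end encryption sketch with PyNaCl (not the real Signal protocol).
# The point: everything the "server" sees is opaque ciphertext.
from nacl.public import PrivateKey, Box

# Each client generates its key pair locally; private keys never leave the device.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts with her private key and Bob's *verified* public key.
sending_box = Box(alice_private, bob_private.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# This is all a relay server would ever store or forward.
print("server sees:", ciphertext.hex())

# Only Bob, holding his private key, can decrypt.
receiving_box = Box(bob_private, alice_private.public_key)
print("bob reads:", receiving_box.decrypt(ciphertext))
```

    In Signal itself, comparing safety numbers with your contact is the "verify the public key" step.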

  • Yeah, that section is bad.

    For one, it has the classic vibe of "if you want to keep the Nazis out, you're the one who's exclusionary".

    But also, how is refusing to engage on a platform “shutting out a significant portion of [the] community”? That sounds backwards to me. Blocking people from engaging with Debian on its own platforms would be shutting them out. The implication in the article is that Debian is obligated to be unconditionally present on every social platform its users might be on.



  • Most of Proton code is Wine. So basically if you have Wine in your system, library dependencies are not an issue anymore, apart from DLLs that some games require

    If I have wine on my system and try to run steam-managed proton without any sort of runtime or container, then I’m running proton on different versions of libraries than the ones it was compiled for and tested on. Proton also has additional components which might mean additional dependencies, so your statement is false to begin with.

    Why are they doing a fork instead of contributing?

    The fork is open source. As far as I know, some contributions do get merged into wine. Valve is also funding work from Collabora which is contributed directly into wine. They cannot contribute the entirety of proton to wine because wine does not want all their contributions. This is a very common situation when someone wants to use an open source project but their goals don't align.

    But I expect it will be easier to push back on using containerization in Proton, than making Valve allow us such control

    Valve is never going to rip out a solution that is working great for them and risk causing issues for customers for no good reason. Thinking that Valve are more likely to remove containerization than they are to allow you to modify the container is, frankly, delusional. It’s also completely irrelevant, as I’ve already said. If Valve wants to “fuck us up” then they’re going to do it. Steam is a proprietary piece of software that supports DRM for all your (also proprietary) games, which are stored on the cloud. You have no control over your games, but containers have nothing to do with it. And if they did, and Valve really wanted to pull a trick on us, asking them to remove the containers would make even less sense…


  • We are going through more or less Wine anyway, the libraries on the system don’t matter as long as Wine compiles

    Which wine though?

    The one pre-packaged by your distro? That doesn’t work because Valve needs to control the version you use and to provide additional stuff not part of vanilla wine.

    The one that's part of Proton, built and delivered to your system by Valve? They would have to compile and support it for every set of dependency versions out there.

    One of the core features of containers is process and process memory separation from host.

    As far as container technology is concerned, the isolation is configurable. pressure-vessel is most likely using (possibly indirectly) namespaces and/or cgroups to achieve the isolation. I don’t see a technical reason that you can’t disable the isolation of shared memory or any other resource. The issue is whether you are given access to disable it.

    According to the docs the runtime is based on flatpak and uses bubblewrap and libcapsule. I don’t know about libcapsule, but I recall that bubblewrap has granular control over what resources it isolates.
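
    As a rough illustration of how opt-in that isolation is (a sketch assuming bwrap is installed; these are stock bubblewrap flags, not necessarily what pressure-vessel actually passes):

```python
# Sketch: bubblewrap isolation is opt-in per resource (assumes bwrap is installed).
# Stock bwrap flags for illustration only, not pressure-vessel's actual invocation.
import subprocess

cmd = [
    "bwrap",
    "--ro-bind", "/usr", "/usr",   # expose the host /usr read-only
    "--dev", "/dev",               # minimal /dev
    "--proc", "/proc",             # fresh /proc
    "--unshare-pid",               # new PID namespace...
    # "--unshare-net",             # ...but the network stays shared because this is omitted
    "ls", "/",                     # only what was explicitly mapped is visible inside
]
print(subprocess.run(cmd, capture_output=True, text=True).stdout)
```

    Every namespace and mount has to be asked for explicitly; leaving a flag out leaves that resource shared with the host.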

    We have no control over what they put in those containers.

    Apparently, you can modify the container as shown here. But there’s no reason why you shouldn’t be able to install custom containers alongside the default ones in the same way that you can install custom proton versions. Steam just doesn’t provide the interface for it.

    Once they disable the PRESSURE_VESSEL_SHELL=instead we will have no insight into what’s inside.

    There already exists an alternative that is “more likely to be extended in future” rather than being removed as shown here. But I believe you would always be able to gain access to the container because it remains a chroot + namespace + cgroup isolation, all of which you can control on your system.
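
    To illustrate that it's plain kernel machinery: every containerized process exposes its namespaces under /proc on the host, which is exactly what tools like nsenter use to enter them. A minimal sketch (the target PID would be whatever sandboxed process you pick; here it inspects itself so it runs as-is):

```python
# Sketch: container isolation is just kernel namespaces, and the host can see them.
import os

TARGET_PID = os.getpid()  # substitute the PID of a sandboxed process, e.g. a running game

ns_dir = f"/proc/{TARGET_PID}/ns"
for entry in sorted(os.listdir(ns_dir)):
    # Each link identifies a namespace (mnt, pid, net, ...) the process lives in;
    # nsenter(1) uses exactly these files to join a running container's namespaces.
    print(entry, "->", os.readlink(os.path.join(ns_dir, entry)))
```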

    and app developers neither have!

    App developers don’t control what’s on your system either. The container is a huge improvement for them because it at least gives them a known target to build for. They can still bundle dependencies in any way that they would on a non-containerized system. There’s no loss of control from their perspective.

    if it doesn’t work for some reason (with Wine I don’t really see it happening as what we run doesn’t rely on our OS libraries directly), you can create chroot, additional library packages with old versions, etc.

    That's what pressure-vessel is, and as shown above you can modify it. And if you couldn't, it would be a tooling issue, not an inherent container disadvantage.

    Worst case scenario, Linux community will figure something out

    No, they won't. Compatibility significantly increased after Valve got involved. In fact, the Linux community is porting pressure-vessel outside of Steam, in the form of umu, so it can be used across different launchers. The community is headed towards using pressure-vessel for everything.

    Now I replied to each claim individually, but it’s not really about any specific point you’re making. The general idea is that there’s nothing inherent to container technology that prevents you from tinkering with it. Anything that you can’t do currently is because Steam is not designed to allow you to do it. It’s got nothing to do with whether Steam uses containers or not. Any control that you’ve lost over your system is because you’re using a proprietary app. They could remove the containers and still prevent tinkering, eg by using a bundled wine with no way for you to modify it or its launch options. It’s not about what Steam does, but about how it does it.



  • No way. Containers are absolutely necessary to provide reliability across a wide range of distros and to keep games working in the future.

    It makes running additional programs harder (opentrack for example)

    Then we need better tooling and documentation to interact with the container, not to get rid of containers. I don't see any technical limitation that would prevent your use case. It's just not implemented, or maybe simply undocumented.

    our computers less ours

    How so? The end result is probably the opposite. Without the containers Steam would be less reliable on unsupported distros, which might mean your only choice would be to use Ubuntu LTS. That would be a much bigger loss of control.


    It's a custom solution called pressure-vessel, which seems to be based on flatpak. You can read about it here. This is used to create a reproducible Linux environment and has nothing to do with the Windows translation layer. They run wine (proton) inside the container as you would expect.

    There is a recent effort to port this solution outside of Steam in the form of umu. As far as I know it's in a working state, but I don't know if it's at feature parity with Steam, especially on the game-specific fixes front. The end goal is to be a universal launcher that can be used from all frontends, so that all Windows games run reliably and identically regardless of which GUI you use to manage your games.

    EDIT: welp, I just now noticed this info has already been posted by another user 🤷



  • My question: How do I actually physically notice the difference between these kernels?

    Generally, you don't. You can look for benchmarks to try and find a difference between them, but if you don't notice a difference in your day-to-day tasks, then it's all the same. In my experience you should pick a kernel based on what you need from it. For my needs, this is how the kernels differ:

    • Generic kernel: a sane default for most regular users
    • LTS: only makes sense if you’re worried about regressions in the generic kernel causing issues, and only viable if you can afford to stay behind on hardware driver updates, ie you use old hardware and/or optimal performance is not required
    • Zen: sometimes better for gaming, but often indistinguishable from the generic kernel
    • Realtime: rarely what you want; it sounds "faster", but it's really optimized for very specific use cases, and if yours isn't among them you'll see the same or worse performance
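
    If you just want to confirm which flavour you're actually booted into, the kernel release string tells you. A small sketch (the "-lts"/"-zen"/"-rt" suffixes follow Arch-style package naming; other distros name theirs differently):

```python
# Sketch: identify the running kernel flavour from its release string.
# The suffix convention ("-lts", "-zen", "-rt") is Arch-style; adjust for your distro.
import platform

release = platform.release()  # e.g. "6.9.3-zen1-1-zen"
flavour = "generic"
for tag, name in (("zen", "Zen"), ("lts", "LTS"), ("rt", "realtime"), ("hardened", "hardened")):
    if tag in release.split("-"):
        flavour = name
        break
print(release, "->", flavour)
```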



  • These are valid concerns but to me they sound more like lack of tooling rather than inherent disadvantages of immutable distros. Linux distros have not historically been designed from the ground up for immutability and it makes sense that there are issues that aren’t handled optimally. Surely we can come up with clean and simple solutions to basic problems like setting up daemons and drivers if we work on it!




  • The sooner the screen stops moving the sooner your eyes can lock on, focus and read.

    On the other hand, if I'm reading through a command's output and searching for something, abrupt movement of the contents makes me lose track of where I am, and it costs more time to reorient myself than the smooth scrolling animation would take to play out. More importantly (to me), it takes less mental effort as well. It's just a more comfortable experience. Ever since I switched to neovide instead of plain nvim, I find myself enjoying long coding sessions much more.

    It sounds like you just might not be negatively affected by the abrupt movement as much as some of us are. You might not get why we care about smooth scrolling because it happens to not do anything for you. That's fine, and a good implementation would allow the user to toggle it on/off based on their needs.

    Also you could scroll and end up with half a line visible on the top or bottom, which is just kinda weird and wasting space.

    No, I imagine that’s not the way most terminal emulators would implement it. Scrolling would still be done in whole lines, it would just animate smoothly towards the final position rather than jump instantly to it. You would not be able to end up on a half-line or something.
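
    Roughly like this (a hypothetical sketch of the idea, not any particular terminal's code): scroll events move a whole-line target, and only the drawn offset is animated toward it.

```python
# Sketch of smooth scrolling toward whole-line targets (illustrative only).
LINE_HEIGHT_PX = 16

target_line = 0        # logical scroll position, always a whole number of lines
drawn_offset_px = 0.0  # what actually gets rendered each frame

def scroll(lines: int) -> None:
    """A wheel/keyboard event moves the target by whole lines; rendering catches up."""
    global target_line
    target_line += lines

def render_frame(dt: float, speed: float = 12.0) -> None:
    """Each frame, ease the drawn offset toward the whole-line target."""
    global drawn_offset_px
    target_px = target_line * LINE_HEIGHT_PX
    # exponential ease-out: cover a fraction of the remaining distance per frame
    drawn_offset_px += (target_px - drawn_offset_px) * min(1.0, speed * dt)
    if abs(target_px - drawn_offset_px) < 0.5:
        drawn_offset_px = target_px  # settle exactly on a whole line

# e.g. scroll(3), then call render_frame(1/60) every frame until it settles on line 3
```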


    Yes, this is normal and it's a good thing (unless you've come across a bug). I don't know exactly what app the screenshot is showing, but I'm guessing that the caching shown refers to the filesystem cache. The kernel keeps a cache of files you are likely to access again so that it doesn't have to read them from storage a second time. So what you're seeing here is that some memory contents were moved to swap to make room for filesystem cache. This is because the kernel believes you're more likely to access those files again rather than the memory contents. If it's right, then this is a performance improvement despite the fear surrounding swap usage.

    Setting a low non-zero swappiness value tells the kernel that memory contents have priority over filesystem cache for remaining in RAM, or conversely that file cache is more likely to be evicted from RAM. A value of 100 would mean that they have equal priority. So that memory content must have been very stale to be evicted despite having a significantly higher priority to reside in RAM.
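
    For reference, swappiness is just a sysctl you can read and change at any time; a minimal sketch (the procfs path is standard on Linux, and writing to it requires root):

```python
# Sketch: reading and (with root) setting vm.swappiness via procfs.
SWAPPINESS = "/proc/sys/vm/swappiness"

with open(SWAPPINESS) as f:
    print("current swappiness:", f.read().strip())  # the default is usually 60

# Writing requires root; equivalent to `sysctl vm.swappiness=100`.
# Make it persistent with a drop-in under /etc/sysctl.d/ rather than at runtime.
try:
    with open(SWAPPINESS, "w") as f:
        f.write("100")
except PermissionError:
    print("need root to change vm.swappiness")
```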

    So:

    • don’t worry about swap usage unless you’re experiencing actual performance issues
    • for SSDs the value should be close to 100
    • for HDDs it should be low
    • if you’re using both on your system, the default value of 60 is probably a decent approximation of the optimal value

    Source: https://chrisdown.name/2018/01/02/in-defence-of-swap.html