i’m lizard

  • 0 Posts
  • 24 Comments
Joined 5 months ago
Cake day: June 21st, 2024


  • Sorry, I’ve had a (self-imposed) busy week, but I have to admit, that one also has me rather stumped. As far as I can tell, your second entry should work: if the device is visible in /dev/mapper under a name, you should be able to mount it under that name.

    The only thing I can think of is that some important module like the ext4 module might be missing somehow? You can get pretty confusing errors when that happens. Dracut is supposed to parse /etc/fstab for everything needed to boot, and maybe it’s not recognizing your root for some reason. dmesg might have some useful info at the end after you try to mount it. If that’s what’s happening, you could try adding add_drivers+=" ext4 " to your dracut.conf and regenerating the initramfs (the spaces are important!). But if that’s not it, then I’m probably out of ideas.
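
    If it does turn out to be that, the whole dance would be something like this (a sketch; the image path and kernel version handling vary by distro):

      # /etc/dracut.conf (or a drop-in under /etc/dracut.conf.d/)
      add_drivers+=" ext4 "

      # rebuild the initramfs for the running kernel
      dracut --force /boot/initramfs-$(uname -r).img $(uname -r)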


  • I think you should check your root= line and add rd.luks.uuid= so Dracut actually opens the container. By default, Dracut will open the root FS as /dev/mapper/luks-abcdef... based on the LUKS container UUID, which you can get with cryptsetup luksUUID. /dev/mapper/root is just never going to show up unless you’ve assigned a custom name with the barely documented rd.luks.name, and I don’t see that in your setup. The cryptroot and cryptdm parameters aren’t used by Dracut either.

    With all of that missing it’s just gonna wait for that /dev/mapper/root to magically show up out of nowhere, without ever trying to open it.

    A correct cmdline will probably look something along the lines of root=/dev/mapper/luks-<uuid> modules=sd-mod,usb-storage,ext4 rootfstype=ext4 rootflags=rw,relatime rd.luks.uuid=<uuid> and once opening with passphrase works, you can start to mess with rd.luks.key=/awesome.key (and readd quiet when done debugging, if you want it that way).
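
    Concretely, assuming the LUKS container is on /dev/sda2 (adjust as needed; <uuid> stands in for whatever it prints):

      cryptsetup luksUUID /dev/sda2   # prints <uuid>
      # then on the kernel cmdline:
      #   rd.luks.uuid=<uuid> root=/dev/mapper/luks-<uuid>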

    The ldconfig errors and the missing modules should be fine. musl’s ldconfig is just a bit different, but it also isn’t required in quite the same way. I don’t think you should need to mess with modules manually; I don’t think you’re using LVM’s userland for your setup, just the device-mapper kernel modules, and Dracut will pull all the necessary bits in for you if you’re setting it up for LUKS.


  • Dracut may already have this functionality built in via rd.luks.key, so a custom module would really only make sense if you’re trying to do more than that. You can probably get away with just using rd.luks.key if you just want it to work, but if you want to customize stuff:

    I suspect your module is running well after the point where the device was already supposed to be opened with cryptsetup. The way the default crypt module handles it is by setting up udev configuration in a very early phase, then having udev request the password a little later, once it finds the device it’s trying to open, repeating until all devices are ready. It’s a complex mechanism compared to Alpine’s straightforward script, but it’s much more flexible when it comes to ordering things like RAID/network devices/LUKS/etc.

    The result is that your code would have to run much earlier. There’s some documentation on how hooks work, and the builtin rd.luks.key / keydev handler runs at cmdline 10. That’s well before your pre-mount hook, and it’s probably where you’d want to run your code. Based on a cursory inspection of the other code, you could either cryptsetup open the device yourself, as long as you use the name Dracut expects (the rd.luks.name= cmdline parameter, or luks-$luks_container_uuid), or you could use the /tmp/luks.keys mechanism (it’s a dracut-internal thing, so you won’t find much documentation, but it lives in crypt-lib.sh, cryptroot-ask.sh and probe-keydev.sh).

    As for debugging, the cmdline manpage has a few decent enough options. rd.break=cmdline or similar can force a shell before Dracut goes through a specific phase of hooks. You should be able to manually test doing things similar to your script at that point.
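
    For reference, the skeleton of such a module would look roughly like this (the module name and script name are made up):

      # /usr/lib/dracut/modules.d/91fetchkey/module-setup.sh
      check() { return 0; }
      depends() { echo crypt; }
      install() {
          # run our script in the same cmdline phase/slot as the builtin keydev handling
          inst_hook cmdline 10 "$moddir/fetch-key.sh"
          # pull in anything the script needs at runtime
          inst_multiple cryptsetup mount umount
      }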


  • You’d be looking for /usr/share/mkinitfs/initramfs-init. I’ve never customized that myself, but it looks like there’s already some support for a keyfile if you search for KOPT_cryptroot and check that block of code. It’s mostly set up for a keyfile embedded into the initramfs, but it should be possible to replace that code with something that grabs the keyfile off a USB drive.

    I suppose you’d make a copy of it, put it somewhere in /etc or wherever, and change mkinitfs.conf to point to it; init="/etc/whatever/myinitramfs-init" should do the trick, since the config file just gets sourced in. That said, you’re definitely heading into unknown territory here. It might be easier to just use Dracut or the like instead.
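
    Something like this, in other words (paths made up; mkinitfs.conf just gets sourced as shell):

      cp /usr/share/mkinitfs/initramfs-init /etc/mkinitfs/myinitramfs-init
      # hack on the copy, then point mkinitfs at it in /etc/mkinitfs/mkinitfs.conf:
      #   init="/etc/mkinitfs/myinitramfs-init"
      # and rebuild for the running kernel:
      mkinitfs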


  • mkinitfs doesn’t support running custom shell hooks. It’s very, very bare-bones custom code, and the whole features concept exists only to pull extra files and kernel modules into the initramfs, not for extra logic.

    You’d either have to customize the init script itself (not impossible, it’s ~1000 lines) and pass -i or set init= in the .conf, or install Dracut/Booster instead (which should “just work” if you apk add them, but I’ve had no need to do so).
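
    To illustrate how limited it is: a “feature” is just two list files plus an entry in the features="..." line of mkinitfs.conf, nothing more (the names below are made up):

      # /etc/mkinitfs/features.d/usbkey.files
      /etc/mykeys/key.bin

      # /etc/mkinitfs/features.d/usbkey.modules
      kernel/drivers/usb/storage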


  • All of the cool development-related Nix things, like pinning a project to known-good library versions (for regression tests or otherwise), don’t really require you to run NixOS. If you like NixOS, it’s a perfectly usable distro for development work, but all of the power comes from Nix itself, and Nix can be installed on whatever you’re comfortable with.

    The only real pro of running full NixOS is that everything you work on will, by its nature, get tested against a relatively uncommon *nix setup. Things like developer-only scripts with hardcoded #!/bin/bash shebangs are more likely to break on NixOS than on a conventional Linux distro with Nix installed. That’s potentially worth fixing, as it can also hurt the developer experience on *BSD/Mac systems.
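
    For example, the pinning bit works the same everywhere Nix runs, NixOS or not (the revision is a placeholder):

      # enter a shell with dependencies taken from one exact nixpkgs revision
      nix-shell -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz -p python3 libpng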


  • That already happens constantly and I’d consider this the consequence of it, rather than the cause. You can only issue so many vetoes before people no longer want to deal with you and would rather move on.

    The recent week of Wayland news (including the proposal from a few hours ago to restate NACK policies) is starting to feel like the final attempt to right things before a hard fork of Wayland. I’ve been following wayland-protocols/devel/etc from the outside for a year or two and the vibes have been trending that way for a while.


  • Digging into the GitLab and related discussions, the main takeaway I got is that FFmpeg’s API supposedly meshes better with what Wine needs to provide to Windows code, simplifying things overall. GStreamer is pretty heavy on asynchronous/background processing, which I’d normally consider a good thing for media, but if the API you’re expected to implement is synchronous, I guess it only adds complexity.


  • Moderation is handled by each instance’s version of that community separately.

    Reddit/Lemmy/etc communities differ from something like Tumblr/Cohost by also having per-community rules, and nobody has the time to moderate hundreds of communities according to each community’s own rules.

    It’s relatively easy to keep an instance free of spam, overly blatant hate, and the like, since that’s a fairly common set of rules. But it’s much harder to keep a “world news” style community from being overrun with US-centric posts, or a discussion community on a specific subject from being filled to the brim with memes or posts that are only vaguely adjacent to it. Without centralized per-community moderators, it would fall on general instance moderation to decide whether a post about an Undertale hack fits in the Undertale community. That’s probably going to go wrong more often than not.

    You can have a website that’s moderated only according to global rules, with tags being a free-for-all, but then you fundamentally end up building something along the lines of Tumblr or Cohost. That attracts a different audience, including people who know how to rules-lawyer their way through such an environment: tagging 20 mediocre photos a day with #photography instead of just one good one, for example. With the end of Cohost approaching, I wouldn’t be surprised if someone tried to build that kind of thing, but it’d likely end up with a very different vibe.


  • It depends on whether you can feasibly implement compatibility layers for large parts of the “required” but very work-intensive drivers. FreeBSD has the same driver struggles and ended up with LinuxKPI to support AMD/Intel GPUs. I know a whole bunch of toy kernels have implemented compatibility layers for parts of Linux in some fashion too.

    It’s a ton of work overall but there’s room to lift enough already existing stuff from Linux to get the ball rolling.


  • In my experience, most hangs with a message about amdgpu loading on screen are caused by an amdgpu issue of some kind. I’d check whether amdgpu ends up loaded correctly via lsmod | grep amdgpu, plus a general journalctl -b 0 | grep amdgpu to see if there are any obvious failures. Chances are that even if it’s not amdgpu, the real failure is in the journal somewhere.

    It could also be a wrong setting of hardware.enableRedistributableFirmware (it should be true) or the new-ish hardware.amdgpu.initrd.enable (either value is valid, but one or the other may be more reliable on your particular system).
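
    Something along these lines is what I’d run first (purely diagnostic):

      # did the module actually load?
      lsmod | grep amdgpu
      # any firmware or initialization failures this boot?
      journalctl -b 0 | grep -iE 'amdgpu|firmware'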


  • Requiring agreement to some unspecified ever-changing terms of service in order to use the product you just bought, especially when use of such products is required in the modern world. Google and Apple in particular are more or less able to trivially deny any non-technical person access to smartphones and many things associated with them like access to mobile banking. Microsoft is heading that way with Windows requiring MS accounts, too, though they’re not completely there yet.


  • My suggestion is to use a systems management tool like Foreman. It has a “content views” mechanism that can do more or less what you want. There are a bunch of other tools along those lines, such as Uyuni. These tools have a lot of features and might be overkill for your case, but a lot of those features will probably end up being useful anyway if you have that many hosts.

    With the way Debian/Ubuntu APT repos are set up, if you take a copy of /dists/$DISTRO_VERSION as downloaded from a mirror at any given moment and serve it to a particular server, then apt update && apt upgrade will install exactly those versions, provided the actual package files in /pool are still available. You can set up caching proxies for that part.
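
    On the client side, that just means pointing sources.list at your internal copy, something like this (hostname and layout made up):

      # /etc/apt/sources.list on a managed server
      deb http://mirror.internal/debian bookworm main
      deb http://mirror.internal/debian-security bookworm-security main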

    I remember my DIY hodgepodge a decade ago ultimately just being a daily cronjob that pulled in the current distro (let’s say bookworm) and its associated -updates and -security repos from an upstream rsync-capable mirror; after checking a killswitch and making sure things weren’t currently on fire, it ran rsync -rva tier2 tier3; rsync -rva tier1 tier2; rsync -rva upstream/bookworm tier1. Machines were configured to pull updates from tier1 (first 20%) / tier2 (second 20%) / tier3 (the rest) on a regular basis. The files in /pool were served by apt-cacher-ng, but I don’t know if that’s still the cool option nowadays (you will need some kind of local caching for those, as old files may disappear without notice).
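
    The heart of that cronjob was something along these lines (paths and hostname are illustrative, and the real one also synced the -updates/-security repos):

      #!/bin/sh
      set -e
      # manual brake: touch this file to pause all promotion
      [ -e /srv/mirror/KILLSWITCH ] && exit 0
      # promote the oldest tier first so nothing skips a stage
      rsync -rva /srv/mirror/tier2/ /srv/mirror/tier3/
      rsync -rva /srv/mirror/tier1/ /srv/mirror/tier2/
      # finally pull a fresh snapshot from upstream into tier1
      rsync -rva rsync://mirror.example.org/debian/dists/bookworm /srv/mirror/tier1/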