• 0 Posts
  • 100 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • FreeCAD requires a lot more clicks. Simple example: you want to extrude part of a sketch. In Fusion360 you select the part, hit extrude, done. In FreeCAD you can’t extrude part of a sketch, only whole sketches, so you have to make a new sketch, import the geometry of your previous sketch, redraw over the imported geometry to make it an actual sketch, and only then are you allowed to extrude it. When an extrusion would result in multiple bodies, you have to redo this procedure for each and every body, since FreeCAD extrusions are only allowed to produce one body. This can easily turn a 5-second operation into a 10-minute operation.

    On top of that you have the topological naming problem, which forces you to basically remodel your whole thing from scratch if you want to change anything in the early build steps.

    There are numerous ways to ease the pain (MasterSketch, Datum planes, ShapeBinder), but they all require a lot of discipline and planning ahead. You can’t just YOLO your models in FreeCAD the way you can in Fusion360.

    On the plus side, the discipline FreeCAD forces on you can lead to cleaner models. In Fusion360 it’s quite easy to model yourself into a corner where everything is underconstrained and will just explode if you touch anything. Fusion360 will let you get away with a lot until it is too late. FreeCAD will go “I can’t do that, Dave” a lot sooner and force you to clean up.

    All that complaining aside, FreeCAD is my CAD tool of choice. I am never going to touch Fusion360 with its ever more restrictive licensing scheme again.


  • lloram239@feddit.de to Linux@lemmy.ml · ELI5 the whole Wayland vs X11 going on. · edited 10 months ago

    Not really. Systemd had the complete opposite problem: it did far more than the previous hackery of shell scripts. The complaints were that it was too big, had too many features, violated the Unix philosophy and was less deterministic. Systemd had no problem fully replacing init, cron, DNS and co. Wayland simply can’t replace X11 in its current state; it just can’t do a lot of basic things.

    > such as no massive gaping security issues.

    That’s an utter strawman that doesn’t get any more true by repeating it. Nobody cares about display server security at this point, since every app you run already has full system access anyway. Wayland security is like making sure the door is locked after the thief is already in the house. It might become relevant in a future where every app you run is in a Flatpak sandbox, but we are a very long way away from that. Even apps that use Flatpak are rarely sandboxed to the point where it would improve security. And on top of that, the sandboxing model Flatpak uses fundamentally doesn’t work with a lot of Unix tools, e.g. how would you Flatpak something like make?


  • The issue isn’t just that the features had to be reimplemented, but that they were not part of Wayland to begin with. Wayland only does the most basic stuff and leaves everything else to the compositor (i.e. GNOME or KDE). That means every compositor implements its own hacky version of the missing functionality, and it takes ages until that gets unified again so that apps can actually use it.

    Wayland is a classic case of an underspecified software project: they do a thing, and they might even do it well, but what they are doing is only a fraction of what is actually needed for it to work properly in the real world. That’s why, 15 years later, the new “simpler” Wayland is still not ready.



  • lloram239@feddit.de to Linux@lemmy.ml · ELI5 the whole Wayland vs X11 going on. · edited 11 months ago

    X11 is a decades-old dinosaur. The developers decided it was growing too complex and no longer represented how graphics are done on modern systems, so they went for a rewrite. While doing so they simplified some things along the way, drastically overshot their target, and removed tons of fundamental functionality that was present in X11 (stuff like being able to take screenshots, window managers, etc.). Some of that is slowly getting reimplemented and Wayland is getting closer to actually being a feature-parity X11 replacement, but it has also taken 15 years and is still not done. The whole drama is the conflict between the people wanting it as the default and the other group of people for whom it simply doesn’t work in its current state.


  • Windows has much better forward and backward compatibility than Linux, which is why 10-year-old Windows is still fine. A 10-year-old Linux, on the other hand, just means nothing modern will work on it; that’s really only usable in extreme edge cases. Flatpak and Snap somewhat address this, but they also put you back onto the forced-upgrade treadmill, as Flatpak runtimes don’t have LTS support (not sure how Snap handles this).




  • I am not terribly impressed. The ability to build and run apps in a well defined and portable sandbox environment is nice. But everything else is kind of terrible. Seemingly simple things like having a package that contains multiple binaries aren’t properly supported. There are no LTS runtimes, so you’ll have to update your packages every couple of months anyway or users will get scary errors due to obsolete runtimes. No way to run a Flatpak without installing it. Terrible DNS-based naming scheme. Dependency resolution requires too much manual intervention. Too much magic behind the scenes that makes it hard to tell what is going on (e.g. ostree). No support for dependencies other than the three available runtimes and thus terrible granularity (e.g. you can’t have a Qt app without pulling in all the KDE stuff).

    Basically it feels like one step forward (portable packages) and three steps back (losing everything else you learned to love about package managers). It feels like it was built to solve the problems of packaging proprietary apps while contributing little to the Free Software world.

    I am sticking with Nix, which feels way closer to what I expect from a Free Software package manager (e.g. it can do nix run github:user/project?ref=v0.1.0).


  • NixOS uses a naming convention for packages that keeps them all separate from each other; that’s how you get /nix/store/b6gvzjyb2pg0kjfwrjmg1vfhh54ad73z-firefox-118.0/. /usr isn’t used for packages and only contains /usr/bin/env for compatibility, nothing else.

    The whole system is held together by nothing more than shell scripts, symlinks and environment variables, standard Unix stuff, which makes it very easy to understand if you are already familiar with Linux.

    “Declarative” means that your whole configuration happens in one Nix config file. You don’t edit files in /etc/ directly; you write your settings in /etc/nixos/configuration.nix and all the other files are generated from there. The same is true for package installation: you add your packages to a text file and rebuild.

    If that sounds a little cumbersome, that’s correct, but Nix has some very nice ways around it. Due to everything being nicely isolated from everything else, you do not have to install packages to use them, you can just run them directly, e.g.:

    nix run nixpkgs#emacs

    You can even run them directly from a Git repository if that repository contains a flake.nix file:

    nix run github:ggerganov/llama.cpp

    All the dependencies will be downloaded and built in the background and garbage collected when they haven’t been used in a while. This makes it very easy to switch between versions or run older versions for testing, and you don’t have to worry about leaving garbage behind or accidentally breaking your distribution.

    The downside of all this is that some proprietary third-party software can be a problem, as it might expect files in /usr that aren’t there. NixOS has ways around that (buildFHSEnv), but it is quite a bit more involved than just running a setup.sh and hoping for the best.

    The upside is that you can install the Nix package manager on your current distribution and play around with it. You don’t need to use the full NixOS to get started.


  • Quite hard. We have had Open Source’ish LLMs for only around six months; whether they are even up to the task of verifying a translation is one issue, and whether they meet Debian’s Open Source guidelines is yet another. This is obviously going to be the long-term solution, but the tech for it has simply not been around for very long.

    And of course once you have translation tools good enough for the task, you might skip the human translator altogether and just use machine translations.





  • C has no memory protection. If you access the 10th element of a 5-element array, you get whatever is in memory there, even if it has nothing to do with that array. Furthermore this doesn’t just allow access to data you shouldn’t be able to access, but also the execution of arbitrary code, as memory makes no (big) distinction between data and code.

    C++ provides a few classes to make it easier to avoid those issues, but still allows all of them.

    Ruby/Python/Java/… provide memory safety and will throw an exception, but they do it with checks at runtime, which makes them slow.
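
    A minimal sketch of that runtime checking in Python (the list and index here are purely illustrative):

    xs = [1, 2, 3, 4, 5]
    try:
        print(xs[9])            # every index is checked on access at runtime...
    except IndexError as err:
        print("caught:", err)   # ...so this raises instead of silently reading
                                # whatever sits past the list, as the C access
                                # described above would

    That per-access check is the runtime cost mentioned above.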

    Rust, on the other hand, tries to prove as much as it can at compile time. This makes it fast, but also requires some relearning, as it doesn’t allow pointers without clearly defined ownership (e.g. the classic case of keeping a pointer to the parent element in a tree structure isn’t allowed in Rust).

    Adding Rust’s safety guarantees to C would be impossible, as C allows far too much freedom to reliably figure out whether a given piece of code is safe (halting problem and all that). Rust purposefully throws that freedom away to make safe code possible.


  • That’s the idea, and while at it, we could also make .zip files a proper Web technology with browser support. At the moment ePub exists in this weird twilight where it is built out of mostly Web technology, yet isn’t actually part of the Web. Everything being packed into .zip files also means that you can’t link directly to the individual pages within an ePub, as HTTP doesn’t know how to unpack them. It’s all weird and messy, and surprising that nobody has cleaned it up and integrated it into the Web properly.
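
    You can see how thin the packaging layer is with nothing more than a zip reader; here is a small Python sketch (book.epub is just a placeholder name):

    import zipfile

    with zipfile.ZipFile("book.epub") as epub:   # an ePub is just a zip archive
        print(epub.read("mimetype").decode())    # prints "application/epub+zip"
        # the actual book content is ordinary (X)HTML and CSS inside the archive
        print([n for n in epub.namelist() if n.endswith((".xhtml", ".html", ".css"))])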

    So far the original Microsoft Edge is the only browser I am aware of with native ePub support, and even that didn’t survive when they switched to Chrome’s Blink.



  • I’d set up a working group to invent something new. Many of our current formats are stuck in the past, e.g. PDF and ODF are still emulating paper, even though everybody reads them on a screen. What I want to see is a standard document format that is built for the modern-day Internet, with editing and publishing in mind. HTML ain’t it, as it can’t handle editing or long-form documents well, EPUB isn’t supported by browsers, Markdown lacks a lot of features, etc. And then you have things like Google Docs, which are Internet-aware, editable and shareable, but also completely proprietary and lock you into the Google ecosystem.


  • .tar is pretty bad, as it lacks an index, making it impossible to quickly seek around in the file. The compression on top adds another layer of complication. It might still work great as a tape archiver, but for sending files around the Internet it is quite horrible. It’s really just getting dragged around for cargo-cult reasons, not because it’s good at the job it is doing.
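
    A rough illustration of the difference with Python’s standard library (the archive and member names here are made up):

    import tarfile
    import zipfile

    # zip keeps a central directory at the end of the file, so listing it and
    # pulling out a single member are cheap, index-based operations:
    with zipfile.ZipFile("archive.zip") as zf:
        print(zf.namelist())                # read from the index
        data = zf.read("docs/readme.txt")   # jump straight to the member

    # tar has no index: listing or extracting one member means walking the
    # archive header by header from the start, and with a .tar.gz the whole
    # stream has to be decompressed along the way:
    with tarfile.open("archive.tar.gz", "r:gz") as tf:
        print(tf.getnames())                # sequential scan
        member = tf.extractfile("docs/readme.txt")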

    In general I find the archive situation a little annoying, as archives are largely unnecessary; that’s what we have directories for. But directories don’t exist as far as HTTP is concerned and only single files can be downloaded easily. So everything has to get packed and unpacked again, for absolutely no reason. It’s a job computers should handle transparently in the background, not an explicit user action.

    Many file managers try to add support for .zip and let you enter an archive as if it were a folder, but that abstraction is always quite leaky and never as smooth as it should be.


  • > Do the maintainers of most distros manually read the code to discover whether an app is malware?

    No. At best you get a casual glance over the source code, and at worst they won’t even test that the app works. It’s all held together with spit and baling wire; if a malicious entity wanted to do some damage, they could do so quite easily, it would just require some preparation.

    The main benefit of classic package maintenance is really just time: it can take months or even years before a package arrives in a distribution, and even once it has arrived, it still has to make it from unstable to stable. That leaves plenty of room for somebody to find the issue before it even comes to packaging, and it makes the whole thing substantially less attractive for any attacker, as they won’t get any results for months.