You’re recommending Flatpak for users that are confused by packages?
It looks like Sonarr is not in the official Ubuntu repositories. The website mentions adding a new repo to apt. Is this what you did, or something else?
Also, how are you starting it? I’m looking at the Arch package in the AUR (not your distro, but just looking), and I notice that it includes a .service file. This means that it would be started as a service, and not as a user, like you’re probably attempting to do.
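If the package you added from that repo ships a similar unit (I’m assuming here that it’s literally named sonarr.service; adjust if yours is called something else), something like this would run it as a proper service instead of as your user:

    systemctl status sonarr             # does a unit exist, and why did it fail?
    sudo systemctl enable --now sonarr  # start it now and on every boot
    journalctl -u sonarr -e             # read its logs if it still won't come up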
What directory is it trying to write to? Can you show us the full error, preferably as text and not a screenshot?
What happens when you try to start it?
If there is a dependency problem in the upstream packages, then there is a bug in Ubuntu. This doesn’t happen often, and isn’t a good reason to go to Flatpak by itself. A bug should be filed upstream and it’ll likely get fixed quickly.
For real. It’s so much better to think about using the screen space you already have. People can do what they want, but I am happy with one screen, a tiling window manager, and workspaces. I can have a dozen or more things going on, all packed into workspaces. Fullscreen a window if I need to, then pop it back.
It’s incredibly efficient. I see stuff like this, and I imagine what it’s like to have text several feet away, screens covered by other screens, lots of neck fatigue, all the monitor borders… like it’s truly bad. It feels like someone watched a lot of TV and “felt” that this was the best way to do it without trying it.
But I digress. It’s not my setup. If they’re efficient with it, more power to them.
From man systemd:
DESCRIPTION
systemd is a system and service manager for Linux operating systems. When run as first process on boot
(as PID 1), it acts as init system that brings up and maintains userspace services. Separate instances
are started for logged-in users to start their services.
systemd is usually not invoked directly by the user, but is installed as the /sbin/init symlink and
started during early boot. The user manager instances are started automatically through the
user@.service(5) service.
For compatibility with SysV, if the binary is called as init and is not the first process on the
machine (PID is not 1), it will execute telinit and pass all command line arguments unmodified. That
means init and telinit are mostly equivalent when invoked from normal login sessions. See telinit(8)
for more information.
When run as a system instance, systemd interprets the configuration file system.conf and the files in
system.conf.d directories; when run as a user instance, systemd interprets the configuration file
user.conf and the files in user.conf.d directories. See systemd-system.conf(5) for more information.
Otherwise monitors, cables and video cards would have compatibility issues.
You’re right, and this was absolutely a thing. Video cards could produce whatever they were capable of, and monitors could display whatever they were capable of. You could also push resolutions and refresh rates that were beyond the monitors’ specs, and you risked damaging the monitor by doing so.
I don’t think you were pushing 4000x3000 resolution through VGA.
You don’t need to believe me. That’s your choice. I had friends who could do the same. This was with a Matrox card and a 21" Acer CRT. The display was nearly impossible to read, and the color mask broke up the individual pixels too much, anyway.
Just like today no one is pushing video streams to giant building sized screens over consumer HDMI or DVI.
Digital video has upper limits in its specs. This is the whole point of this conversation.
Another example is XLR VS 3.5mm jack. In theory you can push audio signal of any quality over both, but XLR by spec is balanced and shielded, while 3.5mm is not. This means that XLR is capable of pushing much better audio.
A bit of incorrect information here. There is no “unshielded 3.5mm spec.” Good cables have shields, but not all do. XLR also doesn’t carry higher frequencies or “much better audio” just because it’s balanced. On paper, unbalanced audio is actually better for short runs, because the hot and cold signal mirroring in a balanced connection leaves a tiny opportunity for signal quality issues, but the difference is so small that it doesn’t matter.
In general, what is the highest frequency that can be carried over a wire?
I know it can do these resolutions in practice because I have personally operated CRTs at 4000x3000 resolution in the early 2000s. This could be considered “the 4:3 of 4K.” It was not done on fancy equipment or high-end monitors. Analog stuff really could just go to really high resolutions and refresh rates with above-average, but still typical, hardware.
CRTs simply respond to waveforms for red, green, blue, vertical sync, and horizontal sync. That’s it. If you want more horizontal pixels, pack the detail in each scan line more densely. If you want more vertical pixels, add more scan lines. Want a faster refresh rate? Simply run all the signals faster.
There is no hard upper limit to it. With digital signals, there are throughput limits per spec due to bit rates, but with analog, there are no bits. Resolutions like 40k x 30k are theoretically possible. The difficult parts are rendering the signal at these high frequencies, and being able to meaningfully display them. The VGA connection itself has no limits.
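To put a rough number on “high frequencies”: the pixel clock is roughly the horizontal total times the vertical total times the refresh rate. A back-of-the-envelope example (real CRT timings include blanking intervals, approximated here as a flat 25% overhead):

    # Rough pixel clock for a 2048x1536 @ 75 Hz mode, ~25% blanking assumed:
    echo $(( 2048 * 1536 * 75 * 125 / 100 ))   # ~295,000,000, i.e. roughly a 300 MHz pixel clock

Push the resolution or refresh rate higher and the analog signal simply has to swing proportionally faster; there’s no protocol-level cap, only whatever the DAC and the tube can physically keep up with.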
It’s analog. It always has been.
Wow, what a horrible setup.
On my Ubiquiti APs? I suppose I could. I’m also looking to upgrade the network capabilities to a modern 6E setup if I can swing it.
Yeah, Ubiquiti has the “great at most things with a point-and-click UI” market down pat. Although, personally, I don’t really care about webapp UIs and such for networking gear. Give me a man page and configuration file, and I’ll get down to it.
Here’s a small ad block list for your Unifi controller, if it helps: https://github.com/synthead/unifi-adfree
This is what’s important. If you don’t enable power saving in some fashion, your hardware will always be “on” at full specs. Even if the machine isn’t actually being used, it’s still powering everything to be ready to jump at any opportunity to process something quickly without ramping down.
TLP has pretty excellent default settings. Simply turning it on will likely make your battery last 2-3x longer, and you’ll keep about 80% of the performance from a UX perspective. And if you want to crunch numbers faster on battery, you can tune TLP or turn it off temporarily.
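Getting it going is about this much work (a sketch assuming a Debian/Ubuntu-style system, where both the package and the service are called tlp):

    sudo apt install tlp             # or your distro's equivalent
    sudo systemctl enable --now tlp  # start it now and on every boot
    sudo tlp-stat -s                 # confirm it's enabled and see the current mode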
I wouldn’t want to reduce security by allowing any user to bind privileged ports, or by running modified operating systems with lessened security baked in. These security principles are in place for good reasons, and they should remain in place.
If you are exposing your LAN to your Internet connection, you’re doing something wrong. If you are not, but are using a firewall that doesn’t support NAT, then I don’t trust your firewall. If your firewall supports NAT, and you’re attempting to subvert Linux security measures instead of using it, then you’re doing something wrong.
I don’t think it’s a great idea to host a website on cellular data. If I had to serve something with a mobile device, I’d use USB networking, or a USB to Ethernet adapter.
The reason you can’t host on port 80 on unmodified Android isn’t because “Google won’t let you.” Android is open source. You can do what you want with it. Android runs on Linux, and ports 0-1023 are privileged ports that can only be bound by root.
Unmodified Android does not allow userland apps to run as root for very good reasons, so you don’t have access to these ports. That’s all there is to it. If you attempted to do the same thing on Ubuntu, you would also not be able to use port 80 without root.
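You can see this for yourself on any stock Linux box; python3 -m http.server is just a convenient way to try binding a port:

    python3 -m http.server 80    # as a normal user: PermissionError, 80 is privileged
    python3 -m http.server 8080  # works fine, 8080 is above 1023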
However, this is a naive approach to hosting a website. Production web stacks, when hosted on a machine, typically use a least-privileged model where not only are privileged ports off-limits, but most file access is, too.
Most dynamic web stacks won’t host on port 80 directly. Most will serve either a socket connection or listen on multiple ports across workers, e.g. ports 3000 to 3007. These connections are then proxied through something like Nginx acting as a load balancer, and Nginx can handle SSL for you, too.
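A rough sketch of what that looks like in Nginx (hypothetical ports and names; your app and worker count will differ):

    # Hypothetical backend pool; one entry per app worker.
    upstream app_backend {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        # ...and so on up to :3007
    }

    server {
        listen 8080;  # unprivileged port; see the NAT redirect below
        location / {
            proxy_pass http://app_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }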
If Nginx is started as root, it can host on port 80. If not, serve on port 8080 and use NAT to redirect it to port 80 with your firewall. You are using a firewall for publicly-hosted content, right?
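One way to do that redirect (iptables shown; nftables and most firewall appliances have an equivalent rule):

    # Send incoming port 80 traffic to the unprivileged port 8080 that Nginx listens on:
    sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080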
If the package manager leaves you with broken dependencies, a broken system, or a system that “doesn’t work,” then there are significant bugs in how the distro has packaged things. It happens, but rarely.
Package managers aren’t “hard.” There are even GUIs where you can search for and install packages. In my opinion, if a Linux user has avoided learning how package managers work, they’re skipping a core foundation of how to use their operating system.
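And the CLI isn’t exactly arcane either (Debian/Ubuntu commands shown; “some-package” is a placeholder):

    apt search some-package        # find it
    sudo apt install some-package  # install it, dependencies and all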