Dude the last thing I needed for my “talking to an idiot online” bingo card was “(ignores point) aPpLe fAnBoY”
Two professional 27" 4K Dell monitors cost ~$800 combined. You overpaid like a mf if you spent $2000 on a monitor.
Sorry, but you don’t understand the needs of the market that we’re talking about if you think that a pair of ~$400 dell monitors is equivalent to a high-end display. The difference between $800 and $2500 amounts to a few days’ worth of production for my workstation, which is very easily worth the huge difference in color accuracy, screen real estate, and not having a bezel run down the middle of your workspace over the 3-5 years that it’s used.
blah blah blah
I already said that I’m talking about the Vision Pro as a first step in the direction of a fully-realized AR workstation. As it currently stands, it’s got some really cool tech that’s going to be a lot of fun for the guinea pig early adopters that fund the development of the tech I’m personally interested in.
What purpose does a MacBook serve that an office from the 1980’s wasn’t equipped to handle?
AR devices in an office serve the same purpose as existing tools, but they can improve efficiency, which is all the justification office tech needs. Shit, my monitor costs 2/3 the price of the Vision Pro, and an ideal piece of AR hardware would be immeasurably better. Meetings in virtual space would go a long way toward fixing how much meetings suck remotely. Unlimited screen real estate would make a huge difference in my line of work, and being able to use any space, in my home or out of it, with as much screen real estate as I want would be huge.
I’m not saying that the Vision Pro does all of those things, but it does some of them, and I’m 100% okay with it being the thing that introduces the benefit of AR to those without imagination.
Only thing keeping Windows on my disk is Fusion 360; it's so annoying to have to boot into Windows just to use a single piece of software.
I haven’t had this happen in years, maybe it’s my config? I’m using GPT on a UEFI system (in UEFI mode), with systemd-boot.
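For context, that setup is just a couple of plain-text files on the EFI system partition; here's a minimal sketch of what systemd-boot reads (the entry name, kernel paths, and PARTUUID are placeholders, not my actual config):

```
# /boot/loader/loader.conf
default linux.conf
timeout 3

# /boot/loader/entries/linux.conf
title   Linux
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options root=PARTUUID=<your-root-partuuid> rw
```

systemd-boot only works from a GPT-partitioned disk booting in UEFI mode, which is part of why it sidesteps the MBR/legacy-BIOS headaches.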
I do remember having tons of issues back when I was using GRUB on an MBR system with legacy BIOS emulation.
I use windows for ~10 hours per day, 5 or 6 days per week because my team is currently maintaining a legacy .NET framework codebase. I’m sure there are people on earth who use windows more than I do, but I think it’s extremely unlikely that you’re one of them.
Sure.
MacOS is an excellent workstation operating system, largely due to its near-POSIX compliance and its access to the enormous body of tools developed for UNIX-like OSs. For development work in particular, it can use the same free and open-source software, configured in the same way, that Linux uses. Aside from the DE, a developer could swap between Linux and MacOS and barely notice. Everything from Node, to Clang, to OpenJDK, to Rust, along with endless ecosystems of tooling, is installable in a consistent way that matches the bulk of online documentation. This stands in contrast to Windows, where every piece of the puzzle has a number of gotchas and footguns, especially when multiple environments are installed.
From a design perspective, MacOS is opinionated, but feels like it’s put together by experts in UX. Its high usability is at least partially due to its simplicity and consistency, which in my opinion are hallmarks of well-designed software. MacOS also provides enough access through the Accessibility API to largely rebuild the WM, so those who don’t like the defaults have options.
The most frequent complaint I hear about MacOS is that X feature doesn't work like it does in Windows, even though the way X feature works in Windows is steaming hot garbage. Someone who's used to Windows would probably need a few hours or days to become equally fluent with MacOS, depending on their computer literacy.
People also complain that Apple leverages a lot of FOSS software in MacOS while keeping its own software closed-source and proprietary. I agree with this criticism, but I don't think it has anything to do with how usable MacOS is.
I’m not going to start a flame war about mobile OSs because I don’t use a mobile OS as my primary productivity device (and neither should you, but I’m not your mom). The differences between mobile OSs are much smaller, and are virtually all subjective.
You’re welcome.
Having the highest market share doesn’t mean that windows uses logical conventions, it just means that lots of people are accustomed to the conventions that it uses. The vast majority of professionals that I’ve interacted with strongly dislike having to work on a windows machine once they’ve been exposed to anything else.
Off the top of my head, the illogical conventions Windows uses include: storing application and OS settings together in an opaque, dangerous, globally-editable database (the registry); obfuscating the way disks are mounted to the file system; using CR/LF for new lines; using a backslash as the path separator; not having anything close to a POSIX-compatible scripting language; the stranglehold that "wizards" have on the OS at every level; etc., ad nauseam. Most of these issues come from Microsoft deciding to reinvent the wheel instead of conforming to existing conventions. Some of the differences are only annoying because they pick the exact opposite of the convention everyone else uses (path separators, line endings), and some are annoying because they're an objectively worse solution than what exists everywhere else (the registry, installation/uninstallation via wizards spawned from a settings menu).
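The path-separator and line-ending differences are concrete enough to show in a few lines; here's a quick Python sketch (the paths are purely illustrative):

```python
from pathlib import PurePosixPath, PureWindowsPath

# Windows uses backslashes as its path separator; everyone else uses forward slashes.
win = PureWindowsPath("C:/Users/dev/project") / "src"
posix = PurePosixPath("/home/dev/project") / "src"
print(win)    # C:\Users\dev\project\src
print(posix)  # /home/dev/project/src

# Windows ends lines with CR+LF ("\r\n"); UNIX-likes use a bare LF ("\n").
# Same logical content, different bytes on disk.
windows_line = "hello\r\n"
unix_line = "hello\n"
print(windows_line.splitlines() == unix_line.splitlines())  # True
```

The point isn't that either choice is wrong in isolation; it's that picking the opposite of every established convention creates friction in every cross-platform tool.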
For basic usability, see the lack of functional multi-desktop support 20 years after it became mainstream elsewhere. There is no way to switch one monitor to a second workspace without switching every monitor, which makes the feature worse than useless for serious work. On top of that, window management in general is completely barebones: multitasking requires you to either click on icons every time you want to switch windows, or cycle through all of your open windows with alt-tab. The file manager is kludgy and full of opinionated defaults that mysteriously only serve to make it worse at just showing files. The stock terminal emulator is something out of 1995; the new Windows Terminal that can be optionally enabled is better, but it still exposes a pair of painful options for shells. With WSL, Windows Terminal suddenly becomes pretty useful, but needing a Linux abstraction layer just reinforces the point that Windows sucks.
I could go on all day. I'm a SWE with a decade of experience using Linux, three decades using Windows, and a few years on Mac here and there. I love my Windows machine at home… as a gaming console. Having to do serious work in Windows is agonizing.
Of the three major desktop operating systems, windows is by far the worst.
The only advantage Windows has is that Microsoft's monopolistic practices in the 90s and 00s made it the de facto OS for businesses to furnish employees with, which resulted in it still having better 3rd-party software support than the alternatives.
As an OS, it's hard to use, doesn't follow logical conventions, is super opinionated about how users should interact with it, and is missing basic usability features that have been in every other modern OS for 10+ years. It's awesome as a video game console, barely usable as an Adobe or Autodesk machine, but it sucks as a general-purpose OS.
How weird. My sample size is now 2, I think I’m ready to draw a conclusion and only consider evidence that confirms it going forward.
Hmm, well if an object passed through that portal and it wasn't moving ~2236 mph relative to the surface of the moon, then I guess OP's question has been answered already haha.
Yeah sounds very similar. And weird coincidence, but the guy I’m talking about is also German. Lives in the US now, but his parents don’t speak English, he came here as a kid I believe.
No that’s a totally valid question and I’d wonder the same thing.
But he definitely is all of those things, he’s got a dozen published nonfiction books that are easy to find, with a picture of his face on them haha. Listed as faculty/former faculty at Utah State University, CSU Chico, two BYU campuses, University of San Diego, University of Malaysia. Reasonably high profile on LinkedIn.
I used to go on family vacations with this guy's family as a teenager, and his whole family are genuinely some of the best people I know. But he's a perfect example of the incredible power of confirmation bias. I just try to remember that if someone like him can have such seemingly obvious blind spots, I definitely can too.
I would imagine that the relative motion between the entry and exit portals would matter more than the absolute motion of either portal.
I’ve known a guy for like 20 years, currently in his 60s, who firmly believes that anthropogenic climate change is entirely false.
He has a bachelor's degree in physics, a bachelor's degree in mathematics, and a Ph.D. in economics. He's written a handful of high-level econ textbooks, and he's worked off and on as a professor at 3 or 4 respected universities here in the US. He was most recently employed at a supply-chain consulting firm, making an ungodly amount of money.
By all accounts, he’s an extremely smart, well-educated, well-read guy. But holy shit if that boomer isn’t constantly reposting the most transparently fake anti-science nonsense on his Facebook page. Think, “New research proves that Climate Change is a liberal myth” - The Religious Conservative Storm.
Just demonstrates how it doesn’t matter how educated someone is if they don’t think critically about information that confirms their expectations.
Oh yikes sorry for the hostility, I definitely did mix you up with OP.
Someone has invented it; the solution is tiling window managers.
As 217 people have told you in this thread, tiling window managers allow you to keep all your windows full-screen if you want.
Sounds like your screen is too close to your face.
Yeah, it's definitely a matter of workflow and personal preference. Nobody's trying to convert anyone; you asked why people use tiling WMs, and people are answering.
why tile windows at all
I can answer that pretty comfortably. There are two main reasons. The first is that it's very common to need to look at two things at once: if I'm taking notes while reading something complicated, writing complex code while referencing the documentation, or tweaking CSS rules while looking at the page I'm working on, it's just way too disruptive to constantly switch windows.
The second main reason (for me) is that a lot of the time, the content of a single window is too small to make use of the space on your monitor. In those cases, if I have something else I’m working on and it’s also small, I’ll tile them. It might be easy to toggle between windows with a hotkey, but it’s strictly easier to not have to toggle, and just move your eyes over. Peripheral vision means that you don’t entirely lose the context of either window. When you’re ready to switch back to the one you just left, you don’t have to touch anything, and you don’t have to wait for the window to render to visually locate where you left off.
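For anyone curious what that looks like in practice, here's a minimal fragment of an i3 config (the keybindings shown are the common defaults; any tiling WM has an equivalent):

```
# ~/.config/i3/config (fragment)
bindsym $mod+h split h             # next window opens beside the current one
bindsym $mod+v split v             # next window opens below it
bindsym $mod+f fullscreen toggle   # blow one window up to full screen when needed
bindsym $mod+Left  focus left      # switch focus without touching the mouse
bindsym $mod+Right focus right
```

Once those are muscle memory, "tile two windows and glance between them" is a two-keystroke operation.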
I’m actually laughing over here, that was pretty good.