

I’ve tried the AIO container. The issue I’ve had is that I already have a file system for documents, and when I try to attach it as a network drive, everything falls apart (not to mention the generally slow performance).
Sorry I can’t really help, but I can commiserate. Nextcloud is the one service I’ve never gotten to run right. Not sure if it’s gotten any better, but a year or two ago I was trying and just wasn’t getting consistent results from it.
I commented further up, but will add here. I also have a Surface Go 1, and Ubuntu worked okay. The webcam was definitely a no-go, but it ran well enough for some productivity and light gaming.
The only thing I really hate is that hibernate doesn’t really work on Linux. For a tablet, maybe you always keep yours on, but I liked hibernate to help the battery last longer.
Commenting to add here: this is not going to be about distro choice so much as about seeing which configuration in the linux-surface guide provides the greatest support.
There are many things that may not work on your Surface due to limited support, so definitely follow those guides.
I spent hours trying to get the webcam to work on a Surface Go 1.
The UK went through industrialization leading to its empire, and the US was the industrial power during its ascent. Same with Japan before WWII.
Many imperialistic powers seem to go through big industrial growth before expansion.
I’m not sure how good a source it is, but Wikipedia says it was multimodal and came out about two years ago - https://en.m.wikipedia.org/wiki/GPT-4. That being said, the comparisons are LLM benchmarks against GPT-4o, so maybe it’s a valid argument for the LLM capabilities.
However, I think a lot of the more recent models are pursuing architectures with the ability to act on their own, like Claude’s computer use - https://docs.anthropic.com/en/docs/build-with-claude/computer-use, which DeepSeek R1 is not attempting.
Edit: and I think the real money will be in more complex models focused on workflow automation.
My main point is that GPT-4o and the other models it’s being compared to are multimodal; R1 is only an LLM from what I can find.
Something trained on audio/pictures/videos/text is probably going to cost more than something trained on just text.
But maybe I’m missing something.
My understanding is it’s just an LLM (not multimodal), and the training time/cost looks about the same for most of these.
I feel like the world’s gone crazy, but OpenAI (and others) are pursuing more complex, multimodal model designs. Those are going to be more expensive due to image/video/audio processing. Unless I’m missing something, that would probably account for the cost difference between current and previous iterations.
Just some quick Google searches, so I’m not sure how reputable they are, but I didn’t feel like copying random links.
But yeah, that’s why I called them out as estimates; I suspect there is a lot of room for error in those numbers.
I had to look this one up, but I missed the “galaxy” vs “universe” distinction. There are an estimated 3 trillion trees and 100-400 billion stars in the Milky Way galaxy, but potentially 1 septillion stars in the universe.
However all three of these are estimates, so who actually knows.
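For scale, here’s a quick back-of-the-envelope sketch in Python using those rough figures (taking the upper Milky Way estimate):

```python
# Rough magnitudes from the estimates above (all very approximate)
trees = 3e12            # ~3 trillion trees on Earth
galaxy_stars = 4e11     # upper estimate for the Milky Way (~400 billion)
universe_stars = 1e24   # ~1 septillion stars in the observable universe

print(f"trees vs Milky Way stars: {trees / galaxy_stars:.1f}x more trees")
print(f"universe stars vs trees:  {universe_stars / trees:.1e}x more stars")
```

So by these numbers there are ~7.5x more trees than stars in the galaxy, but the universe wins by ~11 orders of magnitude.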
I’m actually not sure how you’d label the axis here. The info being conveyed is the relationship between two separate things.
That’s a good catch!
You are right that it is more of just a spec bump, but given the warning that not all Switch games may be compatible, I think the controllers are going to have different sensors (some have speculated a more mouse-like feature).
I’ve had every Nintendo console since the GBC and suspect I’ll eventually get this too, but they’ve got an uphill battle vs the Steam Deck for me. It’s really going to depend on the first-party games.
I feel like the NES->SNES, GB->GBC->GBA, DS->3DS, and Wii->Wii U were all pretty similar advancements.
In all of those except NES->SNES you had backward compatibility, and the Wii->Wii U even had hardware backward compatibility (which the Switch 2 doesn’t, at least for controllers).
Yeah, in the modern age internet access should be considered a necessity. There are a lot of things you can’t do without the internet (like getting a job or paying bills).
LLMs can trace their origins back to the 2017 paper “Attention Is All You Need”; together with diffusion models, they have enabled prompt-based image generation at impressive quality.
However, looking at just image generation, you have GANs as far back as 2014, with StyleGANs (ones you could more easily influence) dating back to 2018. Diffusion models also date back to 2015, but I don’t see any mention of their use for images until the early 2020s.
That’s also ignoring that all of these technologies trace back further to LSTMs and CNNs, which in turn go back to other NLP/CV techniques. So there has been a lot of progress here, but progress isn’t always linear.
I feel like one of those isn’t like the others
With Jupyter notebooks, from a DevOps perspective you could just build a process to export the notebooks to standard .py files and then run them; see the sketch below.
There are a lot of git hooks that will export/convert .ipynb to .py files automatically, since notebooks don’t work great with git.
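As a rough sketch of what that export-and-run step could look like (assuming nbconvert is installed; “analysis.ipynb” is just a placeholder filename):

```python
# Minimal sketch: convert a notebook to a plain .py script, then run it.
# Equivalent CLI: jupyter nbconvert --to script analysis.ipynb
import subprocess

from nbconvert import PythonExporter

exporter = PythonExporter()
source, _resources = exporter.from_filename("analysis.ipynb")

with open("analysis.py", "w") as f:
    f.write(source)

subprocess.run(["python", "analysis.py"], check=True)
```

Wiring something like this into a pre-commit hook or CI job is usually enough to keep notebooks diffable and runnable.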
Appreciate the feedback!