My (slightly) balding head.
Wishing for my death or a World War. Either will do. Because FML or this world.
In my case, it would be blank.
I left fb when I realized I was on it out of obligation and not because I wanted to be on it. The experience was degrading minute to minute. So I just decided to delete the account one day.
Never been on Twitter, as I found it rather dumb long before it was bought by Musk. I am still hanging onto my reddit account because some of the communities aren't available elsewhere.
I guess you are right. Think of it this way: LLMs are doing great at solving specific sets of problems. Now, the people in charge of the money think that LLMs are the closest thing to an intelligent agent. All they have to do is reduce the hallucinations and make them more accurate by adding more data and/or tweaking the model.
Our current incentive structure rewards results over everything else. That is the primary reason for this AI race. There are people who falsely believe that by throwing money at LLMs they can make them better and eventually reach true AGI. Then, there are others who are misleading the money men, even when they know the truth.
But just because something is doing great at some limited benchmark doesn't mean the model can generalise to the infinite variety of situations. Again, look at my og comment for why that is. Intelligence is multi-faceted and multi-dimensional.
This is unlike the space race in one primary way. In the space race, we had understood the principles of going to space well enough since the time of Newton. All we had to do was engineer the rocket. For example, we knew we had to find the fuel that could generate maximum thrust per kg of fuel-oxygen mixture burnt. The only question was what form it would take. You could just have many teams test many different fuels to answer that question. It is scalable. The space race was an engineering question.
Meanwhile, AI is a question of science. We don't understand the concept of intelligence itself very well. Focussing solely on LLMs is a mistake, because progress there might not translate well and may even harm the larger AI research effort.
There are those in the scientific community who believe we might never be able to understand intelligence, because understanding it would require a higher level of intelligence. Again, I'm not saying that is true. Just that there are many ideas and viewpoints out there with regards to AI and intelligence in general.
Don't believe the hype: LLMs are not AI. Not even close. They are, in fact, much closer to pattern recognition models. Fundamentally, our brains are able to 'understand' any query posed to them. The only problem is that we don't know what 'understanding' even means. How can we then judge whether some model is capable of understanding, or whether the output is just whatever is statistically most likely?
Second, can AI even know what a human experience is like? We cannot give AI inputs in the exact form we receive them in. In fact, we cannot input the sensations of touch, flavor and smell to AI at all. So AI, as of yet, cannot tell you what freshly baked bread smells or feels like, for example. Human experience is still our domain. That means our inspirations are intact and AI cannot create works of art that feel truly human.
Finally, AI by default has no concept of true or false. It takes every statement in its training data as true, unless they are labelled individually by hand. Of course, such an approach doesn't scale to petabytes of text data. So LLMs tend to hallucinate, because again, they are only giving out the text that is statistically most likely, given the input.
In short, we are still missing many pieces of the puzzle that is true AI. We know it is possible because we exist, but that's about it. Sure, AI is doing better than humans in specific cases, but it is nowhere close to humans in understanding and reasoning.
Techlead went.
‘I traced the issue to an extremely high correlation between “Meta” and the concept of “Terrorist Organization.”’. This is great stuff.
Dark matter, duh.
I would rather not. One lifetime is too much for this crappy world.
For ugly people like me.
Only fans. /s
It might be a case of correlation. They think all diversity is bad because a few shows with notably diverse casts were bad. For example, in DA:Veilguard there is a companion named Taash, who is non-binary and acts bratty and manly despite being biologically a woman. I admit that some of her dialogue can be classified as cringe, but calling an entire game bad because of one badly written character is kinda stupid, IMO.
I love this quote: "Art disturbs the comfortable and comforts the disturbed." I am only paraphrasing it here, but you get the gist.
Writers, in general. So many wonderful stories and each story is deeply human.
Too many tourist traps. It’s annoying.
Kashmir.
I love how a lot of anti-wokes think that if there is any mention of lgbt terms, the game is just woke.
Unemployed. I was a software dev, but it has been hard for me to get a job for the last 1.5 years. It sucks. Currently working to set up a small horticulture farm.
Nvidia GT 9400.
I am 34. Close to 40, but not there yet.