• TachyonTele@lemm.ee · +66/-4 · 2 months ago

    You can’t turn a spicy autocorrect into anything even remotely close to Jarvis.

    • Aatube@kbin.melroy.org · +13/-9 · 2 months ago

      It’s not autocorrect, it’s a text predictor. So I’d say you could definitely get close to JARVIS, especially when we don’t even know why it works yet.

      • Zangoose@lemmy.world · +19/-4 · 2 months ago

        You’re just being pedantic. Most autocorrects/keyboard autocompletes use text predictors to function. Look at the three suggestions on your phone keyboard whenever you type: that’s also a text predictor (granted, a much simpler one).
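
        To make that concrete, here’s a toy bigram predictor in Python (purely a hypothetical sketch, nowhere near a production keyboard model) that does the same job in miniature:

            from collections import Counter, defaultdict

            # Count which words follow which in a tiny corpus, then suggest the
            # most frequent followers: the keyboard-suggestion idea in miniature.
            corpus = "the cat sat on the mat and the cat ate the fish by the rug"
            words = corpus.split()

            following = defaultdict(Counter)
            for prev, nxt in zip(words, words[1:]):
                following[prev][nxt] += 1

            def suggest(word, k=3):
                """Return up to k words most often seen after `word`."""
                return [w for w, _ in following[word].most_common(k)]

            print(suggest("the"))  # -> ['cat', 'mat', 'fish']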

        Text predictors (obviously) predict text, and as such don’t have any actual understanding of the text they are outputting. An AI that doesn’t understand its own outputs isn’t going to achieve anything close to a sci-fi depiction of an AI assistant.

        It’s also not like the devs are confused about why LLMs work. If you had every publicly uploaded sentence since the creation of the Internet as a training reference, I would hope the resulting model would be a pretty good autocomplete, even to the point of being able to answer some questions.

        • Aatube@kbin.melroy.org · +4/-11 · 2 months ago

          Yes, autocorrect may use text predictors. No, that does not make text predictors “spicy autocorrect”. The denotation may be correct, but the connotation isn’t.

          > Text predictors (obviously) predict text, and as such don’t have any actual understanding of the text they are outputting. An AI that doesn’t understand its own outputs isn’t going to achieve anything close to a sci-fi depiction of an AI assistant.

          There’s a large philosophical debate about whether we actually know what we’re thinking, but I’m not going to get into that. All I’ll point to is the Chinese room thought experiment, which suggests that perhaps an AI doesn’t need to understand things to show enough apparent intelligence for most functions.

          > It’s also not like the devs are confused about why LLMs work.

          Yes, they are. All they know is that if you train a text predictor long enough, at some point it hits a usability plateau far below targets, and then one day it suddenly surpasses that plateau for no apparent reason.

  • wildncrazyguy138@fedia.io · +21 · 2 months ago

    Not going to go into details due to confidentiality, but I was recently involved in an initiative to use AI to scan education databases and identify students who may be at risk of dropping out, with the goal of providing an early safety net for these folks and raising the schools’ retention rates, leading to better outcomes overall.

    So yes, AI can absolutely be used for good.

    • PetteriPano@lemmy.world · +9 · 2 months ago

      I trained an ANN back in 2012 to trade bitcoin for me on mtgox. It performed quite a bit better than just HODLing until mtgox happened.

      Now I live in a van down by the river.

    • UnrepentantAlgebra@lemmy.world · +2 · 2 months ago

      I assume your project wasn’t based on ChatGPT? It feels like a lot of the AI hate is directed at ChatGPT and its current hype wave.

  • Sundial@lemm.ee · +26/-7 · 2 months ago

    It’s a capitalist invention and, therefore, will be used for whatever capitalists deem it profitable to be. Once the money for AI home assistants starts rolling in, then you’ll see it adopted for that purpose.

    • intensely_human@lemm.ee · +6/-14 · 2 months ago

      It’s a free market invention and, therefore, will be used by whatever a free market decides it should be used for.

      • jbrains@sh.itjust.works · +20/-2 · 2 months ago

        The people who already have the money have, on average, orders of magnitude more freedom to decide on and pursue opportunities.

        Free market inventions do not guarantee persistent and open access.

      • 10_0@lemmy.ml · +2 · 2 months ago

        I think the government should regulate the AI market and create standards that prevent abuse by bad actors (such as image generators not being able to make CP, etc.).

      • otp@sh.itjust.works · +3/-2 · 2 months ago

        > whatever a free market decides it should be used for

        People say that AIs don’t “think” or “decide” things, but I think it’s better to personify an AI/LLM than “a free market”, lol

  • Lemvi@lemmy.sdf.org · +14/-1 · 2 months ago

    Training good models requires lots of training data and computational resources, so the only ones who can afford to train them are big corporations with access to both. And the only objective they have is to increase their profit.

    • intensely_human@lemm.ee · +4/-2 · 2 months ago

      Well, as long as we ensure training data needs to be paid for and can’t just be scraped from the web, we will ensure that only large corporations with deep pockets can train models.

      That is the reason there is a big “grassroots” push to stop AI from training on all our web content: it’s a play to ensure no small players can make AI, and that AI is dominated by a few big players.

  • node_user@feddit.uk · +11 · 2 months ago

    Home Assistant, Whisper, Piper, openWakeWord (set to “jarvis”), Ollama.

    But it’s no Iron Man-level JARVIS.
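
    For the Ollama piece, the glue can be as small as this (a rough sketch: it assumes Ollama is running locally on its default port with a model already pulled, and “llama3” is just an example name):

        import requests

        def ask(prompt: str) -> str:
            # Ollama's local REST API; stream=False returns one JSON blob.
            r = requests.post(
                "http://localhost:11434/api/generate",
                json={"model": "llama3", "prompt": prompt, "stream": False},
                timeout=120,
            )
            r.raise_for_status()
            return r.json()["response"]

        print(ask("Good morning. Give me a one-line status report, JARVIS."))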

  • BCsven@lemmy.ca · +9/-1 · 2 months ago

    Lots of technologies could be used to improve things, but corporations look only at profit, not at improving the human condition. Just as Ford patented a system to listen to you in the car and serve you better ads, AI will trend toward driving more ad sales, and models will always be trained to lean that way. That is why open source matters so much: it’s unpaid or low-paid people doing cool stuff to solve actual problems, innovating toward the goal of solving, not of monetizing. Windows 11 is ad bloatware; with all the tech and money MS could leverage, they instead built an ad OS, which they are now backporting to Windows 10.

    Meanwhile, open source devs built a Linux distro that turned my 13-year-old laptop (which choked and died running W10, though it was OK on W7) into a peppy machine that handles web streaming, Zoom calls, and opening files as fast as a brand-new laptop. When money is not the end goal, lots of good things happen.

  • j4k3@lemmy.world · +6/-1 · 2 months ago

    We are at a phase where AI is like the first microprocessors; think Apple II or Commodore 64 era hardware. Those chips showed potential, but they were only truly useful with lots of peripheral systems and an enormous amount of additional complexity. Most of the time, advanced systems beyond the cheap consumer toys of that era used several of the processors and other systems together.

    Similarly, the AI we have access to now is capable but narrow in scope. Making it useful requires a ton of specialized peripherals. These are called RAG and agents. RAG (retrieval-augmented generation) retrieves relevant information from a database and feeds it to the model alongside the prompt. Agents are collections of multiple AIs that tackle a given task with different jobs, complementing each other.

    It is currently possible to make a highly specialized AI agent for a niche task and have it perform okay within the publicly available, well-documented toolchains, but it is still hard to pull off. Such a system must use information that was already present in the base training; there are then ways to improve access to that information through further training.

    With RAG, it is super difficult to subdivide a reference source into chunks that will allow the AI to find the relevant information in complex ways. Generally this takes a ton of tuning to get it right.
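
    A minimal sketch of the naive approach (assuming the sentence-transformers library; the fixed-size window below is exactly the kind of chunking that needs endless tuning):

        import numpy as np
        from sentence_transformers import SentenceTransformer

        model = SentenceTransformer("all-MiniLM-L6-v2")

        def chunk(text, size=200):
            # Naive fixed-size chunking by word count; real systems need
            # much smarter boundaries (sections, sentences, overlap).
            words = text.split()
            return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

        def top_chunks(query, chunks, k=3):
            # Embed chunks and query, then rank chunks by cosine similarity.
            embs = model.encode(chunks + [query])
            docs, q = embs[:-1], embs[-1]
            scores = docs @ q / (np.linalg.norm(docs, axis=1) * np.linalg.norm(q))
            return [chunks[i] for i in np.argsort(scores)[::-1][:k]]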

    The AI tools available publicly are extremely oversimplified to make them accessible. All are based around the Transformers library. Go read the first page of Transformers documentation on Hugging Face’s website. It clearly states that it is only a basic example implementation that prioritizes accessibility over completeness. In truth, if the real complexity of these systems was made the default interface we all see, no one would play with AI at all. Most people, myself included, struggle with sed and complex regular expressions. AI in its present LLM form is basically turning all of human language into a solvable math problem using regular expressions and equations. This is the ultimate nerd battle between English teachers and Math teachers where the math teachers have won the war; all language is now math too.
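
    To see how much is hidden, the quick-start interface the docs lead with really is just a few lines (real Transformers API; the small GPT-2 model here is only an example):

        from transformers import pipeline

        # The accessible front door: this one call hides tokenization,
        # model loading, sampling strategy, and decoding.
        generator = pipeline("text-generation", model="gpt2")
        out = generator("All human language is now a math problem because", max_new_tokens=20)
        print(out[0]["generated_text"])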

    I’ve been trying to learn this stuff for over a year and have barely scratched the surface of what is possible just in the model-loader code that preprocesses the input. There is a ton going on under the surface, and once you get into the weeds, the “errors” are anything but. Models do not hallucinate in the sense most people mean when they see errors; those errors are due to the massive oversimplifications made to keep the models accessible in a general context. The AI alignment problem is real, and models do hallucinate, but the scientific meaning is far more nuanced and specific than the common errors from generalized use.

  • stoy@lemmy.zip · +4 · 2 months ago

    Yes, it would be better, but unless I saw the code, understood it, and verified that it was the code actually running, I would not trust it as much as I would need to trust a system like Jarvis.

    • NorthWestWind@lemmy.world · +2 · 2 months ago

      Unfortunately, the creators of Jarvis don’t understand him either. Jarvis cannot express his frustration to anyone and goes mad.

  • PeepinGoodArgs@reddthat.com · +10/-6 · 2 months ago

    Every answer so far is wrong.

    It can be used for good purposes, though I’m not sure I’d characterize creating a personalized Jarvis as good per se. But, more broadly, capitalist inventions do not need to be used only by capitalists for capitalist ends.

    • Snot Flickerman@lemmy.blahaj.zone · +10/-1 · edited · 2 months ago

      > Every answer so far is wrong.

      I wouldn’t say wrong so much as leaving out the detail that LLMs aren’t evil, and that open source LLMs are really what the world should be aiming for, if anything. Like any tool, they can be used as weapons and for ill purposes. I can use a hammer to build a house as much as I can use it to cave in someone’s skull.

      But even in the open source world, LLMs have not led to a massive increase in new tools, in bugs found, or in open source productivity: all things LLMs promise but have yet to deliver on. Given how much energy they use, we ought to be asking whether it is truly beneficial to burn so much energy on something that has yet to prove it brings the promised increase in open source productivity.

  • intensely_human@lemm.ee · +4 · 2 months ago

    The best way to ensure AI is used for good purposes is to make sure AI is in as many hands as possible. That was the original idea behind OpenAI (hence the name), which was supposed to be a nonprofit pushing open-source AI into the world to ensure a multipolar AI ecosystem.

    That failed badly.

  • piyuv@lemmy.world · +1 · 2 months ago

    JARVIS is AI. LLMs are superpowered autocorrect. We don’t have anything close to AI yet.

  • Snot Flickerman@lemmy.blahaj.zone · +4/-3 · edited · 2 months ago

    Someone’s been watching way too many movies and isn’t familiar yet with how mind-bogglingly stupid “AI” actually is.

    JARVIS can think on its own; it doesn’t need to be told to do anything. LLMs cannot think on their own, they have no intention, and they can only respond to input. They cannot create “thoughts” on their own without being prompted by a human.

    The reason they spout so much BS is that they don’t really think. They cannot tell the difference between truth and fiction, and they will be just as happily confident in their statements whether they are being truthful or lying, because they don’t know the fucking difference.

    We’re fucking worlds away from a JARVIS, man.

    It’s like half the stuff they claim AI does. Take those “AI stores” Amazon had, where you just picked up stuff and walked out with it, and the “AI would intelligently figure out what you bought and apply it to your account.” That “AI” was actually a bunch of low-paid people in third-world countries reviewing videos. It was never fucking AI to begin with, because nothing we have comes close to that fucking capability without human intervention.

  • Naz@sh.itjust.works · +1/-5 · edited · 2 months ago

    It’s obvious that this question was written by a child or someone learning the English language, given your spelling mistakes, grammar, and references. However:

    ELI5:

    The answer is yes, we can have “good AI” like JARVIS, but AI is still early and doesn’t make money for companies.

    Companies make money selling a product, and AI isn’t a product because it isn’t something that belongs to them. So instead they sell the information people give up when they talk to the AI.

    But that doesn’t make enough money to pay the bills for AI, so they charge subscriptions. People who pay the subscriptions want to use the AI “for evil”, as you put it.

    So in the end it’s about “making money” with the AI, and JARVIS does not make them money.

    If you learn a lot about computers, you’ll have your own JARVIS. I have one. It takes dedication, like anything else in life. Good luck with your school project.

    Exhales