• 0 Posts
  • 5 Comments
Joined 3 years ago
Cake day: August 14th, 2023


  • The fact is though the average person is starting to replace their search engine with chatgpt, gemini, grok or whatever

    Yeah, and it makes sense for the average person to do that. Because Google, Bing, etc, have enshittified their search results so badly that the first few pages of results for any question are almost guaranteed to be AI-generated websites anyway. So you can take the answer the AI gives you, or you can click through to an AI generated website, which is just using the AI with extra steps. Or you can commit the extra time and energy to actually get a useful result written by a human being, which is significantly harder than it used to be, because the useful results are hidden behind decades of search engine optimization and the last few years of AI slop.

    None of those options are actually good.

    The ubiquity of LLMs hasn’t made search results better. It’s made people more willing to accept worse results.


  • Yeah, I get that it seems like a fine use for average people doing basic math. The nonzero chance of error could end up not mattering. But it could matter very much, depending on the use case. If you’re asking an LLM the volume of a bucket, it’s not a big deal. If you’re asking an LLM “how many milligrams of this drug is the correct dose for an 80 kg man”, that’s a big fucking deal.
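    And the thing is, that dose calculation is one multiplication. A deterministic tool gets it exactly right on every run. A sketch (the 5 mg/kg rate is a made-up illustrative number, not medical advice):

    ```python
    # Weight-based dosing is plain arithmetic: dose = dose per kg * body weight.
    # The 5 mg/kg rate used below is hypothetical, purely for illustration.
    def dose_mg(mg_per_kg: float, weight_kg: float) -> float:
        return mg_per_kg * weight_kg

    print(dose_mg(5.0, 80.0))  # → 400.0 (mg), exactly, every run
    ```

    There is no distribution of likely answers here to sample from. There is one answer.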

    If people don’t know LLMs can’t be trusted to give the correct answer, they’re not going to realize they need to do the math themselves in important use cases. And that is certainly not something Microsoft and Google are encouraging people to learn.

    Then there’s the efficiency issue - Big Tech spent trillions of dollars developing and training machine learning models, running on processors that perform quadrillions of energy-intensive operations per second, and they’re being marketed to do a job that a 99 cent solar powered calculator from the 1980s can do better.

    God, I just realized tax season is coming up. And after all the layoffs and political firings and general dogebaggery at the American IRS, they’re going to have to deal with people using AI to do their taxes 😆


  • stabby_cicada@lemmy.blahaj.zone to Microblog Memes@lemmy.world · everyone hates AI

    What you’ve given is an example of a problem where an LLM is inherently the wrong tool.

    See, variation is built into LLMs. They’re programmed to evaluate probable responses and select from them on the basis of probability - to simplify ridiculously, if a particular word follows another 90% of the time, then in 90% of the content it generates the LLM will have that word follow the other, and in the other 10% it won’t.

    If you give an LLM the exact same prompt multiple times, you will get multiple different responses. They’ll all be similar responses, but they won’t be exactly the same, because how LLMs generate language is probabilistic and contains randomness.
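    A toy sketch of that sampling step (the probabilities are invented for illustration, standing in for a real model’s next-token distribution):

    ```python
    import random

    # Toy "model": suppose that after the prompt, the token "four" follows
    # 90% of the time and "five" 10% of the time. These are made-up numbers
    # standing in for a real model's next-token probabilities.
    NEXT_TOKEN_PROBS = {"four": 0.9, "five": 0.1}

    def generate(prompt: str, seed: int) -> str:
        rng = random.Random(seed)
        token = rng.choices(
            list(NEXT_TOKEN_PROBS), weights=list(NEXT_TOKEN_PROBS.values())
        )[0]
        return f"{prompt} {token}"

    # Identical prompt, different random draws, different answers:
    outputs = {generate("2 + 2 =", seed=s) for s in range(500)}
    print(outputs)  # both "2 + 2 = four" and "2 + 2 = five" show up
    ```

    Even with the overwhelmingly probable token winning most of the time, the improbable one still gets emitted some of the time. That’s the whole design.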

    (And that is why hallucination is an inherent feature of LLMs and can’t be trained out.)

    But math isn’t language. Math problems have correct answers. When you use a software tool to answer a math problem, you don’t want variation. You want the correct answer every time.

    To solve a math problem, you find the appropriate formula, which is the same every time. Then you plug the numbers into the formula and use a calculator, which always gives the correct result.

    What I’m getting at is, if you use a calculator to do the math problem yourself, and you put in the correct formula, you’ll always get the correct result. If you use an LLM to generate the answer to a math problem, there is always a non-zero chance it will give you the wrong answer.
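    To make the contrast concrete, here’s the bucket example done the deterministic way (treating the bucket as a conical frustum; the measurements are made up):

    ```python
    import math

    # Volume of a conical frustum (roughly, a bucket):
    # V = (pi * h / 3) * (R^2 + R*r + r^2)
    def bucket_volume_m3(r_bottom: float, r_top: float, height: float) -> float:
        return (math.pi * height / 3) * (
            r_top**2 + r_top * r_bottom + r_bottom**2
        )

    # Same inputs, same correct answer, every single call:
    v = bucket_volume_m3(0.10, 0.14, 0.30)   # radii and height in metres
    assert v == bucket_volume_m3(0.10, 0.14, 0.30)
    print(round(v * 1000, 1))  # litres: about 13.7
    ```

    Right formula plus calculator equals right answer, full stop. No sampling, no “usually”.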

    But what if, you might ask, you don’t know the correct formula? What if you’re not good enough at math to calculate the correct answers, even with a calculator? Isn’t this a time when the LLM can be useful, to do something you can’t?

    The problem is, the LLM could be wrong. And if you haven’t looked up the formula yourself, from a reliable source that is not an LLM, you have no way to check the LLM’s work. Which means you can’t trust it for anything important and you have to do the math yourself anyway.

    (This is true for everything an LLM does, but is especially true for math.)

    And if you have looked up the formula yourself, it’s just as easy to use a calculator the first time and skip the LLM.

    Right? This is what I’m getting at. An LLM can do some of the same things a human does, but it’s always going to do them worse, because it isn’t conscious and isn’t reasoning its way to a correct answer; it’s just generating a string of linguistic tokens based on probabilities. And math problems might be the clearest possible example of this.


  • There are things you can do with the Internet that are impossible to do without the Internet. Everything you mentioned is very real harm that the Internet does to humanity and the world - even if you meant it sarcastically - but that harm has to be weighed against the benefits the Internet provides that can’t be replicated by anything else.

    There’s nothing an LLM can do that a human can’t. The only thing LLMs are good at is convincing managers to replace human employees with LLMs. Because even though LLMs do a worse job than any human employee, they’re cheaper and won’t unionize.

    The cost-benefit analysis for society is very different.