• Eccitaze@yiffit.net · 1 day ago

    There’s a pretty big difference between ChatGPT and the science/medicine AIs.

    And keep in mind that for LLMs and other chatbots, it’s not that they aren’t useful at all but that they aren’t useful enough to justify their costs. Microsoft is struggling to get significant uptake for Copilot add-ons in Microsoft 365, and this is while AI companies are still in their “sell below cost and light VC money on fire to survive long enough to gain market share” phase. What happens when the VC money dries up and AI companies have to double their prices (or more) to bring in enough revenue to cover their costs?

    • obbeel@lemmy.eco.br · 9 hours ago

      I understand that it makes less sense to spend on model size if it isn’t giving returns in performance, but why is so much money being spent on larger LLMs, then?

    • theherk@lemmy.world · 1 day ago

      Nothing to argue with there. I agree. Many companies will go out of business. Fortunately, we’ll still have the Llama 3s and Mistrals lying around that I can run locally. On the other hand, cost justification is a difficult equation with many variables, so maybe in some cases it is, or will be, worth the cost. I’m just saying there is some merit.
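
      For anyone wondering what “run locally” looks like in practice, here’s a minimal sketch using the llama-cpp-python bindings. The model path and prompt are placeholders (it assumes you’ve already downloaded a quantized GGUF build of one of those models), so treat it as an illustration rather than a ready-made setup.

      ```python
      # Minimal local-inference sketch with llama-cpp-python.
      # Assumes a quantized GGUF model file has already been downloaded;
      # the path below is a placeholder, not a real file on your machine.
      from llama_cpp import Llama

      llm = Llama(
          model_path="./models/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",
          n_ctx=2048,  # context window size
      )

      result = llm(
          "Explain in one sentence why running an LLM locally avoids per-token API costs.",
          max_tokens=64,
      )
      print(result["choices"][0]["text"])
      ```

      The point being: once the weights are on your disk, the marginal cost of a query is just your own hardware and electricity, which is exactly why those models will stick around even if the hosted services reprice.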