Why do people host LLMs at home, when training your own LLM on the same amount of data scraped from the internet will never come anywhere close to the efficiency of just sending a paid prompt to some high-quality official model?

inb4 privacy concerns or a proof of concept

Those are out of the discussion. I want someone to prove their LLM can be as insightful and accurate as a paid one. I don’t care about anything other than the quality of the generated answers.

  • theotherbelow@lemmynsfw.com · 8 hours ago

    100%, you don’t have to train a thing: Ollama uses openly available models. Many of them are decent; the best ones need a lot of RAM/VRAM.
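
    To make that concrete, here's a minimal sketch of talking to a locally hosted model through Ollama's REST API from Python. It assumes Ollama is running locally and a model has already been pulled (e.g. `ollama pull llama3`; the model name here is just an example, not a recommendation):

    ```python
    import requests

    # Ollama serves a REST API on localhost:11434 by default.
    OLLAMA_URL = "http://localhost:11434/api/generate"

    def ask(prompt: str, model: str = "llama3") -> str:
        """Send one prompt to the local Ollama server and return its reply."""
        resp = requests.post(
            OLLAMA_URL,
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,  # local inference can be slow on modest hardware
        )
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        print(ask("Explain quantization in one paragraph."))
    ```

    No training involved at all: you download pretrained weights once and only pay for inference in RAM/VRAM and electricity.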