• Grandwolf319@sh.itjust.works

    Pretty sure Valve has already realized the correct way to be a tech monopoly is to provide a good user experience.

    • Jesus@lemmy.world

      I remember this being some sort of Apple meme at some point. Hence the gumdrop iMac.

    • wise_pancake@lemmy.ca

      The model weights and research paper are open, which is the accepted terminology nowadays.

      It would be nice to have the training corpus and the RLHF data too.

      • kryptonidas@lemmings.world

        The training corpus of these large models seems to be “the internet, YOLO”. It’s fine for them to download every book and paper under the sun, but if a normal person does it…

        Believe it or not:

      • Stovetop@lemmy.world

        A lot of other AI models can say the same, though. Facebook’s is. Xitter’s is. I still wouldn’t trust those at all, or any other model that publishes no reproducible code.

      • ayyy@sh.itjust.works

        I wouldn’t call it the accepted terminology at all. Just because some rich assholes try to will it into existence doesn’t mean we have to accept it.

      • legolas@fedit.pl

        Well, if they really are open and the methodology can be replicated, we are surely about to see a crazy number of DeepSeek competitors, because imagine how many US companies in the AI and finance sectors possess an even larger number of chips than the Chinese claimed to have trained their model on.

        Although the question arises: if the methodology is so novel, why would these folks make it open source? Why would they share the results of years of work with the public and lose their edge over the competition? I don’t understand.

        Can somebody who actually knows how to read a machine learning codebase tell us something about DeepSeek after reading their code?

      • sem@lemmy.blahaj.zone

        They are trying to make it accepted, but it’s still contested. Unless the training data is provided, it’s not really open.

      • The Octonaut@mander.xyz

        “the accepted terminology”

        No, it isn’t. The OSI specifically requires that the training data be available, or at the very least that the source and fee for the data be given, so that a user could get the same copy themselves. Because that’s the purpose of something being “open source”. Open source doesn’t just mean free to download and use.

        https://opensource.org/ai/open-source-ai-definition

        Data Information: Sufficiently detailed information about the data used to train the system so that a skilled person can build a substantially equivalent system. Data Information shall be made available under OSI-approved terms.

        In particular, this must include: (1) the complete description of all data used for training, including (if used) of unshareable data, disclosing the provenance of the data, its scope and characteristics, how the data was obtained and selected, the labeling procedures, and data processing and filtering methodologies; (2) a listing of all publicly available training data and where to obtain it; and (3) a listing of all training data obtainable from third parties and where to obtain it, including for fee.

        As per their paper, DeepSeek R1 required a very specific training data set, because when they tried the same technique with less curated data, they got “R1-Zero”, which basically ran fast and spat out a gibberish salad of English, Chinese and Python.

        People are calling DeepSeek open source purely because they called themselves open source, but they seem to just be another free-to-download, black-box model. The best comparison is to Meta’s LLaMA, which weirdly nobody has decided is going to up-end the tech industry.

        In reality, “open source” is terrible terminology here, a very loose fit for what it’s basically trying to say: that anyone could recreate or modify the model because they have the exact ‘recipe’.

      • maplebar@lemmy.world

        “the accepted terminology nowadays”

        Let’s just redefine existing concepts to mean things that are more palatable to corporate control, why don’t we?

        If you don’t have the ability to build it yourself, it’s not open source. DeepSeek is “freeware” at best. And that’s to say nothing of what the data is, where it comes from, and the legal ramifications of using it.

  • Dave@lemmy.world

    DeepSeek shook the AI world because it’s cheaper, not because it’s open source.

    And it’s not really open source either. Sure, the weights are open, but the training materials aren’t. Good luck looking at the weights and figuring things out.

    • Hackworth@lemmy.world

      True, but they also released a paper that detailed their training methods. Is the paper sufficiently detailed such that others could reproduce those methods? Beats me.

  • meowmeowbeanz@sh.itjust.works

    Wall Street’s panic over DeepSeek is peak clown logic—like watching a room full of goldfish debate quantum physics. Closed ecosystems crumble because they’re built on the delusion that scarcity breeds value, while open source turns scarcity into oxygen. Every dollar spent hoarding GPUs for proprietary models is a dollar wasted on reinventing wheels that the community already gave away for free.

    The Docker parallel is obvious to anyone who remembers when virtualization stopped being a luxury and became a utility. DeepSeek didn’t “disrupt” anything—it just reminded us that innovation isn’t about who owns the biggest sandbox, but who lets kids build castles without charging admission.

    Governments and corporations keep playing chess with AI like it’s a Cold War relic, but the board’s already on fire. Open source isn’t a strategy—it’s gravity. You don’t negotiate with gravity. You adapt or splat.

    Cheap reasoning models won’t kill demand for compute. They’ll turn AI into plumbing. And when’s the last time you heard someone argue over who owns the best pipe?

  • legolas@fedit.pl

    Apparently DeepSeek is lying: they were collecting thousands of NVIDIA chips against the US embargo, and it’s not about the algorithm. The model’s good results come just from sheer chip volume and energy used. That’s the story I’ve heard, and honestly it sounds legit.

    Not sure if this question has been answered though: if it’s open sourced, can’t we see what algorithms they used to train it? If we could, then we would know the answer. I assume we can’t, but if we can’t, then what’s so cool about it being open source, on the other hand? What parts of the code are valuable there besides the algorithms?

    • Pennomi@lemmy.world

      The open paper they published details the algorithms and techniques used to train it, and it’s been replicated by researchers already.

      • legolas@fedit.pl

        So are these techniques really so novel and groundbreaking? Will we now see a burst of DeepSeek-like models everywhere? Because that’s what absolutely should happen if the whole story is true. I would assume there are dozens or even hundreds of companies in the US, especially in the finance sector and AI-research-focused firms, that possess a similar, surely even larger, number of chips than the Chinese folks claimed to have trained their model on.

        • ArchRecord@lemm.ee

          “So are these techniques really so novel and groundbreaking?”

          The general concept, no. (it’s reinforcement learning, something that’s existed for ages)

          The actual implementation, yes. (Training the model to think inside a separate XML section, then reinforcing the highest-quality results from previous iterations with reinforcement learning, which naturally pushes responses toward the highest-rewarded outputs.) Most other companies just didn’t assume this would work as well as throwing more data at the problem.

          This is actually how people believe some of OpenAI’s newest models were developed, but the difference is that OpenAI was under the impression that more data would be necessary for the improvements, and thus had to continue training the entire model with additional new information; they also assumed that directly training in thinking time was the best route, instead of reaching it via reinforcement learning. DeepSeek decided to simply scrap that part altogether and go solely for reinforcement learning.
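
          Roughly, a toy sketch of that reward idea (the tag format, reward values, and helper names here are assumptions for illustration, not DeepSeek’s actual code): completions that keep their reasoning inside a think section and land on the right answer score highest, and the RL update then pushes the policy toward them.

          ```python
          import re

          # Toy illustration of the reward shaping described above; the tag
          # format and reward values are assumptions, not DeepSeek's code.
          THINK_RE = re.compile(r"<think>(.*?)</think>\s*(.*)", re.DOTALL)

          def reward(completion: str, reference_answer: str) -> float:
              """Reward well-formed reasoning plus a correct final answer."""
              m = THINK_RE.fullmatch(completion.strip())
              if not m:
                  return 0.0  # no separate think section at all
              _, answer = m.groups()
              return 1.0 if answer.strip() == reference_answer else 0.1

          def best_sample(completions: list[str], reference_answer: str) -> str:
              # An RL step (e.g. GRPO) would raise the probability of
              # high-reward samples like the one returned here.
              return max(completions, key=lambda c: reward(c, reference_answer))

          samples = [
              "<think>2 + 2 = 4.</think> 4",  # well-formed and correct -> 1.0
              "4",                            # correct but unformatted -> 0.0
              "<think>maybe 5?</think> 5",    # well-formed but wrong   -> 0.1
          ]
          print(best_sample(samples, "4"))
          ```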

          “Will we now see a burst of DeepSeek-like models everywhere?”

          Probably, yes. Companies and researchers are already beginning to use this same methodology. Here’s a writeup about S1, a model that performs up to 27% better than OpenAI’s best model. S1 used supervised fine-tuning and did something so basic that people hadn’t previously thought to try it: just making the model think longer by modifying the terminating XML tags.

          This was released days after R1, based on R1’s initial premise, and creates better quality responses. Oh, and of course, it cost $6 to train.
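
          For reference, a minimal sketch of that trick (the generate callback and tag below are hypothetical stand-ins, not the actual S1 code): whenever the model tries to close its reasoning section, strip the closing tag and append a word like “Wait” so it keeps thinking.

          ```python
          END_THINK = "</think>"  # assumed end-of-thinking delimiter

          def generate_with_budget(generate, prompt: str, extensions: int = 2) -> str:
              """Force longer reasoning: each time the model closes its think
              section early, remove the tag and nudge it to continue."""
              text = prompt
              for _ in range(extensions):
                  out = generate(text)
                  if END_THINK not in out:  # model never closed; just return
                      return out
                  head, _, _ = out.partition(END_THINK)
                  text = head + " Wait,"    # suppress the tag, keep thinking
              return generate(text)         # finally let it finish normally

          # Fake one-line "model" so the sketch runs end to end:
          def fake_generate(ctx: str) -> str:
              return ctx + " ...reasoning...</think> final answer"

          print(generate_with_budget(fake_generate, "<think>Problem:"))
          ```

          The string surgery above is only to show the control flow; a real implementation would do the same thing with stopping criteria and forced tokens in the decoder loop.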

          So yes, I think it’s highly probable that we see a burst of new models, or at least improvements to existing ones. (Nobody has a very good reason to make a whole new model of a different name/type when they can simply improve the one they’re already using and have implemented)

    • ayyy@sh.itjust.works

      It’s time for you to do some serious self-reflection about the inherent biases you believe about ~~Asians~~ Chinese people.

      • legolas@fedit.pl

        WTF dude. You mentioned Asia. I love Asians. Asia is vast. There are many countries, not just China, bro. I think you need to do these reflections. I’m talking about the very specific case of Chinese DeepSeek devs potentially lying about the chips. The assumptions and generalizations you are thinking of are crazy.

        • ayyy@sh.itjust.works

          And how do your feelings stand up to the fact that independent researchers find the paper to be reproducible?

          • legolas@fedit.pl

            Well, maybe. Apparently some folks are already doing that, but it’s not done yet. Let’s wait for the results. If everything is legit, we should have not one but plenty of similar and better models in the near future. If the Chinese did this with 100 chips, imagine what can be done with the 100,000 chips that NVIDIA can sell to a US company.