• ignirtoq@fedia.io · 54 points · 6 days ago

    The open availability of cutting-edge models creates a multiplier effect, enabling startups, researchers, and developers to build upon sophisticated AI technology without massive capital expenditure. This has accelerated China’s AI capabilities at a pace that has shocked Western observers.

    Didn’t a Google engineer put out a white paper about this around the time Facebook’s original LLM weights leaked? They compared the rate of development of corporate AI groups to the open source community and found there was no possible way the corporate model could keep up if there were even a small investment in the open development model. The open source community was solving in weeks open problems the big companies couldn’t solve in years. I guess China was paying attention.

    • Sl00k@programming.dev · 16 points · 6 days ago

      China “open sources” a lot of its technology. They treat it as a form of competition: we’ll show you how to do x, you show us how to do y, and whoever is better at both wins out. There are a lot of short videos on how BYD taught other Chinese EV manufacturers, and even Ford, how its automated manufacturing plants work. The end result is that everything becomes a highly optimized process. Glad to see they’re also taking this approach to open-sourcing AI development.

      This is also a reason why there’s such a huge cultural clash with the US over IP theft.

  • hperrin@lemmy.ca · 11 points · 5 days ago

    Turns out when you build your entire business on copyright infringement, (a) it’s easy to steal your business, and (b) you have no recourse when someone does.

  • Creative Computerist@lemmings.world · 17 points · 6 days ago

    Sometimes I’m happy to be able to say I’m not surprised by a piece of news, and for once that isn’t in a political-terror/economic-destruction/environmental-eradication way.

  • Linktank@lemmy.today · 10 points · 6 days ago

    Okay, can somebody who knows about this stuff please explain what the hell a “token per second” means?

    • IndeterminateName@beehaw.org · 22 points · 6 days ago

      A token is a bit like a syllable when you’re talking about text-based responses. 20 tokens a second is faster than most people can read the output, so that’s sufficient for a real-time-feeling “chat”.
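
      As a rough back-of-the-envelope sketch (the 0.75 words-per-token ratio and 250 wpm reading speed below are just common ballpark figures, not exact numbers for any particular model or tokenizer):

      ```python
      # Back-of-the-envelope: convert a generation rate in tokens/second into
      # words per minute and compare it with a typical silent reading speed.
      WORDS_PER_TOKEN = 0.75      # rough average for English text; varies by tokenizer
      READING_SPEED_WPM = 250     # ballpark adult silent reading speed

      def tokens_per_sec_to_wpm(tokens_per_sec: float) -> float:
          """Approximate words per minute produced at a given token rate."""
          return tokens_per_sec * WORDS_PER_TOKEN * 60

      rate = 20  # tokens per second
      wpm = tokens_per_sec_to_wpm(rate)
      print(f"{rate} tok/s ≈ {wpm:.0f} words/min, "
            f"about {wpm / READING_SPEED_WPM:.1f}x a typical reading speed")
      ```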

      • SteevyT@beehaw.org · 2 points · 5 days ago

        Huh, yeah, that actually is above my reading speed, assuming 1 token = 1 word. Although I’ve found that anything above 100 words per minute, while slow to read, feels real-time to me, since that’s about the absolute top end of what most people type.

    • IrritableOcelot@beehaw.org · 10 points · 6 days ago

      Not somebody who knows a lot about this stuff, as I’m a bit of an AI Luddite, but I know just enough to answer this!

      “Tokens” are essentially just a unit of work: instead of operating directly on the user’s raw text, the model first “tokenizes” the input, breaking it down into small chunks (whole words or word pieces) that the actual ML model can process more efficiently. The model then spits out a token or series of tokens as a response, which are expanded back into text, or whatever the output of the model is.

      I think tokens are used as the benchmark because most models use them, and use them in a similar way, so they’re the lowest-level common unit of work for comparing speed across devices and models.
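
      If you want to see it concretely, here’s a minimal sketch using tiktoken (OpenAI’s tokenizer library) purely as an illustration; other models ship their own vocabularies, but the text → integer IDs → text round trip is the same idea:

      ```python
      # Illustration of tokenization: text is split into tokens (often word pieces),
      # each mapped to an integer ID from a fixed vocabulary. "Tokens per second"
      # measures how fast a model emits these IDs during generation.
      import tiktoken

      enc = tiktoken.get_encoding("cl100k_base")   # one particular vocabulary

      text = "Open models are spreading fast."
      token_ids = enc.encode(text)                 # text -> list of integer token IDs
      print(token_ids)
      print(len(token_ids), "tokens for", len(text.split()), "words")

      # Each ID maps back to a chunk of text, often a piece of a word.
      print([enc.decode([tid]) for tid in token_ids])

      # Decoding the whole sequence reproduces the original string.
      assert enc.decode(token_ids) == text
      ```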

  • Flax@feddit.uk · 5 points · 6 days ago

    Of course, the Chinese flag has to be in the article thumbnail.