cross-posted from: https://pawb.social/post/28223553

OpenAI launched ChatGPT Agent on Thursday, its latest effort in the industry-wide pursuit to turn AI into a profitable enterprise—not just one that eats investors’ billions. In its announcement blog, OpenAI says its Agent “can now do work for you using its own computer,” but CEO Sam Altman warns that the rollout presents unpredictable risks.

[…]

OpenAI research lead Lisa Fulford told Wired that she used Agent to order “a lot of cupcakes,” which took the tool about an hour, because she was very specific about the cupcakes.

  • kescusay@lemmy.world (mod) · +2 · 1 hour ago

    Locking this post because people are getting downright vicious to each other in the comments.

  • Angelusz@lemmy.world · +2/-2 · 1 hour ago

    You all like to be so negative about this because you’re scared. Truth is, he’s making the right move. Another company would have released the same, but with different ethics in mind. He also makes sure to note that people shouldn’t trust it with anything important.

    We all know that not everyone reads manuals and uses tech responsibly. That doesn’t stop time, nor progression. Best that a non-profit company with at least somewhat positive ethics comes first with stuff like this, instead of for-profit capitalist leeches.

    Say what you want; there are arguments against OpenAI, for sure. But I’m not wrong.

  • xxce2AAb@feddit.dk · +51/-3 · 18 hours ago

    So it takes ChatGPT 10 minutes to an hour of servertime and the energy equivalent of a tank of gas or two to complete a simple task the user could have done in thirty seconds using their 40W brainmeats and a couple of pudgy fingers. That’s just great. Good stuff, Altman. /s

    • kadu@lemmy.world · +7/-1 · 13 hours ago

      But it will get better we promise please use AI. I swear you’re going to love it we are going to replace all workers with agents, just use it, use AI. Please use AI. We are going to put AI into your taskbar use it. Just use AI. It will get better, might as well use AI now. Use AI now.

    • foggy@lemmy.world · +11/-21 · edited · 16 hours ago

      You’re not wrong today. But this is exactly the basis of the critique of computers in the 50s. And you probably created this post using a mobile Internet connected computer that fits in your pocket.

      • SippyCup@feddit.nl · +12/-2 · 13 hours ago

        The first computers were astonishingly faster and more accurate than human calculation.

        • foggy@lemmy.world · +3/-8 · edited · 11 hours ago

          Not for negative numbers or floating point … Just read the list, man. No, they were not. You’re incorrect here.

          They were faster. Not more accurate.

          Just like AI.

          • SippyCup@feddit.nl · +8/-2 · 10 hours ago

            Just to be clear, you’re saying that ENIAC was just as prone to mathematical error as a guy doing long division on paper would have been?

            • foggy@lemmy.world · +1/-1 · edited · 3 hours ago

              Stop building straw men.

              I never claimed ENIAC was as error prone as a human; I cited its specific technical limits to refute your oversimplification.

              You’re dodging my actual point by moving the goalposts. My whole point has been pretty clearly stated that today’s complaints about AI’s inefficiency sound exactly like the old complaints about the first computers.

              If you’re going to keep employing logical fallacies and moving goalposts to argue with me and waste my time, I’m just going to block you.

      • Bizzle@lemmy.world · +2/-2 · 6 hours ago

        I actually wrote this comment as an HTTP request on an index card and mailed it to the server admin who added it to the disk with a magnifying glass and a magnetized needle

      • foggy@lemmy.world · +10/-16 · edited · 15 hours ago

        Okay, down vote away. Lemmy has such an ignorant hate boner against AI.

        Computers were fucking trash in the 50s. Critics all said the same shit people say about AI today: computers are unreliable, create more problems than they solve, are ham-fisted solutions to problems that require human interaction, etc. Here are the HUGE problems computers had that we solved before the 70s:

        1. Signed Number Representation

        Problem: No standard way to represent negative numbers in binary.

        Solution: Two’s complement became the standard.

        2. Error Detection & Correction

        Problem: Bit errors from unreliable hardware.

        Solution: Hamming codes, CRC, and other ECC methods.

        3. Floating Point Arithmetic

        Problem: Inconsistent and error-prone real number math.

        Solution: IEEE 754 standardized floating-point formats and behavior.

        4. Instruction Set Standardization

        Problem: Each computer had its own incompatible instruction set.

        Solution: Standardized ISAs like x86 and ARM became dominant.

        5. Memory Access and Management

        Problem: Memory was slow, small, and expensive.

        Solution: Virtual memory, caching, and paging systems.

        6. Efficient Algorithms

        Problem: Basic operations like sorting were inefficient.

        Solution: Research produced efficient algorithms (e.g., Quicksort, Dijkstra’s).

        7. Circuit Logic Design

        Problem: No formal approach to designing logic circuits.

        Solution: Boolean algebra, Karnaugh maps, and FSMs standardized design.

        8. Program Control Flow

        Problem: Programs used unstructured jumps and were hard to follow.

        Solution: Structured programming and control constructs (if, while, etc.).

        9. Character Encoding

        Problem: No standard way to represent letters or symbols.

        Solution: ASCII and later Unicode standardized text encoding.

        10. Programming Languages and Compilation

        Problem: Code was written in raw machine or assembly code.

        Solution: High-level languages and compilers made programming more accessible.

        It’s just ignorant to act like any of the problems we face with AI won’t be sorted, just as they were with computers.
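        The first and third of those fixes are concrete enough to demo in a few lines of Python (a minimal sketch; `twos_complement` here is an illustrative helper, not a standard-library function):

```python
# Two's complement: a negative value v in n bits is stored as the
# unsigned pattern v + 2**n, letting subtraction reuse the same
# adder hardware as addition.
def twos_complement(value: int, bits: int) -> int:
    """Return the unsigned bit pattern encoding `value` in two's complement."""
    return (value + (1 << bits)) % (1 << bits)

print(format(twos_complement(-5, 8), "08b"))  # 11111011

# IEEE 754 binary floats cannot represent 0.1 exactly, which is the
# kind of inconsistency the standard had to pin down.
print(0.1 + 0.2 == 0.3)  # False
print(0.1 + 0.2)         # 0.30000000000000004
```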

        • LowtierComputer@lemmy.world · +2 · 9 hours ago

          A counter as I somewhat agree with you. Computers in that period weren’t purchased and used by every company under the sun. It was a specialized system, mostly used by universities and research.

          AI is being shoved into every possible orifice of modern society.

        • Rhaedas@fedia.io · +9 · 15 hours ago

          I agree on the point of solving a problem; it’s just a matter of time, skill, and some luck. The biggest problem I see with AI right now is that it’s marketed as something it’s not, which leads to a lot of the issues we have with “AI”, a.k.a. LLMs, put in places they shouldn’t be. Surprisingly, they do manage pretty well a lot of the time, but when they fail it’s really bad. I.e., AI as sold is a remarkable illusion that, wow, everyone has bought into even knowing full well it’s not near perfect.

          The only thing that will “fix” current AI is true AGI development that would demonstrate the huge difference. AI/LLMs might be part of the path there, I don’t know. It’s not the real solution though, no matter how many small countries worth of energy we burn to generate answers.

          I say all this as an active casual experimenter with local LLMs. What they can do, and how they do it is amazing, but I also know what I have and it’s not what I call AI, that term has been tainted again by marketers trying to cash in on ignorance.

          • foggy@lemmy.world · +5/-1 · 14 hours ago

            What I am saying is computers were also marketed as something they were not (yet) and eventually became.

            And so, history repeats itself.

        • MajorasMaskForever@lemmy.world · +5/-3 · 11 hours ago

          I’m genuinely curious, how often does spouting off random bullshit work for you? Nothing you listed backs up your argument that the problems around AI are a result of its infancy and first-cut implementations.

          Also, half of what you say is either untrue or disingenuous as all hell. “Programs used unstructured jumps and were hard to follow”? What the fuck are you talking about? Please, find me a computer that didn’t use something like a branch statement and didn’t go in numerical sequence of instructions. I’ll wait while you learn that this so-called “Instruction Set Standardization” of yours doesn’t exist.

          • 7toed@midwest.social · +2/-2 · 6 hours ago

            Of course the AI defender uses AI to argue, because they don’t need to understand shit if their AI girlfriend takes enough time and energy from their naysayers

        • SippyCup@feddit.nl · +7/-5 · 13 hours ago

          Did you at any point in this raving lunacy of a rant stop to think that maybe, just maybe the reason people hate AI is because it’s bad?

            • SippyCup@feddit.nl · +2/-3 · 10 hours ago

              It’s like hating a shitty Black and Decker oscillating clipper, when it breaks, randomly cuts your thumb off, or fails to clip the weeniest of leaves from your hedge, when a pair of manual clippers work just fine.

              If it’s a tool, it’s a bad tool being marketed as the best and only tool you’ll ever need again.

          • MajorasMaskForever@lemmy.world · +2/-3 · 11 hours ago

            raving lunacy of a rant

            Hey now, words have meaning. Lunacy implies there’s a brain there that can be in the state of “insane”. That entire thing was probably shit out by a LLM which is why it makes no logical sense

          • foggy@lemmy.world · +4/-4 · 11 hours ago

            Did you think that maybe, just maybe, people referred to computation as the mark of the beast in the 50s and associated it with satanism?

            Cool, cool.

            • SippyCup@feddit.nl · +1/-2 · 9 hours ago

              That is almost the most unhinged thing you’ve said today. Almost.

        • altkey@lemmy.dbzer0.com · +3 · 15 hours ago

          In your opinion, what would LLM usage look like in thirty years? Would its inefficiency be solved somehow? Would its generalist approach (even conditioned, e.g. a culinary LLM trained on recipes) become better than existing specialized tools? Would LLMs cease to be the ‘natural’ playground of big corporations alone, given that no private citizen can train a comparable model? Would they still persist as an unpredictable black box? Would new professions arrive and stay, dedicated to being AI operators, e.g. forming correct text queries to the LLM, designing them, and probably even getting patents on them?

          • foggy@lemmy.world · +3/-1 · 14 hours ago

            Lots of questions. Any of which I could only provide a very opinionated answer on. But to answer the bulk of your response here, I think we look to sociologists to predict the future of AI integrating into the division of labor.

            Basically, the division of labor will become more organic and complex, less rigid and mechanical.

            (i.e. nobody was paying bills by walking the neighborhood dogs in 1920. As technology increases, the division of labor becomes more organic/less mechanical.)

            So with this I say that “Software Developer” is not a job in the future, but that statement carries more weight than it should. The software developer of today will be invaluable as a technician working with AI. In this example, “software developer” is a mechanical division of labor, where something in the future like “Development Strategist” would be a more organic division of labor. As to what that looks like, your guess is as good as mine.

          • foggy@lemmy.world · +3/-3 · 11 hours ago

            Cool, I am a well decorated expert in my field so hate all you want.

              • foggy@lemmy.world · +2/-1 · edited · 5 hours ago

                Yeah, so now no one will take you seriously. Ad hominems are a bad look, kiddo.

                I’m finally just gonna block you now. I employ a 2-strike rule on Lemmy.

                Peace out girl scout.

  • Krauerking@lemy.lol · +6 · 13 hours ago

    I read this article name out loud and the person I read it to literally said
    “Oh, is this The Onion?”