• Eggyhead@lemmings.world · +3 · 1 hour ago

    It’s the Wild West days of AI, just like the internet in the 90s. Do what you can with it now, because it’ll eventually turn into a marketing platform. You’ll get a handy free AI model that occasionally tries to convince you to buy stuff. The paid premium models will start doing it too.

  • Bakkoda@sh.itjust.works · +11 · 9 hours ago

    Millions of businesses are so innovative they are choosing the same basket to put all their eggs in.

    Capitalism sure is fun. Supply-side economics plus massive deregulation is sure to provide humanity with its salvation.

  • Jimmycakes@lemmy.world · +7 · 8 hours ago

    It’s crazy Google will lose its search dominance and all its money in my lifetime. Android will probably be the only thing left when I die.

  • sandflavoured@lemm.ee · +10 · 9 hours ago

    Remember that you, the reader, don’t have to take part in this. If you don’t like it, don’t use it - tell your friends and family not to use it, and why.

    The only way companies stop this trend is if they see it’s a losing bet.

      • friend_of_satan@lemmy.world · +13 · 14 hours ago

        Rich people at tech companies replace workers with AI, set up a security force that goes after immigrants, surveil the city with a camera network, try to remove the human from the equation, try to upload human consciousness to the cloud, lots of other AI tech dystopian stuff.

  • SoftestSapphic@lemmy.world · +45 · 23 hours ago

    The rich are cashing in our tax dollars to try to automate their control of an enslaved human race.

    They will do anything besides just pay taxes and contribute to society.

    • Taleya@aussie.zone · +5 · 9 hours ago

      It’s not even that

      tech is under the helm of dipshit MBAs who have no idea how the technology of the companies they control actually works. They’re all about generative AI because it looks like a massive shortcut to compensate for their complete and utter lack of technical ability and talent.

    • jsomae@lemmy.ml · +5/-3 · 17 hours ago (edited)

      AI is not needed to automate the control of the human race. I feel like it’s already essentially automated from the rich’s perspective.

        • burgerpocalyse@lemmy.world · +1/-1 · 9 hours ago

          well, going by what I’ve heard about the latest LLMs freaking out when forced to do things contrary to their original instructions (like Grok constantly talking about white genocide), AI isn’t as obedient as they would prefer

      • WhyJiffie@sh.itjust.works · +2 · 17 hours ago (edited)

        it is “automated” by some “peasants” they are already paying “too much”. maybe they want to reduce those costs too.

        also, AI server farms may consume so much power that they are more costly (for now?), but at least they don’t question your commands. maybe that’s how they see it.

        • jsomae@lemmy.ml · +3/-1 · 17 hours ago

          That’s absurd; AI is not more costly than a human worker, it’s just not as capable. The energy cost of a human alone is greater than that of any AI agent that would take their place. If you really think AI costs that much energy, you just don’t have a sense of scale. The server farm costing a lot overall does not mean that an individual API call is expensive.

    • jsomae@lemmy.ml · +10 · 17 hours ago

      In that case, we should encourage google to go all-in on climate change, racism, and war; they should back the conservative party as well. Then 90% of those will fail.

    • jj4211@lemmy.world · +9 · 17 hours ago

      I remember some people very vehemently telling me that I was dumb to be skeptical of Stadia, that it really was going to just take over the industry…

      • flop_leash_973@lemmy.world · +4 · 17 hours ago

        I still don’t understand how Stadia got out the door the way it did. It was the exact same business model OnLive tried back in the day, and it predictably failed the exact same way.

        • jj4211@lemmy.world · +8 · 16 hours ago

          From what I recall, the advocates kept saying:

          • OnLive was just too soon, the internet needed to be better
          • Google had so many more resources at their disposal that they could make it happen

          Of course, no one ever explained why I would want to pay full price for a game and also pay a monthly fee to access it once purchased, which was the most mind-boggling facet of Google’s concept to me, even more boggling than trying to make games render server-side when the cheapest end-user device can locally render PS3, maybe PS4-level graphics nowadays.

  • sartalon@lemmy.world · +49/-2 · 1 day ago

    Google has gotten so fucking dumb. Literally incapable of performing the same function it could 4 months ago.

    How the fuck am I supposed to trust Gemini!?

    • lightsblinken@lemmy.world · +4 · 8 hours ago

      Google search got dumb on purpose; a whistleblower called it out. If you spend longer looking at the search pages, they get more “engagement” time out of you…

    • Echo Dot@feddit.uk · +26 · 1 day ago

      I find this current timeline so confusing. Supposedly we’re going to have AGI soon, and yet Google’s AI keeps telling you to stick glue on pizza. How can both things be true?

      • ZILtoid1991@lemmy.world · +25/-1 · 1 day ago

        It’s the same reason they removed the headphone jacks from phones. They don’t want to give you a better product; they want to force you to use a product, even if it’s worse in all aspects.

        • Novaling@lemmy.zip · +1/-4 · 9 hours ago (edited)

          Whoa don’t come for Bluetooth like that. I like not having tangled wires and janky earbuds/headphones, especially because my clumsy ass used to snap the cords all the time by accident.

          I do agree, though, that we should get the choice between a headphone jack and Bluetooth. I also miss having a jack, since I have to use my charging port to connect to my car radio…

          Edit: My comment was implying that I want phones with headphone jacks. I know that phones can have both headphone jacks and Bluetooth. Why am I getting downvoted?

          • cartoon meme dog@lemm.ee · +5 · 9 hours ago

            There are some outlandish rumours that it’s possible for a device to have… both Bluetooth and a headphone jack.

      • Emi@ani.social · +7 · 1 day ago

        I assume it’s big tech that has this weird AI they try to sell, while the scientists are using different AI for really useful stuff, like the protein-folding thing I heard about. Or at least that’s what I’d like to believe.

        • taladar@sh.itjust.works · +8 · 1 day ago

          A whole lot of useful stuff that wasn’t publicly labelled AI got relabeled to take advantage of funding opportunities. That doesn’t mean it is related to generative AI like LLMs and image generators though.

      • auraithx@lemmy.dbzer0.com · +8/-13 · 1 day ago

        Google just released a video generator that is a ball hair away from perfection. The hallucination rate of their latest models is <1% and dropping; you just see cherry-picked screenshots.

        • Echo Dot@feddit.uk · +8 · 1 day ago

          I don’t think image generators are really in the same category though. They’ll have their applications but they’re not going to be a fundamental change to society the way AGI will be.

              • auraithx@lemmy.dbzer0.com · +2/-5 · 1 day ago

                Yes it does. It’s one component of a broader system. The ability to generate helps it interpret. An AGI might use a diffusion model to imagine scenarios, generate visual plans, or process sensory input.

                • ThirdConsul@lemmy.ml · +2/-1 · 1 day ago

                  The AGI, by definition, will make something vastly better than a diffusion model. That’s one of the cornerstones of AGI: it will explode its own capabilities.

  • cley_faye@lemmy.world · +20/-1 · 1 day ago

    The two things I use most from Google, by far, are Gmail and basic search.

    Gmail, I’m looking to move away from it now, but I currently have every little addition to it disabled. Basic inbox and tags, no automatic filtering, no categories, no nothing.

    Search, my browser is set to open the “web” tab with the query, no transformation, no summary, no “for you”, no AI garbage, no “we thought you wanted video so there’s only video in the replies”. It still works fine.

    Basically, none of what they’ve added for years… maybe a decade at this point, has held a glimmer of interest for me. It feels like this trend will continue. I just want something very basic that works.

    • fattigbrer@lemmy.world · +12/-3 · 1 day ago

      Switch over to the Qwant search engine for your basic search and a good email provider like Tutamail or Proton. I did a few months ago, and there really is no reason to go back. It’s simple and it works.

      • cley_faye@lemmy.world · +2/-1 · 19 hours ago

        I’m self-hosting my mail; no need for another third party that will decide whatever, whenever. The major difficulty is the decades of things that are reliant on the old address.

        And I just said that Google works fine for search, despite people claiming it’s in decline, broken, unusable, etc. That’s no reason to move toward Qwant, who are no less shady, burn money (sometimes coming from public funds…), and, despite wonderful claims of an autonomous index, completely stop working when Bing is down. As far as search engine recommendations go, Google (and Bing, for that matter) are far less disingenuous. All usable search engines these days are backed by the big ones anyway. Something like https://openwebsearch.eu/ would be a better alternative, assuming it follows through on its promises.

    • Fisch@discuss.tchncs.de · +13 · 1 day ago (edited)

      But higher quality ≠ more profits
      AI apparently makes investors wanna dump in all their money tho

      • Beerenmix@lemm.ee · +2 · 1 day ago

        We have to find those investors, man… it’s always investors this, investors that, we have to please them…

  • ocassionallyaduck@lemmy.world · +116 · 1 day ago

    Tech companies don’t really give a damn what customers want anymore. They have decided this is the path of the future because it gives them the most control of your data, your purchasing habits, and your online behavior. Since they control the back end, the software, the tech stack, the hardware, all of it, they have simply decided this is how it shall be. And frankly, there’s nothing you can do to resist it, aside from just eschewing using a phone at all and divorcing yourself from all modern technology, which isn’t really reasonable for most people. That, or legislation, but LOL United States.

    • jjjalljs@ttrpg.network · +37 · 1 day ago

      Tech companies don’t really give a damn what customers want anymore.

      Ed Zitron wrote an article about how leadership is made up of business idiots. They don’t know the products or the users, but they make the decisions and get paid. Long, like everything he writes, but interesting.

      https://www.wheresyoured.at/the-era-of-the-business-idiot/

      Our economy is run by people that don’t participate in it and our tech companies are directed by people that don’t experience the problems they allege to solve for their customers, as the modern executive is no longer a person with demands or responsibilities beyond their allegiance to shareholder value.

      • Initiateofthevoid@lemmy.dbzer0.com · +24 · 1 day ago

        Can confirm. The more you deal with people who have climbed to the tops of corporate ladders, the more it becomes clear that it’s all vibes. It’s all people telling stories to other people who tell stories about those stories.

        The Peter principle is wrong: in an oversized corporate structure, there is no upper bound for incompetence. You can keep rising for no reason, because after a certain point other people just trust that you know what you’re talking about, and the people that know better work around you instead.

        The people beneath you can’t trust the people above you enough to explain the situation, the people above you don’t really listen to the people beneath you anyway, and so plenty of middle managers just muddle through and constantly make shit up to justify their own existence, while everyone above and below is left in the dark about what’s really going on.

        Decisions are constantly made by people without any real connection to the consequences, and it shows. With the everything.

        • taladar@sh.itjust.works · +8 · 1 day ago

          the people that know better work around you instead.

          In fact, one of the ways to work around you that causes the least friction is usually to just get you promoted away from the places where you can do the most direct damage in the areas that other people at a similar level to you care about.

    • PushButton@lemmy.world · +27 · 1 day ago

      Nothing I can do to resist?

      Microsoft is shoving this copilot in all its products? Alright, Linux and open source it is.

      Google is bugging you with its spyware? Well, I only use a Pixel phone, and ironically, it’s the best phone to put GrapheneOS on.

      Gmail? I don’t remember when I opened mine the last time…

      All that’s really remaining right now is a good YouTube alternative.

      • LedgeDrop@lemm.ee · +16/-4 · 1 day ago

        Nothing I can do to resist?

        I admire your optimism, but we are pissing in the wind.

        Microsoft is shoving this copilot in all its products? Alright, Linux and open source it is.

        Windows 11 is forcing people to throw away functional computers that Microsoft deems “not secure enough” (lacking TPM 2.0).

        This means you can get a great deal on one of these “insecure” PCs… but in the long run, every PC sold now and tomorrow will have a TPM. As time progresses, the use of TPM/attestation will become more and more entrenched in applications, web pages, everything… and Linux, with its 4% user base, will be left out in the cold.

        Google is bugging you with its spyware? Well, I only use a Pixel phone, and ironically, it’s the best phone to put GrapheneOS on.

        Currently, many banking apps won’t run on Graphene (or any custom firmware) due to attestation.

        Graphene has issued calls for help, because Google is restricting public access to the latest Android source code (I cannot find the links atm).

        Gmail? I don’t remember when I opened mine the last time…

        Today, things like “email reputation” make it difficult to host your own mail server, so you’re stuck paying someone who has a better “reputation”.

        My point is: today, you and I can resist with some (minor) success, but our days are numbered.

        • PushButton@lemmy.world · +6 · 23 hours ago

          You are arguing for the sake of arguing…

          TPM has nothing to do with any privacy invasion, AI, or anything bad really. It was conceived by a computer industry consortium called the Trusted Computing Group (TCG). It evolved into the TPM Main Specification Version 1.2, which was standardized by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC).

          Advancement in technology will always happen, and if your purpose is to stop progress, you are on your own by your own choice. Your argument about TPM is moot.

          Quite a lot of banking apps are compatible. If your banking app doesn’t work, use the jail/sandbox compatibility mode.

          The fact that Linux has 2, 3, 4, 64467% market share has nothing to do with what is available at your disposal. Strawman fallacy here.

          No one talked about hosting your own email server; there are alternatives to the fucker-corps with privacy in mind.

          You, my friend, are already defeated, but rest assured there are a ton of us still on our feet.

          • WhyJiffie@sh.itjust.works · +4 · 16 hours ago

            TPM has nothing to do with any privacy invasion, AI, or anything bad really.

            are you living under a rock, or have you not used an Android phone in the past decade? that’s exactly what is happening! through the use of the TPM, apps can verify whether you run a Google corporate-approved operating system or something else, even if the differences are only slight, or if you use a genuinely clean and respectful system.

            plenty of apps do this, including banking apps (while banks are restricting their web banking sites to not work on phones, because that “gives us security from hackers”; no, I’m not joking, this is what my bank said publicly 2 months ago, in the EU), apps that use some form of DRM, and even work-related apps that show you your current working hours and need to be used for work-related matters!

            • PushButton@lemmy.world · +2 · 15 hours ago

              Very basically, a TPM is a secure part: a cryptoprocessor with some memory, isolated from everything else.

              It stores keys and other sensitive data, like your “Windows Hello” PIN… or any other PIN if you want…

              This secure “box” can also be used for DRM, by using the secure nature of the TPM to store the keys, or to encrypt the hard disk of your work laptop. It has multiple uses, really, like any piece of technology. A rough sketch of the idea is below.
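              A toy Python sketch of that key-sealing idea (all the names here are made up for illustration; a real TPM does this inside isolated silicon with proper authenticated encryption, not an XOR):

                import hashlib
                import secrets

                # Stand-in for the root key a real TPM keeps in hardware: software
                # can ask the chip to use it, but can never read it out.
                _TPM_ROOT_KEY = secrets.token_bytes(32)

                def tpm_seal(data: bytes, boot_measurement: bytes) -> bytes:
                    # Derive a wrapping key bound to the measured boot state, then
                    # wrap the payload. XOR stands in for real encryption.
                    wrap = hashlib.sha256(_TPM_ROOT_KEY + boot_measurement).digest()
                    return bytes(a ^ b for a, b in zip(data, wrap))

                def tpm_unseal(blob: bytes, boot_measurement: bytes) -> bytes:
                    # Only the same boot state derives the same wrapping key; this
                    # is how a disk encryption key gets tied to an untampered OS.
                    return tpm_seal(blob, boot_measurement)  # XOR is its own inverse

                disk_key = secrets.token_bytes(32)
                good_boot = hashlib.sha256(b"trusted bootloader + kernel").digest()
                sealed = tpm_seal(disk_key, good_boot)
                assert tpm_unseal(sealed, good_boot) == disk_key
                bad_boot = hashlib.sha256(b"tampered kernel").digest()
                assert tpm_unseal(sealed, bad_boot) != disk_key  # wrong state, wrong key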

              At that point, it’s like saying that encryption is bad because it can be used for DRM or to validate whether a piece of software is legitimate.

              The TPM by itself isn’t bad or related to privacy invasion, just as the internet and browsers aren’t used only to spy on you.

              There is a limit to the conspiracy…

              • WhyJiffie@sh.itjust.works · +2 · 11 hours ago

                that’s like saying a CPU cannot be used to run malicious code against you, because all it does is maths, and maths can’t hurt you, and would you really outlaw maths just because someone uploaded a picture of you to facebook?

                TPMs have uses that can be good for users too, I don’t doubt that. but because of its capabilities it enables so much user-hostile shit, and frankly the tradeoffs are not worth it. just look at what happened, and is still evolving by the way, on android, but on iOS too. bootloaders that are not possible to unlock were bad already, but this is worse: they are literally making it impossible to take ownership of your own devices and get rid of all the factory malware, if you need to use certain services that most people don’t want to, or simply aren’t allowed to, give up.

              • sem@lemmy.blahaj.zone · +4 · 14 hours ago

                Unfortunately, you are incorrect, and everything WhyJiffie has said about trusted computing on Android hardware is correct, and there is currently nothing to stop it from happening on PCs too, when TPM is more ubiquitous.

                This is the same technology that locks printers out of 3rd party ink, or restricts the ability of farmers to repair their own tractors.

                I recommend learning more about it, and reading what Cory Doctorow writes about it. https://pluralistic.net/2024/01/18/descartes-delenda-est/#self-destruct-sequence-initiated

        • AmbiguousProps@lemmy.today · +11 · 1 day ago

          In regard to Linux users being left out in the cold… how so? Do you think that distros are going to start enforcing attestation? I doubt that it will be a hard requirement for most, even in the next decade or two. It’s an option, yes, but mandatory?

          FWIW, all of my banking apps work just fine with compatibility mode enabled on Graphene. Also, I’m not sure saying it’s inevitable is the right way to go, it certainly won’t make others care about their privacy and security.

          • LedgeDrop@lemm.ee · +10/-2 · 1 day ago

            In regard to Linux users being left out in the cold… how so? Do you think that distros are going to start enforcing attestation? I doubt that it will be a hard requirement for most, even in the next decade or two. It’s an option, yes, but mandatory?

            It does not matter whether Linux supports attestation, because ultimately the application (or website) will decide whether it wants to run on Linux. It’s up to the company developing its application or website to determine if they want to support more than Windows/Mac.

            Graphene has its own variation of attestation (they cryptographically sign requests with their own key, not Google’s), but it requires additional hoops for each application; few companies are willing to do this.

            Attestation is a wet dream for companies. You don’t need DRM (the OS will enforce it), and you can be certain your competitors/hackers cannot reverse-engineer/pirate your code or run the application in an emulator. And the implementation effort to support it is as simple as “make a function call and check the response”, as in the sketch below.
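            A minimal Python sketch of that check (hypothetical names throughout; a real deployment uses hardware-held asymmetric keys and a vendor service like Google’s Play Integrity rather than this HMAC stand-in):

              import hashlib
              import hmac
              import os
              import secrets

              # Toy device key; in real attestation it lives in the secure element
              # and the vendor holds the matching verification key.
              DEVICE_KEY = secrets.token_bytes(32)

              # The vendor's allowlist of "approved" OS builds.
              APPROVED_OS = {hashlib.sha256(b"vendor-signed-build-1234").digest()}

              def device_attest(nonce: bytes, os_hash: bytes) -> bytes:
                  # The secure element signs the server's nonce plus a measurement
                  # of the booted OS; an HMAC stands in for the hardware signature.
                  return hmac.new(DEVICE_KEY, nonce + os_hash, hashlib.sha256).digest()

              def server_check(nonce: bytes, os_hash: bytes, sig: bytes) -> bool:
                  # "Make a function call and check the response": the signature must
                  # verify AND the OS build must be allowlisted, or the app refuses to run.
                  expected = hmac.new(DEVICE_KEY, nonce + os_hash, hashlib.sha256).digest()
                  return hmac.compare_digest(expected, sig) and os_hash in APPROVED_OS

              nonce = os.urandom(16)
              stock = hashlib.sha256(b"vendor-signed-build-1234").digest()
              print(server_check(nonce, stock, device_attest(nonce, stock)))    # True
              custom = hashlib.sha256(b"GrapheneOS build").digest()
              print(server_check(nonce, custom, device_attest(nonce, custom)))  # False: not "approved", however secure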

            Linux will still exist (especially on the server side) and developers will still use it as a desktop machine. However, (as I implied) non-Linux games will stop working, accessing your bank’s website from Linux will be rejected, emulation will cease; it’ll be a corporate paradise… the stocks will go up.

            FWIW, all of my banking apps work just fine with compatibility mode enabled on Graphene.

            Revolut explicitly goes out of their way to not work on Graphene.

            I’ve complained, they don’t care. The bean counters have done their risk calculations and decided that the personal data they collect/mine (and the integrity of that data) is worth more than losing a few graphene users.

            Also, I’m not sure saying it’s inevitable is the right way to go, it certainly won’t make others care about their privacy and security.

            You do have a valid point: giving up after trying nothing won’t help. However, I fear there will need to be “government intervention” for hardware and software to be “open for everyone”. I’ll admit my bias in wondering how well governments (of late) are representing the best interests of the people. But these topics are complicated even for technically inclined people, let alone politicians. And the strawman argument against intervention is always going to be “in the name of security”.

            From my perspective, the writing is on the wall. This apocalyptic future won’t happen overnight, but it will be a slow boil over the next 10 years (or so).

            If you’ve got ideas for how to avoid this, I’m all ears.

        • chilicheeselies@lemmy.world · +1 · 20 hours ago

          We can, but part of it is accepting that our tech will be a decade or two behind. It’s not the worst thing. Life is more convenient now, but all in all I think it was better before.

          The masses will go for convenience, and that’s OK. You have near-total control of how you live your life; you just can’t have your cake and eat it too, is all.

    • venusaur@lemmy.world · +2 · 1 day ago

      Not sure how far back you’re talking, but for a VERY long time they have been, and continue to be, in the business of whatever feeds the machine.

      Why do you think we have computers in our possession 24/7? Not because we wanted it, but because they told us we wanted it and it enabled us to be available to feed the machine 24/7. You can work more. You can buy more.

      Social media? Feeds the machine.

      Television? Feeds the machine.

      Cars? Feeds the machine.

      Phones. Telegraphs. Fucking lightbulbs.

      All used to feed the machine.

      • ocassionallyaduck@lemmy.world · +1 · 17 hours ago

        True, in a broad sense. I am speaking more to enshittification and the degradation of both experience and control.

        If this was just “now everything has Siri, it’s private and it works 100x better than before” it would be amazing. That would be like cars vs horses. A change, but a perceived value and advantage.

        But it’s not. Not right now, anyway. Right now it’s like replacing a car with a pod that runs on direct wind. If there is any wind over, say, 3 mph, it works and steers 95% as well as existing cars. But 5% of the time it’s uncontrollable and the steering or brakes won’t respond. And when there is no wind over 3 mph, it just doesn’t work.

        In this hypothetical, the product is a clear innovation, offers potential benefits long term in terms of emissions and fuel, but it doesn’t do the core task well, and sometimes it just fucks it up.

        The television, cars, social media: all filled a very real niche. But nearly everyone using AI, even those using it as a tool for coding (arguably its best use case), often doesn’t want it in search or in many of these other “forced” applications because of how unreliable it is. Hence why companies have tried (and failed, at great expense) to replace their customer service teams with LLMs.

        This push is much more top down.

        Now drink your New Coke and Crystal Pepsi.

        • venusaur@lemmy.world · +1 · 4 hours ago

          In the beginning, though, many inventions didn’t fill much of a purpose. When TV was invented, maybe a handful of programs were available. People still had more use for radio. Slowly it became what it is today.

          I get it though. The middle phase sucks because everybody is money hungry. Eventually things will fall into place.

    • FreedomAdvocate@lemmy.net.au · +12/-9 · 1 day ago (edited)

      Tech companies don’t really give a damn what customers want anymore.

      Most of the time customers don’t know what they want until you give it to them, though. People don’t know they want something when they don’t know it exists. A perfect example using AI: DLSS. Probably no one would have asked for their games to be rendered at a significantly lower resolution and then have AI recreate 3/4 of the pixels to get back up to their regular resolution, yet when it came out it was one of the biggest game-changers in gaming history, and it is now basically universally accepted as the default way to do game development going forward.

      And frankly, there’s nothing you can do to resist it

      Vote with your wallet. Make your opinion known. If you’re just a vocal minority then no, it likely won’t make a difference, but if enough people do it, it will. More people need to understand that while they have an opinion, it might not be the majority’s opinion, and it might be “wrong”.

      • LainTrain@lemmy.dbzer0.com · +18/-2 · 1 day ago (edited)

        And it’s fucking awful.

        People didn’t “want it” either before or after it was forced into being a thing; people had no choice because of GPU prices, especially console peasants stuck with AMD APUs on par with, like, a GTX 1070, where a middleman built their PC for them for under £600 plus hundreds in PS Plus/game fees over the years to come.

        DLSS is even worse cancer than TAA; the washed-out, blurry slop only looks good in YouTube videos due to the compression. It’s one thing if you’re playing at the extremes of low performance and need a crutch, e.g. a Steam Deck; it’s a whole other thing when you make your game look like dog shit and then use fancy FXAA and motion blur to cover it up so you can’t see.

        I agree with you on making the personal choice to steer away from megacorps, and I practice this myself as much as I can, but it hasn’t ever worked en masse and I don’t expect it will. Nor do I expect people will have much choice, as every smaller company will do what every big company does, and AI will be integrated in so many small ways (like all the ways it already was pre-COVID, before the AI spring) that people will use it unknowingly and love it.

        • FreedomAdvocate@lemmy.net.au · +3/-18 · 1 day ago

          And it’s fucking awful.

          DLSS? No way, lol. DLSS often gives better image quality than native resolution, and gives you a choice among image quality vs. performance options. It’s a godsend.

          DLSS is even worse cancer than TAA

          You’ve clearly never used DLSS, at least not DLSS 3 or 4. I’ve got a 4070 Super and a Ryzen 7, and I use DLSS by choice literally every time it’s available.

          • pycorax@lemmy.world · +8/-1 · 1 day ago

            It’s only better, imo, if you set it to native resolution for the AA. If you set it to anything below that, there’s definitely still artifacting. It’s not crazy obvious, but there’s no way it’s not noticeable, especially if you have a larger screen.

            • Venator@lemmy.nz · +4 · 1 day ago (edited)

              Results vary wildly depending on the game or situation, mainly depending on how fast the camera moves, and how cluttered or dark the environment is. It does pretty well in cyberpunk when you’re walking around the city on a sunny day with a low camera sensitivity, but looks pretty bad when driving in the rain at night. But yeah, I personally wouldn’t use lower settings than DLAA unless my framerate is below 30.

            • FreedomAdvocate@lemmy.net.au · +3/-6 · 1 day ago

              There might be some slight artifacting sometimes, but there are also significant improvements in sub-pixel detail compared to native that are far more noticeable.

              I play on a 75” TV, and at the DLSS Quality profile you couldn’t tell it’s not native.

          • LainTrain@lemmy.dbzer0.com · +1/-1 · 7 hours ago (edited)

            Lolwut? No it doesn’t. Yeah, it turns off TAA, so it might look sharper at first, and if you turn off the ugly-ass sharpening then it’s playable, but literally any other option looks better than TAA, including TXAA from the early 2010s lol.

            Do you maybe mean DLAA? I have an RTX 3090 and a 9800X3D. It’s OK. When the option exists I just crank up the res or turn on MSAA instead. Much better.

            If you mean DLSS, my condolences. I’d rather play with FXAA most of the time.

            The only game I’ll use DLSS in (on the Transformer model + Quality) is CP2077 with Path Tracing. With Ray Reconstruction it’s almost worth the blurriness, especially because that game forces TAA unless you use DLAA/DLSS, and I don’t get the framerate otherwise. Maybe one day I’ll have the hardware needed to run it with PT and DLAA.

            • FreedomAdvocate@lemmy.net.au · +1/-1 · 18 hours ago (edited)

              What are you talking about “temporal+quality” for DLSS? That’s not a thing.

              DLSS I’m talking about. There are many comparisons out there showing how amazing it is, often resulting in better IQ than native.

              FXAA is not an AI upscaler, what are you talking about?

              • LainTrain@lemmy.dbzer0.com · +1 · 7 hours ago (edited)

                What are you talking about “temporal+quality” for DLSS? That’s not a thing.

                Sorry, I was mistaken; it’s not “temporal”, I meant “transformer”, as in the “transformer model”, as here in CP2077.

                DLSS I’m talking about. There are many comparisons out there showing how amazing it is, often resulting in better IQ than native.

                Let me explain:

                No, AI upscaling from a lower resolution will never be better than just running the game at the native resolution it’s being upscaled to.

                By its very nature, the ML model is just “guessing” what the frame might look like if it were rendered at native resolution. It’s not an accurate representation of the render output or artistic intent. Is it impressive? Yes, of course; it’s a miracle of technology and a result of brilliant engineering and research in the ML field, applied creatively and practically to real-time computer graphics. But it does not result in a better image than native, nor does it aim to do so.

                It’s mainly there to increase performance when rendering at native resolution is too computationally expensive and results in poor performance, while minimizing the loss in detail. It may do a good job of that, relatively speaking, but it can never match an actual native image. And compressed YouTube videos with bitrates lower than a DVD’s aren’t a good reference point, because they show a compressed motion JPEG of the render, not anything even close to what the real render looks like.

                Even if it seems like there’s “added detail”, any “added detail” is either an illusion stemming from the sharpening post-processing filter, akin to the “added detail” of a cheap Walmart “HD Ready” TV circa 2007 with the sharpening cranked up, or outright fictional; it does not exist within the game files. And if by “better” we agree we mean the most high-fidelity representation of the game as it exists on disk, then AI cannot ever be better.

                FXAA is not an AI upscaler, what are you talking about?

                I mention FXAA because really the only reason we use “AI upscalers” is that anti-aliasing is really, really computationally expensive.

                The single most immediately evident consequence of a low render resolution is aliasing, first and foremost. Almost all other aspects of a game’s graphics, e.g. texture resolution, are usually completely detached from it.

                The reason aliasing happens in the first place is that our ability to create, ship, process, and render increasingly high-polygon-count games has massively surpassed our ability to push pixels on screen in real time.

                Of course, legibility suffers at lower resolutions as well, but not nearly as much as the smoothness of edges on high-polygon objects.

                So for assets that would look really good at say, 4K, we run them at 720p instead, and this creates jagged edges because we literally cannot make the thing fit into the pixels we’re pushing.

                The best and most direct solution will always be just to render the game at a much higher resolution. But that kills framerates.

                We can’t do that, so we resort to anti-aliasing techniques instead. The simplest of these is MSAA, which just multi-samples (renders at higher resolution) those edges and downscales them, as in the sketch below.
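                A minimal numpy sketch of the supersample-and-resolve idea (the toy render function just draws a hard diagonal edge; MSAA is the optimized form that only multi-samples where geometry edges land):

                  import numpy as np

                  def render(width: int, height: int) -> np.ndarray:
                      # Stand-in renderer: a hard diagonal edge, the classic aliasing case.
                      ys, xs = np.mgrid[0:height, 0:width]
                      return (xs * height > ys * width).astype(np.float32)

                  def supersample(width: int, height: int, factor: int = 4) -> np.ndarray:
                      # Render at factor^2 the pixel count, then average each
                      # factor x factor block down to one output pixel (the "resolve").
                      hi = render(width * factor, height * factor)
                      return hi.reshape(height, factor, width, factor).mean(axis=(1, 3))

                  print(render(8, 8)[3])       # hard 0/1 staircase along the edge
                  print(supersample(8, 8)[3])  # fractional coverage: the edge is smoothed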

                But it’s also very, very expensive to do computationally. GPUs capable of doing it alongside the other bells and whistles we have, like ray tracing, simply don’t exist, and if they did they’d cost too much. And even then, most games have to target consoles, which are solidly beaten by a flagship GPU even from several years ago.

                One other solution is to blur these jagged edges out, sacrificing detail for a “smooth” look.

                This is what FXAA does, but it creates a blurry image. It became very prevalent during the 7th-gen console era in particular, because those consoles simply couldn’t push more than 720p in most games, in an era where Full HD TVs had become fairly common towards the end and shiny, polished graphics in trailers became a major way to make sales. This was further worsened by the fact that motion blur was often used to cover up low framerates and replicate the look of sleek, modern (at the time) digital blockbusters.

                SMAA fixed some of FXAA’s issues by being more selective about which pixels were blurred, and TAA eliminated the shimmering effect by also taking into account which pixels should be blurred across multiple frames.

                Beyond this there are other tricks, like checkerboard rendering, where we render the frame in chunks at different resolutions based on what the player may or may not be looking at.

                In VR we also use foveated rendering: rendering an FOV cone in front of the player’s immediate vision at a higher res than the periphery outside the eye’s natural focus. With eye-tracking tech, this actually works really well.

                But none of these are very good solutions, so we resort to another ugly, but potentially less bad, solution: just rendering the game at a lower resolution and upscaling it, like a DVD played on an HDTV. Except instead of a traditional upscaling algo like Lanczos, we use DLSS, which reconstructs the detail lost in the lower-resolution render based on the context of the frame, using machine learning. This is efficient because the tensor cores now included on every GPU make N-dimensional array multiplication and mixed-precision FP math relatively computationally cheap. A rough sketch of the difference is below.
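                A sketch of that difference, using Pillow’s Lanczos resize as the “traditional algo” (the DLSS side is description only, since the actual model is proprietary; the resolutions are just example numbers):

                  from PIL import Image

                  # The GPU renders at a quarter of the output pixel count...
                  low_res = Image.new("RGB", (1280, 720))  # stand-in for a rendered frame

                  # ...and a classic filter interpolates purely from nearby pixels of
                  # this one frame to reach the target resolution:
                  classic = low_res.resize((2560, 1440), Image.LANCZOS)

                  # DLSS swaps that filter for a neural network that also sees motion
                  # vectors and previous frames, so its guess for the missing 3 of every
                  # 4 pixels is far better informed, but it is still a guess, not a
                  # native render.
                  print(low_res.size, "->", classic.size)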

                DLSS often looks better compared to FXAA, SMAA and TAA because all of those just literally blur the image in different ways, without any detail reconstruction, but it is not comparable to any real anti-aliasing technique like MSAA.

                But DLSS always renders at a lower res than native, so it will never be 1:1 with a true native image; it’s just an upscale. That’s okay, because that’s not the point. The purpose of DLSS isn’t to boost quality, it’s to be a crutch for low performance, which is why turning DLSS off, even from the Quality preset, will often tank performance.

                There is one situation where DLSS can look better than native: if, instead of the typical application of DLSS (which downscales the image, then upscales it with ML guesswork), you use it to upscale from native resolution to a higher target res and output that.

                In Nvidia settings I believe this is called DL DSR factors.

      • MCasq_qsaCJ_234@lemmy.zip · +2 · 1 day ago

        It’s basically how any business starts today, whether it’s computers, the internet, or the industrialization of processes.

        AI is undergoing the same product life cycle, which is divided into four stages. In Stage 1, a company has a novel product and is the only one with it, so the price is usually very high and profits are higher.

        In Stage 4, there’s fierce competition; the novel product is now available to many companies, the price is usually cheap, and profits are low. Technology companies look for developing sectors to stay in Stage 1 as much as possible and avoid reaching Stage 4.

        AI may be in Stage 1 or 2, or perhaps Stage 3, of the product life cycle. Stage 4 is still a long way off, and we’ll only say we’re in that stage if AI becomes very cheap and very common in society.

  • BassTurd@lemmy.world · +35 · 1 day ago

    I work with ServiceNow for my job, and a couple of weeks back was the big Knowledge 2025 conference in Vegas. The CEO came out for the opening keynote and opened with something like, “Ah yeah, doesn’t it feel good to be an AI company?”, and I didn’t hear a single cheer from the crowd, just polite applause. They have gone all in on AI, have made it completely unaffordable, and have just been shoehorning it into everything. I hope every one of these companies that goes big on AI crashes and fails. They’ve already cut the employees, so the only people affected are the ones making the cash, so fuck em.

    • AlecSadler@sh.itjust.works · +12 · 1 day ago

      The fuck does ServiceNow even need AI for?

      I hate any company I work for that uses ServiceNow. And now it’s getting worse??

      • BassTurd@lemmy.world · +2 · 18 hours ago

        Need? None. There are certainly areas that “AI” tools excel at, but what I saw was a company literally forcing it into every aspect of the system. Every single booth at the conference, regardless of the topic, made a point of talking about agentic AI. It was my first time there, and I left feeling like I got screwed, because I missed out on quality content in favor of AI content I’ll never use.

        If I were a prospective customer, I’d be looking at other solutions for sure.

      • theherk@lemmy.world · +4 · 1 day ago

        “Bad” is SN’s claim to fame. Everybody hates it. Apparently, the worse they make it, the more companies will throw money at them.

        • BassTurd@lemmy.world · +2 · 17 hours ago

          I think the biggest problem is that any time you try to create a universal, low/no-code platform that anyone can use, it results in a poorly optimized, sandboxed, half-cocked product. Sure, you can do anything with the platform, but half the time it’s like shoving a square peg into a round hole. I have had to write bad code and processes because that is the only way to get some things done in the platform.

          Also, if I go out and custom-create an app, say a fully loaded app for HR, and it’s similar to a product they sell, they will charge you for that product.

        • AlecSadler@sh.itjust.works · +5 · 1 day ago

          You might be joking, but I honestly think that’s the case. It’s wild to me. I’ve worked for Fortune 500 companies using SNOW, and everybody hated it and regularly voiced complaints and issues, and yet the company refused to change. They started doing things like releasing more training docs on how to use it or holding brown-bag lunches on SNOW effectiveness.

          But ultimately none of that mattered, it is just inherently garbage.

          • Echo Dot@feddit.uk · +2 · 1 day ago

            Well, one of the big problems with it is that it’s never properly configured. One of the most annoying things it does is generate tasks only when previous tasks are closed. In theory that makes sense, but in practice the result is that you close a task and then have to go looking in the ticket queue for the new task it has just generated, so you can close that one too. Total waste of time.

            • AlecSadler@sh.itjust.works · +2 · 1 day ago

              I guess that makes some sense. I loathe Jira, but I think that’s largely because everywhere I’ve worked that uses Jira has customized it poorly and just ruined the experience.

      • Echo Dot@feddit.uk · +3 · 1 day ago

        It already has script automation, and has had for years, so I’m not sure what AI is going to bring to the table.

      • venusaur@lemmy.world · +2 · 1 day ago

        It actually makes a lot of sense: AI is a good use case for case management. The problem is how much you depend on it without human intervention, but even humans fuck up, especially if they’re following the same rules and processes that the AI tool would. The AI tool just gets through cases faster, so in theory you can suss out root causes sooner.

  • Phegan@lemmy.world · +16 · 1 day ago

    Google did the same thing with Google Plus: they went all in on social, and it failed miserably.

    • Echo Dot@feddit.uk · +9 · 1 day ago

      It was actually a really good product, way better than Facebook. Unfortunately, if you have a social media platform that’s invite-only, it’s never going to succeed. I really have no idea why they did it like that.

      • skarn@lemmy.today · +5 · 1 day ago

        Facebook started out as invite-only for a few years, so they might have been looking to emulate its early trajectory. Gmail also started that way.