• 1 Post
  • 682 Comments
Joined 2 years ago
Cake day: June 10th, 2023






  • I absolutely hate always-online DRM in single-player games, so I get it. Personally, I’ll avoid games that use it. I was a huge fan of the Hitman series but haven’t played any of the new ones because of the always-online, live-service, season-pass model they decided to go with. It’s a deal breaker for me, but I understand it isn’t for everyone else. I told my friends I wouldn’t be playing Helldivers 2 with them because of its use of kernel-level anti-cheat and they just gave me a weird look.

    I’ll choose to support games that are developed in consumer-friendly ways, but I also accept that not everyone sees it as a big deal. If a company decides they need kernel-level anti-cheat, then that’s on them. They won’t get my money, but I’m not about to start a petition to legally ban the use of kernel-level anti-cheat and call anyone who won’t sign it an industry shill and a bootlicker.

    Want to stop games you buy from being killed? Don’t buy games that can be. Does this mean you’ll be sitting out while all your friends have fun playing the latest hit game? Probably. Does it mean 10 years later when the game no longer works you can smugly tell them “heh, looks like you guys got scammed.” Also yes. Just don’t be surprised that they think you’re weird.


  • From the initiative:

    This initiative calls to require publishers that sell or license videogames to consumers in the European Union (or related features and assets sold for videogames they operate) to leave said videogames in a functional (playable) state.

    Specifically, the initiative seeks to prevent the remote disabling of videogames by the publishers, before providing reasonable means to continue functioning of said videogames without the involvement from the side of the publisher.

    The initiative does not seek to acquire ownership of said videogames, associated intellectual rights or monetization rights, neither does it expect the publisher to provide resources for the said videogame once they discontinue it while leaving it in a reasonably functional (playable) state.

    This is all that the initiative states on the matter. How it would actually work in practice is anyone’s guess because the wording is so vague. Supporters seem to be under the impression that companies have a “server.exe” file they purposefully don’t provide players because they’re evil and hate you. They could also be contracting out matchmaking services to a third party and don’t actually do it in-house. Software development is complex and building something that will be used by 100,000 people simultaneously isn’t easy.

    There’s a reason comedic videos like Microservices, where an engineer explains why it’s impossible to show the user that it’s their birthday because of an overly complex web of microservices, and Fireship’s video on overengineering a website exist. Big software is notoriously difficult to maintain and update, and huge multiplayer games are no different (see the sketch below). It’s likely there isn’t actually a “reasonable” way for them to keep working. Supporters hope this initiative would push the industry to change how game software is developed, but that hope gets real close to outright naivety.
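
    To be clear about how made-up this is: here’s a tiny Python sketch of the dependency problem. None of these service names or vendors are real; the point is just that a live game is often a web of services, some licensed from third parties the publisher can’t simply hand over.

    ```python
    # Invented example: a publisher can only hand players the pieces it
    # actually owns; contracted third-party services would have to be
    # stubbed out or replaced before a community-run build could work.
    GAME_BACKEND = {
        "accounts":    {"owner": "publisher"},
        "inventory":   {"owner": "publisher"},
        "matchmaking": {"owner": "third-party vendor"},  # contracted out
        "voice-chat":  {"owner": "third-party vendor"},
        "anti-cheat":  {"owner": "third-party vendor"},
    }

    def blockers_for_community_build() -> list[str]:
        """Services the publisher can't simply hand over at sunset."""
        return [name for name, svc in GAME_BACKEND.items()
                if svc["owner"] != "publisher"]

    print(blockers_for_community_build())
    # ['matchmaking', 'voice-chat', 'anti-cheat']
    ```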


  • LLMs are essentially that. They predict the next words based on the previous words. People noticed that the quality of a prompt affects the quality of an LLM’s output: better prompts, better output. So why not use an LLM to generate good prompts? Welcome to “reasoning” models.

    Instead of taking the user’s prompt and generating the output directly, a reasoning model generates intermediate prompts for itself based on the user’s initial prompt and its own intermediate answers. They call it “chain of thought”, or CoT, and it results in a better final output than LLMs that skip this step produce (see the sketch below).
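
    To make the loop concrete, here’s a rough Python sketch of the self-prompting idea. The `llm` callable is a hypothetical stand-in for any text-completion call; real reasoning models learn this behaviour during training rather than running a literal wrapper like this.

    ```python
    from typing import Callable

    def answer_with_reasoning(llm: Callable[[str], str],
                              user_prompt: str, steps: int = 3) -> str:
        """Chain of thought as a plain loop: the model prompts itself
        with its own intermediate answers before answering for real."""
        chain: list[str] = []
        for _ in range(steps):
            # Each intermediate "thought" sees the user's prompt plus
            # everything the model has already told itself.
            context = user_prompt + "\n" + "\n".join(chain)
            chain.append(llm("Think step by step:\n" + context))
        # The final output is conditioned on the accumulated chain.
        return llm(user_prompt + "\n" + "\n".join(chain) + "\nFinal answer:")
    ```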

    If you ask a reasoning LLM to convince a user to take medication that has harmful side effects, and review the chain of thought, you might see that it prompts itself to ensure the final answer doesn’t mention any negative side effects, as that would be less convincing. People are writing about how this is “lying” since the LLM is prompting itself to “hide” information even when the user hasn’t explicitly asked it to.

    However, this only happens in really contrived examples where the initial prompt is essentially asking the LLM to lie without explicitly saying so.




  • ImplyingImplications@lemmy.ca to PC Gaming@lemmy.ca · The end of Stop Killing Games · 15 days ago

    Lol, what a bunch of cope. One guy made a YouTube video and that’s the only reason world governments aren’t changing laws? The video has fewer views than his Inscryption playthrough. Is he the sole reason for Inscryption’s success too? Is Thor actually a god who can make things happen just by leveraging the power of his 2 million subscribers!?

    This failed because the average person does not care about “saving video games”. Nintendo announced they can revoke your access to games you paid $80 for on the Switch 2, and it’s setting sales records.





  • I don’t think there’s any moment that truly blows your mind. It’s a very slow burn. I found that every run taught me something new that made me want to revisit old rooms and seek out new ones. It definitely helps to take notes, which is also fun in its own way.

    Sometimes solving a puzzle just gives you some lore, but that was neat too. There’s one note I found that stuck with me, about following traditions. It doesn’t have anything to do with the game, but it was great writing!


  • why don’t they program them

    AI models aren’t programmed in the traditional sense. They’re generated by machine learning. Essentially, the model is given test prompts and then rated on its answers. The model’s calculations are adjusted so that its answer to each test prompt moves closer to the expected answer. Repeat this a few billion times with a few billion prompts and you end up with a model that scores very high on all the test prompts (a toy version of the loop is sketched below).
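
    A toy version of that rate-and-adjust loop, with everything invented for illustration (real models have billions of parameters and use backpropagation, not a single weight):

    ```python
    # Caricature of machine learning: rate the answer, nudge the model.
    def model(weight: float, prompt: float) -> float:
        return weight * prompt  # the model's "answer" to a numeric "prompt"

    def train(prompts: list[float], expected: list[float],
              weight: float = 0.0, lr: float = 0.01,
              epochs: int = 1000) -> float:
        for _ in range(epochs):
            for x, target in zip(prompts, expected):
                error = model(weight, x) - target  # rate the answer
                weight -= lr * error * x           # nudge toward expected
        return weight

    w = train(prompts=[1.0, 2.0, 3.0], expected=[2.0, 4.0, 6.0])
    print(w)  # ~2.0: the model now aces its test prompts, but nothing
              # guarantees a sensible answer to a prompt it was never rated on
    ```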

    Then someone asks it how many R’s are in “strawberry” and it gets the wrong answer. The only way to fix that is to add it as a test prompt and redo the machine learning process, which takes an enormous amount of time and computational power each time, only for people to quickly find some other prompt it doesn’t answer well.

    There are already AI models that play chess incredibly well. Using machine learning to solve a complex problem isn’t the issue. It’s trying to get one model to be good at absolutely everything.