• 0 Posts
  • 26 Comments
Joined 1 year ago
Cake day: June 7th, 2023

  • azl@lemmy.sdf.org to Technology@lemmy.world · *Permanently Deleted* · 2 months ago

    What’s the difference between one technology you don’t understand (an AI-assisted diagnostic engine) and another you don’t understand (a human-staffed radiology laboratory)?

    Regardless of whether you (as a patient hopelessly unskilled in diagnosing any condition) trust the method, you probably have some level of faith in the provider who selected it. And while they will most likely choose whatever is most beneficial to them (weighing the cost of providing accurate diagnoses against the cost of less accurate ones), hopefully regulatory oversight and public influence will force them to use whichever is most effective, AI or not.




  • This would ideally become standardized among web servers, with an option to easily block various automated aggregators (something like the sketch at the end of this comment).

    Regardless, all of us combined are a grain of rice compared to the real meat and potatoes AI trains on: social media, public image storage, copyrighted media, etc. All those sites with extensive privacy policies are the ones signing contracts to license their content for training.

    Without laws (and I’m not sure I support anything in this regard yet), I do not see AI progress slowing. Clearly, inbreeding AI models (training them on their own output) has a similar effect as it does in nature. Fortunately, there is enough original digital content out there that this does not need to happen.
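
    Something like this is what I have in mind for the blocking option. A rough sketch of a Python/WSGI middleware; GPTBot (OpenAI) and CCBot (Common Crawl) are real published crawler user agents, but everything else here is hypothetical:

    ```python
    # Sketch: refuse requests from known AI training crawlers at the app layer.
    # robots.txt asks politely; this does not rely on the crawler's cooperation.
    AI_CRAWLERS = ("GPTBot", "CCBot")

    def block_ai_crawlers(app):
        """Wrap a WSGI app so requests from listed crawlers get a 403."""
        def middleware(environ, start_response):
            user_agent = environ.get("HTTP_USER_AGENT", "")
            if any(bot in user_agent for bot in AI_CRAWLERS):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Automated aggregators are not permitted here.\n"]
            return app(environ, start_response)
        return middleware
    ```

    Of course, a scraper that lies about its user agent sails right through, which is why I’d like to see this standardized and enforced further down the stack.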



  • I want Ars content to be part of whatever training data is provided to the best models. How does that get done without it appearing that they are being bought?

    Even if their contract explicitly states that it is a data-sharing agreement only, and that the products of the media organization (articles/investigations) are not grounds for breach or retaliation, it will be assumed that there is now some partiality in future reporting.

    So, for all media companies, the options seem to be:

    1. Contribute to the greater good by openly permitting site scraping (for $0)
    2. Allow data sharing to contracted parties only (for a fee)
    3. Publicly or privately prohibit use of any data, and then seek damages down the road for theft/copyright infringement once the legal framework has been established

    Is there a GPL-style or other license structure that permits data sharing for LLM training in a way that keeps it from being transformed into something evil?


  • I pay for Nebula and try to watch as much as I can there. The content is more “pleasant department store” and less “Mexican public market”.

    I do watch YouTube regularly when channel-surfing, but if I ever see an ad (which happens only on mobile devices), I close it immediately and do something else. It’s not that I think I should be able to watch everything for $0; it’s that YouTube ads are so jarring, random, and irrelevant that they make me sick. They literally ruin whatever I was watching and make me sad to exist.

    It can be exhausting to wade through the absolute meat market of clickbait titles and thumbnails to find something that not only looks interesting but won’t abuse me with infomercial-style audio/visuals.

    YouTube enables and promotes the “content creators” who abuse human psychology to accumulate views, likes, subscriptions, etc. The best thing that could happen is that they continue to be exposed as the drug dealers they are.



  • Look at this in the same light as the Second Amendment: bearing arms was more compatible with society when the “arms” were mechanically limited in their power and capability. Gun laws have matured to some degree since then, restricting or banning the higher-powered weaponry available today.

    Maybe slander/defamation protections are not agile or comprehensive enough to curtail the proliferation of AI-generated material. It is certainly much easier to malign or impersonate someone now than ever before.

    I really don’t think software will ever be successfully restricted by the government, but the hardware behind it might end up with some form of firmware-based lockout that limits AI capabilities to approved models, each carrying a certificate signed by the hardware maker (after it has vetted the submission for legally mandated safety or anti-abuse features); see the sketch at the end of this comment.

    But the horse has already left the barn. Even the current level of generative AI is fully capable of fooling just about anyone, and it will never be stopped without advancements in AI detection tools or some very aggressive changes to the law. Here come the historic GPU bans of the late ’20s!
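
    To make the certificate idea concrete (purely illustrative; no vendor actually does this, and every name below is hypothetical), the firmware-side gate could be as simple as verifying a vendor signature over the model file:

    ```python
    # Hypothetical firmware-side gate: load a model only if its signature
    # verifies against a vendor-installed RSA public key (RSA-PSS/SHA-256).
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding

    def model_is_approved(vendor_public_key, model_bytes, signature):
        """Return True only if the model blob carries a valid vendor signature."""
        try:
            vendor_public_key.verify(
                signature,
                model_bytes,
                padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                            salt_length=padding.PSS.MAX_LENGTH),
                hashes.SHA256(),
            )
            return True
        except InvalidSignature:
            return False
    ```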




  • I just wanted to thank you for your reply. It was so well written and easily digested that I feel like I got hours’ worth of research out of it. God bless Lemmy.

    My 2 cents (more like $2 now that I wrote it) is that no car made in the past 20 years can be maintained to the degree older cars could, and electric cars will suffer from the same ephemeral lifespan as all modern autos do. Electric or not, makers will continue to abandon vehicle platforms regularly and aggressively in order to ensure no single component or technology becomes affordable or obtainable outside of a manufacturer-sponsored limited warranty plan. And they will lobby against our attempts to extend the service life of electric drivetrains in the name of safety or design secrecy.


  • I’ve been doing it this way for many years, since before LetsEncrypt was around. Maybe someday I’ll switch so I can become dependent on yet another third party (though I do use LetsEncrypt for public-facing services).

    Yes, telling your computer to trust a certificate chain that you are responsible for securing may significantly increase your attack surface. It’s easy to forget about that root cert (I actually push mine via GPO).
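
    For anyone wondering what the client side of a self-managed chain looks like, here’s a minimal sketch using Python’s standard library (the CA path and URL are made up):

    ```python
    # Sketch: an HTTPS client that trusts only a self-managed root CA.
    # Passing cafile means the system trust store is not loaded.
    import ssl
    import urllib.request

    ctx = ssl.create_default_context(cafile="/etc/pki/my-root-ca.pem")
    with urllib.request.urlopen("https://nas.home.lan/", context=ctx) as resp:
        print(resp.status, resp.reason)
    ```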



    The ads also show users interacting with their physical and virtual environments smoothly, with no difficulty seeing their surroundings and no spatial-positioning glitches, which does not at all describe the current state of Meta OS. I’ve been an Oculus/Meta user for 10 years, and the UI is definitely not an Apple experience. (P.S. I hate Apple and love the Quest 3.)

    Wearing a Quest while working on a car sounds like a great way to lose a finger or destroy the part I’m trying to install or repair. I can feel the frustration bubbling up just imagining trying to assemble furniture while wearing a headset clamped to my face with a super-tight head strap. Man, I’m so pissed now.





  • Yes, this is totally possible, and I did it for a couple of years with OPNsense. I actually had an OPNsense box and a pfSense box both on Hyper-V; I could toggle between them easily, and it worked well. There are CPU considerations that depend on your traffic load. Security is not an issue as long as you have the network interface assignments correct and have not accidentally attached the WAN interface to any other guest VMs.

    Unfortunately, when I upgraded to 1Gb/s (now 2Gb/s) on the WAN, the VM could not keep up. No amount of tuning in the Hyper-V host (dual 3GHz Xeons) or the VM could resolve the poor throughput. I assume it came down to the 10Gb NICs and their drivers, or the Hyper-V virtual switch subsystem. Depending on which hardware offload and other tuning settings I tried, I would get perfect throughput one way but terrible performance in the other direction, or some compromise in between on either side. There was a lot of iperf3 testing involved (along the lines of the sketch at the end of this comment). I don’t blame OPNsense/pfSense – these issues impacted any 10Gb links attached to VMs.

    Ultimately, I eliminated the virtual router and ended up where you are, with bare-metal pfSense on a much less powerful device (Intel Atom-based). I’m still not happy with it – getting a full 2Gb/s up and down is hard.

    Aside from performance, one of the other reasons for moving the firewall back to a dedicated unit was that I wanted to isolate it from any issues that might impact the host. The firewall is such a core component of my network, and I didn’t like it going offline when I needed to reboot the server.
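
    Since I mentioned iperf3: the testing loop amounted to something like this, run from a client on the LAN (the address is hypothetical, and `iperf3 -s` must already be running on the target):

    ```python
    # Sketch: measure throughput toward and from the router under test.
    # -R reverses the test so the server sends; asymmetry between the two
    # runs was exactly the problem I kept chasing.
    import subprocess

    HOST = "192.168.1.1"  # hypothetical address of the firewall

    for label, extra in (("forward (client sends)", []),
                         ("reverse (server sends)", ["-R"])):
        print(f"--- {label} ---")
        subprocess.run(["iperf3", "-c", HOST, "-t", "10", *extra], check=True)
    ```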