• 3 Posts
  • 145 Comments
Joined 2 years ago
Cake day: January 20th, 2023




  • heavyboots@lemmy.ml to Linux@programming.dev · Where is Naomi Wu? (edited 20 days ago)

    Last post that I can find by her is on Mastodon, promoting a new electronics book she helped co-author, I think?

    But yeah, she has been super quiet since they “clipped her wings”, though she did say she would be. Censorship sucks, and I can’t believe we let an entire country get away with it while still doing business with them the whole time.


    • They attended a ceremony, arranged through some sort of dealings instigated by House Speaker Mike Johnson, in a part of the cemetery that is off-limits to photography and political events.
    • Trump was told he could only attend in a personal capacity and that no hangers-on would be allowed in.
    • When an official tried to enforce those limits on his visit, a large staffer physically pushed her aside, claiming that photography was allowed, so the staffer and at least one or two others were in there instead of just Trump himself.
    • Trump has (of course) posted this crap all over his social media, using it for political purposes, in spite of the conditions under which his visit was supposed to take place (honoring a fallen soldier in a civilian capacity).
    • And now that it is all over and starting to blow up in his face, his response is, unsurprisingly, to lie about all of it, despite photographic evidence of the politics and the extra staffers, and eyewitness confirmation by Army personnel that the physical altercation happened.







  • These are a bit different from the lists everyone else has, I think:

    • Lemmy Keyboard Navigation (like the kbd shortcuts from RES)
    • Google Popup Blocker (stops the annoying “log in with Google” popups everywhere on the web)
    • OneTab (lets you collapse a whole window of tabs down into a list in the OneTab tab, which you can later re-expand into a window again when you come back to whatever subject all the tabs were about)

    These are the more standard ones that everyone seems to run:

    • uBlock Origin
    • Reddit Enhancement Suite
    • 2FAS Extension
    • BitWarden

  • I hope like hell the sets of questions were randomized, because if they weren’t, they were tweaked by the surveyors beforehand to try and force a particular result.

    Like the AI question was paired with some incredibly crappy options like “A browser that runs 2x slower than your current browser”. Obviously they want you to click that option as least wanted and leave the AI development alone (if that wasn’t a randomized grouping).

    Similarly, in later questions it looked like they were trying to decide which feature to sacrifice in support of AI dev, because all 3 were things I enjoy much more than AI, but I had to rate one as least wanted.

    EDIT: OK, thanks for all the responses everyone! Looks like my pairing of AI and 2x slower was just a bad random selection inducing extreme paranoia on my part. Very happy to hear that.




  • heavyboots@lemmy.ml to Privacy@lemmy.ml · *Permanently Deleted* (edited 2 months ago)

    Can’t answer for Tor (haven’t even tried it in years), but I know that on Windows, Firefox totally ignores the whole “reopen tabs on restart” pref if you close the last window via the red X in the corner. You have to use Ctrl+Shift+Q, or show the menus and select File->Quit, if you want to quit in a way it understands as a request to reopen the tabs on the next launch.
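
    For reference, the pref being ignored there is the session-restore setting. A minimal user.js sketch, assuming browser.startup.page is still the pref behind “Open previous windows and tabs” and that the file goes in your profile folder (about:support shows the path):

        // user.js — read once at startup from the Firefox profile folder
        // 3 = "Open previous windows and tabs" (restore the previous session)
        user_pref("browser.startup.page", 3);
        // Even with this set, closing the last window with the red X may be
        // treated as discarding the session, so quit via Ctrl+Shift+Q instead.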






  • I totally agree that both seem to imply intent, but IMHO hallucinating implies not only more agency than an LLM has, but also less culpability. Like, “Aw, it’s sick and hallucinating, otherwise it would tell us the truth.”

    Whereas calling it a bullshit machine still implies more intentionality than an LLM is capable of, but at least skews the perception of that intention more in the direction of “It’s making stuff up” which seems closer to the mechanisms behind an LLM to me.

    I also love that the researchers actually took the time to not only provide the technical definition of bullshit, but also sub-categorized it too, lol.