From Wikipedia: this is only a 1-sigma result compared to theory using lattice calculations. It would have been 5.1-sigma if the calculation method had not been improved.
Many calculations in the Standard Model are mathematically intractable with current methods, so approximate solutions are hard-won, and it isn’t surprising that we’ve found improvements to them.
I asked the same question of GPT-3.5 and got the response “The former chancellor of Germany has the book.” And also: “The nurse has the book. In the scenario you described, the nurse is the one who grabs the book and gives it to the former chancellor of Germany.” and a bunch of other variations.
Anyone doing these experiments who does not understand the concept of a “temperature” parameter for the model, and who is not controlling for that, is giving bad information.
Either you can say: At 0 temperature, the model outputs XYZ. Or, you can say that at a certain temperature value, the model’s outputs follow some distribution (much harder to do).
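To make the distinction concrete, here is a minimal sketch of what the temperature parameter does to sampling. The token names and logit values are made up for illustration; this is not any particular model’s vocabulary or API.

```python
import math
import random

# Hypothetical next-token logits, invented for illustration.
logits = {"nurse": 2.0, "chancellor": 1.2, "doctor": 0.5}

def sample(logits, temperature, rng):
    """Sample a token after temperature-scaling the logits.

    temperature == 0 means greedy decoding (always the top token);
    higher temperatures flatten the distribution toward uniform.
    """
    if temperature == 0:
        return max(logits, key=logits.get)  # deterministic argmax
    scaled = {t: l / temperature for t, l in logits.items()}
    m = max(scaled.values())
    # Numerically stable softmax weights.
    weights = {t: math.exp(s - m) for t, s in scaled.items()}
    r = rng.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point edge cases

rng = random.Random(0)
print(sample(logits, 0, rng))  # always "nurse" (the highest logit)
print([sample(logits, 1.5, rng) for _ in range(5)])  # a mix of tokens
```

At temperature 0 every run is identical, so a claim like “the model outputs XYZ” is reproducible; at any nonzero temperature you are sampling from a distribution, and a single output tells you very little.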
Yes, there’s a statistical bias in the training data that “nurses” are female. And at nonzero temperatures, sampled outputs reflect that prior. I guess that’s useful to know for people just blindly using the free chat tool from OpenAI. But it doesn’t necessarily represent a problem with the model itself. And to say it “fails entirely” is just completely wrong.
Haha, thanks for the correction. If you happen to have a degree in ethics, perhaps you could add your perspective to the thread?
If you can get past the weird framing device, the Plinkett reviews of the Star Wars prequels are an excellent deep dive into the issues with those films: https://www.youtube.com/watch?v=FxKtZmQgxrI&list=PL5919C8DE6F720A2D
Jenny Nicholson’s videos are great, but her documentary on “The Last Bronycon” is special, as the realization dawns on you while watching that she has more connection to Brony culture than you might have guessed: https://www.youtube.com/watch?v=4fVOF2PiHnc
According to consequentialism:
From this perspective, the only issue one could have with deep fakes is the distribution of pornography which should only be used privately. The author dismisses this take as “few people see his failure to close the tab as the main problem”. I guess I am one of the few.
Another perspective is to consider the pornography itself to be impermissible. Which, as the author notes, implies that (1) is also impermissible. Most would agree (1) is morally fine (some may consider it disgusting, but that doesn’t make it immoral).
In the author’s example of Ross teasing Rachel, the author concludes that the imagining is the moral quandary, as opposed to the teasing itself. Drinking water isn’t immoral. Sending a video of yourself drinking water isn’t immoral. But sending that video to someone dying of thirst is.
The author’s conclusion is also odd:
Today, it is clear that deepfakes, unlike sexual fantasies, are part of a systemic technological degrading of women that is highly gendered (almost all pornographic deepfakes involve women) […] Fantasies, on the other hand, are not gendered […]
Mirroring the comments on Ars: Why should AI child porn be illegal? Clearly the demand is there, and if you cut off the safe supply, don’t you just drive consumers to sources which involve the actual abuse of minors?
Another comment I saw was fretting that AI was being fed CSAM, and that’s why it can generate those images. That’s not true. Current image-generating models can easily generate out-of-distribution images.
Finally, how does the law deal with sharing seed+prompt (the input to the AI) instead of the images themselves? Especially as such a combination may produce child porn in only 1 model out of thousands.
Cool, you posted the original with the Tim Minchin callout.
The approach requires multiple base stations, each lying in the path of a ray that is detected at both the station and the receiver, and the receiver’s position can only be computed if it can communicate with the stations.
I’m on Kbin, I think that feature isn’t working yet. But thanks for replying, that would have helped me on another instance.
So, thus far, ITER has cost less than the Manhattan Project but has taken longer. The adage that it is easier to destroy than to create comes to mind.
It does seem like ITER could be more transparent, but the article is hyperbolic about one of the most important civil-engineering projects running over time and budget.
America has spent 5x the ITER budget on Ukraine so far (and rightly so). I wish we lived in a world where that money could have supported research projects like this instead.
Another reason to block an instance is language. For example, https://feddit.de is a German-language instance, so I’ll never interact with anything from there.
Though, perhaps that’s a separate issue. Maybe users should be able to set their language(s) and content can be blocked if it’s not in your language(s).
Probably best to do it hierarchically, where instances have a default language, magazines can override the default instance language, and posts can override the default magazine language.
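The hierarchical fallback could be sketched like this; all class and field names here are illustrative, not kbin’s actual data model:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Instance:
    default_language: str           # e.g. "de" for a German instance

@dataclass
class Magazine:
    instance: Instance
    language: Optional[str] = None  # None -> inherit from instance

@dataclass
class Post:
    magazine: Magazine
    language: Optional[str] = None  # None -> inherit from magazine

def effective_language(post: Post) -> str:
    """Resolve post -> magazine -> instance, first explicit value wins."""
    return (post.language
            or post.magazine.language
            or post.magazine.instance.default_language)

def visible_to(post: Post, user_languages: set[str]) -> bool:
    """Hide a post if its resolved language isn't one the user selected."""
    return effective_language(post) in user_languages

# A German instance hosting one English-language magazine.
german_instance = Instance(default_language="de")
english_mag = Magazine(instance=german_instance, language="en")
post = Post(magazine=english_mag)
assert effective_language(post) == "en"           # magazine override wins
assert not visible_to(Post(magazine=Magazine(instance=german_instance)),
                      {"en"})                     # falls back to "de", hidden
```

The nice property of the override chain is that most posts need no explicit language tag at all; only the exceptions set one.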
There’s already:
https://kbin.social/m/ai
https://kbin.social/m/ArtificialIntelligence
https://kbin.social/m/machinelearning
I don’t think the UI does much to make these links easy to use outside of kbin. To join from, for example, lemmy.world, I think you write: https://lemmy.world/c/ai@kbin.social
But unfortunately, federation is still a bit broken.
That reminds me of a joke.
A museum guide is talking to a group about the dinosaur fossils on exhibit.
“This one,” he says, “is 6 million and 2 years old.”

“Wow,” says a patron, “how do you know the age so accurately?”

“Well,” says the guide, “it was 6 million years old when I started here 2 years ago.”