- cross-posted to:
- eticadigitale@feddit.it
- news@lemmy.world
There is one class of AI risk that is generally knowable in advance. These are risks stemming from misalignment between a company’s economic incentives to profit from its proprietary AI model in a particular way and society’s interests in how the AI model should be monetised and deployed. The surest way to ignore such misalignment is by focusing exclusively on technical questions about AI model capabilities, divorced from the socio-economic environment in which these models will operate and be designed for profit.
How about the risk of dumbass managers overestimating AI’s ability to save on labor costs and firing too many workers to keep functioning?
I tried to follow this but my brain is fried (and it’s only lunch time!)
One thing it got me thinking about (and I was surprised by the conclusion I came to) was that it’s often brought up how the trained models are proprietary black boxes - but we all know the data was whatever public records they could scrape from the internet, be it reddit or whatever.
Such a thing didn’t exist for them to use in a licensed manner; they were innovating. So I’m naively wondering why it’s a problem that they took the risk of using the data and presumably paid tremendously low wages to people in third-world countries to prune and train it.
They still had to build the thing and pay to run it, train it and mature it. The risk was all theirs, so why is it a problem that they’re now hoping to profit from it?
- Maybe they should sell their training data…
We’re upset at the greedy little pig boy spez for licensing it to them, but we did chuck all our thoughts up on the bathroom wall for everyone to see. It’s not like there was anything private about it.
I do like the approach of changing the incentives, but that will need regulation to force the capitalists to behave, so I guess we’ll just have to wait for the EU to come up with a plan.