I’d be ok being under the rule of an open source AI— not today’s AI, but like, at this point I’d trust a robot more than I’d trust a human in leadership.
There would still be disagreements on how to program the AI. A superintelligent AI with information about everything could probably find the best way to reach any goal, but how do we define what our goals are in the first place, and how they're prioritized? And what constraints are there on the actions of the AI state?
There's a quote I've heard, I think it's from Alexander Grothendieck: "There's no systematic way to go from the knowledge of what is to the knowledge of what should be", that sums it up pretty well.
IDK man. AI generally reproduces existing biases, and who is going to control those who control the AI?
Another AI. Let me dream, dammit!
Isn’t ruling by definition centralized and open source by definition decentralized?