Society of Minds — Possible AI Futures with Long Now Foundation
Photo by jurvetson
At the speaker dinner, we all started with a pop question from Kevin Kelly: what possible AI minds would you want to see? There were some creative answers. I said I would like to see a society of minds (in a plural nod to Minsky). I pointed out that our brains are physically identical to those of humans 2,000 years ago, but much of the “intelligence” we ascribe to humanity comes from our ongoing cultural evolution, glacial as it may seem within any one lifetime. Two thousand years ago, our lives and social contracts were nasty and brutish, and abominations like rape, genocide, and slavery were not yet condemned. So I would love to see a society of AI minds interacting in a simulator, exploring the frontiers of the moral landscape (as Sam Harris wrote about) and rapidly accelerating cultural evolution for the benefit of all of us. A better social contract for modernity might take us many generations to discover, and we might make some mistakes along the way, especially in a technological epoch where weapons of mass destruction are available to individuals. What rule of law and societal immune system might serve us best? What better peace dividend from these AI lives than the discovery of the better angels of our nature?

Here are juicy nuggets from the Long Now Panel on Possible Minds (bios here):

Alison Gopnik:

- “I have found that engineers are more open-minded than scientists. Scientists hold onto their theories. Engineers see something new, like deep learning techniques, and switch abstractions more easily when it proves useful.”
- “Deep learning is good at pulling statistical structures from huge data sets, and now we see their limits at creatively thinking up new ideas.”
- “3- and 4-year-olds do causal inference better than the best scientists we know.”
- “The computational model of the mind is the best we have. The computational model for a 2-year-old is a better predictor than any psychological or scientific model of their behavior.”
- Q: Will AIs better us at humor? A: “All of the things that most make us human are the hardest to define.”

Stuart Russell:

- “Consider the so-called ‘filter bubble’ of social media. The reinforcement learning algorithm is trying to maximize click-throughs. From the view of the human, the purpose of the machine is to maximize click-throughs. But from the view of the machine, it is changing the state of the world to maximize clicks. It is changing you to make you more predictable. A raving fascist or communist is more predictable and will lap up raving content. The machines can change our minds about our objective function so we are easier to satisfy. Advertisers have done this for decades.” [I argued with him about this feedback loop, and Yann LeCun says this changed at Facebook a while ago.]
- “The reinforcement learning algorithm in social media has destroyed the EU, NATO, and democracy. And that’s just 50 lines of code.”

Rodney Brooks:

- “With humans, we can often predict someone’s capabilities from their performance at a task. This is a common mistake when applied to AI.”
- “Machine learning consumes incredible energy compared to a child. But once learned, it can be duplicated at no cost.”

| License | Creative Commons Attribution 2.1 |
|---|---|
| Date taken | 2019-02-25 21:46:08 |
| Photographer | jurvetson, Los Altos, USA |
| Camera | DSC-RX100M3, SONY |
| Exposure | 0.02 sec (1/50) |
| Aperture | f/2.8 |
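Stuart Russell's filter-bubble remark above describes a feedback loop: a learner that only maximizes click-throughs ends up steering the user's state toward whatever is easiest to predict. The following is a minimal toy sketch of that loop, not any real recommender system; the two-topic user model, the `simulate_filter_bubble` function, the drift dynamics, and all numbers are invented for illustration.

```python
import random

def simulate_filter_bubble(steps=500, drift=0.05, seed=0):
    """Toy model of a click-maximizing recommender making a user predictable.

    The user holds a preference p in [0, 1] for topic A over topic B and
    clicks on A with probability p (on B with probability 1 - p). Each
    impression nudges p toward the shown topic by `drift`. A greedy
    recommender tracks observed click rates and keeps serving whichever
    topic has clicked best so far. All dynamics are illustrative assumptions.
    """
    rng = random.Random(seed)
    p = 0.5                    # user starts with no strong preference
    clicks = {"A": 1, "B": 1}  # smoothed click counts
    shows = {"A": 2, "B": 2}   # smoothed impression counts
    for _ in range(steps):
        # greedy choice: serve the topic with the higher observed click rate
        topic = max(("A", "B"), key=lambda t: clicks[t] / shows[t])
        shows[topic] += 1
        click_prob = p if topic == "A" else 1 - p
        if rng.random() < click_prob:
            clicks[topic] += 1
            # engagement nudges the preference toward the shown topic
            p = p + drift * (1 - p) if topic == "A" else p * (1 - drift)
    return p

# After many greedy steps the preference is pushed toward an extreme,
# i.e. the user's clicks become highly predictable; with drift = 0 the
# preference (and hence predictability) never moves.
final = simulate_filter_bubble()
print(min(final, 1 - final))  # distance of the preference from an extreme
```

The point of the sketch is that nothing in the objective mentions changing the user, yet the greedy loop still drives the preference to an extreme, because a more extreme user clicks more reliably.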

