Monday, February 17, 2014

SF & AI

At a talk at the Oxford Martin School titled Artificial intelligence: examining the interface between brain and machine, I asked Anders Sandberg what role, if any, cultural products, including fiction, could usefully play in thinking about the future.* He replied:
I quite like Asimov's robot stories because they are beautiful demonstrations that if you try to get your robots to behave according to a fixed set of rules there are going to be conditions that lead to bizarre or stupid behaviours. They are actually good demonstrations of why you shouldn't use that sort of programming. But Asimov came up with the rules mostly to have a good framework for his stories. The real problem is when people think they were proposed seriously.

Any individual story, any individual piece of fiction, is not going to work. But I think reading a lot of science fiction is actually quite useful to stretch your mind. None of the individual stories is necessarily useful or helpful, but they can help you get into mindsets that are very different. If there is one thing science fiction is about, it is dealing with the other – dealing with very different situations and especially beings that function in a very different way. And I think that flexibility is important when we start to reason about it. Ray Kurzweil suggested that we give future AI the golden rule. That way they would learn how to behave themselves. But anyone who has tried to explain the golden rule to an inquisitive eight-year-old will realise there are plenty of loopholes in that. And that's a human eight-year-old. If this had been an AI eight-year-old, the loopholes that are obvious to an intelligent machine would be very weird to us.
I think the money quote in this talk was "We have very little idea how to encode a good values system [into intelligent machines]."

Here is an article titled The Dawn of Artificial Intelligence.  

At Charlie Stross's blog, Ramez Naam argues that The Singularity is Further Than It Appears.



* The video is here. My question is at 1:05:30 and Sandberg's reply at 1:09:15. I mentioned Marvin the Paranoid Android in the preface to my question in reference to his anecdote, at 1:01:00, about a robot he built that got stuck in a pattern of learned helplessness. The transcript above is not exact.

P.S. Maria Popova suggests some reasons why science fiction writers are good at predicting the future.
