Over the last couple of years we have seen AI technologies really delivering value to us. Speech recognition is the obvious example: I can say something to my phone, and it understands the request and does something sensible, even with my odd accent. Okay, sometimes it screws up, but only a little more often than a human would. There are similar advances in image processing: computers can recognise photos of people. This is amazing.
More subtly, we’ve had computers beat people at games like Jeopardy and Go, which are much more nuanced than chess. Those are in a different class, I think, so I want to go back to the speech and image processing. First I want to go back to the 80s for some history.
Back then there was a list of ‘AI problems’ to solve. I forget what they all were, but speech recognition, image recognition and translation were in there. So was speech synthesis, and that was solved, somewhat crudely, before the 90s. It was interesting that as these problems were solved they somehow stopped being AI and started being just stuff computers could do. A long time ago, adding up a column of figures could only be done by a human and therefore required ‘intelligence’. It doesn’t now. So we will stop calling image recognition and speech recognition ‘artificial intelligence’ very soon, if we haven’t already. It is just stuff computers can do.
Which leaves us with the question of what artificial intelligence really is. Is it winning Go?
One thing we can be clear on. It is not pattern matching. This is what is driving the image and speech recognition efforts and a number of other neural net based systems. They are brilliant, but it is easy to see they are not intelligent. There is no fundamental understanding of the pattern perceived. It is matched, classified and actioned accordingly. The system could not answer a question about why it classified something one way and not another. It doesn’t know.
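To make that concrete, here is a minimal sketch (not any particular system, and the data is entirely made up): a nearest-neighbour classifier, one of the simplest pattern matchers. It labels an input purely by similarity to examples it has seen. It can tell you what label it produced, but there is nothing inside it from which to extract a reason.

```python
# A toy pattern matcher: classify an input by its closest training example.
# It produces a label, but holds no representation of *why* that label
# is the right one -- there is no explanation in there to extract.

def classify(sample, examples):
    """Return the label of the nearest training example (squared distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, label = min(examples, key=lambda ex: distance(sample, ex[0]))
    return label

# Hypothetical feature vectors and labels, invented for illustration.
examples = [((0.9, 0.1), "cat"), ((0.1, 0.9), "dog")]

print(classify((0.8, 0.2), examples))  # -> cat
```

Ask this system why the input is a cat and the only honest answer is "it was near another cat": matched, classified, actioned, nothing more.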
And this is the bit that I think would be characteristic of something we could truly call intelligent: it has to be able to explain its decisions. Much work has gone into this, but so far the results aren’t generally available because they aren’t very useful (as far as I know). But this is what we should be looking for when we call something intelligent. The ability to explain your reasoning suggests you truly understand what you are reasoning about. The rest is just stuff computers can do.
But there is a trap here. Building an interface to the system to extract the explanation is a big deal. The explanation might be ‘in there somewhere’ and yet not obtainable. The machine would still be intelligent, just not able to demonstrate it, like someone in a semi-coma. So we might already be building such machines and not yet know.