The beauty of Dutch commuting: while biking at the university, I ran into my long-time AI colleague. We go back to the days when AI was just theory, not really used in practice. Now it is the other way around: used in practice, and not really a theory. What follows is a summary of our discussion. It is not fact-checked, but AI can probably do that for you if you wish.
Big tech spent a year analysing basically all existing texts, with 90 billion neurons (a 90-billion-dimensional space), the electricity alone costing $50 million. The results are amazing, and we are all allowed to use it. Obviously not out of generosity, but as a form of free public debugging. There is no theory underlying this exercise. You could say it is brute force, just correlating. And big tech is not very transparent. AI researchers are now reverse engineering it, asking it questions to find out what kind of model underlies all this. There is a model, for sure, but one generated on the fly. It is worrying that commerce, with its big capital, is doing this while excluding the research community, which has models and experience. To get AI to a responsible level, it would be good if big tech involved the research community more and was more transparent about what it is doing.
Now, there is one intriguing question. AI can still be incredibly stupid. Like suggesting antibiotics as medication while the case notes the patient's allergy. It makes mistakes humans never would. I always give the example of a neural net trained to recognise sneakers. It can, with an accuracy of over 95%. But turn the sneaker upside down, and it has no clue whatsoever. Humans do. The reason for these mistakes is that AI does not understand anything; it just correlates with what it found in the data. Maybe it will get better over time, maybe not. This is the black box that is often referred to. It makes it risky to put AI to work in life-and-death situations, or in law, for that matter.
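The upside-down-sneaker effect is easy to reproduce in miniature. The sketch below is not the sneaker network itself, just a hypothetical toy: a nearest-centroid classifier trained on upright synthetic "images" (bright in the top or bottom half). It scores perfectly on upright test images, yet fails completely once the same images are flipped, because it only matches pixel correlations and has no notion of orientation.

```python
# Toy illustration of pattern-matching without understanding:
# a nearest-centroid classifier trained on upright patterns
# fails when the same inputs are turned upside down.
import numpy as np

rng = np.random.default_rng(0)

def make_images(n, kind):
    """8x8 'images': class 'top' is bright in the top half, 'bottom' below."""
    imgs = rng.normal(0.0, 0.1, size=(n, 8, 8))
    if kind == "top":
        imgs[:, :4, :] += 1.0
    else:
        imgs[:, 4:, :] += 1.0
    return imgs

# "Training": average the upright images of each class into a centroid.
X_top, X_bot = make_images(100, "top"), make_images(100, "bottom")
c_top, c_bot = X_top.mean(axis=0), X_bot.mean(axis=0)

def predict(img):
    """Assign the class whose centroid is closest in pixel space."""
    d_top = np.linalg.norm(img - c_top)
    d_bot = np.linalg.norm(img - c_bot)
    return "top" if d_top < d_bot else "bottom"

# Upright test images: near-perfect accuracy.
test = make_images(50, "top")
upright_acc = np.mean([predict(im) == "top" for im in test])

# The same images flipped upside down: systematically wrong,
# even though a human sees the same pattern at a glance.
flipped_acc = np.mean([predict(im[::-1, :]) == "top" for im in test])

print(f"upright accuracy: {upright_acc:.2f}")  # ~1.00
print(f"flipped accuracy: {flipped_acc:.2f}")  # ~0.00
```

The point of the toy is that nothing in the model "knows" what the pattern is; it only knows which pixels were bright in the training data, so a flip that changes no content at all still destroys it.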
My colleague was saying that it has intelligence, but one different from ours. So, things that we find very easy can be difficult for AI. This reminded me of an analogy he often used: when people started thinking about flying, they imitated nature and used flapping wings. We have now been flying for over 100 years, but we do it in a way that is totally different from, say, flies or birds. And it works.
These are exciting, game-changing, society-changing times. Another, even older AI colleague predicted a chess computer beating the world champion long before anyone believed it possible. In 1997, Kasparov was beaten. Twenty-five years later, we still play chess and use computers, AI if you wish, to improve and understand the game. In the coming five years, it will become clear what role AI will play in education, journalism, and so on. It is our role as legal scholars to emphasise the importance of fairness and the legality of what is going on. It has become more than just a game. It is serious.