Autonomous Machines and Responsibility
It is strange to see some people question who would be responsible if an AI made a decision that resulted in disaster. Since AI is not yet recognized as a sentient being in full possession of its cognitive capacities, this responsibility falls on the owner of the AI software, and many countries already have laws defining what this kind of responsibility and ownership entails. The real question is: do we really want an AI to be recognized as fully autonomous? AI is still at too early a stage of its development for this question to get a definitive answer right now: current AIs can be considered simplistic, as the human brain remains leaps and bounds ahead of them. We are simply not there yet in terms of technology and engineering capabilities. That said, we still need to start considering what this responsibility really means, as several companies have now started building autonomous cars. These cars raise many ethical and civil concerns, as they have to operate in the same environment as us humans, which takes us back to the points above.
One possible solution would be to give autonomous cars a separate environment, similar to what we already do by prohibiting pedestrians, and cars that cannot reach a minimum speed, from highways. Roads reserved for autonomous cars are possible, but a government would likely wait for a majority of its population to use autonomous cars before implementing such a project. When an autonomous car enters the highway, the AI takes over the driving, shifting from intelligent cars to intelligent roads. Off these intelligent roads, driving, even if aided by driving assistance, would remain entirely manual.
This seems like a good system, as AIs currently lack the cognitive abilities to operate in the same environment as humans. Legal concerns would also be lessened, as these environments would be as strict and controlled as rail or air traffic.
It should now be clear that the issues raised by AI are not only technical: many actors in the field, with very different profiles, must work together to design the best solutions.
From this perspective, the next logical step would be for elected government officials to start thinking as project managers, as aggregators of skills, in order to design tomorrow's society together with civilians and engineers.
Controlling our AIs
The protection of our personal data and the monitoring of AI is another sensitive topic. If we don't set ground rules now, we will be in over our heads in a few years. The EU has recently set a milestone in personal data protection with the GDPR. It is a good start, but only a start: these rules must be respected, and enforcing them should fall to an independent monitoring body with access to the right technological resources to operate efficiently.
Another important point is AI control. Currently, almost all AI is built using the neural-network approach, and researchers have to use huge amounts of data to train these networks. This data is compiled and encoded into the network itself, so that everything is packaged and can be used in production without relying on external dependencies. The issue is that once training starts, keeping track of the AI's decision-making process becomes impossible, which is why the AI community refers to this process as a 'black box'. To give an example of why this is a problem: if, during training, a malicious programmer feeds the network a modified dataset in order to make it behave differently, this change in behaviour will only become visible after training is complete, once the AI is in use. Monitoring the decision-making process proactively is quite challenging, and even though research is being conducted on how to open this black box, it is still at a very early stage. The problem is that, from the beginning, neural networks' decision-making processes were a black box by design.

Clearly, rethinking this would be very useful for industry, in order to respect public privacy. But there are other algorithms out there, some already proven successful in industrial settings, and this is why we think AI has a lot of good to offer. The field is maturing, but it needs round-the-clock monitoring, as it will play a major part in our future. There is still a long way to go before AI comes anywhere near humans' flexible cognitive abilities, and we should use this time wisely: to make sure humans keep a central role in AI development, so that we stay in control of our own future.
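The poisoned-dataset scenario above can be illustrated with a minimal sketch. The task, data, and function names here are invented for illustration: the same training code is run twice, once on clean labels and once on labels an attacker has tampered with. Both runs produce models that are just arrays of numbers, and nothing in those numbers reveals the tampering; the difference only surfaces when the trained model is queried.

```python
import numpy as np

def train(X, y, epochs=5000, lr=0.5):
    """Logistic regression trained by gradient descent.

    Returns the learned weights and bias. Like a neural network,
    the training data ends up encoded only as numbers in (w, b).
    """
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid activation
        grad = p - y                            # cross-entropy gradient
        w -= lr * X.T @ grad / len(y)
        b -= lr * grad.mean()
    return w, b

def predict(w, b, x):
    """Return the model's class (0 or 1) for input x."""
    return int(x @ w + b > 0)

# Toy task: output 1 only when both inputs are 1 (logical AND).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_clean = np.array([0, 0, 0, 1], dtype=float)

# The attacker flips a single label, so the trained model will
# silently answer 0 for the input [1, 1].
y_poisoned = np.array([0, 0, 0, 0], dtype=float)

w_clean, b_clean = train(X, y_clean)
w_bad, b_bad = train(X, y_poisoned)

# Inspecting the weights tells us nothing about the tampering:
# both are just small arrays of floats.
print("clean weights:   ", w_clean)
print("poisoned weights:", w_bad)

# The difference in behaviour only appears at inference time.
trigger = np.array([1.0, 1.0])
print("clean model on [1,1]:   ", predict(w_clean, b_clean, trigger))
print("poisoned model on [1,1]:", predict(w_bad, b_bad, trigger))
```

The point of the sketch is not the attack itself but the opacity: auditing `w_clean` and `w_bad` side by side gives no hint of which one was trained on bad data, which is exactly what makes proactive monitoring of the decision-making process so hard.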