Machine Learning, Big Data, and Responsibility
Hari Seldon Stage
Machine learning is revolutionizing science, the economy, and society. In science, pattern recognition by learning algorithms (e.g., in the big data of the life sciences and medicine) supports, and sometimes replaces, the cognitive abilities of human scientists. Predictive analytics opens new avenues for predicting human behavior, for business strategies as well as predictive policing. Yet machine learning rests on neural nets with exploding numbers of parameters, which are often merely trained and tuned on big data to deliver the desired results. In this case, neural nets are black boxes: statistical procedures that lack causal explainability. Without causal explainability of machine learning, however, responsibility cannot be clarified. Explainable AI is not only still in its infancy; the question of responsibility also transcends explainability, as it leads ultimately to commercial warranty and personal liability. A crucial example is the development of self-learning cars. Trust in provable and controllable software might contribute substantially to society's acceptance of AI, despite the inherent risks. So far, software engineering has few regulations that could help to identify responsibility. The challenges of responsibility call for more research on the foundations of machine learning. Here we aim to get a sense of the standards that need to be set for AI software.
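To make the gap between statistical sensitivity and causal explanation concrete, here is a minimal sketch (an illustration, not part of the abstract) of gradient-based attribution, one of the basic explainable-AI techniques alluded to above. For a tiny one-neuron "network" f(x) = sigmoid(w·x) with hypothetical weights, the input gradient shows which features the prediction is sensitive to; all names and values are invented for illustration.

```python
import math

def sigmoid(z):
    # Logistic activation, mapping any real z into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, x):
    # A one-neuron "neural net": f(x) = sigmoid(w . x)
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

def input_gradient(w, x):
    # Saliency via the chain rule: df/dx_i = f * (1 - f) * w_i.
    # This measures sensitivity of the output to each input feature.
    f = predict(w, x)
    return [f * (1.0 - f) * wi for wi in w]

# Hypothetical learned weights and one input example
w = [2.0, -1.0, 0.0]
x = [1.0, 1.0, 1.0]

saliency = input_gradient(w, x)
# The third feature has weight 0, so its attribution vanishes;
# the first pushes the prediction up, the second down.
print(saliency)
```

The point of the sketch is the abstract's point in miniature: the gradient tells us which inputs the model reacts to, but sensitivity is not causation, so such attributions alone cannot settle questions of warranty or liability.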