
Why we should take the AI safety of autonomous vessels seriously, and what we need to do

As artificial intelligence (AI) gradually steps into our lives through the breakthrough technologies of recent years (e.g. AI for science, AI-generated content), the safety risks of machine learning (ML) models are attracting more and more public attention. If a fancy AI application such as ChatGPT gives an incorrect answer, this may not be considered a safety threat; an AI application giving incorrect medical advice, on the other hand, may well be. Likewise, for machine learning models used in safety-critical industrial applications, incorrect model output can be a very different story.

Deep neural networks are known to be complicated, non-convex, non-transparent models that are very difficult to analyze, verify, and test in the way other computer programs are. These networks can handle many tasks, sometimes outperforming human beings, but in ways that differ from human intuition. For example, Convolutional Neural Network (CNN) models seem to rely more heavily on the background than we humans do for object detection and classification (Figure 1). They also pay more attention to textures and other local structures of an object, which are usually not a priority when humans classify objects (Figure 2).


Figure 1: A cow on a beach rather than on grassland is not recognized as a cow by a CNN [1]. Figure 2: The image of a cat is misclassified as an elephant by standard deep neural networks due to its texture and local structures [2].
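To make this texture reliance concrete, here is a minimal sketch, assuming PyTorch and torchvision (neither is mentioned in this article): it classifies an image with a pretrained ResNet-50, then classifies a heavily blurred copy in which fine texture is suppressed while the global shape is largely preserved. A shape-driven classifier would tend to keep its prediction; a texture-driven one often flips it. Note that blurring is only a crude stand-in for the style-transfer cue-conflict stimuli actually used in [2], and the image path is a placeholder.

# Hedged sketch: probe texture reliance of a pretrained CNN by comparing
# its prediction on an image against a texture-suppressed (blurred) copy.
# Assumes torchvision >= 0.13; "cat.jpg" is a placeholder path.
import torch
from torchvision import models, transforms
from torchvision.models import ResNet50_Weights
from PIL import Image

weights = ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()  # standard resize/crop/normalize pipeline

def top1_label(img: Image.Image) -> str:
    """Return the top-1 ImageNet class name for a PIL image."""
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return weights.meta["categories"][logits.argmax(1).item()]

img = Image.open("cat.jpg").convert("RGB")  # placeholder image
# Strong Gaussian blur removes high-frequency texture but keeps coarse shape.
blurred = transforms.GaussianBlur(kernel_size=21, sigma=5.0)(img)

print("original:", top1_label(img))
print("texture-suppressed:", top1_label(blurred))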

Failures of machine learning models also occur in other fields such as natural language processing and reinforcement learning. Unfortunately, excellent performance on the training and validation datasets does not always guarantee comparable performance on the test dataset, let alone in the real world. For safety-critical industrial applications such as autonomous vehicles and vessels, gaps remain between existing safety standards and the deployment of machine learning models, because these models suffer from inherent safety drawbacks including domain shift, data corruption, and natural perturbations. These issues have been studied intensively in recent years for autonomous vehicles [3]. However, the robustness of ML tasks in autonomous vessels still lacks a comparable deep dive. Although numerous papers address robustness and safety risks in autonomous vehicles, we need to note that different risks arise in autonomous vehicles and vessels because of major differences in their input data. Moving towards safer autonomous vessels therefore requires identifying, characterizing, quantifying, and mitigating the possible safety risks in both the data and the machine learning models of autonomous vessels, as sketched below.
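As one hedged illustration of the quantification step just mentioned, the sketch below re-evaluates an already trained classifier under a synthetic corruption, with Gaussian noise standing in for sensor noise, sea spray, or fog. The names model and test_loader are assumptions, not objects defined in this article, and pixel values are assumed to lie in [0, 1]. A large gap between clean and corrupted accuracy flags a robustness risk that standard held-out evaluation would miss.

# Hedged sketch: quantify corruption robustness by comparing accuracy on
# clean versus corrupted test batches. `model` and `test_loader` are
# assumed to exist already; they are placeholders, not defined here.
import torch

def accuracy(model, loader, corrupt=None, device="cpu"):
    """Top-1 accuracy, optionally applying a corruption to each batch."""
    model.eval().to(device)
    correct = total = 0
    with torch.no_grad():
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            if corrupt is not None:
                images = corrupt(images)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

def gaussian_noise(images, std=0.1):
    """Additive Gaussian noise, clamped to the assumed [0, 1] pixel range."""
    return (images + std * torch.randn_like(images)).clamp(0.0, 1.0)

clean_acc = accuracy(model, test_loader)
noisy_acc = accuracy(model, test_loader, corrupt=gaussian_noise)
print(f"clean: {clean_acc:.3f}  noisy: {noisy_acc:.3f}")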

An article by Yunjia Wang

References
[1] Beery, S., Van Horn, G., & Perona, P. (2018). Recognition in terra incognita. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 456-473).
[2] Geirhos, R., Jacobsen, J. H., Michaelis, C., Zemel, R., Brendel, W., Bethge, M., & Wichmann, F. A. (2020). Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11), 665-673.
[3] Mohseni, S., Pitale, M., Singh, V., & Wang, Z. (2019). Practical solutions for machine learning safety in autonomous vehicles. arXiv preprint arXiv:1912.09630.
