Assuring Safe Implementation of Decision Support Functionality based on Data-driven Methods for Ship Navigation
Chapter
Published version
Permanent link: https://hdl.handle.net/11250/3051298
Publication date: 2020
Original version: e-proceedings of the 30th European Safety and Reliability Conference and 15th Probabilistic Safety Assessment and Management Conference (ESREL2020 PSAM15). https://www.rpsonline.com.sg/proceedings/esrel2020/pdf/4899.pdf

Abstract
Technology related to machine learning and data-driven models for autonomous and unmanned vessels continues to develop rapidly. Manned vessels can also make use of this technology, for example to enhance the situational awareness of an on-board navigator. Potentially, this can contribute to increased safety and optimized operations by transferring tasks and functions to where they are most effectively handled, whether ashore or on board. However, introducing decision support systems and functionality to enhance situational awareness can have detrimental consequences, due for example to misunderstandings, incorrect use of the functionality, a malfunctioning user interface, or poor or wrong decision proposals. This can be the case even when manning levels are kept unchanged. To ensure safety, we argue that the system must be rigorously tested, and that its limitations, uncertainties, and capabilities must be correctly conveyed to its users. Based on current regulations, including the International Maritime Organization (IMO) resolution Principles of Minimum Safe Manning, we investigate how the minimum safe manning of a vessel should be established, considering relevant factors such as the ship's level of automation and shore support. We also discuss challenges related to lack of specification, an inherent challenge for decision support systems based on object detection and image classification, since these tasks rely on perception of the environment, which can only partially be specified using rules. Furthermore, challenges related to lack of explainability are discussed, and potential benefits of using methods for black-box explanation during operation and during testing are investigated. We emphasize the importance of testing and verifying the dataset used to train the models, ensuring that it sufficiently covers relevant scenarios. Finally, we discuss challenges related to human factors and emphasize the importance of safety management systems to identify risks, responsibilities, resources, and competencies, ensuring compliance with rules and regulations.
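
The abstract refers to black-box explanation methods for perception models. As a hedged illustration only (the paper does not prescribe a specific technique), the sketch below implements occlusion sensitivity, a common model-agnostic approach: patches of the input image are masked and the resulting drop in the classifier's confidence is recorded, yielding a coarse saliency map. The classifier here is a hypothetical stand-in; any function mapping an image to class probabilities could be substituted.

```python
import numpy as np

def occlusion_map(predict, image, target_class, patch=8, baseline=0.0):
    """Model-agnostic occlusion sensitivity.

    predict: callable mapping an HxWxC image to a 1-D array of class
             probabilities (treated as a black box).
    Returns an array where high values mark regions whose occlusion
    most reduces confidence in `target_class`.
    """
    h, w = image.shape[:2]
    base_score = predict(image)[target_class]
    heat = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            drop = base_score - predict(occluded)[target_class]
            heat[y:y + patch, x:x + patch] = drop
    return heat

# Hypothetical stand-in for a trained model: "confidence" grows with
# mean brightness in the image centre, mimicking a target in that region.
def toy_predict(img):
    centre = img[24:40, 24:40].mean()
    p = 1.0 / (1.0 + np.exp(-8.0 * (centre - 0.5)))  # sigmoid
    return np.array([1.0 - p, p])

rng = np.random.default_rng(0)
img = rng.uniform(0.0, 0.3, size=(64, 64, 3))
img[24:40, 24:40] = 0.9            # bright "object" in the centre
heat = occlusion_map(toy_predict, img, target_class=1)
print("most influential patch:", np.unravel_index(heat.argmax(), heat.shape))
```

During operation, such a map could help a navigator judge whether a detection rests on the object itself or on irrelevant background; during testing, it can expose spurious correlations learned from the training data.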
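The abstract also stresses verifying that the training dataset sufficiently covers relevant scenarios. One minimal way to operationalize this, assuming each sample carries scenario metadata, is to partition the metadata into scenario cells and flag cells with too few samples. The dimensions and threshold below are illustrative assumptions, not taken from the paper.

```python
from collections import Counter
from itertools import product

# Illustrative scenario dimensions; a real assessment would derive these
# from the vessel's operational design domain.
DIMENSIONS = {
    "light": ["day", "dusk", "night"],
    "weather": ["clear", "rain", "fog"],
    "traffic": ["open_water", "coastal", "harbour"],
}

def coverage_report(samples, min_per_cell=50):
    """Count samples per scenario cell and flag under-covered cells.

    samples: iterable of dicts with one value per dimension, e.g.
             {"light": "night", "weather": "fog", "traffic": "harbour"}.
    """
    counts = Counter(tuple(s[d] for d in DIMENSIONS) for s in samples)
    gaps = [
        (cell, counts[cell])
        for cell in product(*DIMENSIONS.values())
        if counts[cell] < min_per_cell
    ]
    return counts, gaps

# Hypothetical metadata for a small dataset: everything collected in
# daylight, so every dusk/night cell should be flagged as a gap.
data = [{"light": "day", "weather": "clear", "traffic": "coastal"}] * 200
counts, gaps = coverage_report(data)
print(f"{len(gaps)} of {3 * 3 * 3} scenario cells under-covered")
for cell, n in gaps[:3]:
    print("under-covered:", cell, "samples:", n)
```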