Artificial intelligence (AI) has the potential to transform the shipping industry, offering improvements in efficiency, safety, and environmental sustainability. However, like any disruptive technology, its integration comes with ever-evolving advantages and disadvantages.
Artificial intelligence is increasingly being integrated into the shipping industry. It can be applied in several ways, including autonomous vessels, route optimization, predictive maintenance, and cargo management, while also enhancing supply chain and logistics operations, strengthening safety and security measures, and helping to monitor environmental conditions.
On ships, AI is used for behavior-based safety, collision avoidance, fire detection, route optimization, and even identifying misdeclared cargo, further advancing the industry’s operational capabilities.
As AI becomes more embedded in maritime operations, it contributes to real-time cybersecurity, enabling systems to detect anomalies and respond proactively, without waiting for human intervention.
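As a rough illustration, and not a depiction of any specific shipboard system, the sketch below trains an IsolationForest anomaly detector (scikit-learn) on network telemetry assumed to be benign and flags deviating observations; the feature names, values, and contamination setting are illustrative assumptions.

```python
# Minimal sketch: unsupervised anomaly detection on shipboard network telemetry.
# Feature names, values and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Assumed baseline traffic: [packets/sec, mean packet size (bytes), failed logins/min]
baseline_traffic = rng.normal(loc=[500.0, 800.0, 0.2],
                              scale=[50.0, 60.0, 0.1],
                              size=(1000, 3))

# Train on traffic assumed to be benign
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline_traffic)

# New observations: one typical, one suspicious burst of small packets and failed logins
new_samples = np.array([
    [510.0, 790.0, 0.3],
    [3000.0, 120.0, 25.0],
])

# predict() returns 1 for inliers and -1 for anomalies
for sample, label in zip(new_samples, detector.predict(new_samples)):
    status = "ANOMALY - escalate" if label == -1 else "normal"
    print(sample, "->", status)
```

In practice such a detector would only raise an alert for human review or trigger a pre-approved containment step, rather than acting entirely on its own.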
However, despite these positive advancements, AI also presents risks. Malicious actors are increasingly looking for ways to exploit AI systems, potentially targeting vulnerabilities to conduct cyberattacks or disrupt operations.
This dual nature of AI, offering both vast potential and complex challenges, requires ongoing attention to ensure its safe and beneficial integration into the maritime industry.
Nick Andriopoulos, AMMITEC member and IT Manager at Heidmar, pointed out during his presentation at the SMART4SEA Athens Forum 2024 that AI carries inherent risks, including the exposure of sensitive data. According to Andriopoulos, one major risk is “hallucinations,” a phenomenon where AI generates information that may seem correct but is in fact incorrect.
What are AI hallucinations?
AI hallucination is a phenomenon wherein a large language model (LLM)—often a generative AI chatbot or computer vision tool—perceives patterns or objects that are nonexistent or imperceptible to human observers, creating outputs that are nonsensical or altogether inaccurate.
Term origins
In 1995, Stephen Thaler demonstrated how hallucinations and phantom experiences could emerge from artificial neural networks through random perturbations of their connection weights.
In the early 2000s, the term “hallucination” was used in computer vision to describe the process of adding detail to images, such as generating high-resolution faces from low-resolution inputs, known as face hallucination.
By the late 2010s, the term evolved to describe the generation of factually incorrect or misleading outputs by AI systems, particularly in tasks like translation or object detection. For instance, Google researchers used the term in 2017 to refer to neural machine translation (NMT) models producing responses unrelated to the source text, and in 2018, it was used in computer vision to describe errors like detecting non-existent objects due to adversarial attacks.
The term “hallucinations” in AI gained broader recognition with the rise of widely used chatbots based on large language models (LLMs).
Examples of AI hallucinations
Some common examples include:
Incorrect predictions: An AI model may predict that an event will occur when it is unlikely to happen. For example, an AI model that is used to predict the weather may predict that it will rain tomorrow when there is no rain in the forecast.
False positives: An AI model may identify something as a threat when it is not. For example, an AI model used to detect fraud may flag a legitimate transaction as fraudulent.
False negatives: An AI model may fail to identify something as a threat when it is. For example, an AI model used to detect cancer may miss a cancerous tumor.
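To make the false-positive and false-negative cases above concrete, here is a minimal Python sketch with made-up labels for a hypothetical fraud detector; it simply counts each kind of error by comparing the model's predictions against ground truth.

```python
# Minimal sketch: counting false positives and false negatives for a
# hypothetical fraud detector. Labels below are made up for illustration.
ground_truth = [0, 0, 1, 0, 1, 0, 0, 1]   # 1 = actually fraudulent
predictions  = [0, 1, 1, 0, 0, 0, 1, 1]   # 1 = flagged as fraudulent by the model

false_positives = sum(1 for truth, pred in zip(ground_truth, predictions)
                      if truth == 0 and pred == 1)   # flagged, but legitimate
false_negatives = sum(1 for truth, pred in zip(ground_truth, predictions)
                      if truth == 1 and pred == 0)   # fraudulent, but missed

print(f"False positives: {false_positives}")  # 2 legitimate transactions flagged
print(f"False negatives: {false_negatives}")  # 1 fraudulent transaction missed
```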
Another risk comes from adversarial attacks. Unlike other cyber threats, this form of attack can use AI’s inherent abilities and weaponize them to create malicious outputs.
What is adversarial AI?
Adversarial AI, also known as adversarial attacks or AI attacks, is a facet of machine learning that involves malicious actors deliberately trying to subvert the functionality of AI systems.
These kinds of attacks are dangerous because they subtly interfere with the internal logic of AI and ML (machine learning) systems without drawing much attention to themselves. This makes defending against them challenging, as the potential attack methods are limited only by human understanding of how these systems can fail.
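As a rough sketch of how subtle such interference can be, the example below applies the fast gradient sign method (FGSM), one well-known adversarial technique, to a toy logistic-regression "model"; the weights, input, and perturbation budget are assumptions made purely for illustration, not a real maritime system.

```python
# Minimal sketch of the fast gradient sign method (FGSM), one well-known
# adversarial technique. The tiny logistic-regression "model" and the input
# below are toy assumptions chosen purely to illustrate the mechanics.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend we already trained a model: fixed weights and bias
weights = np.array([1.5, -2.0, 0.5])
bias = 0.1

def predict(x):
    """Probability that input x belongs to the 'threat' class."""
    return sigmoid(weights @ x + bias)

# A benign input the model scores well below the alert threshold of 0.5
x = np.array([0.2, 0.9, 0.1])
print("original score:", round(float(predict(x)), 3))

# FGSM: nudge the input a small step along the sign of the gradient of the
# loss w.r.t. x. For logistic loss with true label y, that gradient is
# (p - y) * weights.
y = 0  # true label: not a threat
p = predict(x)
gradient = (p - y) * weights
epsilon = 0.4  # perturbation budget; small enough to look innocuous
x_adv = x + epsilon * np.sign(gradient)

print("perturbed score:", round(float(predict(x_adv)), 3))
# The perturbed input crosses the 0.5 threshold, flipping the model's
# decision even though it still looks very similar to the original.
```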
Potential risks in the maritime industry with implementing AI
While the benefits of AI are numerous and evident, it also poses challenges and limitations that require attention from shipowners.
Shipowners must conduct due diligence to ensure that AI implementation aligns with their specific needs and operations. Key considerations include cybersecurity, data credibility, overreliance, training, and privacy.
As AI systems evolve, the associated risks and threats become more sophisticated and harder to combat, making human supervision still imperative.