The algorithm is the first to enable 'risk-bounded adaptive sampling.' An adaptive sampling mission is designed to automatically adapt an AUV's path based on new measurements the vehicle takes as it explores a given region. Adaptive sampling missions that account for risk typically do so by finding paths that stay below a concrete, acceptable level of risk. However, the researchers found that accounting for risk alone could significantly limit a mission's potential rewards.


Specifically, the algorithm takes in bathymetric data, or information about the ocean topography, to calculate the level of risk for a proposed path. The algorithm also considers all previous measurements that the AUV has taken, to compute the probability that high-reward measurements exist along the proposed path.

If the risk-to-reward ratio meets a threshold set by scientists beforehand, the AUV proceeds with the proposed path, taking more measurements that feed back into the algorithm to help it evaluate the risk and reward of other paths as the vehicle moves forward.
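The decision rule described above can be sketched in a few lines of Python. This is only an illustrative stub: the function names, the max-over-cells risk estimate, and the averaging reward estimate are assumptions for clarity, not the researchers' actual algorithm, which operates on real bathymetric data and a full probabilistic model.

```python
def estimate_risk(path, bathymetry):
    """Rough chance of collision along a path, from ocean-topography data
    (stub: the worst hazard value of any cell the path crosses)."""
    return max(bathymetry.get(cell, 0.0) for cell in path)

def estimate_reward_probability(path, measurements):
    """Probability that high-reward measurements exist along the path,
    given all measurements taken so far (stub: mean of readings on the path)."""
    nearby = [value for cell, value in measurements.items() if cell in path]
    return sum(nearby) / len(nearby) if nearby else 0.0

def accept_path(path, bathymetry, measurements, threshold):
    """Proceed only if the risk-to-reward ratio meets the preset threshold."""
    risk = estimate_risk(path, bathymetry)
    reward = estimate_reward_probability(path, measurements)
    if reward == 0.0:
        return False  # no evidence of reward: never worth any risk
    return risk / reward <= threshold
```

Each time the vehicle accepts a path and takes new measurements, those readings would be added to `measurements`, so subsequent calls to `accept_path` evaluate candidate paths against the updated picture of the region.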

The researchers tested their algorithm in a simulation of an AUV mission east of Boston Harbor. They used bathymetric data collected in the region during a previous NOAA survey, and simulated an AUV exploring at a depth of 15 meters, seeking out regions of relatively high temperature. They looked at how the algorithm planned out the vehicle's route under three different scenarios of acceptable risk.

In the scenario with the lowest acceptable risk, in which the vehicle should avoid any regions with a very high chance of collision, the algorithm mapped out a conservative path, keeping the vehicle in a safe region that did not hold any high rewards. For scenarios of higher acceptable risk, the algorithm charted bolder paths that took the vehicle through a narrow chasm and on to a high-reward region.

The team also ran the algorithm through 10,000 numerical simulations, each with a randomly generated environment in which it planned a path. They found that the algorithm 'trades off risk against reward intuitively, taking dangerous actions only when justified by the reward.'
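The kind of Monte Carlo check described above can be sketched as follows. Everything here is a simplified assumption for illustration: the two-path environment, the reward distributions, and the `risk_budget` parameter are invented, and the real study used full simulated ocean environments rather than scalar draws.

```python
import random

def choose(safe_reward, risky_reward, collision_prob, risk_budget):
    """Take the risky path only if its collision chance fits the risk budget
    AND its reward actually beats the safe alternative."""
    if collision_prob <= risk_budget and risky_reward > safe_reward:
        return "risky"
    return "safe"

def simulate(n_trials=10_000, risk_budget=0.2, seed=0):
    """Count risky vs. safe choices across randomly drawn environments."""
    rng = random.Random(seed)
    counts = {"risky": 0, "safe": 0}
    for _ in range(n_trials):
        safe_reward = rng.random()         # payoff of the conservative path
        risky_reward = 2.0 * rng.random()  # riskier path can pay off more
        collision_prob = rng.random()      # danger of the riskier path
        counts[choose(safe_reward, risky_reward, collision_prob, risk_budget)] += 1
    return counts
```

By construction, the planner in this sketch never picks the dangerous option unless the reward justifies it and the collision probability stays within budget, mirroring the behavior the researchers report from their 10,000 randomized trials.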