Hydrography

Data Processing

Clearing Data Processing Bottlenecks

Data processing is already playing catch-up to acquisition; it will have to change dramatically if it’s going to keep up with what’s coming next.

By Wendy Laursen


AI hasn’t worked all that well for object detection using conventional image or video data of the kind typically collected by ROVs. The problem, says Dr Adrian Boyle, CEO and Founder of Cathx Ocean, is that environmental variables such as turbidity, vehicle movement and distance to target make it incredibly difficult for any kind of automation algorithm to detect objects or features in near real time with any reliability.

Color and range are good examples. Two meters away, color is pretty obvious; five meters away, it is gone. So a distance algorithm is required before color identifications can be made.
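
Why must range come first? A minimal sketch, assuming a simple Beer-Lambert attenuation model with illustrative coefficients (not Cathx Ocean’s calibration values): red light fades far faster than blue, so the observed color can only be inverted back to the true color once the distance to the target is known.

```python
import numpy as np

# Illustrative per-channel attenuation coefficients for seawater (1/m);
# real values vary with water type. Red is absorbed far faster than blue.
COEFFS = np.array([0.45, 0.07, 0.05])  # [red, green, blue]

def apparent_color(true_rgb, range_m):
    """Attenuate a true RGB value over an out-and-back light path of
    range_m meters using a simple Beer-Lambert model."""
    return true_rgb * np.exp(-2.0 * COEFFS * range_m)

def restore_color(observed_rgb, range_m):
    """Invert the model once range is known. Without a range estimate,
    this correction -- and hence reliable color ID -- is impossible."""
    return observed_rgb * np.exp(2.0 * COEFFS * range_m)

true_rgb = np.array([0.8, 0.5, 0.3])  # an orange target
for r in (2.0, 5.0):
    seen = apparent_color(true_rgb, r)
    print(f"{r:.0f} m: observed {seen.round(3)}, restored {restore_color(seen, r).round(3)}")
```

At five meters the red channel has decayed to roughly one percent of its true value, which is why color identification without a prior range estimate fails.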

“Applying AI and machine learning is not enough,” says Boyle. “There’s a whole set of other steps needed to pre-process the data, to apply physics constraints for example using range information, to provide the metadata that enables that, and to then link it all together so that the AI algorithms can work much better.”

When image and laser data are combined, we can achieve this, says Boyle. “We strobe the lights and synchronize the laser so it does not appear in the image data. This also reduces backscatter and allows us to travel 10 times faster. The area and therefore volume of data increases by a factor of 10 or more, providing higher quality and a lot more data for machine learning model development.”
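
The scheduling idea behind that claim can be sketched in a few lines. The timings and structure below are hypothetical, chosen only to show how laser pulses can be interleaved between strobed exposures so the laser line never appears in the image frames:

```python
from dataclasses import dataclass

@dataclass
class Event:
    t_ms: float
    kind: str  # "strobe+exposure" or "laser profile"

def acquisition_schedule(duration_ms: float, period_ms: float = 50.0):
    """Interleave strobed camera exposures with laser-line pulses so the
    laser fires only between exposures. Timings are illustrative, not
    Cathx Ocean's actual parameters."""
    events, t = [], 0.0
    while t < duration_ms:
        events.append(Event(t, "strobe+exposure"))
        events.append(Event(t + period_ms / 2.0, "laser profile"))
        t += period_ms
    return events

for e in acquisition_schedule(200.0):
    print(f"{e.t_ms:6.1f} ms  {e.kind}")
```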

GeoAcoustics’ AI-based system for GeoSwath 4 can log clean data in milliseconds across diverse seabed environments and water depths. Source: GeoAcoustics

Add more sensors and higher-frequency surveys, and volumes increase a hundredfold. And that’s where the industry is headed, whether it’s monitoring a habitat, an offshore wind farm, a pipeline or critical subsea infrastructure. The current processing bottlenecks, which can delay survey data reporting by weeks or months, will limit adoption of high-endurance, high-speed, autonomous data-acquisition AUVs.

With Cathx Ocean, event detection can now happen in milliseconds, using shape information to “measure” change, and it can happen on the AUV if the company’s CLARITY software is included on the vehicle’s system. Cathx Ocean is now developing the same processes for sonar target detection, along with AI processes that fuse sensor data on the AUV in real time.
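
Shape-based change detection is cheap enough to run onboard because each object can be reduced to a handful of numbers. A minimal sketch of the idea (not CLARITY’s actual algorithm): compute a small shape signature per detected object and raise an event when it drifts beyond a tolerance.

```python
import numpy as np

def shape_signature(mask: np.ndarray) -> np.ndarray:
    """Reduce a binary object mask to a tiny signature: area, centroid
    and bounding-box aspect ratio. Deliberately simple and fast."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return np.zeros(4)
    w, h = np.ptp(xs) + 1, np.ptp(ys) + 1
    return np.array([xs.size, xs.mean(), ys.mean(), w / h])

def event_detected(baseline: np.ndarray, current: np.ndarray, tol: float = 0.15) -> bool:
    """Flag a change event when the relative signature difference exceeds
    tol. The threshold is an illustrative placeholder."""
    a, b = shape_signature(baseline), shape_signature(current)
    return bool(np.any(np.abs(a - b) / np.maximum(np.abs(a), 1e-9) > tol))

base = np.zeros((64, 64), bool); base[20:30, 20:40] = True  # surveyed object
curr = np.zeros((64, 64), bool); curr[20:34, 20:40] = True  # it has grown
print(event_detected(base, curr))  # True: comparing 4 numbers, not pixels
```

Comparing a few numbers per object, rather than whole images, is what makes millisecond-scale decisions on the vehicle plausible.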

Without pre-processing, about 70% of events can be detected by AI algorithms, says Boyle. Pre-processing boosts that to about 80%, and incorporating sensor fusion lifts it to about 95%. “We need to take it in small steps initially as each application is different. There is no silver bullet, but with the right tools and a unified approach we can transition much more efficiently from manual workflows.”
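
Those figures line up with simple probability arithmetic. A naive late-fusion sketch (not Boyle’s method, and assuming the sensors fail independently) shows why combining a roughly 80% optical detector with a roughly 75% sonar detector approaches 95%:

```python
def fuse(optical_p: float, sonar_p: float) -> float:
    """Naive late fusion of independent per-sensor detection probabilities:
    an event is missed only if both sensors miss it, so
    P(detect) = 1 - (1 - p1) * (1 - p2). Independence is an assumption."""
    return 1.0 - (1.0 - optical_p) * (1.0 - sonar_p)

print(fuse(0.70, 0.70))  # 0.91 -- two mediocre detectors beat either alone
print(fuse(0.80, 0.75))  # 0.95 -- roughly the gain Boyle describes
```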


PlanBlue uses hyperspectral imaging combined with AI to deliver greater speed in assessing seafloor characteristics such as biodiversity. Where a camera typically records red, green and blue light, a hyperspectral camera can record hundreds of bands across the visual spectrum. Combining this with images from a regular camera, precise location data and AI processing, the company can tell whether the seafloor is healthy, how much carbon it stores, its biodiversity, whether there is pollution, and more. It can identify key species and remove caustics, the flickering patterns of light that surface waves cast on the seafloor.
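
PlanBlue has not published its pipeline, but one classical way to match hundreds of bands against known materials is the Spectral Angle Mapper, sketched here with synthetic spectra; it shows the kind of per-pixel comparison hyperspectral classification rests on.

```python
import numpy as np

def spectral_angle(pixel: np.ndarray, reference: np.ndarray) -> float:
    """Spectral Angle Mapper: the angle between a pixel's spectrum and a
    reference material's spectrum. Small angle = similar material."""
    cos = pixel @ reference / (np.linalg.norm(pixel) * np.linalg.norm(reference))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical 100-band spectra: a seagrass reference and two test pixels.
rng = np.random.default_rng(0)
seagrass = rng.random(100)
pixel_grass = seagrass + rng.normal(0.0, 0.02, 100)  # near-match with noise
pixel_sand = rng.random(100)                          # unrelated material

print(spectral_angle(pixel_grass, seagrass))  # small angle: likely seagrass
print(spectral_angle(pixel_sand, seagrass))   # larger angle: something else
```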

GeoAcoustics has made progress on processing bottlenecks for sonar data by incorporating AI data cleaning in its GS4 software for GeoSwath bathymetric sonars. With manual configuration, says CCO Richard Dowdeswell, the quality of the results is correlated with the user’s experience. Instead, the AI-based system can log clean data in milliseconds across diverse seabed environments and water depths without any user intervention in the cleaning process, handling thousands of data points at a rate of 30 pings per second. “Receiving AI-processed data from GeoSwath 4, an AI-powered survey platform can make the same decisions as a human operator following the live stream,” says Dowdeswell. While it cannot replicate all the nuanced understanding and decision-making capabilities of a surveyor, it will help them operate more effectively over a larger area.
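
The per-ping decision itself is easy to picture: for each incoming swath of soundings, keep the plausible depths and reject the spikes. GS4’s learned model is proprietary; the robust outlier test below is only a stand-in showing the judgment that has to be made roughly thirty times per second.

```python
import numpy as np

def clean_ping(depths: np.ndarray, k: float = 3.5) -> np.ndarray:
    """Flag outlier soundings in one ping with a robust median-absolute-
    deviation test; returns a keep/reject mask. A stand-in for GS4's
    learned cleaning, not its actual algorithm."""
    med = np.median(depths)
    mad = np.median(np.abs(depths - med)) + 1e-9
    return np.abs(depths - med) / (1.4826 * mad) < k  # True = keep

rng = np.random.default_rng(1)
ping = np.concatenate([rng.normal(42.0, 0.1, 500), [12.0, 88.0]])  # 2 spikes
keep = clean_ping(ping)
print(f"kept {keep.sum()} of {ping.size} soundings")  # spikes rejected
```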


Jann Wendt, Founder of north.io, is tackling the data bottlenecks that can begin once even cleaned data is on the survey vessel. The company’s cloud-agnostic TrueOcean platform uses a range of programming languages, each chosen for the tasks it handles best, to maximize data processing speed. Horizontally flexible and scalable, its components can rapidly ingest the data and make it available in petabyte-scale file structures for processing.

The TrueOcean platform already contains over 65 individual microservices, each conducting a highly specific task in a large-scale but modular environment. “Traditional monolithic and non-cloud architectures, where all functionalities reside within a single codebase, struggle to handle the scalability and agility demanded by the industry. As geospatial data volumes grow exponentially, monolithic systems experience performance bottlenecks, become cumbersome to maintain and are reaching their limits very fast,” says Wendt.
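
The pattern Wendt describes is one narrow, independently deployable service per task. As a hypothetical sketch only (TrueOcean’s actual services and interfaces are not public), a single-purpose ingest service written with FastAPI might look like this:

```python
# Hypothetical single-purpose microservice, assuming FastAPI and Pydantic.
# It does one thing -- accept a raw data chunk and route it onward -- and
# can be scaled, replaced or redeployed without touching other services.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="ingest-service")  # illustrative service name

class Chunk(BaseModel):
    survey_id: str
    sensor: str       # e.g. "multibeam", "sidescan", "sub-bottom"
    payload_uri: str  # object-store location of the raw chunk

@app.post("/ingest")
def ingest(chunk: Chunk) -> dict:
    # A real service would validate the chunk and publish an event to a
    # queue for downstream services (tiling, QC, indexing) to pick up.
    return {"accepted": chunk.survey_id, "route": f"queue/{chunk.sensor}"}
```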


Instances of the platform can be deployed across geographically dispersed cloud platforms, and individual microservices can run on edge computing devices like those found on research vessels, operating even in remote locations. This enables data-driven decisions in real time, leading to more productive expeditions.

Rather than aggregating data into raster grids, the company’s latest big-data development, Zeus, enables 2D and 3D point-cloud analysis of sensor data. Its tools make it easy to isolate specific geographic locations and to look for historic trends, even with complex data formats involving multibeam, side-scan and sub-bottom profiler data. Data quality metrics such as point spacing, motion, and horizontal and vertical resolution can be computed on an individual-ping basis in hours rather than weeks or months.
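
Metrics like point spacing are simple to define per ping, which is what makes computing them for every ping tractable once the point cloud is directly queryable. A toy version, with Zeus’s real formats and metrics unknown:

```python
import numpy as np

def ping_point_spacing(across_track_m: np.ndarray) -> float:
    """Mean across-track spacing between adjacent soundings in one ping,
    one of the per-ping quality metrics mentioned above. Illustrative
    only; Zeus's internal metrics and data formats are not public."""
    return float(np.mean(np.diff(np.sort(across_track_m))))

rng = np.random.default_rng(2)
ping = np.linspace(-50.0, 50.0, 512) + rng.normal(0.0, 0.05, 512)
print(f"mean point spacing: {ping_point_spacing(ping):.3f} m")
```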

Taking advantage of cloud computing, north.io is working with NVIDIA on physics-based AI approaches to underwater acoustics modeling. Wendt is comparing modeled values of sound speed in water, which changes with temperature and salinity, against the values typically measured every six hours at sea. AI modeling has achieved 99.7% accuracy against the physical measurements, and it can be performed for every data point in a dataset instead of every six hours. This physics-based approach could change how surveyors work with AI, says Wendt. “It is so new that we have yet to evaluate what it means in the end from a professional and business perspective.” He is now talking with experts across the industry to find out.
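
For context on what the AI is being compared against: the classical baseline maps temperature, salinity and depth to sound speed through an empirical formula such as Mackenzie’s (1981) equation, shown below. This is the published physics formula, not north.io’s NVIDIA-based model; it illustrates how much a temperature change between six-hourly casts moves the sound speed.

```python
def mackenzie_sound_speed(T: float, S: float, D: float) -> float:
    """Mackenzie (1981) empirical sound speed in seawater (m/s).
    T: temperature (deg C), S: salinity (PSU), D: depth (m).
    The classical baseline, not north.io's physics-based AI model."""
    return (1448.96 + 4.591 * T - 5.304e-2 * T**2 + 2.374e-4 * T**3
            + 1.340 * (S - 35.0) + 1.630e-2 * D + 1.675e-7 * D**2
            - 1.025e-2 * T * (S - 35.0) - 7.139e-13 * T * D**3)

# A 2 deg C shift at 100 m moves sound speed by roughly 7 m/s -- enough to
# bias depth solutions if the profile is only sampled every six hours.
print(mackenzie_sound_speed(10.0, 35.0, 100.0))  # ~1491.4 m/s
print(mackenzie_sound_speed(12.0, 35.0, 100.0))  # ~1498.5 m/s
```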

PlanBlue uses hyperspectral imaging combined with AI to deliver greater speed for assessing seafloor characteristics such as biodiversity. Source: PlanBlue
