AI and Computer Vision – the key to successful search
Published on: June 2021
The name Sentient Vision Systems suggests the company just manufactures optical camera systems, but it would be wrong to think so. Yes, the company’s Kestrel and ViDAR systems use the imagery feed from optical sensors of one kind or another to generate their outputs. But actually, Sentient Vision Systems is an artificial intelligence (AI) company that uses advanced software to enhance the performance of sensors and mission systems, and that’s a fundamentally different thing.
From a distance of five nautical miles, can you tell the difference between an Arctic ice floe, a breaking wave and an upturned boat? Can you even see such a thing with the naked eye? You might not see such a target even when using the zoom function on a high-power Electro-Optic or Infrared (EO/IR) sensor. But ViDAR (for Visual Detection and Ranging) can detect the target in the imagery feed, discriminate between the possible alternatives, and draw the operator’s eye to what he or she is looking for. That’s the power of AI.
AI and mastery of traditional computer vision technology underpin everything the company has done over the past 17 years since it started working on target detection solutions for land and maritime environments. Sentient’s ViDAR systems use the AI within their deep learning and computer vision algorithms to detect tiny targets that are almost invisible in the imagery feed from an EO/IR sensor, especially in very challenging conditions, and filter out irrelevant information. If the EO/IR sensor can see it, then it is detectable, even at potentially sub-pixel level.
This matters to operators: if you can enhance the detection performance of the system, you can fly cameras faster and higher to cover a search area more quickly whilst maintaining safety and integrity in the search. ViDAR enables airborne operators to cover a search area up to 300 times greater than an aircraft without ViDAR can, and has demonstrated detection rates over 96%.
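As a rough back-of-envelope illustration of why a wider detection swath multiplies coverage, consider area searched per hour as swath width times ground speed. All figures below are hypothetical examples chosen for the sketch, not Sentient’s published specifications.

```python
# Hypothetical illustration of wide-area coverage. All numbers are
# assumptions for this sketch, not Sentient's published figures.

def coverage_rate_nm2_per_hour(swath_nm: float, speed_kts: float) -> float:
    """Area searched per hour = swath width x ground speed."""
    return swath_nm * speed_kts

# A narrow zoomed EO/IR view at modest speed vs a wide detection
# swath flown faster and higher.
narrow = coverage_rate_nm2_per_hour(swath_nm=0.2, speed_kts=80)   # 16 nm^2/h
wide = coverage_rate_nm2_per_hour(swath_nm=12.0, speed_kts=100)   # 1200 nm^2/h

print(f"coverage gain: {wide / narrow:.0f}x")  # 75x with these numbers
```

With these illustrative inputs the gain is 75-fold; the point is simply that swath width and speed multiply together, which is how software-side detection gains translate into search-area gains.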
ViDAR is a software addition to an operator’s mission management system, not a replacement. When it detects a target, it highlights this in a thumbnail on the operator’s display, showing target location and distance, for the operator to investigate further.
Sentient Vision makes it all look quite easy and quick – but delivering this capability has taken hard work over many years in a variety of challenging environments. ViDAR has now proven itself in active operations with customers such as the U.S. Coast Guard, the Australian Maritime Safety Authority (AMSA), the Royal Australian Navy and Fisheries and Oceans Canada.
Sentient Vision’s team includes AI experts in traditional computer vision, machine learning and deep learning, enabling detection performance to be optimised across these disciplines. Using deep learning algorithms alone on the sensor imagery feed, it is possible to detect and classify targets, but they need to be at least 4×4 pixels in size. Traditional computer vision algorithms, however, can detect targets down to sub-pixel level. But whilst detecting much smaller targets, they can potentially overwhelm a mission system and operator by displaying many irrelevant targets.
The combination of advanced deep learning algorithms and traditional computer vision makes it possible to detect very small targets, classify potential targets and filter out features of no interest to the operator over land or sea. The system can be set by the operator to filter out irrelevancies such as flotsam and jetsam, or objects outside a defined area, and focus on the object of the search – a survivor in the water, a red vehicle travelling in a certain direction, or a suspicious boat containing criminals or terrorists. The result is high search performance with a low false alarm rate, and massively reduced operator task saturation.
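The two-stage idea described above can be sketched as a sensitive first stage that proposes candidates, followed by a stricter classifier stage that rejects clutter. Everything here – the class names, thresholds and toy scoring function – is an illustrative assumption, not Sentient’s implementation.

```python
# Hedged sketch of a two-stage detection pipeline: a high-recall
# "classic CV" stage proposes candidates (even very small ones), and a
# stand-in for a learned classifier filters out clutter. All names and
# thresholds are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Candidate:
    x: int
    y: int
    size_px: float       # apparent size in pixels
    contrast: float      # local contrast score from the CV stage

def classic_cv_stage(candidates):
    """Stage 1: keep anything with enough local contrast (high recall)."""
    return [c for c in candidates if c.contrast > 0.1]

def classifier_stage(candidates, min_score=0.5):
    """Stage 2: a toy stand-in for a learned classifier that scores
    each candidate on size and contrast, rejecting low scorers."""
    def score(c):
        return min(1.0, c.contrast * (1.0 + c.size_px / 4.0))
    return [c for c in candidates if score(c) >= min_score]

frame = [
    Candidate(10, 20, size_px=0.8, contrast=0.9),   # small, high-contrast target
    Candidate(50, 60, size_px=0.5, contrast=0.12),  # faint clutter, e.g. a wave
    Candidate(90, 40, size_px=6.0, contrast=0.4),   # larger vessel
]
kept = classifier_stage(classic_cv_stage(frame))
print(len(kept))  # → 2: the faint clutter is filtered out
```

The design point is the division of labour: the first stage is tuned never to miss a real target, and the second stage keeps the operator’s display from drowning in false alarms.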
This makes it possible to introduce operational modes according to the type of search being undertaken and the operating environment. So, in the Arctic, for example, one mode makes it possible to eliminate ice floes that might appear very similar to the hull of an upturned boat but whose profusion would very quickly overwhelm both an operator and their mission system. Or over land, it is possible to eliminate all objects that are not relevant and detect only a particular type or colour of vehicle, or even a person walking in the wrong direction. This reduces operator workload and ensures the focus is on the current mission.
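Such operator-selectable modes can be thought of as predicates applied to the detection stream. The mode names and detection fields below are assumptions made for the example, not product configuration.

```python
# Illustrative sketch of operator-selectable filter modes. Mode names
# and detection fields are assumptions for the example only.

detections = [
    {"cls": "ice_floe",  "lat": 78.2, "lon": 15.1},
    {"cls": "boat_hull", "lat": 78.3, "lon": 15.2},
    {"cls": "vehicle",   "lat": 78.1, "lon": 15.0, "colour": "red"},
]

MODES = {
    # Arctic SAR: suppress the profusion of ice floes, keep hull-like objects.
    "arctic_sar": lambda d: d["cls"] != "ice_floe",
    # Overland search: only a red vehicle is of interest.
    "red_vehicle": lambda d: d["cls"] == "vehicle" and d.get("colour") == "red",
}

def apply_mode(detections, mode):
    """Pass only the detections the selected mode's predicate accepts."""
    return [d for d in detections if MODES[mode](d)]

print([d["cls"] for d in apply_mode(detections, "arctic_sar")])
# → ['boat_hull', 'vehicle']
```

Swapping modes changes only the predicate, not the detection pipeline, which is one plausible way a single system can serve Arctic search and rescue and overland surveillance alike.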
The other advantage of sustained R&D into AI, deep learning and computer vision is that it makes ViDAR progressively more power-efficient. This in turn makes it attractive to both manufacturers and operators of Unmanned Air Vehicles (UAVs). The power required onboard is determined by the demand from the computer system; reducing this reduces onboard power generation needs, which improves the endurance of the platform and allows ViDAR to be miniaturised for use on Group 1 UAVs.
Using AI algorithms to filter out irrelevant targets makes onboard processing of imagery data highly desirable. Downloading raw data demands huge amounts of communication bandwidth. Downloading processed metadata containing only targets of interest significantly reduces bandwidth demand, which reduces the burden on communications datalinks.
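The scale of that bandwidth saving is easy to see with a back-of-envelope comparison of streaming raw video against downlinking per-target metadata. Every number below is an assumption chosen for illustration, not a measured figure for ViDAR or any particular datalink.

```python
# Back-of-envelope comparison: downlinking raw video vs detection
# metadata. All numbers are illustrative assumptions only.

# Uncompressed 1080p colour video: pixels x 3 bytes x 8 bits x 25 fps.
raw_mbps = 1920 * 1080 * 3 * 8 * 25 / 1e6

# Assume each detection report is ~64 bytes (position, class,
# timestamp, thumbnail reference) and a busy scene yields 2 per second.
meta_bytes_per_target = 64
targets_per_second = 2
meta_mbps = meta_bytes_per_target * targets_per_second * 8 / 1e6

print(f"raw video: {raw_mbps:.0f} Mbit/s")
print(f"metadata:  {meta_mbps:.4f} Mbit/s")
print(f"reduction: ~{raw_mbps / meta_mbps:.0f}x")
```

Even allowing for heavy video compression, the gap remains several orders of magnitude, which is why onboard processing pairs naturally with narrow-band datalinks.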
A power-efficient processing system and low-bandwidth datalink bring operators a step closer to the employment of long-endurance battery-powered surveillance UAVs. The internal combustion engine still dominates the manned and unmanned airborne surveillance market, but battery technology is improving. A power-efficient sensor system and low-bandwidth datalink are key to the eventual widespread employment of battery-powered surveillance aircraft and ground systems. But that’s still in the future.
Here and now, Sentient Vision Systems’ mastery of operationally proven advanced computational and deep learning algorithms, and its investment in R&D, ensure that those who operate search and surveillance missions have a unique, cost-effective and capable wide-area imagery solution – and it will only get better.