Computer vision (CV) is a subset of AI that enables systems to interpret information from digital images and act on it with recommendations or actions.
The goal of computer vision technology is to emulate human vision, performing monotonous or complex visual tasks faster and more efficiently.
Historically, computer vision started with applications that could accomplish only limited tasks, relied heavily on manual coding, and required human assistance. As machine learning progressed, it became possible to build small applications and apply statistical learning algorithms to recognize patterns or detect objects. A fundamental shift came with major strides in AI, as deep learning and hybrid models built on neural networks largely replaced classical ML algorithms.
Since the release of the first commercial computer vision software in the 1970s, computer vision applications have evolved from enabling reading devices for the blind to transforming entire industries.
Today, some systems powered by computer vision achieve 99% accuracy and can even surpass human performance, for instance in diagnostic radiology.
The key drivers behind the surge in computer vision applications are: