The next level of vision technology is turning machines into smart partners: innovative embedded applications in industry and everyday life benefit from combining 3D technology and Artificial Intelligence.
Image processing is one of the key technologies in automation, robotics and the smart factory. With image-based artificial intelligence, vision systems deliver accurate analyses of environments and objects. Thanks to easy-to-integrate building blocks, 3D technology is becoming the new normal and generates a new level of perception in real time. With ever more precise and faster analyses, intelligent algorithms can make valid decisions. Combined with innovative applications and business models, embedded vision is turning machines from repetitive service providers into intelligent partners.
3D imaging and Artificial Intelligence (AI) are no longer brand new as individual building blocks, but image processing has reached the next level through their combination. The reason lies primarily in the technical development and digitalization of image processing, which has created enormous market demand in recent years and enabled vision systems to make the leap to the mass market. Perceptual computing generates a new level of perception with 3D sensing, and AI enables accurate, real-time analysis without the distortions of 2D imaging or the delays of assumptions and simulations. With embedded vision, machines are equipped with human-like senses: the visual 3D sensor lets them see, and the AI lets them understand, allowing machines and devices to interact with their environment and learn from it. In industrial automation, machines and robots can now make valid decisions themselves.
Machines and devices with integrated vision technology can also be controlled without contact. The tracking of eyes, faces and movement is the basis for intelligent and novel consumer goods, security applications and industrial solutions. “Smart Homes”, for example, can be controlled with the touch of a finger, coffee machines can detect who is standing in front of them and automatically prepare that person’s favorite coffee, and cars slow down should the driver fall asleep. In industry, the precise detection of objects and the exact measurement of position and distance come into play. Embedded 3D sensing gives robots the mechanical skills to grab like humans and to avoid collisions with their environment while moving. Drones fly around obstacles. By networking multiple robots or 3D-sensing devices, companies can leverage the full optimization potential of intelligent algorithms for their processes.
High potential for intelligent robots in industry and logistics
Robots have been used in industry for decades. Programmed to detect obstacles in 2D and to navigate using markers, they were reliable aids. 3D sensing and AI algorithms, on the other hand, make industrial robots real partners and thoughtful colleagues. Fully embedded 3D technology makes robots fast, allowing them to recognize previously unknown objects in real time. Position and distance measurements are no longer based on old data, CAD models or vague assumptions; the robots now perceive and act immediately with high precision. This makes “random picking” applications possible, for example, and in quality assurance 3D cameras are also much faster, more accurate and less complicated to set up. Especially where humans and machines share a workspace, the speed and precision provided by 3D improves safety and can, for example, make protective fences obsolete. Now, humans and machines can work hand in hand in the truest sense of the word.
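The position and distance measurement described above boils down to a well-known geometric step: back-projecting a depth-camera pixel into a 3D point the robot can grasp. The following minimal sketch uses the standard pinhole camera model; the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are illustrative values, not taken from any specific camera in the article.

```python
# Hedged sketch: convert a pixel with a measured depth into a 3D point
# in camera coordinates using the pinhole model. Intrinsics below are
# illustrative placeholder values.

def pixel_to_point(u, v, depth_m, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Back-project pixel (u, v) with depth in meters to (x, y, z).

    This is the basic computation a 3D camera performs before a robot
    can move its gripper to the object's position.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)
```

A pixel at the image center maps onto the optical axis (`x = y = 0`); pixels further from the center map to proportionally larger lateral offsets, scaled by depth.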
Self-learning algorithms allow networking between various devices and machines within the Internet of Things, so they can coordinate with each other independently. This option is particularly relevant in the logistics sector with its many unexpected and unpredictable events, but it also enables production down to batch size 1. The big advantage is that 3D imaging does not have to be trained: 3D technology sees like a human and, combined with AI, learns. 3D technology enables easy interaction and collaboration between robots and humans. At the moment, small logistics robots are experiencing a big boom. The intelligent and interconnected “R2D2s” drive through the warehouse, lift and move boxes, grab objects, bring them to workers and, of course, put them away again. With accurate object, position and distance detection based on the 3D data of the embedded robotic camera, these digital service providers can use SLAM (simultaneous localization and mapping) to record their environment within seconds and navigate independently. They detect both fixed and mobile obstacles and avoid collisions even in new, unknown situations. Several networked robots communicate with each other and coordinate their actions, independently ensuring a smooth process in the warehouse and optimal support of workers and processes. These small all-round robots immensely increase efficiency in the logistics chain.
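The mapping and collision avoidance described above is commonly built on an occupancy grid: obstacle detections from the 3D sensor are marked in a map, and planned paths are checked against it. The sketch below is a deliberately tiny stand-in for that idea; real logistics robots use full SLAM stacks, and the grid size and coordinates here are made up for illustration.

```python
# Minimal occupancy-grid sketch (an illustrative stand-in for the SLAM
# mapping described above; production systems use full SLAM libraries).

def build_grid(obstacles, size=10):
    """Build a size x size map; 1 marks a cell where the 3D sensor
    reported an obstacle, 0 marks free space."""
    grid = [[0] * size for _ in range(size)]
    for (x, y) in obstacles:
        grid[y][x] = 1
    return grid

def is_path_free(grid, path):
    """Check that every (x, y) cell along a planned path is unoccupied,
    so the robot can drive it without a collision."""
    return all(grid[y][x] == 0 for (x, y) in path)
```

A newly detected mobile obstacle is simply marked in the grid, after which any path crossing its cell is rejected and replanned.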
In the future, small or large helpers are also conceivable as digital bellhops and room service in hotels, or as inventory services in supermarkets. Munich Airport is already testing a mobile service robot called “Josie Pepper” that informs travelers and accompanies them to the right departure gate.
Smart Home – New Business Models for Vacuum Cleaners
There has already been a lot of talk about digitally monitored houses and self-restocking refrigerators. Fully appliance-integrated 3D technology and intelligent algorithms open up many new application possibilities in so-called Smart Homes, with home security and surveillance benefiting immensely. A great deal is already reality, such as drones that transmit a live image to the homeowner’s cell phone as soon as a suspicious movement is detected in the garden and it is not a stray cat. The most visible practical use in everyday life is currently generated by intelligent 3D technology in vacuum cleaners and lawn mowers. Again, good work has been done with 2D imaging, but 3D makes the difference. With 3D recognition, objects are not simply captured by their outlines; the AI can classify and categorize things. So the wedding ring is not mindlessly swallowed by the vacuum, but classified as “precious jewelry” and avoided. Likewise, the dog’s pile is bypassed rather than “sucked up” — YouTube already has plenty of (very funny) videos showing how 2D technology distributes excrement across the floor. By means of 3D navigation and intelligent mapping of the environment, structured cleaning is now possible. Instead of using tactile sensors on a collision course, furniture, carpets and trees are visually recognized and bypassed in advance. An intelligent lawn mower detects where the lawn edge ends, stops there and realizes that the small pile in the grass is a hedgehog, which must be avoided.
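The classify-then-decide behavior described above can be sketched as a small decision layer sitting on top of an object classifier. Everything here is an assumption for illustration: the class names, the confidence threshold, and the existence of an upstream model that returns a label with a confidence score.

```python
# Illustrative decision layer for a vacuum robot, assuming an upstream
# classifier already produced a label and a confidence in [0, 1].
# Class names and the threshold are made up for this sketch.

AVOID_CLASSES = {"precious_jewelry", "pet_waste", "cable"}

def decide_action(label, confidence, threshold=0.8):
    """Avoid valuable or hazardous objects when the classifier is
    confident; when uncertain, play it safe and avoid rather than
    vacuum."""
    if confidence < threshold:
        return "avoid"  # low confidence -> cautious fallback
    return "avoid" if label in AVOID_CLASSES else "vacuum"
```

The key design choice is the cautious fallback: an uncertain classification is treated like a hazard, which is why the wedding ring survives even a hesitant model.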
The “Next Level” vision technology enables not only technological advantages but also new business models. Those who still have to spend large sums on a vacuum cleaner robot today could save a lot of money in the future with advertising-based leasing models. The intelligent and networked vacuum cleaner knows the exact size of the apartment to be cleaned, the floor plan, the brands and condition of the furniture, and the individual furnishing style. This allows conclusions to be drawn about income and, with the appropriate consent, enables individualized advertising. A furniture store could thus propose to potential customers a couch that matches the dimensions, style and price range of their exact living conditions. With these data-based models, the purchase price of the vacuum cleaner could be very low, since the manufacturer makes its profit from data sales. Also conceivable are leasing models in which the customer pays per use and receives advertising accordingly.
Without 3D, Drones Cannot Navigate Autonomously
Drones demonstrate the high practical potential of integrated 3D technology with AI. None of the popular self-piloting, follow-me drones could navigate without this combination. Intelligent drones monitor agricultural growth, and hyperspectral drones can distinguish rocks from potatoes during harvest. Surveying industrial sites in inaccessible terrain is unimaginable without 3D drones delivering better and more accurate data than ever before, in real time. Thanks to complete embedding and miniaturization, modern 3D technology is so light and the processors so small that they are no longer a limitation even for ultralight drones. Two creative practical examples illustrate the efficiency and innovation drones offer companies.
A power company, for example, uses smart drones to monitor its electricity pylons. Technicians no longer have to routinely climb to dangerous heights. Instead, a technician controls a drone from below, which transmits the images and evaluates the data independently. The drone automatically measures and maintains the correct distance and notifies the technician of possible abnormalities. It steers locally in direct communication with the technician, who only has to make a visual inspection at dizzying heights in case of deviations.
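Automatically holding a set distance to the pylon, as described above, is classically done with a feedback controller fed by the 3D sensor’s range measurement. The sketch below shows the simplest variant, a proportional controller; the setpoint and gain are illustrative values, not from any real drone autopilot.

```python
# Sketch of a proportional distance controller, assuming the 3D sensor
# delivers a distance to the pylon in meters. Setpoint and gain are
# illustrative assumptions.

def distance_correction(measured_m, target_m=5.0, gain=0.5):
    """Return a velocity command in m/s along the sensing axis:
    positive moves the drone away from the pylon, negative moves it
    closer. The command is proportional to the distance error."""
    error = measured_m - target_m
    return -gain * error  # too far (error > 0) -> fly closer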
A second “outside the box” application is the optimization of a company’s flow of goods and logistics based on airborne surveillance. Drone data provides an accurate bird’s-eye view of all goods routes, truck and conveyor movements, and company processes, which can then be virtually simulated. This way it can be “recognized”, for example, that delivery trucks arrive at the ramp too early and thus cause a jam in the supply chain. With the knowledge gained from the drones, in-house logistics can be made highly efficient.
Amazon’s GO is unthinkable without 3D vision and AI
One of the most recent and prominent examples of 3D technology used in conjunction with Artificial Intelligence is Amazon’s pilot supermarket GO. Customers there need no cash and do not have to pay at any checkout; they simply walk in, load up the products they want and walk out with a full shopping cart. The market is fully camera-monitored and recognizes which goods a customer has placed in their cart. Currently the market is open only to Amazon employees; it recognizes customers by face recognition and charges the deposited credit card directly. Intelligent cameras register shoppers’ facial expressions, so an employee can offer help when the algorithm detects a questioning look. Based on this data, Amazon can analyze customer behavior and reactions to products, follow decision-making processes very closely, and draw profit- and turnover-increasing conclusions.
In addition to industrial and commercial applications, there are also purely humanitarian ones, such as intelligent 3D glasses supporting the visually impaired in everyday life. The glasses are equipped with the latest stereo cameras, and intelligent algorithms translate the visual signals into haptic and acoustic information. Street names, tram lines or shop signs are read aloud to the visually impaired person, with the audio information based on the recognition of shapes, objects and lettering. Positions and distances are provided as haptic feedback via a belt equipped with vibration motors. Depending on where an obstacle is, the belt vibrates at a different position. The visually impaired user learns a kind of perception that enables them to fully understand their surroundings and orient themselves.
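The haptic encoding described above can be sketched as a simple mapping: the obstacle’s bearing selects which belt motor fires, and its distance sets the vibration intensity. The motor count and sensing range below are assumptions for illustration, not specifications of the actual glasses.

```python
# Hedged sketch of mapping an obstacle's bearing and distance to a
# vibration motor on a belt. Motor count and maximum sensing range
# are illustrative assumptions.

def haptic_feedback(bearing_deg, distance_m, n_motors=8, max_range_m=4.0):
    """Pick the belt motor nearest the obstacle's bearing (0-360 deg,
    motor 0 straight ahead) and scale intensity into [0, 1]: closer
    obstacles vibrate more strongly, out-of-range obstacles not at all."""
    motor = int(bearing_deg % 360 // (360 / n_motors))
    intensity = max(0.0, 1.0 - distance_m / max_range_m)
    return motor, round(intensity, 2)
```

An obstacle dead ahead at close range drives motor 0 at full strength, while one far off to the side produces only a faint buzz on the corresponding motor, mirroring how the belt conveys direction and urgency at once.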
The next level of image processing has already begun. Applications combining embedded vision, 3D technology and artificial intelligence have long since found their way into everyday industrial and social life. In the near future, these applications, along with those humanity cannot yet imagine, will define a new normality. What happened with the light bulb around 150 years ago could also apply to the potential of image processing and artificial intelligence: at some point, industry and humans will no longer be able to imagine how they ever managed without it.