Algorithms have always been at home in the digital world, where they are trained and developed in perfectly simulated environments. The current wave of deep learning is facilitating AI’s leap from the digital to the physical world. The applications are endless, from manufacturing to agriculture, but there are still hurdles to overcome.
For longtime AI specialists, deep learning (DL) is old hat. Its breakthrough came in 2012, when Alex Krizhevsky’s AlexNet demonstrated the power of convolutional neural networks, the hallmark of deep learning technology. It was neural networks that enabled computers to see, hear and speak; DL is the reason we can talk to our phones and dictate email to our computers. Yet DL algorithms have always played their role in the safe, simulated environment of the digital world. Pioneering AI researchers are now working hard to bring deep learning into our three-dimensional physical world. Yes, the real world.
Deep learning could do a lot to improve your business, whether you’re an automaker, a chipmaker, or a farmer. Although the technology has matured, the transition from the digital to the physical world has proven to be more difficult than expected. That’s why we’ve been talking about smart fridges for shopping for years, but no one has one yet. When algorithms leave their digital nests and have to fend for themselves in very real and raw three dimensions, there is more than one challenge to overcome.
Automation of annotations
The first problem is precision. In the digital world, algorithms can get away with accuracies of around 80%. That isn’t enough in the real world. “If a tomato-harvesting robot only sees 80% of all the tomatoes, the grower loses 20% of his turnover,” says Albert van Breemen, a Dutch AI researcher who develops DL algorithms for agriculture and horticulture in the Netherlands. His AI solutions include a robot that cuts leaves from cucumber plants, an asparagus-harvesting robot, and a model that predicts strawberry yields. His company is also active in medical manufacturing, where his team has created a model that optimizes the production of medical isotopes. “My customers are used to 99.9% accuracy and they expect AI to do the same,” says Van Breemen. “Every percentage point of lost accuracy costs them money.”
To reach the desired levels, AI models must be constantly retrained, which requires a stream of constantly updated data. Data collection is both expensive and time-consuming because all of this data needs to be annotated by humans. To solve this challenge, Van Breemen has equipped each of his robots with a feature that lets it know when it is performing well or poorly. When the robots make mistakes, they upload only the specific data they need to improve, and this data is collected automatically across the entire robot fleet. So instead of receiving thousands of images, Van Breemen’s team receives only about 100, which are then annotated and sent back to the robots for retraining. “A few years ago, everyone said data was gold,” he says. “Now we see that data is actually a huge haystack hiding a nugget of gold. So the challenge is not just to collect a lot of data, but the right kind of data.”
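The selective-upload idea can be sketched in a few lines: each robot keeps only the frames where the model was unsure, so the fleet sends back a handful of hard cases instead of thousands of easy ones. This is a minimal sketch of the concept; the names (`Detection`, `CONFIDENCE_FLOOR`) and the threshold value are illustrative assumptions, not details from Van Breemen’s actual system.

```python
# Sketch: a robot flags only the frames its model is unsure about,
# so just the hard cases get uploaded for human annotation.
from dataclasses import dataclass

@dataclass
class Detection:
    image_id: str
    confidence: float  # model's certainty about its own prediction

CONFIDENCE_FLOOR = 0.8  # below this, the robot flags the frame (assumed value)

def select_for_upload(detections):
    """Return only the frames where the model performed poorly."""
    return [d.image_id for d in detections if d.confidence < CONFIDENCE_FLOOR]

batch = [
    Detection("img_001", 0.97),
    Detection("img_002", 0.55),  # hard case: e.g., occluded tomato
    Detection("img_003", 0.91),
    Detection("img_004", 0.42),  # hard case: e.g., unusual lighting
]
print(select_for_upload(batch))  # only the two hard cases are uploaded
```

Run across a whole fleet, this filter is what turns “thousands of images” into the roughly 100 that actually need human attention.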
His team has developed software that automates retraining on new experiences. Their AI models can now train themselves in new environments, effectively taking the human out of the loop. They also found a way to automate the annotation process itself, by training an AI model to do much of the annotation work for them. Van Breemen: “It’s a bit paradoxical, because you could say that a model capable of annotating photos is the same model I need for my application. But we train our annotation model on a much smaller dataset than our target model. The annotation model is less accurate and still makes errors, but it is good enough to create new data points that we can use to automate the annotation process.”
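The approach Van Breemen describes is close to what the literature calls pseudo-labeling: a lightweight annotation model labels fresh images, and only its confident labels become new training points. The sketch below is an illustration of that general technique under assumed names and thresholds, not Van Breemen’s actual code.

```python
# Sketch of pseudo-labeling: a small "annotation model" labels new
# images; only high-confidence labels are kept as new training data.
def pseudo_label(images, annotation_model, keep_above=0.9):
    """Auto-annotate images, keeping only high-confidence labels."""
    new_points = []
    for img in images:
        label, conf = annotation_model(img)
        if conf >= keep_above:
            new_points.append((img, label))
    return new_points

# Toy stand-in for the annotation model: maps an image name to
# a (label, confidence) pair. A real model would run inference.
def toy_model(img):
    table = {"leaf_a.png": ("cut", 0.95), "leaf_b.png": ("keep", 0.60)}
    return table[img]

print(pseudo_label(["leaf_a.png", "leaf_b.png"], toy_model))
# Only leaf_a.png clears the confidence bar.
```

The confidence cutoff is the key design choice: set it high and the annotation model’s mistakes are mostly filtered out, at the cost of discarding some usable labels.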
The Dutch AI specialist sees huge potential for deep learning in the manufacturing industry, where AI could be used for applications such as fault detection and machine optimization. The global smart manufacturing industry is currently valued at $198 billion and has a projected growth rate of 11% through 2025. The Brainport region around the city of Eindhoven, where Van Breemen’s company has its headquarters, is full of world-class manufacturing companies, such as Philips and ASML. (Van Breemen has worked for both companies in the past.)
The sim-to-real gap
A second challenge of applying AI in the real world is that physical environments are far more varied and complex than digital ones. A self-driving car trained in the United States will not automatically work in Europe, with its different traffic rules and road signs. Van Breemen ran into this challenge when he had to apply his leaf-cutting DL model to another grower’s greenhouse. “If this happened in the digital world, I would just take the same model and retrain it with the new grower’s data,” he says. “But this particular grower lit their greenhouse with LEDs, which gave all the cucumber images a blue-purple glow that our model didn’t recognize. So we had to adapt the model to correct for this real-world discrepancy. There are all these unexpected things that happen when you take your models out of the digital world and apply them to the real world.”
Van Breemen calls this the “sim-to-real gap,” the disparity between a predictable, unchanging simulated environment and the unpredictable, ever-changing physical reality. Andrew Ng, the renowned Stanford AI researcher and co-founder of Google Brain, who is also working to apply deep learning to manufacturing, speaks of the “proof-of-concept-to-production gap.” It is one of the reasons why 75% of all AI projects in manufacturing fail to launch. According to Ng, paying more attention to cleaning up your dataset is one way to fix the problem. The traditional view of AI was to focus on building a good model and let the model handle the noise in the data. In manufacturing, however, a data-centric view may be more useful, since datasets are often small; improved data then has an immediate effect on the overall accuracy of the model.
Besides cleaner data, another way to bridge the gap between simulation and reality is CycleGAN, an image-to-image translation technique that maps between two visual domains, made popular by face-aging apps like FaceApp. Van Breemen’s team studied CycleGAN for its application in manufacturing environments. The team trained a model that optimized the movements of a robotic arm in a simulated environment, where three simulated cameras observed a simulated robotic arm picking up a simulated object. They then developed a CycleGAN-based DL algorithm that translated real-world images (three real cameras observing a real robotic arm picking up a real object) into simulated images, which could then be used to retrain the simulated model. Van Breemen: “A robotic arm has many moving parts. Normally, you would program all these movements in advance. But if you give it a clearly described objective, like picking up an object, it can now optimize its movements in the simulated world first. Thanks to CycleGAN, you can then use this optimization in the real world, which saves a lot of man-hours.” Each factory using the same AI model to operate a robotic arm would train its own CycleGAN to adapt the generic model to its own specific real-world conditions.
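The real-to-sim loop described above can be reduced to a simple pipeline: a trained generator maps each real camera frame into the simulated domain, and the translated frames feed back into retraining the simulated model. The sketch below shows only that data flow; the generator here is a trivial stand-in, since a real CycleGAN generator is a learned neural network that restyles the pixels.

```python
# Sketch of the real-to-sim data flow: translate real camera frames
# into the simulated domain, then batch them for retraining.
# g_real2sim is a placeholder; a trained CycleGAN generator would
# actually transform the pixel content, not just relabel the frame.
def g_real2sim(frame):
    """Stand-in for the real-to-sim generator."""
    return {"pixels": frame["pixels"], "domain": "sim"}

def retraining_batch(real_frames):
    """Translate each real frame so the simulated model can learn from it."""
    return [g_real2sim(f) for f in real_frames]

# Three real cameras observing the real robotic arm, as in the article.
cameras = [
    {"pixels": [0.1, 0.2], "domain": "real"},
    {"pixels": [0.3, 0.4], "domain": "real"},
    {"pixels": [0.5, 0.6], "domain": "real"},
]
batch = retraining_batch(cameras)
print(all(f["domain"] == "sim" for f in batch))  # True
```

The point of the design is that the simulated policy never has to see raw real-world images at all; the generator absorbs the visual mismatch between the two domains.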
The field of deep learning continues to grow and develop. Its new frontier is reinforcement learning. This is where algorithms move from mere observers to decision-makers, giving robots instructions on how to work more efficiently. Standard DL algorithms are programmed by software engineers to perform a specific task, such as moving a robotic arm to fold a box. A reinforcement learning algorithm might discover more efficient ways to fold boxes outside its pre-programmed range.
It was reinforcement learning (RL) that enabled an AI system to beat the world’s best Go player in 2016. Now RL is also slowly making its way into manufacturing. The technology is not yet mature enough for deployment, but according to experts, it is only a matter of time.
With the help of RL, Van Breemen plans to optimize an entire greenhouse, by letting the AI system decide how the plants can grow most efficiently so that the grower maximizes profit. The optimization process takes place in a simulated environment, where thousands of possible growth scenarios are tested. The simulation plays with growth variables like temperature, humidity, lighting and fertilizer, then chooses the scenario in which the plants grow best. The winning scenario is then carried over into the three-dimensional world of a real greenhouse. “The bottleneck is the gap between simulation and reality,” says Van Breemen. “But I really expect these issues to be resolved in the next five to ten years.”
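At its core, the simulated search over growth scenarios is an optimization loop: enumerate combinations of growth variables, score each with a yield model, and keep the winner. The toy below illustrates that loop; the yield function (a made-up optimum around 24 °C, 70% humidity and 16 hours of light) and the variable grids are illustrative assumptions, not the grower’s actual model.

```python
# Toy sketch of the simulated scenario search: try combinations of
# growth variables, score each with a yield model, keep the best.
import itertools

def simulated_yield(temp_c, humidity, light_hours):
    """Hypothetical yield model with a peak at 24 C, 70% humidity, 16 h light."""
    return (-((temp_c - 24) ** 2)
            - ((humidity - 70) ** 2) / 10
            - ((light_hours - 16) ** 2))

temps = [20, 22, 24, 26]        # candidate temperatures (C)
humidities = [60, 70, 80]       # candidate relative humidity (%)
light = [12, 14, 16, 18]        # candidate light hours per day

# Exhaustively score every scenario and keep the winner.
best = max(itertools.product(temps, humidities, light),
           key=lambda s: simulated_yield(*s))
print(best)  # (24, 70, 16)
```

A real system would replace the handful of grid points with thousands of simulated scenarios, and the toy yield function with a learned plant-growth model, but the select-the-winning-scenario logic is the same.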
As a trained psychologist, I am fascinated by AI’s transition from the digital to the physical world. It shows just how complex our three-dimensional world really is, and how many neurological and mechanical skills are needed for simple actions like cutting leaves or folding boxes. This transition makes us more aware of our own internal, brain-powered “algorithms” that help us navigate the world, and which took millennia to develop. It will be interesting to see whether AI can compete with that. And if it does eventually catch up, I’m sure my smart fridge will order champagne to celebrate.
Bert-Jan Woertman is the director of the Mikrocentrum.