The Embedded Vision Alliance is No More
I just heard that the Embedded Vision Alliance (embedded-vision.com) has rebranded itself as the Edge AI and Vision Alliance (edge-ai-vision.com). To be honest, I think this change is absolutely the right thing for the alliance, and I also think this is the perfect time for it. The original Embedded Vision Alliance was founded in 2011 by Jeff Bier and Berkeley Design Technology Inc. (BDTI). Although that was only nine years ago at the time of this writing, forming the alliance was pretty visionary (no pun intended) for its time.
The raison d'être for the Embedded Vision Alliance was that a powerful new technology in the form of computer/machine vision had become ready for widespread use, but the companies and developers creating systems and applications were struggling to figure out how to best incorporate it into their products. Furthermore, technology suppliers needed data and insights to help them find their best opportunities, as well as connections to customers and partners to enable them to grow their businesses.
The Embedded Vision Alliance was established to address all these needs. It began in 2011 with 14 founding members. The next year, the first Embedded Vision Summit in 2012 attracted 160 attendees. In 2019, the 100th member company joined the alliance and around 1200 attendees made the trek to that year’s Embedded Vision Summit.
As an aside, I’ve attended a lot of conferences, exhibitions, and summits in my time, but the Embedded Vision Summit stands proud in the crowd. The presentations are all top-notch, the attendees are all serious players (no “tire-kickers”), and the organization is fantastic. With regard to the exhibit area, I’m reminded of the soliloquy delivered by Roy Batty (played by Rutger Hauer) in the movie Blade Runner, which begins, “I’ve seen things you people wouldn’t believe...” All I can say is that this is how I feel after seeing what’s on display at the Embedded Vision Summit.
I’ve used the example of the Gartner Hype Cycle before and I’ll use it again, because it’s relevant to this column. Developed by the American research, advisory, and information technology firm Gartner, the Hype Cycle is a graphical representation of the maturity, adoption, and social application of specific technologies.
Technologies like artificial intelligence (AI) and machine learning (ML) were largely of academic interest only until relatively recently. In the 2014 version of the Hype Cycle, for example, neither of these technologies was represented at all. By comparison, in the 2015 incarnation of the chart, machine learning had already crested the Peak of Inflated Expectations.
When the Embedded Vision Alliance was formed in 2011, artificial intelligence and machine learning weren’t even a “blip on the radar.” Now, just nine years later, artificial intelligence and machine learning are everywhere you look (again, no pun intended).
The point is that we are seeing the same challenges in Edge AI that we saw in computer vision almost a decade ago. Then, as now, a powerful new technology had become ready for widespread use. Then, as now, the companies and developers creating systems and applications were struggling to figure out how to best incorporate it into their products. And then, as now, technology suppliers needed data and insights to help them find their best opportunities, as well as connections to customers and partners to enable them to grow their businesses.
With its history of success with regard to embedded vision, the newly branded Edge AI and Vision Alliance is perfectly poised to address all of these issues. Why “Edge AI”? Well, the cloud is all well and good, but the edge is where the nitty-gritty action is taking place. When you have autonomous robots racing around an industrial complex, for example, you can’t afford the latencies involved in waiting for a cloud-based AI to make a decision, and things can really go pear-shaped if you lose your connection to the internet. As Jeff noted in an email:
“By ‘Edge AI’ we mean AI processing that occurs locally, whether on a chip, device, or on-premise. We include both hybrid approaches where some processing happens locally and some in the cloud. And we include edge devices that process all sorts of sensor data: images, audio, vibration, radar, lidar, and the like. Examples of edge AI systems include a warehouse robot using cameras and lidar, a smart speaker with local wakeword processing, an on-premise video recorder with object detection and tracking, and a radar-based hospital patient monitor that uses AI to detect breathing, movement, and sleep.”
Due to the momentum it has gained over the years, the 2020 Embedded Vision Summit is flying under its original moniker, but I would be extremely surprised if it doesn’t manifest itself as the Edge AI and Vision Summit in 2021.
As always, I am tremendously enthused by all the exciting technological developments that are taking place. I’m also excited because I’ve been hearing about some of the next-generation technologies that are going to be announced at the forthcoming tinyML Summit, February 12-13 in San Jose, California, and the Embedded World Exhibition and Conference, February 25-27 in Nuremberg, Germany. However, I’m afraid I can say no more for the moment, except: “Watch this space!”