AI Inference in Robotic Vision with Luxonis

Zachariah Peterson
|  Created: February 14, 2023  |  Updated: August 18, 2024
Robotic Vision Systems

In this episode, we are fortunate to have two key people from Luxonis, a hardware, firmware, software, AI, and simulation company. Erik Kokalj, Director of Application Engineering, and Bradley Dillon, CEO, discuss how AI inference in robotic vision works and who can benefit from the technology.

Tune in and make sure to check out the show notes and additional resources below.

Show Highlights:

  • Introduction to Luxonis and its founding in April 2019
  • LiDAR and radar as imaging technologies: Erik briefly explains the technology behind them
  • Bradley shares why they decided to open-source some aspects of their platform and design
  • Luxonis's AI training, AI conversion, and AI deployment onto hardware are all open-source, so customers can quickly develop their own models and then deploy them on the device itself
  • Future design updates, miniaturization, and thermal management
  • What are some of the industrial applications that are utilizing Luxonis imaging technology? Beekeeping was unexpected!
  • Robotic applications such as robotic cars: Zach and Erik talk about action recognition on top of object recognition
  • Bradley talks about Series 3 products and what's coming in terms of capabilities – more processing power, more AI inference

Resources and Links:


Get Your First Month of Altium Designer® for FREE

Transcript:

Bradley Dillon:

So really, when we think about robotic vision, that's really how we view it as well, is that we want to be able to have one of these devices, and I'll hold one up here for people that are watching on the video, just an incredibly small device like this that somebody can plug in, and they can be up and running and doing something that's meaningful in less than 10 minutes. So that's what we mean when we say we want to make it easy.

Zach Peterson:

Hello everyone and welcome to the Altium OnTrack podcast. I'm your host, Zach Peterson, and today we're talking with Erik Kokalj, director of application engineering at Luxonis and Bradley Dillon, CEO of Luxonis. If you've seen the most recent Altium story, then you have probably been introduced to this company. They work in AI and robotic vision and we're very happy to be talking with them on the podcast today. Bradley, Erik, thank you so much for joining us.

Bradley Dillon:

Thanks for having us.

Zach Peterson:

Absolutely. You are one of the few AI and robotics companies that I've had the pleasure of talking to as host of the podcast. So I think this is a great opportunity for you guys to introduce yourselves and introduce the company and what Luxonis does for anyone who has not seen the most recent Altium story.

Bradley Dillon:

Yeah, absolutely. So Luxonis was founded nearly four years ago now, in April 2019, and we were really formed around trying to solve this big enterprise pain point, which was these machine vision systems. These are the types of systems that engineering teams were cobbling together from disparate types of hardware, trying to get the firmware to work, and trying to make it so that it could actually solve the problems they were facing at scale.

These types of enterprises were spending millions of dollars and it was taking them years to be able to put these robotic vision systems into place. So when we identified that pain point, we said, "Gosh, we would like to be able to come along and help make robotic vision easy." And so Luxonis is a full-stack provider. We do hardware, firmware, software, we also do AI and simulation, and then we have a cloud application layer as well. So we're trying to make it so that we have this beautiful wild garden for robotics companies, to make the eyes and the brains part of their robot pretty easy, so that they can focus on the rest of the hard stuff when it comes to developing and deploying robots in the field. So yeah, that's Luxonis.

Zach Peterson:

So when you say you want to make it easy, it sounds to me like the pain point you were trying to solve is really that everyone was doing everything from scratch. Is that a fair assessment?

Bradley Dillon:

Yeah, you absolutely nailed it. So a lot of the Luxonis team formerly worked at Ubiquiti, and Ubiquiti is a provider of business Wi-Fi. And to be honest, it's pretty interesting to see somewhat of a rinse and repeat from what Wi-Fi was like 20 years ago. So when somebody was putting together Wi-Fi systems, it was very difficult. You had to have somebody that was an expert, like a network engineer, to be able to put together these disparate systems, do a lot of tinkering to get them to work together, and then voila, you have Wi-Fi.

And then here we are 20 years later and Wi-Fi's just a given. Everybody, even grandma and grandpa, they can get Wi-Fi and they can plug it in and they can make it work no problem. And so really, when we think about robotic vision, that's really how we view it as well, is that we want to be able to have one of these devices, and I'll hold one up here for people that are watching on the video, just an incredibly small device like this that somebody can plug in and they can be up and running and doing something that's meaningful in less than 10 minutes. So that's what we mean when we say we wanted to make it easy.
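
For anyone who wants to see what "up and running in minutes" looks like in practice, here is a minimal sketch using Luxonis's open-source depthai Python package. The node and queue names follow the public DepthAI 2.x examples; treat the exact calls as assumptions that may differ between library versions.

```python
# Minimal sketch: grab color preview frames from a plugged-in OAK device.
# Assumes the open-source depthai (2.x) and opencv-python packages are installed.
import cv2
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)   # on-board color camera
cam.setPreviewSize(640, 400)

xout = pipeline.create(dai.node.XLinkOut)     # stream frames back to the host
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:          # plug in over USB-C and run
    q = device.getOutputQueue("preview", maxSize=4, blocking=False)
    while True:
        frame = q.get().getCvFrame()
        cv2.imshow("OAK preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```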

Zach Peterson:

So you've alluded to your background there just briefly. Maybe tell us a bit more about how you got the idea to get started in this and what your background is like more broadly, because I think everyone gets into these areas, not just the electronics industry but this specific area, in very interesting ways. So I always like to hear people's stories.

Bradley Dillon:

Yeah, absolutely. So when we were founded, our founding member was Brandon Gilles, and his background was electrical engineering, but he's also a very curious engineer who really, really learned and explored all of the different types of disciplines. And at Luxonis today, it's crazy how many engineering disciplines we have. So we have electrical engineers, PCB layout engineers, mechanical engineers, industrial design engineers, firmware engineers, software engineers, ML engineers, it goes on and on and on. And so when you look at our product offering, the history is that in 2020 we did a Kickstarter here, and this was called our OAK-D. The D stood for depth. So this is where you have a stereo depth pair and it's set up very much like your eyes.

And then as the product has evolved, we've moved on to our next generation of products. And you can see, visually, you can see the big improvement here in the form factor and the size of these types of devices. These devices have a number of sensors on them, so you can see that they have cameras, but also on the inside they have other sensors like IMUs, microphones, and things like that. And so we're really sensor agnostic, and it's really about being able to put a device out there in the world that can perceive the information that it needs in a way that makes it useful to whatever you're doing, whether that's actuating something or just observing something and sending that information somewhere else.

Zach Peterson:

So you say, "Sensor agnostic." I take that to mean that you're not developing a entire ecosystem around just your standard or your protocol or just your hardware, like what Apple does. You have to use only Apple plugs and you have to use the Apple headphones with the iPhone and the Mac and all of that. When I hear sensor agnostic, I'm thinking, okay, I could take this vision system and plug it into or interface it with another piece of hardware that I've built for industrial production or medical or whatever other application. Is that a fair assessment?
 

Bradley Dillon:

Yeah, it is a fair assessment, yeah. So we're trying to support more and more sensors, more and more types of external devices over time. So for example, in 2023, we'll be adding time of flight support, which is something that we don't currently have today. We've done some custom work, but we don't have a standard offering for it. And then over time we want to make it so that you could take whatever type of external sensor you have, whether it's LiDAR, radar, or sonar, it doesn't really matter, and feed that information into all of the AI and CV compute that's done on the device and process it in the meaningful way that you want to.

Zach Peterson:

So you brought up LiDAR and radar. I think when most people think of LiDAR and radar, they don't really think of them as vision or imaging technologies, but as the scan rate and resolution have improved over time, they really have become imaging technologies. So I'm hearing a lot of imaging applications and imaging sensors, but then you also mentioned stuff like IMUs. Are there other sensors that someone could create support for, I guess you could say, in your product? Or let's say someone wanted to integrate a sensor, do they have to then take in the digital data with a different control system, send the digital data into your unit, and then use that in the AI model to try and do inference?

Bradley Dillon:

Yeah. Can you take that one, Erik?

Erik Kokalj:

Yeah, sure. So currently we support about 30 different camera sensors. You can use them directly, feed the data directly to the SoC that runs the AI, and then do the processing inside our ecosystem. For any different sensors, currently we have to add support into the firmware. In the next generations of our hardware, it will be even more agnostic, because customers will have access to the full image, so they would be able to add the drivers to the image themselves without our help. So they would be able to take our devices, take, for example, a thermal imager, support it themselves, and create a whole solution on top of that.
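
As a concrete picture of what Erik means by feeding the camera data directly to the SoC that runs the AI, here is a hedged sketch in which the camera output links straight into an on-device detection network and only the results are streamed to the host. The model name, blobconverter call, and node API follow Luxonis's public examples and should be treated as assumptions for your particular library version.

```python
# Hedged sketch: frames go from the sensor straight to the on-device network;
# only detection results cross back to the host.
import blobconverter
import depthai as dai

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)                  # matches the network input size
cam.setInterleaved(False)

nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath(blobconverter.from_zoo(name="mobilenet-ssd", shaves=6))
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)                    # frames never leave the device

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)                       # only results go to the host

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            print(det.label, round(det.confidence, 2))
```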

Zach Peterson:

Very interesting. And I think one thing about building a platform like this, which you mentioned in the Altium story, is that a lot of it is open source. Why did you choose to go the open source route with your firmware and some of the other aspects of your platform?

Bradley Dillon:

Yeah, it's a great question. And just to clarify, there are two aspects of the platform that are closed source. One of them is the source code for our firmware, and the other one is the chip-down design for our system on modules, but everything else is open source. So if you want to go look at the PCB layouts, if you want to look at the mechanical designs, if you're looking at other aspects of the software and the AI, you can find all of that on our GitHub. It's all open source.

And our thinking here was really just that what robotic vision unlocks is new markets, new industries, new applications, and it's definitely not a winner-take-all type of view for us. We believe that anything that we can do to help get these devices out there will create these new industries that will then flow back into future hardware sales. So it's really an intentional decision, that by open sourcing we can really accelerate the adoption of these types of devices out there in the world. And we think that's ultimately a win-win, because it helps take the engineering efficiency that we can help provide, and it helps provide that improved productivity to enterprises. And then obviously, at the end of the day, we fashion ourselves as a software company that monetizes when we sell hardware; we just want to sell a lot of hardware. And so by getting these out there, it helps sell a lot of hardware.

Erik Kokalj:

But because the hardware is open source, that has helped numerous customers to actually leverage the open source designs and then build their solutions on top of ours, which just accelerates the progress and time to market, which also benefits both the customer and us, because we're able to sell the hardware faster.

Zach Peterson:

Normally when a hardware design goes open source, people use it as a reference design. So for example, the big semiconductor vendors will create a reference design, and maybe they have to put a little bit of effort into it to make sure that it works on the front end, but the whole point is to sell their chips. And for you guys, putting your PCB layout out there like that for free, it almost seems like you are giving away the best part of your product. You don't normally see companies doing that with their PCB layouts. So how does that fit into the company's strategy to scale out and really become a leader in this space?

Bradley Dillon:

Definitely. So we offer a number of our standard OAK-type devices here, like you see that I'm holding up for those looking at the video, but we also offer system on modules. And so that's a piece of hardware that we sell. The system on module is something like this here, where you're connecting onto a baseboard, and the system on modules today don't make up a large portion of our sales, but we actually think long term that one of the biggest drivers of our hardware sales will be the system on modules.

And so the system on module just has all of our core tech on it, and the customer can just use it to plug and play into their larger PCB layout, into whatever form factor that they have. So they're able to custom integrate it into their finished solution. So that's where we still get the benefit from it, even though it's not our broader design, it's the customer's design. We still have the system on module that's plugging in there.

Zach Peterson:

I see. Okay. And as part of the open source component to this, Bradley, when you and I had talked previously, I believe you had said that some of the AI models are also put out there as open source. Is that correct?

Bradley Dillon:

Yeah. Can you speak to that one a little bit more, Erik?

Erik Kokalj:

Yeah, correct. So we do have all the notebooks for the training, so the AI training, and then also AI conversion and AI deployment onto our hardware. We have all of that open source just so the customers can easily develop their own model and then deploy it on the device itself. Besides that, we also have pretrained AI models that customers can just use out of the box with our system.

Zach Peterson:

And one of the, I guess one of the paradigms in AI at the edge, or AI on embedded systems or small devices these days, has been training in the data center, inference on the edge. And so when you make a model available like this for users, essentially they could run this on the edge device, but I believe you just said training, so can you do training on these devices as well?

Erik Kokalj:

So the training with the notebooks will usually be done either in the cloud or on the host computer that the developers use, and then the trained model is deployed on the device.
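
A rough sketch of that train-then-deploy flow is shown below. It assumes a PyTorch model trained elsewhere (in a cloud or host notebook) and uses Luxonis's blobconverter package to produce a blob the device can run; the file names, input shape, and converter arguments are illustrative assumptions rather than the exact contents of the official notebooks.

```python
# Hedged sketch of "train in the cloud or on the host, then deploy to the device".
import torch
import blobconverter

# A full model object saved earlier by the training notebook (assumed format).
model = torch.load("trained_model.pt")
model.eval()

dummy = torch.randn(1, 3, 300, 300)           # assumed network input shape
torch.onnx.export(model, dummy, "model.onnx", opset_version=11)

# Convert the ONNX graph into a blob the OAK's SoC can execute on-device.
blob_path = blobconverter.from_onnx(model="model.onnx", data_type="FP16", shaves=6)
print("Deploy this blob on the device with nn.setBlobPath():", blob_path)
```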

Zach Peterson:

Okay. So this is fitting with the standard paradigm: essentially, train in the cloud, deploy to the end device, and then infer on the end device?

Erik Kokalj:

Yeah, exactly.

Zach Peterson:

Okay. And then one thing I noticed on your device, Bradley, when you were holding up one of those boards, is that you've got a big heat sink attached to that board. So just looking at it, I can see this thing probably uses its fair share of power and puts out a lot of heat as a result. Are there any plans to maybe procure an alternate chipset, or are you hoping that the vendor for that chipset will become more power efficient over time so that these devices can get further miniaturized?

Bradley Dillon:

Yeah, it's a great question. So for our first generation of products, our Series 1, we were not thermal experts at all. We basically said, "We're just going to make these massive overkill." I don't even know if we simulated it; we basically said, "Oh, we'll make these really big, we'll test it, and oh look, it works, so let's put that out there into the world." And so for our Series 2 products, for those looking at the camera, you can see that the thermal fins have gotten much, much smaller because we have a lot more confidence. And now we do have mechanical engineers that are doing thermal simulations to be able to check this sort of thing. The power draw on these types of devices is actually pretty low. So for the models that I was holding up there, the USB-C connectivity provides both the power and the data connectivity.

And those types of devices, depending on what you're doing, are typically drawing three to five watts, which is relatively low. And then for some of our Power over Ethernet devices, the power draw typically is in the five to seven watt range, depending on what you're doing. But the devices also can be optimized to have a low power mode, like a sleep mode, where they're drawing less than one watt, and then they can come online when they need to. So the total power that you're consuming over time isn't that bad. And then the area with thermal where we've had to get more advanced is the outdoor environment. So this is where you're in the sunshine, you're in Arizona and it's 45 degrees centigrade. And in those types of environments it's really difficult from a heat perspective. So in that case, sometimes we're adding things like, okay, we get a custom hood to be able to block the sun, and stuff like that. But yeah, we've improved a lot with thermal over time as we've evolved.
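
As a back-of-the-envelope illustration of those numbers, the snippet below averages the quoted active draw against a sub-one-watt sleep mode over an assumed duty cycle; the duty cycle itself is a made-up figure for illustration.

```python
# Average power estimate from the figures quoted above (duty cycle is assumed).
ACTIVE_W = 5.0      # worst-case active draw for a USB-C device (W)
SLEEP_W = 0.8       # assumed sleep-mode draw, "less than one watt" (W)
DUTY = 0.25         # fraction of time the device is active (assumption)

avg_w = DUTY * ACTIVE_W + (1 - DUTY) * SLEEP_W
wh_per_day = avg_w * 24

print(f"Average draw: {avg_w:.2f} W")          # 1.85 W for these numbers
print(f"Energy per day: {wh_per_day:.1f} Wh")  # about 44 Wh
```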

Zach Peterson:

What you just mentioned about throwing a heat sink at the device until it gets cool enough and then iterating on that, I think a lot of engineers will take that approach, but then they don't do the iteration, which is to then try and optimize the thermal management strategy, whatever that looks like, for space. And from what you just held up, that one looked like version two of your product, and you actually integrated the heat sink into the enclosure. So in doing that, how does that affect the form factor of the board? Are you able to essentially just shrink down the board on the Z axis and do what, attach it directly to the enclosure? Do you have a thermal interface material in there to then draw heat into the enclosure and then into the fins?

Erik Kokalj:

Yeah, that's exactly how we're doing it. So both the main SoC that runs the AI and also some other components, like the power management components, have a thermal interface layer between them and the enclosure for efficient heat dissipation.
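
To put rough numbers on the enclosure-as-heat-sink approach, here is an illustrative junction-temperature estimate. Every thermal resistance below is an assumed placeholder, not a Luxonis figure; only the 45 degree ambient and the few-watt power draw come from the conversation.

```python
# Illustrative estimate: SoC -> TIM -> finned enclosure -> ambient.
P_W = 5.0            # dissipated power (W), in line with the quoted draw
R_JC = 2.0           # junction-to-case resistance (K/W, assumed)
R_TIM = 0.5          # thermal interface material (K/W, assumed)
R_ENCLOSURE = 6.0    # finned enclosure to ambient (K/W, assumed)
T_AMBIENT = 45.0     # the Arizona-sun case mentioned above (deg C)

t_junction = T_AMBIENT + P_W * (R_JC + R_TIM + R_ENCLOSURE)
print(f"Estimated junction temperature: {t_junction:.1f} degC")  # 87.5 degC here
```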

Zach Peterson:

And so I'm sure anyone could then maybe use a fan if it's in a larger system. Maybe they deploy this as an accelerator in a data center for something; they could then do whatever cooling strategy they want to ensure that they at least keep the thing operating within safe temperature limits without blowing out their form factor.

Erik Kokalj:

Yeah, exactly. So actually some customers even have really thin fins, not this big, just because of the weight and the size, specifically in some drone applications. They could do that because they already had active airflow, which then provided good enough heat dissipation.

Zach Peterson:

That's interesting. You bring up drone applications. I had not thought of that. I was actually going to ask next, what are some of the applications that you're seeing? And I figured the common ones like automotive and then I think you guys have mentioned industrial robotics earlier, which makes total sense. But what are some of the application areas that you're seeing and which ones do you think are most interesting, or maybe most unexpected?

Bradley Dillon:

Yeah, we're obsessed with all of them. It's probably one of the funnest parts of the job: Erik and I will get on a sales call and we'll go, "Wow, I had no idea that you were possibly doing this with a vision system." I'll give an example. Recently, we had a call with a company that helps people operate beehives for honey production. And these beehives will get inundated with mites. The mites will actually attach onto the bees and then make them sick. And so then the people that operate these beehives need to be able to treat the bees with some kind of chemicals that kill the mites, but they don't really want to use them. And so what they're actually able to do, they're not using our system, but they're using a vision system today, is that with a camera, they're able to count how many mites are on a bee, and then they're able to know basically what the density of mites per bee is.

And then they can know the exact right time, right before the curve where the number of mites is just going to explode, when they can provide the chemicals and make it so that the mites die, and then they can keep their bees happy and healthy. This is one of those off-the-wall examples where you're like, I had no idea that this type of application existed. But to more directly answer your question, the types of industries that we're getting the most traction in, I'd say one of them would be the retail and warehouse automation case. And so there are a lot of aspects of retailing and warehousing, and it also ties into logistics as well, transportation logistics, just the movement of goods and the automation of stuff that previously was done by a human that now needs to be assisted by a robot.

I'd say that's a huge industry that we're seeing a lot of traction in. Another one I would say is industrial equipment. So think, say, agricultural and industrial machinery, stuff like that. We're getting a lot of traction there. You may have seen recently that the actor Jeremy Renner, at his snow cabin, got run over by his big snowplow. And you think about that and you go, okay, there are two major problems here that occurred to make it so that Jeremy Renner could get hurt. One, why does he even need to be in the snowplow to start with?

This is a big, expensive piece of equipment. He should be able to have a remote control to operate it, or it should be completely autonomous altogether. And then the second piece is, if Jeremy Renner's standing in front of the snowplow, why is it just willing to drive over him? It should be able to see him and it should be able to know, oh, I don't want to drive over a human. And so those are some examples of where, with machine guarding and automation, you can make this big industrial equipment operate way more efficiently, but also way safer in terms of not harming humans.

Zach Peterson:

I would've never guessed beekeeping was one of the applications, but I asked for unexpected, so there you go. That's definitely unexpected. So you brought up Power over Ethernet earlier for interfacing with one of these devices. The area where I have seen Power over Ethernet for imaging and vision, or even for radar, most recently come up is in security, specifically facilities security, campus security, and then some applications in defense. Have you been seeing that type of application area come up for you guys at all recently?

Bradley Dillon:

Yeah, we have. And it actually ties into that retail automation use case. So you think about a retail store, and some of them are really big format. We were actually in Las Vegas for CES a couple weeks ago, and I went to a Walmart for the first time in a while and, oh my gosh, it took me half an hour to find five items because the store is so freaking big, one of those Supercenters. And so you think of a store like that, it's perfect for Power over Ethernet.

You can have really long cables, you can have a cable that's say 300 meters long, and it's all connecting back into some central location to give you a lot more processing power. And in that type of application, there are all kinds of use cases in retail that can be really helpful. So some of it is being able to detect bad actors. Shrinkage is a big thing that retailers are complaining a lot more about lately that's really harming their bottom line. So you can prevent that. You can also do some stuff that's pretty exciting for the marketeers of the world, which is that you could do proximity-based stuff. Is somebody near a display that's meaningful in a way that we want to maybe serve them up an ad? Or, say it's a digital display and there are a few people looking at it, maybe we could go ahead and serve up some type of demo or something that's interesting or meaningful to them.

And then another piece of it is actually just the automation. So Amazon with Just Walk Out technology really pioneered that. I used to live in Seattle and you'd love it: you just scan your palm, you walk in, you grab what you want, and then you walk right out. And so I think that's really the future of retail, that it's more of an experience, which these camera devices connected over Power over Ethernet can provide, but then also it's automated, so you just grab the items you want, you walk out, and then you get your receipt later.

Zach Peterson:

I think the technical term that you had brought up earlier was loss prevention. Is that how it's been described to you?

Bradley Dillon:

Yes. Yes. Yeah, so when it comes to brick and mortar theft, yeah, it's a big challenge. Warehouse theft is an issue as well, but they're able to keep that a little bit more under control. But yeah, brick and mortar theft is a big problem. In the most recent quarterly earnings reports, multiple retailers cited it as a major drag on their bottom line. And they're struggling, in particular in, say, America, to be able to police theft, and they're trying to find more efficient ways to be able to identify and prevent it on the early end of it. So yeah, it's a big challenge that our types of devices can definitely help solve.

Zach Peterson:

So I think that's a really interesting use case. And just to dig into it a little bit more, because it's not just object identification, but it's really classification among, I think what would have to be, a set of sequences of video, in order to maybe identify what could possibly be theft. So your system has to capture all of this data from multiple sensors, in this case multiple cameras, bring it all together, and then do, not just the object identification, but also the classification based on a stream of images from multiple sources. How much of that are you doing on the device versus what has to happen in the external system or the broader system that it has to interface with?

Erik Kokalj:

The term usually used to describe this is action recognition. And so for some basic models, for example basic human actions, if a person is sitting or standing or raising a hand, that's a simple model and it can be run on the camera itself. For more complex models, in general, edge computing isn't the best and you need the big systems with big GPUs to be able to run those models at a high resolution.
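
As an illustration of how lightweight a "simple action" check can be at the edge, here is a toy sketch that classifies a pose-estimation model's keypoints with a couple of geometric rules. The keypoint names and thresholds are illustrative assumptions, not a Luxonis model.

```python
# Toy sketch: post-processing over pose keypoints for simple actions.
from typing import Dict, Tuple

Point = Tuple[float, float]  # (x, y) in image coordinates, y grows downward

def classify_action(kp: Dict[str, Point]) -> str:
    """Very rough hand-raised / sitting / standing classification."""
    if kp["right_wrist"][1] < kp["right_shoulder"][1]:
        return "hand_raised"                   # wrist above shoulder
    hip_to_knee = abs(kp["right_hip"][1] - kp["right_knee"][1])
    hip_to_ankle = abs(kp["right_hip"][1] - kp["right_ankle"][1])
    if hip_to_knee < 0.4 * hip_to_ankle:       # legs folded: likely sitting
        return "sitting"
    return "standing"

# Purely illustrative keypoints (normalized image coordinates)
print(classify_action({
    "right_wrist": (0.55, 0.20), "right_shoulder": (0.50, 0.35),
    "right_hip": (0.50, 0.60), "right_knee": (0.50, 0.75),
    "right_ankle": (0.50, 0.90),
}))  # prints "hand_raised"
```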

Zach Peterson:

Sure, I could understand that. I think once you have to link together a lot of data over multiple timeframes, it makes more sense to do it on a server. And really it sounds like the camera in that place, or in that type of application, is doing, number one, some pre-processing, and then, number two, almost like automated tagging of data. So in this case, tagging of the image data that comes in. Is that an appropriate way to think of it?

Erik Kokalj:

Yeah, that's actually quite correct. So usually what people do is some pre-processing. So, for example, initial object detection: detect the person, crop that region out of the high-resolution image from the high-resolution camera, and then stream just that to the cloud, which then saves a bunch of bandwidth. That's the main pro of having some AI computation on the edge, saving on the bandwidth cost.
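
Here is a hedged sketch of that crop-and-stream idea on the host side: given a detection's normalized bounding box and the full-resolution frame, crop the region of interest and compare its size against shipping the whole frame. The bounding box, frame size, and the upload call are illustrative assumptions.

```python
# Hedged sketch: crop a detected region out of a 4K frame and compare bandwidth.
import numpy as np

def crop_detection(frame: np.ndarray, bbox_norm) -> np.ndarray:
    """Crop a normalized (xmin, ymin, xmax, ymax) box out of a full frame."""
    h, w = frame.shape[:2]
    xmin, ymin, xmax, ymax = bbox_norm
    return frame[int(ymin * h):int(ymax * h), int(xmin * w):int(xmax * w)]

frame = np.zeros((2160, 3840, 3), dtype=np.uint8)       # stand-in for a 4K frame
crop = crop_detection(frame, (0.40, 0.30, 0.55, 0.80))  # assumed person bbox

full_mb = frame.nbytes / 1e6
crop_mb = crop.nbytes / 1e6
print(f"Full frame: {full_mb:.1f} MB, crop: {crop_mb:.1f} MB "
      f"({100 * crop_mb / full_mb:.0f}% of the original)")
# send_to_cloud(crop)  # hypothetical upload call for the cropped region only
```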

Zach Peterson:

Okay, so let's take that architecture for a moment and find its analog in something like automotive. With automotive, you may have to do some kind of action recognition or action classification because things are dynamic, they're happening in real time. There's of course the object recognition, so identify whether the thing in front of the car is a human or an animal or a stop sign, let's say. Those are the obvious ones. But then to really predict what the vehicle has to do next, I think it then has to be able to predict what's going on around it based on a set of data that may not all have been gathered at the same time.

And in something like automotive, the processing time and the time required to get to a decision have to be really low, because milliseconds worth of time is a lot of braking distance if it turns out that the car needs to slam on the brakes. It really could be the difference in saving someone's life. So because of that, it seems to me that you need to bring some of that more powerful inference capability out of the cloud and into the vehicle in order to get the latency that you need. So what would that look like in a vehicle if someone was trying to use your system for that type of application? Do they have to have a cellular connection to the cloud, or can it be done in the vehicle with the right backend processing? Because, as one of our previous guests brought up, if someone's in rural America and they're relying on the cloud for inference in that safety system, maybe they don't even have a connection, or maybe they do have a connection, but it's a slow connection.

Erik Kokalj:

I think it really depends on the application itself. So for some applications, it's fine if you completely rely on the cloud and the whole application won't work without it. Of course, there would be some fallbacks, some other systems. And then for some other applications, the connectivity isn't really required, and there's a powerful host computer somewhere near the device. So the initial computation would happen on the cameras, on our cameras, and then further computation would be done on the host, which could be, for example, two meters away from it.

Zach Peterson:

So someone's basically bringing a small backplane with a blade server or something into the vehicle to then do all of this processing without having to send everything up to the cloud.

Erik Kokalj:

Yeah, for example, with robotics applications, you'll usually have quite a powerful computer that takes all the sensor information and then outputs to the actuators, so the motors, robotic cars, stuff like that.
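
A hedged sketch of that split is shown below: the camera runs detection on-device (as in the earlier detection example) and a host computer turns the results into actuator commands. The MotorDriver class and the person label index are hypothetical placeholders.

```python
# Hedged sketch: host consumes on-device detections and drives the actuators.
import depthai as dai

PERSON_LABEL = 15          # assumed label index for "person" in the loaded model

class MotorDriver:         # placeholder for whatever drives the real motors
    def set_speed(self, left: float, right: float) -> None:
        print(f"motors: left={left:.2f} right={right:.2f}")

def run(pipeline: dai.Pipeline) -> None:
    """Expects a pipeline with a detection network streaming to 'detections'."""
    motors = MotorDriver()
    with dai.Device(pipeline) as device:
        q = device.getOutputQueue("detections", maxSize=4, blocking=False)
        while True:
            detections = q.get().detections
            person_ahead = any(d.label == PERSON_LABEL for d in detections)
            if person_ahead:
                motors.set_speed(0.0, 0.0)     # stop: never drive over a human
            else:
                motors.set_speed(0.5, 0.5)     # cruise
```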

Zach Peterson:

So I think that type of architecture maybe then brings up what you guys might want to do in the future. Where do you see the future of your products going, and do you see yourselves developing maybe a suite of products to target some of these higher compute areas, where someone might otherwise have to rely on the cloud, but you could bring some of that greater processing capability and power closer to the end device for applications where time sensitivity is very important?

Bradley Dillon:

Yeah, definitely. So we've recently been rolling out our Series 3 products, and to really demonstrate the new capabilities that we have for Series 3, we actually created our first robot. We're not trying to be a company that makes robots, but we wanted to create one to show what Series 3 can do. And what Series 3 unlocks that the previous generations did not have is that it actually has an A53 running Yocto Linux. And so it makes it so the device can be the host controller of other devices. In the case of the little robot that we have, for those of you on video, this is what this cute little robot looks like. It's pretty small. So it has five cameras, but down here it's actually controlling two motors. And then with the USB-C, you can connect it to other things; you could, for example, have an arm that's on top of this.

And so that helps evolve our types of devices to really be the eyes and the brain behind robotic systems that we mentioned. Previously, it was more the eyes and somewhat of the brain, but now, being able to have Linux running on it, it really makes it so you can control other devices with it. And then we're crazy. We're trying to come up with a new series of products every year. We're already working on our Series 4. Series 4 will build on the same things as Series 3, but then have quite a bit more power, when it comes to both processing power, but then also the ability to do quite a few more TOPS when it comes to AI inference.

So in our view, really, the future is to try to stay ahead of the hardware improvement curve by constantly being able to have more processing power, more AI inference, and that's really going to be our primary focus. The other piece that we're excited to be rolling out right now, that goes hand in hand with this, is our cloud application layer. And so this makes it so that you can much more easily manage a fleet of these devices at scale. That cloud application layer is called RobotHub, and there are a number of free applications that will be available there. But then third-party developers could also put their own applications there. And this would be somewhere where you could do things like stream the data, do testing, stuff like that.

Zach Peterson:

Interesting. So when you say you want to be the entire brain, what I'm really hearing is that you could have a broader application that makes use of the AI in some innovative way, and that broader application is going to then allow someone to tailor the device to whatever their end use case is.

Bradley Dillon:

Yeah, exactly.

Zach Peterson:

Okay. Well, as it rolls out, I hope to have you guys on again. I know we're getting a little up there on time, but I definitely want to have you guys on here again to show what your products can do, 'cause I think this area is so interesting, and embedded AI is something that I've become passionate about over the past few years. So I hope some of our listeners will have some of that passion rub off on them and will go learn more about Luxonis.

Bradley Dillon:

Yeah, that'd be great. We'd love to come back. It's obviously a topic that's near and dear to our heart, and yeah, we can talk about it all day long.

Zach Peterson:

Absolutely. Absolutely. Well, thank you both so much for joining us today. To everyone that's out there listening, we've been talking with Erik Kokalj, director of applications engineering, and Bradley Dillon, CEO, both from Luxonis. For those watching on YouTube, make sure to subscribe. You'll keep up to date with all of our episodes and tutorials as they come out. And of course, we'd like to remind you all to watch the most recent Altium story about Luxonis. We will link to that in the show notes so you can go take a look. I watched it this morning again, it's a great video and I encourage you all to go watch it as well. Finally, everyone that's listening, don't stop learning, stay on track, and we'll see you next time.

About Author

Zachariah Peterson has an extensive technical background in academia and industry. He currently provides research, design, and marketing services to companies in the electronics industry. Prior to working in the PCB industry, he taught at Portland State University and conducted research on random laser theory, materials, and stability. His background in scientific research spans topics in nanoparticle lasers, electronic and optoelectronic semiconductor devices, environmental sensors, and stochastics. His work has been published in over a dozen peer-reviewed journals and conference proceedings, and he has written 2500+ technical articles on PCB design for a number of companies. He is a member of IEEE Photonics Society, IEEE Electronics Packaging Society, American Physical Society, and the Printed Circuit Engineering Association (PCEA). He previously served as a voting member on the INCITS Quantum Computing Technical Advisory Committee working on technical standards for quantum electronics, and he currently serves on the IEEE P3186 Working Group focused on Port Interface Representing Photonic Signals Using SPICE-class Circuit Simulators.
