Join us for an in-depth conversation with Ari Mahpour, Embedded Systems Engineer at Rivian, as he reveals how AI is revolutionizing embedded development and testing workflows on this episode of the OnTrack Podcast.
From voice-controlled Arduino programming to automated hardware-in-the-loop testing, discover cutting-edge techniques that are transforming the industry. Learn about embedded DevOps, cloud-driven testing infrastructure, and how AI agents can write, compile, and test code autonomously.
Explore the intersection of hardware and software development, from NASA's space missions to modern automotive systems. Ari shares practical insights on bridging the gap between electronics design and software development, implementing CI/CD pipelines for embedded systems, and leveraging AI for everything from data sheet analysis to automated test generation.
Transcript:
Zach Peterson
Hello everyone, welcome to the Altium OnTrack podcast. I'm very happy to have with me today Ari Mahpour, embedded systems engineer at Rivian and frequent contributor to the Altium Academy and Octopart YouTube channels. Very happy to have Ari here today to talk about a lot of different things, including embedded systems development. Ari, thank you so much for being here today.
Ari Mahpour
Thanks so much for having me. Happy to be here.
Zach Peterson
Yeah, it has been quite a while that we've talked, and I think we've known each other, and I regret not getting you on the podcast sooner, because you do quite a bit of work with Altium and especially on the Octopart channel. So I think some folks out there are probably familiar with you and have seen your face. But if you could give us a little bit of intro to who you are: give us your background and then how you got started in electronics.
Ari Mahpour
So I'm the son of an engineer. I grew up in Silicon Valley, so the second I was born I was injected with the engineering serum, of course. I spent a little bit more than half of my career at NASA JPL, the Jet Propulsion Laboratory, doing space-borne missions, radar instruments; I worked on some of the Mars rovers, power systems, things like that. And then I moved
a little bit more into military aerospace, and then kind of moved into the startup space when I joined Virgin Hyperloop, and now I'm at Rivian, which is no longer really a startup. So I work in the commercial sector now. I've worked mostly in embedded systems, whether it's electronics design with FPGA design or embedded software; and right now virtualization, digital twinning as you would call it nowadays, is one of the things I'm working on.
But I also did a little bit of a stint for a while, and I still do, and you'll see in my videos, with two focus areas that are near and dear to my heart. One is embedded DevOps, DevOps for embedded systems, that whole automation workflow. And the other is AI. That's something that I'm pretty gaga about, and you'll see in my most recent tutorials and videos
not only engaging with AI and LLMs, but using them for embedded systems development, specifically on hardware and things like that. So that's something I'm also extremely interested in.
Zach Peterson
Yeah, a lot of people are interested in it, not just engineers; also, you know, people in the business world and the finance world, because I swear I get asked every couple months or so by some investment firm to give them an overview of some AI platform that's coming out in the electronics space. But I think one area that doesn't get enough attention is definitely the embedded area. And I think people just kind of reduce it to, you know, these guys are just coding.
But give us a little more insight. What is an embedded engineer actually doing? Because it's really this intersection between the hardware and the software.
Ari Mahpour
And the software. Yeah, I mean, it depends. I've worked in really small teams, really small startups, and I also work at Rivian, which is a pretty large company. An embedded system kind of entails hardware and software, and really understanding those boundaries as much as you possibly can. So think of the day to day: if you're in a smaller
environment or a smaller startup, maybe you're contributing to the electronics design. You're helping pick the processor, you're helping pick some of the interfaces, you're working with the designers. And then once the board comes in, you're bringing up the board. You're writing basic firmware, basic software, hardware abstraction layers, drivers that can talk to all these different chips on the board, and then synthesizing all of that. So for example, if you're trying to take signals or sensors or any of that, fusing all of that,
bringing all of that information together, and then passing that up to some sort of higher-level software stack. So think of an IoT device; let's take a very trivial example, an IoT weather station, right? You have all these different sensors, you're sensing a bunch of things, you're aggregating all those signals. Maybe an EE will design the board; you're working with them to figure out what are the right chips, what's the right processor. Then you're writing all the code to aggregate all that information, and then you're pushing it to the cloud, and
then whoever is on the full-stack software side, they're pulling all that information from the cloud. So you really have to understand how to speak the language of the electronics designer and the language of the full-stack developer, so you can negotiate with them on APIs and things like that. A really good embedded systems engineer knows both worlds.
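The weather-station example Ari walks through, several sensors feeding one aggregation step that hands a payload up to the cloud side, can be sketched in a few lines. The sensor names and payload shape here are hypothetical, purely to illustrate the embedded side's aggregation role and the JSON contract the full-stack team would consume:

```python
import json
import time

def aggregate_readings(sensors):
    """Collect one reading from each sensor driver and timestamp the batch.

    `sensors` maps a name to a zero-argument read function, which is how a
    thin hardware-abstraction layer often exposes its drivers.
    """
    return {
        "timestamp": int(time.time()),
        "readings": {name: read() for name, read in sensors.items()},
    }

# Stand-ins for real driver calls (e.g. reads over I2C or SPI).
sensors = {
    "temperature_c": lambda: 21.5,
    "humidity_pct": lambda: 48.0,
    "pressure_hpa": lambda: 1013.2,
}

payload = json.dumps(aggregate_readings(sensors))
# `payload` is what the embedded side would hand to the cloud uplink; the
# full-stack team only ever sees this agreed-upon JSON shape.
```

The agreed payload shape is the negotiated API boundary Ari mentions: the embedded engineer owns everything below it, the full-stack developer everything above it.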
Zach Peterson
Yeah, and when we were talking earlier, you'd mentioned that you had worked in cloud infrastructure. So it sounds like that's also where you get a little bit of experience to develop the cloud app that then interfaces with the hardware. Is that correct?
Ari Mahpour
Yeah, yeah. So that was also, that's been a really interesting experience. That's some of the work that I did at previous companies and also at Rivian, working in the DevOps and cloud infrastructure. You learn about writing applications that can collect all that information and then deploying those services. So being able to run it in Kubernetes, scaling it properly, setting up all the cloud infrastructure in AWS or Google Cloud or Azure, whatever it be, or even your own data center.
Ari Mahpour
being able to pull all of those data pipelines, process them, and look at them on a large scale, a macro scale. These are all things that a full-stack developer or even data scientists are looking at, aggregating that information. So getting in and doing that for a little while also gives you that empathy, that understanding from a user story or user persona of what those people are dealing with. So you're a little bit more primed
to figure out how easy or difficult to make the data consumption on their end, because you know what it's like to be on their side.
Zach Peterson
And then you had mentioned DevOps. Now, just for context, right, I always tell people I'm not a pro software developer. I know what I can Google, or I guess these days whatever I can, you know, GPT. I'm pretty good with Python and I'm pretty good with, you know, HTML, a little bit of PHP. But other than that, I am not a polyglot programmer.
And I've never worked on a full-on, you know, software engineering team, actually writing code or anything. So if you could tell us a little bit: what is DevOps?
Ari Mahpour
So DevOps is short for development and operations: dev as in developer, ops as in operations. So think of it like you have the developers who are writing an application. DevOps grew out of the Agile movement, and there are a bazillion books on it and lots of conferences, so it's a whole wave. And DevOps specifically for embedded systems is also picking up nowadays. But essentially, it's
something to bridge the developers and operations. A typical issue that you would find in a software company is a developer will develop their application, you know, some widget; let's go back to the weather station, some dashboard that prints all this information from a weather app. They develop it on their computer, they have this, you know, proof of concept, and then they
take it to production and it builds and it looks great. And then they hand it over to the operations people who deploy it in the data center or in the cloud, and everything falls over. We see that also in electronics. There's the contract manufacturing wall, right? You make everything over here, and then you just send it to the contract manufacturers and say, hey, you guys figure it out, and things start to fall apart:
the parts don't solder correctly, the reflow profiles and this and that, the boards are flexing and bending. So there's this kind of silo between the two. And the idea of a DevOps-first methodology is you do things like automated tests. You do things like deploying into the cloud even before it hits the operations team. You do things called containerization, where you basically put applications into small...
I wouldn't call them virtual machines, but little encapsulated, almost isolated environments; for all intents and purposes, let's say a virtual machine. You basically create your application as a developer with the mindset that it's going to get deployed anywhere, anyplace, anytime. So DevOps is this kind of workflow, this ideology, where you can use all these different
Ari Mahpour
tools and frameworks, whatever it is, to make that process of throwing it over the wall as seamless as possible, so there really is no wall. So for example, automation: running everything in what are called continuous integration, CI, pipelines. It runs your builds, it processes everything, and it runs all of that in the cloud. So you can say, this doesn't just build on my machine. It'll build on any machine, anywhere, at any time.
That becomes really, really important if you're scaling your application. So your weather app: if you're scaling it to 1,000 instances that need to support a million user hits every minute, you have to make sure that you're scaling it properly. And so you're designing it with those principles first.
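The CI flow Ari describes (push your code, a pipeline builds it on a clean cloud machine, a binary artifact comes out) can be sketched as a minimal GitHub Actions workflow. The toolchain, Makefile target, and output path below are hypothetical placeholders for whatever a real project uses:

```yaml
# Hypothetical GitHub Actions workflow: build the firmware on every push,
# proving the build works on a clean machine, not just the developer's laptop.
name: firmware-ci
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install cross-compiler
        run: sudo apt-get update && sudo apt-get install -y gcc-arm-none-eabi
      - name: Build
        run: make all                 # assumed project Makefile producing firmware.bin
      - name: Upload artifact
        uses: actions/upload-artifact@v4
        with:
          name: firmware
          path: build/firmware.bin    # assumed build output path
```

The key property is that nothing here depends on any one developer's machine: any runner that picks up the job should produce the same artifact.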
Zach Peterson
Okay, so connecting this back to embedded development, because you mentioned embedded DevOps is, I guess, now becoming, I don't want to say a buzzword, but becoming a term or a concept that people are paying attention to. Now it's just applying kind of the same principles to embedded development, ensuring that the application won't just deploy on your one set of dev boards or your prototype board. But now let's take this prototype board to a manufacturer, let's say we LRIP it and do,
you know, a hundred units, and then test that, and then see if that can also, let's say, communicate with our cloud instance, that kind of thing. Right? Do I have the right idea?
Ari Mahpour
Essentially. I mean, there are other aspects of it too. I've done a tutorial or two in some of the videos as well that walk you through that workflow at a micro scale. One of the biggest problems in embedded development, especially in the manufacturing environment, and you still see this a lot in industry, which is surprising, is that people will do their development on an embedded device. So let's say some sort of widget, you know, some...
widget, let's go back to the weather station. They write all the code for the weather station, it's going to hit production, we're going to send it to the contract manufacturer, they're going to put the whole thing together. They have a bed-of-nails test system so they can validate everything works, they close the enclosure, and then it ships to a warehouse, something like that, ready to go. OK, so you as a developer,
you write all the firmware, or you write all the software that goes on this box. And then you build the firmware, and then you zip it up and you email it to the contract manufacturer. Or let's say you build in-house and they program the boards in your assembly house, so then you zip it up
and you put it on the network drive, and then the operators go and grab the files and flash the device. I mean, just by your facial expression, I can tell you've probably heard this story before. That's not okay, especially when you have to start meeting ISO standards or ASIL levels or DAL ratings or things like that. You need to have a very, very systematic, automated way of doing these things, especially in military and aerospace.
So what the process looks like is: you do your development, then you commit your changes, you push it to a place like GitHub or GitLab or Bitbucket, you name it. Then it goes to a system that builds it. In GitHub, that's called GitHub Actions; in GitLab, it's GitLab pipelines; Bitbucket has Bitbucket Pipelines; and Jenkins is another CI system. These go and build the firmware. So instead of building it on your computer,
Ari Mahpour
it builds in some remote cloud hosted by AWS or Google Cloud or whoever it is. And so that guarantees, first of all, that your firmware can be built somewhere else. That's step number one. And then step number two, the idea is it builds all of that and then packages it up as an artifact. So now it can be pulled and moved to whatever station, automatically.
And that sits in the cloud. That's not on some shared network drive or USB key or something like that. And these artifacts have manifests, they have checksums, they have SHAs, they're encrypted, all sorts of things. And that's all automated. Now, when you talk about hardware-in-the-loop testing: now it's built the package, but maybe we don't want it to just go to an operator to test manually. We need to have this all automated. So you set up things like hardware-in-the-loop testers,
where these hardware-in-the-loop testers have your device under test hooked up to a laptop or a computer or even a Raspberry Pi. And these have what are called runners. These runners will then go and exercise all the tests with lab equipment and things like that, like what you see over here. And it'll stimulate all of the inputs and outputs, but it's all driven by the cloud. So now this can be run in a data center. This can be run at your contract manufacturer. This can be run in your lab.
But all of this can be scaled. So these principles that normally lived completely in the cloud now have some sort of mix, where you have some hardware and some cloud operations, if that makes sense.
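The "artifact with a manifest and checksums" step Ari describes can be sketched with nothing but the Python standard library. The file names and manifest fields here are illustrative, not any particular CI system's format:

```python
import hashlib
import json

def make_manifest(artifacts, version):
    """Build a release manifest: one SHA-256 digest per artifact.

    `artifacts` maps a file name to its raw bytes; in a real pipeline these
    would be read from the build output directory.
    """
    return {
        "version": version,
        "files": {
            name: hashlib.sha256(blob).hexdigest()
            for name, blob in artifacts.items()
        },
    }

def verify(blob, manifest, name):
    """Re-hash a pulled artifact and compare it against the manifest."""
    return hashlib.sha256(blob).hexdigest() == manifest["files"][name]

firmware = b"\x7fELF...firmware image bytes..."
manifest = make_manifest({"firmware.bin": firmware}, version="1.4.2")

print(json.dumps(manifest, indent=2))
print(verify(firmware, manifest, "firmware.bin"))            # True
print(verify(firmware + b"\x00", manifest, "firmware.bin"))  # False: altered bytes
```

This is the property that makes "grab the zip off the network drive" unnecessary: any station that pulls the artifact can prove, automatically, that it has exactly the bytes the pipeline released.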
Zach Peterson
Yeah, I can envision everything that's going on. This is just a world that's totally new to me. I think in the back of my head I know that each one of those steps is possible, of course. But it's connecting them together to accomplish this live testing, all driven from a cloud environment, that's totally new to me. Kind of crazy that this is actually done, too.
Ari Mahpour
Yeah, I actually, I mean, I've been doing it since the days at JPL, which is over 10 years ago. We've been doing this automation and this workflow for quite some time. And then every company that I've been going to, it's always been like my little nights and weekends projects where I would try to automate things and get DevOps going in the embedded groups for wherever I worked. And then I finally was brought to Rivian to do that full time.
Zach Peterson
Okay.
Ari Mahpour
for a while until I moved to the Embedded Systems group.
Zach Peterson
Yeah, I could see in an automotive environment how that's becoming just exponentially more important because of all of the different pieces of hardware that are actually running applications on them. I mean, I don't even know how many ECUs are in a vehicle at this point, but it's some crazy amount, and they all have to have firmware running on them. And then there's some master ECU, I don't even know the terminology honestly because I don't work in commercial automotive, but there's some master ECU somewhere that has, like,
Ari Mahpour
A lot.
Zach Peterson
the master application gathering all of the data from everywhere in the vehicle. Yeah. And so it sounds like they have so much development going on, they have to find a lot of ways to do all of this testing and validation very efficiently.
Ari Mahpour
Right.
Ari Mahpour
Yeah, yeah, absolutely. And I mean, the scale at which it occurs is unbelievable. Just the amount of testers and the amount of operations and the amount of compute necessary when you have thousands and thousands of developers all, you know, trying to build at the same time and test at the same time, trying to integrate their code. I mean, you have commits onto your main branch almost every second,
or merges onto your main branch every couple of minutes, when you have tens of teams. Think about a company like Google or Meta or places like that; I mean, commits happen all the time. So at a company like Ford or GM or us or any other automotive company with many thousands of developers, you have just a lot going on and a lot of compute usage. That requires a whole team of people to
really manage that and orchestrate that and make sure it's as lean and efficient as possible.
Zach Peterson
So how does the hardware engineer, right, the person who's the pure electrical engineer, the hardware designer, maybe the PCB designer, whatever that role entails, how do they get looped back into this cloud-driven process? I mean, just from my view as an amateur, how do they participate in that whole process, which seems to be really centered around ensuring that the software is gonna do what it
needs to do at scale and can do it anywhere?
Ari Mahpour
So, in terms of... can you elaborate a little bit?
Zach Peterson
Yeah, like how do they participate and what is their role in ensuring that all of that goes successfully, right? Are they just hooking up the instruments that are being used for all the automated testing? Are they actually looking at the results? Are they injecting changes into the hardware? Are they trying to find the changes that need to happen in the hardware?
Ari Mahpour
Yeah, I mean, with those iterations and with that, I'd say, immediate feedback, you get a lot of good feedback to the EEs, which results in rapid prototypes, which results in revs of boards. I mean, this isn't just specific to the automotive industry. This is something that I've seen in
Zach Peterson
Right, right.
Ari Mahpour
all the industries that I've been in, once we started introducing lots of hardware-in-the-loop testing, lots of DevOps, into the embedded system. That results in a lot of iterations on the circuit boards, which as we know has gotten faster, but still takes time. You still have to make the changes, you have to update them, then you send them out, then it comes back, then you have to place all the parts, and yada, yada, yada.
So that is good because you get to accelerate faster, but it can also sometimes be a little frustrating, I think, for the EEs, because the electronics folks are building something, and then next thing they know, a day or two after it comes back, it's like, we found all these issues we need to change. So I think what we're going to see with all of these DevOps changes on the embedded side, which 10 years ago,
when I'd speak at conferences about this, people just kind of laughed, like, what are you talking about? Automation around embedded systems? That's ridiculous. But I think you're also going to see more shifting on the EE side: hopefully more simulation, better emulation of the electronics, better digital twins of the IC components, more collaboration across industries where
each chip has some sort of model besides a data sheet that everybody can share and collaborate on, rather than having these proprietary models. I think that that industry is going to have to shift, especially around AI, which is a topic that I definitely would like to get into a little bit. With AI accelerating everything in the digital realm, you have to start accelerating on the
electronics side as well. And we're already seeing that. We're seeing LLMs starting to do their own place and route. We're seeing LLMs starting to design things. So that's also an extremely interesting space.
Zach Peterson
So yeah, before we get into AI, because I definitely want to talk to you about that: obviously with the electronics folks who are pure hardware people, it moves at a snail's pace, of course, compared to software at least. And I think this is one of the biggest headaches that I have with clients or with people who come from the software world, which is that
they just kind of expect you to be like, we'll just call up the board guys and have them ship out another board. It doesn't work like that, bro. So they have some really unrealistic expectations of what goes on in hardware. And then once they have to start actually paying for it themselves, they realize, oh, maybe we need to slow down a little bit, or at least collect more changes to be a bit more cost effective.
Ari Mahpour
Alright, right, right, right.
Ari Mahpour
Right, right, before iterating, right.
Zach Peterson
Yeah, exactly. Exactly. So how does a team like that, in automotive or another industry where you have all this activity going on, how do they manage the alignment between the software and the hardware? Right? Because at some point the hardware could iterate, and at some point an earlier build is no longer compatible with the iteration we just did. Yeah, how do you track all that data? Because you've really got two separate things running in parallel.
Ari Mahpour
Right. Right.
Zach Peterson
Right. And somewhere they stop overlapping, because either there's a build that made it no longer compatible, or there's a rev on the hardware that made all past builds beyond some certain point no longer compatible.
Ari Mahpour
Right, right. Yeah, I mean, this is a challenge in really all industries. So there are obviously different revisions, different builds, and there are different strategies, especially in embedded systems. The kind of ideal strategy is you build one-size-fits-all software with, you know, an all-encompassing operating system. If you know
that you're going to be rapidly iterating with different ICs and different functionality and things like that, you can build an all-encompassing ecosystem or operating system, however you want to do it. And then you can create different configurations, statically or dynamically, however you want to do that, to be loaded, let's say, at runtime. So you would say, this is a Rev A board, and then, boom, you have a very quick, dynamic way of loading all the different
peripherals, or loading the different permutations or revisions of that particular board. Now, that requires a lot of work, a lot of dynamic thinking, a lot of architecture to really get working. And then there's the other side of the spectrum, where for every single board you have literally a copy of the software, you maintain many different forks of that software, and you have to manually copy and paste to every single project. So, I mean,
there's a wide spectrum between the two. I've worked in companies that do both very poorly and very well, on all sides of the spectrum. So I guess the answer to that question is it really depends, which is a great engineering answer. But it also depends how much investment you want to make. If you know that this rev is going to get killed after a couple months, maybe you do maintain a copy and then you just throw it away, because it's going to go into the trash.
But if you know that you're going to have to support the lifecycle of that for months or years or decades, and you have to roll updates out to those old boards because they're out in the field, you better be sure you have a robust way of dynamically allocating how these boards are configured, so you can still work on the core software while still maintaining a periphery somewhere else that, you know,
Ari Mahpour
keeps track of all the different hardware changes or hardware revisions between boards.
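The "one build, many board revs" strategy Ari sketches usually boils down to a lookup keyed on a revision ID read from the hardware at boot (strap pins, an ID EEPROM, and so on). A minimal sketch, with made-up peripherals and revision names:

```python
# Map each board revision to the peripheral set and quirks it needs.
# One firmware image ships; the right entry is chosen at runtime.
BOARD_CONFIGS = {
    "rev_a": {"temp_sensor": "tmp102", "adc_bits": 10, "has_can": False},
    "rev_b": {"temp_sensor": "bme280", "adc_bits": 12, "has_can": True},
}

def load_config(rev_id):
    """Select the configuration for the revision the hardware reports.

    Failing loudly on an unknown rev beats silently driving the wrong
    pins on a board the software has never seen.
    """
    try:
        return BOARD_CONFIGS[rev_id]
    except KeyError:
        raise RuntimeError(f"unsupported board revision: {rev_id!r}")

def read_rev_pins():
    # In real firmware this would query strap pins or an ID EEPROM;
    # here it is a stand-in returning a fixed revision.
    return "rev_b"

cfg = load_config(read_rev_pins())
print(cfg["temp_sensor"])  # the driver layer initializes only what this rev has
```

The trade-off Ari describes lives in how big `BOARD_CONFIGS` is allowed to grow before the forks-per-board approach starts looking simpler.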
Zach Peterson
Okay, okay. I can just imagine the data management processes being very... almost proprietary, and hard. Company A is not gonna do it the same way that company B does, and so on and so forth.
Ari Mahpour
Hard.
Ari Mahpour
I've seen it done differently in every single company that I've been at. Yes. Yeah.
Zach Peterson
Okay. Okay. Well, maybe you'll do some webinars in the future, because that would be a pretty interesting talk on data management. Because, like I said, we talk about data management all the time at Altium, and people in the PCB industry do, and they're always talking about, you know, manufacturing revisions, which is fair, or they're talking about library data, which is fair. But how many products do you have that run some firmware, even if it's just 20 lines of code? I mean, it's probably a dozen
Ari Mahpour
Hahaha!
Zach Peterson
products. There are products sitting right behind you right now that all have, you know, software and firmware. And we never ever bring up the data management for the code side of it. But without the code, it's honestly just kind of a paperweight.
Ari Mahpour
Right.
Ari Mahpour
Right, right. And that goes back to some of the DevOps principles: you have official releases. So in your release and your manifest, when you release a package, which again happens automatically, every time you merge your branch, every time you go for an official release, it will release that and say, this is compatible with this rev and that rev and this particular board, with this, you know, PCB number and this assembly file. So everything is kind of packaged up. So you have a very, very, very
Ari Mahpour
good accounting of what is compatible with what. And that's done in a totally automated process. That's kind of the dream. That's why companies are investing in embedded DevOps: they need that traceability. They need that pulling-together with the data management side, too. Because again, the people doing DevOps also have cloud infrastructure experience. They have experience with SQL databases or Postgres or whatever it is. They have experience with
DevOps platforms, GitHub, GitLab, all these different places, cloud systems, ways to integrate with PLM systems, the contract manufacturing systems, your MES systems, and these ERPs. So there are all these different acronyms to integrate together. And somebody in DevOps usually has experience with integrating these pieces together and seeing that whole data management, data workflow
Zach Peterson
Yeah, yeah.
Ari Mahpour
system.
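The compatibility record Ari describes, each release declaring which board revs and assemblies it supports, can be sketched as a small lookup that a deployment step would consult before flashing anything. The field names and part numbers are invented for illustration:

```python
# Hypothetical release records, of the kind a CI job would emit
# automatically alongside each built firmware artifact.
RELEASES = [
    {"fw": "2.1.0", "pcb": "PCB-1042", "revs": {"B", "C"}, "assembly": "ASM-17"},
    {"fw": "2.0.3", "pcb": "PCB-1042", "revs": {"A", "B"}, "assembly": "ASM-16"},
]

def compatible_releases(pcb, rev):
    """Return firmware versions cleared for this exact board build."""
    return [r["fw"] for r in RELEASES if r["pcb"] == pcb and rev in r["revs"]]

print(compatible_releases("PCB-1042", "B"))  # both releases support rev B
print(compatible_releases("PCB-1042", "A"))  # only the older release
```

In a real pipeline this table would live in a database or artifact store, and the flashing station would refuse any firmware/board pairing not found in it, which is exactly the traceability Ari says companies are buying with embedded DevOps.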
Zach Peterson
I see, yeah. And I think there's another challenge to this, which is that someone working in this role and trying to manage all this data is probably also going to be interfacing with somebody on the, let's say, web or cloud development side who is not a hardware person in the slightest. A "what's a resistor?" kind of person.
Ari Mahpour
Yeah.
Yeah, yeah, yeah, that's definitely been the challenge for me. Again: integrating ERP systems and putting them together with GitHub, trying to integrate releases, designing the electronics, writing the software, and dealing with the cloud people and the IT people. Because I've been in all these different places, I'm extremely fortunate and lucky enough to be able to speak everybody's language. But generally, most
people don't. And so there's a huge disconnect that happens between the opposite ends of the spectrum. So that's hard. It's really hard, especially in large companies. That's a big challenge.
Zach Peterson
Yeah.
Zach Peterson
Yeah, yeah. And in large companies, it's almost a statistical guarantee that you're going to encounter people from so many different walks who have to work together. I mean, you're not going to have everybody knowing what everybody else does. OK, let's go ahead and move on to AI, because I know you're dying to get to it. I'm dying to get to it, too, honestly.
Ari Mahpour
Yeah. Yeah.
Zach Peterson
Because, I mean, I swear it's not going to be an OnTrack podcast episode unless we mention AI. So here we go. I often feel, looking out at the landscape of AI-driven tools, right, that there's been a lot of stuff coming out recently for hardware, and I think it's, you know, trying to find its place in the workflow. But again, it's all on the physical design.
Ari Mahpour
Ha
Zach Peterson
And then on the other side of that, with coding, and now even with vibe coding, right, there are all these different productivity tools, and they just focus on pure code. And I don't feel like there's anything that's bridging the gap between stuff that happens in the hardware and stuff that happens in the software and bringing that together as an AI tool. I think the closest thing I saw
was on LinkedIn: one of our prior guests, Duncan Haldane over at JITX, did a little experiment with Claude. He gave Claude a data sheet, and I think he asked Claude to write some code; maybe Duncan can come back on and correct me on this if I'm incorrect. But I think that's about the extent of it when you're trying to do embedded development with AI: well, here AI, here's my data sheet, write my code, you know?
Is that pretty much where we're at right now with AI and embedded development?
Ari Mahpour
Absolutely not. Absolutely not. And, sneak peek: I mean, I did some early videos of this when custom GPTs came out, but it was extremely contrived, extremely esoteric, and very hard to follow. It was like a five-part, 20-minutes-each tutorial, and very difficult to put together with agentic AI, which is another buzz term.
Zach Peterson
Okay good.
Ari Mahpour
completely changing the game is totally changed. right now, actually, you see I have an Arduino right here, Arduino UNO R4. I just finished doing a tutorial that's gonna come out soon where I am hands off the keyboard, vibe coding with my Arduino hooked up to my computer. The Arduino, all I did was I just installed Arduino IDE. It did the...
It identified that I had an Arduino hooked up to my computer. It installed all the libraries for it. Then it started writing code. told it, I said, I want you to write an example, this and that. So it writes code, it blinks the LED, it compiles it, it uploads it, it runs it. But then I say, you know, I have to look at that and I have to tell you like, oh, it's working or it's not working. Find a way to get feedback on that. One way that we normally do that is like you grab telemetry. So it goes.
It writes a whole serial ACK and NACK scheme, so it's a command and response. Every time it sends a command, it gets a response. It can acquire that telemetry; it knows which tools to use to pull it over serial. And eventually, after many, many tries (it took me many hours to record it because it kept messing up), it pulls the serial line and gets all the results. And it can just keep iterating and iterating and iterating. Every single time something fails, a compile fails or it doesn't see the right response,
it just keeps running autonomously. And not once did I touch the keyboard. Not once. This is all voice; we're just going back and forth, and you see it. Some of my examples are not crazy sophisticated, but using Claude 3.5 Sonnet, which is not the most advanced model, it was able to do a lot with hardware in the loop. So there are a lot of challenges. It can go off the rails. It
busted up my COM port many times. I had to keep unplugging and plugging it back in, and sometimes restart my computer. You have to be really careful about giving it guardrails. That's why things like Model Context Protocol, MCP, matter: these are new tools that you can give an agent. Then it knows: OK, this is how I talk to a piece of hardware, this is how I query it, this is the setup routine, here are all the commands that I can run. So think of it like an instrument, right? So you give it,
Ari Mahpour
you give it all of that context, and then you run that server and it knows how to talk to it. So it knows how to talk to your lab equipment. These are guardrails you can create, and tools you can give it. But also, just to bridge from where we were coming from, it's not just the embedded development, the talking and stuff like that. It's also the whole workflow. So recall we talked about the IT people and the operations
and the hardware people and the embedded people, and how we do this in DevOps. With a quick question to ChatGPT, or a quick question to Anthropic's Claude, or in your GitHub Copilot or Cursor, you can very quickly learn how to write code. You were even saying that you can write code nowadays with all of that. So everything is really changing, even for the hardware engineers who are interested.
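A minimal sketch of the command-and-response telemetry loop Ari describes, with the serial port stubbed out by a fake loopback transport so the framing logic runs anywhere; the `FakeSerial` class and the command names are illustrative assumptions, not his actual protocol:

```python
# Sketch of an ACK/NACK command-and-response loop. A real setup would use
# pyserial's serial.Serial; a fake loopback device stands in here so the
# framing logic is runnable on its own.

class FakeSerial:
    """Pretends to be the board: ACKs known commands, NAKs the rest."""
    KNOWN = {"LED_ON", "LED_OFF", "BLINK"}

    def __init__(self):
        self._rx = b""

    def write(self, data: bytes) -> None:
        cmd = data.decode().strip()
        reply = "ACK" if cmd in self.KNOWN else "NAK"
        self._rx += f"{reply} {cmd}\n".encode()

    def readline(self) -> bytes:
        line, _, rest = self._rx.partition(b"\n")
        self._rx = rest
        return line + b"\n"


def send_command(port, cmd: str) -> bool:
    """Send one command and wait for its ACK. This reply is the feedback
    an agent can read back to decide whether to keep iterating."""
    port.write(f"{cmd}\n".encode())
    reply = port.readline().decode().strip()
    return reply.startswith("ACK")


port = FakeSerial()
print(send_command(port, "BLINK"))     # True: known command gets an ACK
print(send_command(port, "FORMAT_C"))  # False: unknown command gets a NAK
```

With a real board, swapping `FakeSerial` for something like pyserial's `serial.Serial("/dev/ttyACM0", 115200)` would keep the same loop; the point is that every command produces machine-readable feedback the agent can act on.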
Zach Peterson
Yeah. So let's recap all of that, because that sounds like a total whirlwind. And I think for the folks that are new to this whole idea of bridging the gap between AI, coding, and embedded systems: tell me if I have this correct. So you were doing everything with voice to text, and that's how you were commanding what's going on with the AI. You have an agent that is basically running the Arduino IDE, correct?
Ari Mahpour
Yeah.
Ari Mahpour
Fair.
Ari Mahpour
Voice to text.
Ari Mahpour
Mm-hmm.
Ari Mahpour
Not even that. That's the next level. It just has access to my terminal. It doesn't even know how to run Arduino. It has to go check its LLM or it has to go check the web. So it's going to go research. How do I run an Arduino on a Mac?
Zach Peterson
Okay, well that's the next level.
Zach Peterson
I see.
Okay, okay. So then you're telling it to write certain parts of the code, right?
Ari Mahpour
I'm telling it to write the whole thing. I start with a blank canvas, an empty folder.
Zach Peterson
The whole thing. Okay. Okay. Right. So this is like a create-my-Hello-World-application kind of thing.
Ari Mahpour
Correct. And then it gets iteratively more complex.
Zach Peterson
I see. OK, so then you could imagine, after you have it going and you tell it to compile and flash, and then it runs and it blinks, you could then say: OK, modify the main loop to do A, B, and C. And then it starts adding functions, running through all of that, and making your code more complex.
Ari Mahpour
Exactly. Exactly. But also keep in mind, like, it may lose context of where it is or what's going on. And again, this is where Model Context Protocol helps. But it may lose track of where it is. And so it'll go off the rails and completely change the code. Or it'll go off the rails and run a totally different command on the command line and hose your whole COM port. So your communications port is busted. And now you have to either
At best, unplug the device and plug it back in to reset everything, or you have to restart your computer. Hopefully it won't break your device.
Zach Peterson
Sure, sure. OK. OK. So I could imagine that if you were to maybe create, let's say, well, I don't want to say create or fine tune an LLM, but maybe create a database for like RAG, right? You could then do, what is it, retrieval augmented generation to then pull really specific data that it needs for these instructions. And maybe that's one way to enforce the guardrails on the code that it's creating. Would you agree?
Ari Mahpour
Yeah, so that idea of RAG and some of that has been taken into these MCPs, these tool sets. So basically you give the LLM, or in this case it's really an IDE, so you give it to Cursor, or to GitHub Copilot, or to some agent that can run on the command line. You give it a series of tools and you say: these are the tools available to you. So the tool you create, for example,
Zach Peterson
Okay.
Ari Mahpour
is an interaction with an Arduino. It says: this is the specification. Here's how you interact with an Arduino. Here's how you upload. Here's how you talk to it over serial. Here's how you compile. Now go write code. But if you're going to talk to an Arduino, you can only do it this way. Don't start running all these funny commands on the command line; I don't give you access to that anymore. I give you explicit access to a tool. Think of it like SCPI commands to your instruments:
very specific commands that you can send to your instruments to turn the voltage on, to turn the voltage off. That is the negotiated contract you have with your device. If I try to send it some ridiculous command, some long string, it's not going to have it. It's going to say: error, I don't know what you're talking about.
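A toy version of that guardrail idea: the agent only gets a fixed set of named commands, and anything outside the contract is rejected. The SCPI-style command set here is made up for illustration, not taken from any real instrument:

```python
# Toy guardrail in the spirit of an MCP tool server: the agent may only
# use commands that were explicitly registered for the instrument,
# instead of having free rein on a shell.

class GuardedInstrument:
    # The "negotiated contract": SCPI-style commands this tool accepts.
    ALLOWED = {"VOLT ON", "VOLT OFF", "MEAS:VOLT?"}

    def send(self, command: str) -> str:
        if command not in self.ALLOWED:
            return "ERROR: unknown command"  # refuse anything off-contract
        if command == "MEAS:VOLT?":
            return "3.30"                    # canned reading for the sketch
        return "OK"


psu = GuardedInstrument()
print(psu.send("VOLT ON"))             # OK
print(psu.send("rm -rf / && reboot"))  # ERROR: unknown command
```

A real MCP server would expose these as typed tool definitions over the protocol; the essential property is the same, though: the model can only reach the hardware through an enumerated, validated command surface.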
Zach Peterson
Well, speaking of sending the AI or telling the AI to send commands to devices, that seems like the natural next step, right? I mean, you can just tell the AI, you know, okay, run test routine number two or whatever, and it knows what that is. And then it's basically controlling all your lab instruments to test this thing that it just built.
Ari Mahpour
Correct. So I did a demo of that, I think over a year ago. Yeah. I was talking to it over my phone, right? And I was holding the probes and watching it, talking to it as an AI lab assistant: hey, can you measure this for me? Measure that for me. Now plot it. The problem was that it used a very specific protocol that worked only with ChatGPT, with the paid version,
Zach Peterson
that's right. I remember that now, yeah.
Ari Mahpour
with a custom GPT, with a special specification, and you had to run a whole server and all of that. MCP, which is now an open protocol from Anthropic, is basically becoming the de facto standard where you can say: this is a tool, this is how you use it, and now anybody can do it. Everybody is using that standard. So now I can go into any IDE, and it knows how to control my instrument. So that's coming next. That's what I'm working on for the next video.
Zach Peterson
Okay, okay. That's really cool though. I mean, it's just this whole natural progression from, you know, just simply having a conversation to actually issuing commands and then having it send out signals to other pieces of hardware. I think that's something that so many people thought was like Terminator level stuff, you know, a few years ago. And now we're just like, yeah, we're just gonna do this in our garage for fun.
Ari Mahpour
Yeah, yeah, yeah.
Ari Mahpour
So I also have this kind of Frankenstein project, and I have this dream. If you see over here (you can't really see it), I just got in the mail, after waiting three and a half months, the Nvidia Jetson Orin kit. That's the development kit with the Nvidia GPU on there. And so the thought was: how well could that do if you, let's say, gave it some wheels and maybe a camera?
And just by talking to it, how easy is it? Because we've all seen the videos from all these fancy robotics companies doing AI and GPUs, autonomous driving, of course. But how achievable is it now, with LLMs, to get a bootstrapped autonomous robot just by talking to it? Like, go drive up to the wall, or,
Zach Peterson
Ha ha ha!
Ari Mahpour
you know, turn around and do U-turns. How quickly, how feasible is it to put something like that together now with LLMs, versus needing a PhD in robotics five years ago? That's something I'm extremely interested in figuring out.
Zach Peterson
Sure.
Yeah, with what you just mentioned, like, do a U-turn, that kind of command. I would almost imagine that requires a lot of fine-tuning of an LLM to accomplish, right? Because it would almost have to have examples of the actual turns to execute in order to execute that action, so it knows to associate that set of actions as a response to
whatever example you give it of, essentially, a command or a prompt. Is that correct?
Ari Mahpour
So this is a common misconception, I would say, by most people in industry. The answer to that question is yes. However, you don't need to train or fine-tune LLMs to do exactly those things. A lot of the time people think, okay... I mean, it's not as popular now because people are not investing as much, but there was a time where it was like,
let's go find the top PhDs in machine learning and just bring them into our company. Because we need to fine-tune an LLM, we need to fine-tune this, we need to make our own models. And that's really not the case with all these extra tool sets now: with reasoning models, with MCP, with being able to create frameworks around AI and the guardrails around them, and just putting AI in the loop in the application,
you can do probably 90% of the things with just a really basic, lightweight LLM. So for example, if you just have natural language processing, it can translate: hey, I want you to turn right. So it translates that to turn right. That's natural language processing. And then if you have some sort of framework where you've already built how to turn right, and you used AI to help you write that code, then you just give it that command.
And now it can associate. It says: hey, I have this command, and somebody is telling me this in language. 99.9% of the time it's going to figure out that this associates with that, and then it's going to turn right. Did I have to do machine learning and reinforcement learning and all of that? Probably not. Right.
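That natural-language-to-command association can be sketched in a few lines. In practice the LLM does the mapping; simple keyword matching stands in for it here, and the command names are invented:

```python
# Sketch of "no fine-tuning needed": a lightweight layer maps free-form
# language onto a small, pre-built command set. The framework already
# knows how to execute each command; the language layer only associates.

COMMANDS = {
    "turn right": "TURN_RIGHT",
    "turn left": "TURN_LEFT",
    "u-turn": "TURN_AROUND",
    "stop": "STOP",
}

def interpret(utterance: str) -> str:
    """Map an utterance to a known command, or fall back to asking."""
    text = utterance.lower()
    for phrase, command in COMMANDS.items():
        if phrase in text:
            return command
    return "ASK_FOR_CLARIFICATION"  # the exception/guardrail path

print(interpret("hey, can you turn right at the wall"))  # TURN_RIGHT
print(interpret("do a u-turn for me"))                   # TURN_AROUND
print(interpret("fly to the moon"))                      # ASK_FOR_CLARIFICATION
```

The fallback branch is the "add an exception for the 0.1%" idea: when the association fails, the system checks itself or asks, rather than guessing.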
Zach Peterson
That's what I was just going to ask: do you have to do reinforcement learning for that?
Ari Mahpour
Probably not. Probably not. I mean, okay, you might be off like 0.1% of the time, but then just add an exception for that. If it doesn't do that, then have it check itself: chain-of-thought reasoning, you know, self-reflection. Have it do those things, or add some extra guardrails, and then you have kind of like a cyborg. And I think that's kind of where we're headed with AI. It's also like,
a lot of people ask me this. Because I write a lot of AI-based applications with LLMs in them, trying to read stuff or whatever, or write code, or developer tooling, a lot of people ask me: how do I get started with AI? I'm in the accounting industry, I'm in the medical industry, I'm in this industry; you write applications, how do I get started with AI? And one of the things I try to tell them is that AI is not,
in itself, like ChatGPT or whatever, going to solve all of your problems. AI is just another framework. It's like taking assembly language and turning it into C, or taking a high-level language, you know, like Go or Python, and abstracting away a lot of stuff with a framework. You want to abstract things away with AI. So AI
in itself doesn't do much. AI needs to get incorporated. So if you're a banker or a financial analyst, you need to integrate AI into your regular tools to help you. If you're an accountant, you need to integrate AI into your tools to help you. If you're in the medical profession, you need to integrate AI into that. And it's the same thing with hardware engineers; it comes back to hardware engineers. It's whatever your day-to-day task is, like the data sheet stuff; that's a classic example. Somebody came to me and they said, I need a...
I needed to come up with a test recently, an end-to-end test. And I said, well, here's the mechanical fixture, here are the data sheets, and here are the ICs; go figure it out. So I literally took all of those and gave them to an LLM. I gave it my test environment, just for the setup routine, and I said: write me all the test vectors in one huge list, a Python list of dictionaries. Just give me everything to
go from 0 to 100, for example, or 0 to 1,000. And based on the data sheets, it said: OK, based on this, and based on these graphs and everything we see in there, here are all your test vectors. That was extremely helpful. It would have taken me at least an hour to read through all of that; it did it in about 30 seconds. So it's just applying it to your day-to-day and figuring out how it can cut corners.
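The shape of that output, one big Python list of test-vector dictionaries sweeping 0 to 100, looks something like this. The field names and tolerance are invented for illustration; in Ari's case the LLM derived the actual values from the data sheets:

```python
# Sketch of a test-vector list: each entry is a dictionary describing one
# point of an end-to-end sweep from 0 to 100.

def build_test_vectors(start=0, stop=100, step=10):
    vectors = []
    for setpoint in range(start, stop + 1, step):
        vectors.append({
            "input_mv": setpoint,
            "expected_mv": setpoint,  # ideal transfer function for the sketch
            "tolerance_mv": 2,        # pass band around the expected value
        })
    return vectors

vectors = build_test_vectors()
print(len(vectors))              # 11 points: 0..100 in steps of 10
print(vectors[0], vectors[-1])
```

A test runner can then iterate over `vectors`, apply each `input_mv`, and compare the measurement against `expected_mv` within `tolerance_mv`.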
Zach Peterson
Yeah, and it seems like the risk of failure from experimentation is extremely low too, right? Because if it only takes 30 seconds to generate a response and it takes you an hour to qualify it, whereas the trade-off is you would have had to take an hour to generate that same response anyways, it seems like your risk of failure is essentially zero.
Ari Mahpour
Correct. I mean, you don't want to run it on a critical system quite yet, unless you have guardrails. Again, that's like self-driving and all that stuff. Okay, there's a tremendous, tremendous amount of guardrails when you talk about self-driving and AI and machine learning; that's a different story. But when you're going to review it and it's AI in the loop, there's really no risk. Just go and check it, you know. Like,
Zach Peterson
Right.
Ari Mahpour
it's not gonna hurt you. What did it cost you, 30 seconds of time versus 30 minutes?
Zach Peterson
Yeah, exactly. Yeah, I think that's actually how I landed on some of the things I use AI for regularly. It was just: I wonder if ChatGPT can do this, or I wonder if Claude can do this; I'm just going to give it a shot and see what happens. And if it becomes an onerous workflow, where I've got to go through and follow up with new prompts, or I've got to take it from one model into another model and go back and forth,
I'm not going to do it because it just becomes another form of pulling out my own hair.
Ari Mahpour
Yeah. One other thing I wanted to point out: if you notice, most LLM interfaces these days, ChatGPT and Claude and Google Gemini, most of them, I think all of them nowadays, have the capability to go to the internet and grab information, right? In the early days, they couldn't do that. So not only were the models not as sophisticated, but they couldn't grab that extra context. When it comes to AI and LLMs,
context is king. Context is so important. So how much better did the LLMs get? Yeah, okay, they got a lot better, but they got infinitely better because of this extra context that they can now consume and synthesize, and then give you a way better response with real-time information. Not "as of April 2023 we have blah, blah, blah." No, it's as of today. So that's really important, you know,
Zach Peterson
Mm-hmm.
Zach Peterson
Yeah, I remember that.
Ari Mahpour
that kind of context.
Zach Peterson
Yeah, I would 100 % agree. So where do you see AI going next to help folks who are working in embedded development?
Ari Mahpour
So I think testing is really, really important. Testing is always the big elephant in the room that nobody wants to talk about, nobody wants to address. Yeah, exactly. And so I think what you're going to see is more LLMs helping with some of that: the ability to basically define your requirements.
Zach Peterson
Why, people hate doing it.
Ari Mahpour
And it can either review your code, review your development, review your designs, review your PCBs, and then it can find ways to test things. Maybe it's just automatically validating things in the background. Maybe it's writing tests; maybe you're pairing with it to write tests. And when you also hook up equipment (although, again, you need to set guardrails, otherwise it's going to blow up your board), when you do all of that, maybe it's going to look at requirements.
It's going to see the tools. It's going to look at the board. It's going to look at the design. It's going to look at the code. And now you're going to have integration tests end to end, or even full system level tests that are fully AI driven. And then the nice thing about that is you have that full documentation workflow. So if you're trying to meet standards for military or medical device company or automotive, all of that is documented because it's run basically by a robot that can then document that the whole way through. So that's one area.
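That documentation workflow can be sketched simply: every automated check emits a structured, timestamped record as a side effect, so the audit artifact for a military, medical, or automotive standard falls out of the test run itself. The check names and values below are invented for illustration:

```python
# Sketch of a self-documenting automated check: each result is appended
# to a machine-readable log as it runs, producing the audit trail.

import json
from datetime import datetime, timezone

def run_check(name, measured, expected, tolerance, log):
    """Run one pass/fail comparison and record it in the log."""
    passed = abs(measured - expected) <= tolerance
    log.append({
        "check": name,
        "measured": measured,
        "expected": expected,
        "tolerance": tolerance,
        "passed": passed,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return passed

log = []
run_check("rail_3v3", measured=3.28, expected=3.30, tolerance=0.05, log=log)
run_check("rail_5v0", measured=4.70, expected=5.00, tolerance=0.05, log=log)
print(json.dumps(log, indent=2))  # the record an audit would keep
```

An AI-driven test runner generating these records end to end is exactly what makes the "documented the whole way through" claim practical.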
For the electronics designers, which I know is not fully embedded, but for the electronics designers, I mean, it's only a matter of time before you're basically describing your design and it's pulling together a schematic and then doing layout for you. I know that sounds like science fiction, or you think, well, it's gonna be terrible, the layout's gonna look like Swiss cheese and whatever. But I...
Zach Peterson
Well, maybe it does, but I mean, that's your job as the hardware person is to come in and identify when does it look like Swiss cheese or when does it look great? And then change the constraints so that it actually gives you something or be able to identify like, okay, this thing is messed up, but it's going to take me a half hour to fix it. I'm just going to fix it. And it's going to be better than what the AI can do. Right. I mean, making those judgment calls as an experienced hardware designer, I think is really important too.
Ari Mahpour
Right.
Ari Mahpour
Yeah. Precisely.
Ari Mahpour
Yeah, yeah, absolutely. From the embedded side, I think we're going to see more embedded developers pairing with AI, writing more code, and again, running more testing. And another thing that I didn't really talk about is being able to synthesize the hardware side of things. This is something that PCB manufacturers and PCB design software companies are going to have to think about:
making the metadata that's coming from the schematics, the metadata that's coming from the boards, available in a way that LLMs can digest, so they can link that up with your software. Imagine an LLM looking at a schematic and then writing all the boilerplate software for your embedded device, because it knows all the ICs, and it has context about all these different libraries it's seen
for all these different chips that have SPI interfaces, I2C interfaces. So you're going to see a lot of things. And again, is it going to be perfect? Absolutely not, 100% it won't be perfect. Is it going to break the first time? Most likely. Did it save you at least 50% of your time on boilerplate? Absolutely it did. Is it perfect? No, but that's fine; it still got you 50% of the way. So that
makes you much more productive. Think about it: for all the people who are not in tech, they say, well, I need to draft an email to this person, help me brainstorm. Is that going to be the final email? No, but it's helping you brainstorm. So it's the same thing: let it do that initial placement, or that initial layout, and that initial software development, right?
Zach Peterson
Yeah, this already evokes images of what I've seen on LinkedIn: senior software developers who make these long posts complaining about AI-generated code and how they have to fix it, and so on and so forth. It's like, how much code did you have to fix from a junior developer? Did you ever do an audit of the time required to fix the AI code versus writing it yourself? And I think if you actually do that comparison,
then you probably do find that you save time. What's interesting, though, is that the expectations on these tools are always superhuman, not human at all, right? And I mentioned this to Kirsh Mackey, because we just had him on a podcast episode, and I told him, I think people have really unrealistic expectations of AI. They expect everything to be perfect and to never infer false information. And it's like, well, if you're replacing a human with a machine,
Ari Mahpour
Yeah.
It's so good.
Ari Mahpour
Absolutely.
Zach Peterson
it's probably gonna act kind of human, which means it's gonna lie. It's gonna infer things that aren't technically correct. It's gonna pretend that incorrect things are correct because it doesn't want to make you mad.
Ari Mahpour
Exactly.
Ari Mahpour
Yeah, precisely. Precisely. That's right. And another thing people tell me: well, it was bad with this, or bad with that. I always challenge them: show me your prompts. How articulate were you? Did you describe the problem well? Did you iterate back and forth? I'd say the majority of the time, a very high percentage of the time, it was garbage in, garbage out.
Zach Peterson
Yeah, they used a five word prompt and expected the new iPhone, right? Yeah.
Ari Mahpour
Yeah, yeah, exactly. So, I mean, is it the LLM's fault? No, it's theirs.
Zach Peterson
Yeah, that's fair. That's fair. Yeah. I'm really excited to see how all of this transforms what folks like us do, especially because of the workforce crunch; it's been a persistent thing. There are so many people retiring, and the workloads are just going to keep going up. I think these tools are going to be a big help for efficiency and also quality. If you can pull the repetitive stuff off of somebody's back, they can focus on the real points of failure
Ari Mahpour
Exactly. Precisely. Yes, absolutely.
Zach Peterson
and do those as best as possible.
Zach Peterson
Absolutely. Well, listen, we're getting up here on time. We're almost, I can't believe it, we're almost an hour. It felt like we've been talking for like 10 minutes. Ari, I want to thank you so much for coming on. I want to first tell everyone that's listening, go over to the Octopart channel and hit the subscribe button. Ari has a ton of videos over there. They're all so cool and they all bring up or illustrate a lot of stuff with embedded and some of the things we've been talking about. And honestly, man, I always learn something when I'm watching one of your videos. It's really cool.
Ari Mahpour
Yeah.
Ari Mahpour
Thank you.
Ari Mahpour
Thanks.
Ari Mahpour
Thanks so much. Thank you.
Zach Peterson
Yeah, absolutely. And then I think you also have some on Altium Academy. Of course, that's this channel. Make sure to hit the Subscribe button. And I'm sure you'll be coming on because we've got some projects in the works that will be featuring you and your expertise. So thank you again. This has been really cool. Absolutely. To everyone that's out there listening, we've been talking with Ari Mahpour, Embedded Systems Engineer at Rivian.
Ari Mahpour
Thank you.
Zach Peterson
If you are watching on YouTube, make sure to hit the like button, hit the subscribe button. You'll be able to keep up with all of our podcast episodes and tutorials as they come out. And last but not least, don't stop learning, stay on track and we'll see you next time. Thanks everybody.