Not all engineers think like hackers, and not all hackers think like engineers. - Joe Grand
Let’s jump into a contemporary engineering topic – security and privacy! Today’s guest is Joe Grand, a computer engineer, inventor, teacher, device security expert, and an AltiumLive alum. I am very excited to be talking about IoT and hardware-level security with an engineering celebrity! Together we will dive into the processes and challenges of building hardware security. There are many details discussed, so make sure to watch through to the end and check out the additional resources below.
Listen to the Podcast:
Download this episode (right click and save)
Watch the video:
Show Highlights:
Links and Resources:
Full OnTrack Podcast Library
Altium Website
Download your Altium Designer Free Trial
Learn More about Altium Nexar
Transcription:
Joe Grand:
Not all engineers think like hackers, and not all hackers think like engineers. And especially for an audience like this, where we're designing products that are going out in the marketplace or that are being used, we kind of have to think about what hackers are going to do to our systems because it's only a matter of time, right?
Zach Peterson:
Hi, everybody. And welcome to the OnTrack Podcast. I am Zach Peterson, guest-hosting for Judy Warner. And today, we're going to be talking IoT and hardware level security with Joe Grand. I'm very excited to be talking about this as a contemporary topic. And I'm hoping everybody learns quite a bit about hardware-level security. Let's go ahead and get started.
Speaker 3:
Welcome to Altium's OnTrack Podcast, where we talk to leaders about PCB design, tackling subjects ranging from schematic capture all the way to the manufacturing floor. I'm your host, Judy Warner. Please listen in every week. Subscribe on iTunes, Stitcher, and all your favorite podcast apps. And be sure to check out the show notes at altium.com/podcast, where you can find great resources and multiple ways to connect with us on social media.
Zach Peterson:
Hi, Joe. Thank you so much for joining us on the OnTrack Podcast. I know that you're an AltiumLive alum from some years back. I know that Judy is out of office right now, planning for AltiumLive. And for any listeners who are interested, we've got a registration link down in the show notes. Hopefully, you can come back again at some point in the future. But I was actually really interested to talk to you after watching your earlier talks and watching a video about you. And you came recommended by some other folks at Altium. So thank you again for being here.
Joe Grand:
Cool. Yeah. Thanks for having me. And, yeah, it's been a while. I think my last OnTrack Podcast was with Judy a couple of years ago. And I looked a little bit different. That was pre-COVID hair. And now, this is the post-COVID hair. Yeah. I'm glad to be here. I think-
Zach Peterson:
I know all about COVID hair.
Joe Grand:
That's right. Altium, for me, has actually been one of the few companies that have stayed the same as far as the tools I've used for a long time. And without sounding like a marketing pitch, it's like anybody that's worked with the tool: once you start using it and you get used to it, you're kind of stuck with it. So no matter what I do, I'm going with Altium. And I don't really have a choice at this point, because of all of my previous designs over the past, basically, 30 years, starting with Protel Easytrax, actually Protel Autotrax and Easytrax.
Joe Grand:
And moving along through there to Altium and Altium Designer, so yeah, I appreciate what you all do. And I appreciate that you're having me on here. And, hopefully, people find it interesting. And if you're out there listening, thanks, and yeah. It should be fun.
Zach Peterson:
Yeah. You talked about Protel. That was a little before my time as a designer. I never got to use Protel. I think I was still an undergrad at that point. But now jumping into Altium, I totally understand what you mean.
Joe Grand:
Yeah. And it's like if you're writing code, you have your code snippets that you always use. And my footprint library that I've designed is massive, even though now, there's a lot of good options for some community-driven component libraries and things like that. But I just have my go-to set.
Joe Grand:
And the first thing I built, talking about hardware security, the first circuit board I ever designed using Easytrax was a little op-amp board that would take in the audio output from a ham radio and decode pager traffic, POCSAG pager transmissions. So it'd basically be the op-amp that would amplify the signal and provide digital data into a computer. And then, there'd be software on the computer to decode that.
Joe Grand:
So it was a little kit that I sold when I was part of a hacker group back in the day, which would help pay the rent by selling these circuit boards. And that was the first board I designed, and I learned a lot, because even back then, it was a panelized board. And the board was only like one inch by one inch.
Joe Grand:
But even learning about component placement at that point, where before that, I had only hand-etched circuit boards, and you kind of placed components where they were best routable with your rub-off letters. And then, I learned very quickly that when you're making a kit and other people are building it, you want to make it where the components are easy to access and make sense. And you can kind of route stuff where you want, especially if it's a two-layer board or more. You have options there. So I learned just a whole lot about circuit board layout and design early on.
Joe Grand:
And then from there, though definitely by no means an expert, my passion has always been in kind of creating tools to show a concept or to do something or to empower somebody. And I've laid out some what I consider fairly complicated boards, but to any competent circuit board designer, probably most of the Altium audience, the stuff I'm doing is sort of not that bad. It's like micro-BGA, maybe a six-layer board, stuff like that.
Joe Grand:
To me, that's state of the art. But for most people probably not. But it's still exciting. And then to be able to kind of grow and use some of these more advanced techniques as somebody like me, it's really cool.
Zach Peterson:
That's funny that you brought up that your first circuit board was an amplifier-based board, because that's what mine was. And it was, by no means, sophisticated or shockingly complex. It's just a funny coincidence. But I had to actually pull in measurements from a lot of instruments and then put it into a GPIB-linked series of instruments and then hook that up to LabVIEW. And that was a lot of fun. And for me at the time, that was certainly the most advanced thing I had ever done with electronics as a graduate student. And then, it just kind of took off from there. And now, here I am.
Joe Grand:
Yeah. That's exactly it. It's like trying to get things from the real world into some sort of format that becomes usable for something else, sort of the interfacing types of stuff. And, yeah, in the other podcast, we talked more about my history and how I got into all of that stuff. But really, it's just so cool to be able to, at any level, have something in your head or build something that does something.
Joe Grand:
And that's what's so exciting about it. And then, the hacker perspective really is interesting to bring to the engineering world, because not all engineers think like hackers, and not all hackers think like engineers. And especially for an audience like this, where we're designing products that are going out in the marketplace or that are being used, we kind of have to think about what hackers are going to do to our systems because it's only a matter of time, right?
Joe Grand:
If your product is worthy, somebody is going to want to hack it at some point. And it's sort of like there are a lot of things we can think about from a design perspective, even from a circuit board design perspective, layout perspective, and parts placement, all these things, that could be useful, even if it's too hard or too expensive to implement higher level types of security.
Joe Grand:
So there's just so many fun things. And being a hacker to me is sort of, I tell people, it's like people are always going to get sick. So being a doctor probably isn't such a bad thing. If you're into the medical profession, people are never going to stop being sick. So with hacking, it's the same thing, or security really. There's always going to be some security issues and hackers, either good hackers or bad hackers. I kind of use the term as both, because I grew up as a good hacker. And there wasn't really an intentionally malicious element at the time. And it wasn't like it is now, where there's the ransomware and groups specifically hacking to make money.
Joe Grand:
So I kind of use it as an overarching thing. But, yeah, hackers aren't going away. And part of it's curiosity. And we like to mess with stuff. And then, if it's the other element, if there's money to be made, there's going to be people trying to mess with products. So it's a great kind of world to be in because you get to see so many things from so many different angles and so many different approaches of how to not only design hardware, but how to break that hardware that's being designed.
Joe Grand:
And then as new products come out or new chips come out or new technologies come out, trying to figure out how you can still defeat those, so, yeah, it's just like a never ending kind of collection of things to do.
Zach Peterson:
Yeah, definitely a game of cat and mouse. And I know that these days, cybersecurity is top of mind. Whenever there's a major hack, it hits the news. But it seems like there's so much that goes on under the surface behind the scenes. You never hear about it just because it's becoming more and more common in the software world.
Zach Peterson:
But one of the things that I think that brings up, especially as more IoT devices get deployed in the field and the attack surface increases, is what happens at the hardware level, and what can designers do to implement even just some basic, barebones hardware-level security.
Zach Peterson:
What are some of the things that designers can do that are the hardware equivalent of Windows Defender? What can people do? That's one of the things that I wonder about. What's the basic antivirus program you can put on your hardware? And I suppose if you think about a typical embedded system, there's the application level, something that might happen in the cloud, or in a command center somewhere, whatever it may be.
Zach Peterson:
Then, there's your code level that's on the device and the application that you actually deploy on the device. And then, there's the hardware itself. And so it's almost like with software, you've got one big attack surface. But it's all on the web.
Zach Peterson:
With hardware, you've got three different attack surfaces. There's the code level, the application linking it all together, and then the hardware itself. Yeah. I mean, are the challenges and the thought process behind designing to protect devices that are in the field any different?
Joe Grand:
Yeah. I mean, that's a great point. So if you think about hardware products, they are basically for the most part general purpose computers running specialized code or generalized systems or even a specific system doing something. It's a computer.
Joe Grand:
So not only do you have the physical hardware stuff that we can talk about. But as you mentioned, you have the firmware level or software level, which is the code running on the thing. And then, you have usually some sort of connectivity side, whether it's network connectivity at some point. So, you have these three levels. And there's usually different people, depending on the size of the company and the product being built, working on those different levels.
Joe Grand:
And I don't know how much communication is going on between those groups. But at every one of those stages, there are security things to think about. I also like what you said too, about how there are certain times where we see hackers in the news, and we see a product being hacked or a database being released or Twitter, Facebook, or whatever, some network-related thing. It's almost to the point of every day, there's something.
Joe Grand:
It's like, oh, but there's so many things that you don't hear about. And it's usually the ones that you don't hear about that are the scary ones, because if there's an attack that we don't know about, but some organization has a zero day, so basically some known exploitable vulnerability in a product that hasn't been reported and hasn't been released and hasn't been fixed, those are extremely valuable. And that's a whole other world. If somebody finds a vulnerability, somebody like me, if I find a vulnerability in something, if I happen to be hacking on something, I would report that to the vendor, give them some time to fix it, depending on how responsive they are, and then go and share that with other people so they can better protect themselves.
Joe Grand:
Sometimes, vendors aren't as responsive. So you have to go kind of just full on open the kimono and release all the information to kind of force them to do something. But there are people that will find vulnerabilities and not release them and use them for their own advantage, whether it's a business advantage or financial gain or whatever it is. And those are the ones that you don't hear about.
Joe Grand:
So you don't even know necessarily if you're being exploited or attacked in some way. The whole thing is really hard to think about. If we want to talk about what we can do at a basic level, from a hardware, board level, I talk about low-hanging fruit. And what ends up happening is there's so many things that happen. Anytime a hacker like me goes and approaches the problem, whatever the product is, I always go through this kind of set process, which is funny, because you normally hear that hackers think outside the box and do whatever they want. And that's true.
Joe Grand:
But coming from an engineering background, I have my process that I go through. And then, within those steps, maybe I'll branch out and think about things a little bit differently. But just to give you that sort of idea, I kind of have an information-gathering stage where, even before I open up the product, I'm going to find whatever information I can about it: user documentation, information intended for the end user or for the customer, but then maintenance information, patents, certifications, marketing materials, press releases, all of these things that, on their own, maybe aren't a big risk or a big threat.
Joe Grand:
But if I know that company X has partnered with company Y and uses all of company Y's chips in their product, now I know that I'll target that company. And that's going to make it a little bit easier, especially in the case where a vendor scratches the part numbers off the chips or covers them in epoxy, which makes it slightly harder for me to get to. By reading about this relationship, I already know. And I have a pretty good guess at what those parts could be.
Joe Grand:
So information gathering's huge. Patents also. Companies are protecting themselves with patents, not necessarily as a way to engage in innovation, but to protect themselves against other people doing those same things. And within the patent, there's lots of information about how the product works or how the security mechanism works. Certifications, too. Even with FCC certification, a lot of information becomes public about how the system was tested: your schematics, your block diagrams, photos of the system.
Joe Grand:
And you can even search the FCC website, say, by the vendor and find things that were certified even before they're released to the market. So all these things that aren't even really hardware-hacking related, you can get a lot of information before you go to that level. So that's the first step.
Joe Grand:
Second step is going to be to tear apart the product, so disassembling it in a way that-
Zach Peterson:
Obviously.
Joe Grand:
... you can get access to the circuitry. And sometimes, that doesn't matter. Sometimes, we do that on one to figure out how it works. But then, an attack might be that we find some vulnerability that's over the network. So you don't always need to take apart the product. But it helps to give you all the clues you need to then go do something else.
Joe Grand:
Then, component identification. And what I really look at is not only the components themselves, but how they're connected. Engineers and manufacturers are going to need things like test points and debug interfaces. And generally, we're going to see silkscreen markings on there for the part designators and sometimes for even other stuff, because the more information that's on the board, the easier it's going to be for us as engineers to know what we're looking at, and for manufacturers to know what they're looking at. They don't have to reference other stuff as they're assembling or as they're reworking or whatever it is.
Joe Grand:
And then for test engineers as well, the more information that's there. So we can take advantage of what's on the board to kind of know where we want to target. Another thing I like to do is look specifically at buses and interfaces. So how chips communicate to each other.
Joe Grand:
So if you have a microcontroller and some external peripheral, or you have a microprocessor and an external memory, or whatever it is, usually, unless everything's integrated or you have an FPGA that's handling everything or an ASIC that's handling everything, you generally will have your main part with some peripherals.
Joe Grand:
Targeting the communication between those things by using a logic analyzer and oscilloscope, basically standard engineering tools for the most part. There are tools created by hackers for hackers to make hardware hacking easier. But in general, you can just use standard engineering tools. So as an engineer, you can go through the same process to verify and think about, "Okay, if I'm a hacker, this is what a hacker's going to do. Let's see how easy it is to do, or let's see if we can protect it in some way."
Joe Grand:
But looking at chip-to-chip communication, a lot of times, I2C, SPI, 1-Wire, even the faster interfaces, they tend to be transmitting in clear text. So data that's coming across is exposed, unless you've built in some additional encryption of those data blobs, which sometimes you can't do, because the peripherals that you're using just don't support that.
Joe Grand:
So a lot of that stuff can be attacked, that maybe we use in some way. And then, we do some signal modification or spoofing or things like that. And then, one solution, people are like, "Well, if we want to prevent people from sniffing buses on the board, let's just bury the signals on inside layers." And it's like, "Yeah, okay. You might be able to do that if it's a slow interface of some sort. But what if you're dealing with something faster, where if you bury it, now that affects the signal integrity of your system or of those signals or something?"
Joe Grand:
So the solutions to prevent some of this stuff and get rid of this low-hanging fruit are not always as easy as they seem. But that's the type of stuff: looking at these things and thinking about, how do they affect me as the designer? How do they affect me as the manufacturer? And then, how could they affect a hacker? Because anything that the engineer puts on the board to make the engineer's job easier is going to make the hacker's job easier too.
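To make the clear-text bus point concrete, here is a minimal sketch of what a captured SPI flash transaction gives an attacker. The capture arrays are hypothetical stand-ins for a logic analyzer export, but the 0x03 READ opcode with a 24-bit address follows the common SPI NOR flash command set, so anyone probing those pins recovers the data directly:

```c
/* Minimal sketch: recovering clear-text data from a captured SPI flash
 * transaction. The capture arrays are hypothetical stand-ins for a
 * logic analyzer export; 0x03 is the common SPI NOR READ opcode. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

int main(void) {
    /* One chip-select window: what the host drove on MOSI and what
     * the flash part answered on MISO. */
    const uint8_t mosi[] = {0x03, 0x00, 0x10, 0x20, 0x00, 0x00, 0x00, 0x00};
    const uint8_t miso[] = {0x00, 0x00, 0x00, 0x00, 'k', 'e', 'y', '!'};
    size_t n = sizeof mosi;

    if (n >= 4 && mosi[0] == 0x03) {            /* READ command */
        unsigned addr = ((unsigned)mosi[1] << 16) |
                        ((unsigned)mosi[2] << 8)  |
                         (unsigned)mosi[3];
        printf("READ at 0x%06X:", addr);
        for (size_t i = 4; i < n; i++)          /* data phase: the flash */
            printf(" 0x%02X", miso[i]);         /* answers in plaintext  */
        printf("\n");
    }
    return 0;
}
```

If a secret has to live in external memory, encrypting the blob before it ever crosses the bus is the usual mitigation, with the caveat raised above: the peripheral has to support it, or the main chip has to do the work itself.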
Joe Grand:
And that's what I consider the simple types of stuff. And we can go into all sorts of other things about component placement, component access, and all these other things to think about too. But it really is like starting with the most obvious things. And the problem is stuff like debug interfaces and test points, we know we need those. And like, "Okay, then, do we do password protection on it? Do we disable it, enable code protection or disable the debug interface after manufacturing, in hopes of preventing an attacker from getting access to that?"
Joe Grand:
But, yeah, then, you could do fault injection to re-enable that. And you're kind of at the whim of the chip vendors and how they implement security in order to give you security properly. So it's a whole mess of wires at that point.
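As a concrete example of locking down the debug path after manufacturing, here is a sketch using the STM32 HAL option-byte API to enable flash readout protection. The exact type and macro names vary by STM32 family, so treat this as illustrative rather than drop-in, and note the point made above: fault injection has been used to downgrade protections like this on some parts.

```c
/* Sketch: enabling flash readout protection (RDP level 1) on an STM32
 * at the end of manufacturing, via the ST HAL option-byte API. Names
 * vary by STM32 family; an STM32F4-class part is assumed here.
 * Level 1 blocks debugger access to flash; level 2 permanently
 * disables the debug interface and cannot be undone. */
#include "stm32f4xx_hal.h"

void lock_down_for_production(void) {
    FLASH_OBProgramInitTypeDef ob = {0};

    HAL_FLASH_Unlock();
    HAL_FLASH_OB_Unlock();

    ob.OptionType = OPTIONBYTE_RDP;
    ob.RDPLevel   = OB_RDP_LEVEL_1;   /* OB_RDP_LEVEL_2 is irreversible */
    HAL_FLASHEx_OBProgram(&ob);

    HAL_FLASH_OB_Launch();            /* reload option bytes */
    HAL_FLASH_OB_Lock();
    HAL_FLASH_Lock();
}
```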
Zach Peterson:
So what I'm hearing here is that you're... And tell me if this is wrong. But what I'm hearing here is that you're actually looking through the hardware and scanning the hardware to see what type of data you can pull off to figure out what is your other way in, because it seems to me that if you have a device deployed in the field, maybe it's connected to a network.
Zach Peterson:
There's an application running somewhere. You're not necessarily interacting directly with a device at the end of the day. I mean, maybe you are. Maybe you go out in the field somewhere and just mess with the device. And I guess if you wanted to disable something and you're in the field, you could take a hammer to it. But if you wanted to really take over a piece of equipment and still have it be operational, you need to know what it's doing on the device. And then also, how does it interact with its end applications somewhere else?
Joe Grand:
Yeah. I think it really depends on the product. Some people don't go through the detailed steps like I do. I just like seeing the physical layout to get a better understanding of what's happening. But if a hacker is more skilled in software, they might just extract the firmware and go straight to looking through the firmware for problems, or if it's a Linux-based system, for example, you pull off SPI flash or you pull off the SD card or whatever it is that has the kernel and the file system. And now, you do all your software tricks on that device.
Joe Grand:
So you don't have to necessarily always go to the physical board level stuff. I just think it's all about gathering clues and getting more information about the system. And that's what I like. But you don't need to. And if you have a network application, for example, or an IoT-connected something, if you can monitor network traffic and you see what it's doing, then, maybe you inject things or you find a vulnerability in the network configuration or other stuff.
Joe Grand:
But I do think that going through the steps, if you have the time and the energy to do it as a hacker, is typically what you do, so you can get as much information as you can. And when I teach classes about hardware hacking, I sort of explain that: "This is not always required. But the more information you get, the more clues you can get, are going to come in handy later, I think, when it's time to go do something else."
Joe Grand:
And one good example of that is like, say, you have a system that has some hard-coded credentials to log into some server on the backend or something like that. If you can get those from the physical memory of the device on the board, now you have those credentials, and assuming they haven't been updated or changed, now you can use those to do some other network attack, either acting as the hardware or sending data in some other way or accessing the network outside of what that device is supposed to be doing on the network.
Joe Grand:
So there's all sorts of other things. But it's like you get these little nuggets of information that are probably useful later on. And it's important to think about that from a design perspective of like, "Okay. We're doing an IOT device." And you almost always want to assume that somebody will get physical access to your device and get information off of it to get those clues.
Joe Grand:
And people always say, "Well, you get physical access, then, game is over." But you can still make it harder for the attacker. But it's almost like when I use my computer, I assume that my computer is compromised in some way, because I travel all over the place. I do take security precautions. So, please, don't try to hack me. But I kind of assume, "Okay, it's probably compromised. So I'm going behave accordingly."
Joe Grand:
And I think as a designer, you want to assume, "Okay. My product will be looked at. It will be tampered with," and proceed accordingly. If somebody gets physical access, does that matter? Maybe. Maybe not. If they are going to get physical access and say, "I do have some credentials or an encryption key or something I need to keep secret," let's think about that and say, "Okay, if I know that somebody is going to attack my system to try to find that piece of information, let's think about how we can better design our system to prevent that."
Joe Grand:
So maybe instead of using an off-the-shelf, general-purpose microcontroller, you're using something that's advertised as a secure microcontroller that has some secure elements, with some internal memory storage specifically for encrypted elements and other things. But if you don't know that in advance, if you don't know that somebody might get physical access and try to extract information, you won't know as a designer to even consider using a secure device.
Joe Grand:
And that's why I think just thinking like a hacker at all stages of the process, engineers and management and levels above that should all be thinking about this because you can only anticipate so much. And the things you can anticipate, you should try to implement or make harder to attack at some point.
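A minimal sketch of the hard-coded credential anti-pattern discussed here, alongside the secure-element direction. The secure_element_hmac function is a hypothetical stand-in for what a real secure element or secure microcontroller provides; real parts expose similar sign/HMAC operations so the key never crosses a sniffable bus:

```c
/* Sketch of the anti-pattern: a credential compiled into firmware lands
 * as plaintext in flash, where a dumped image plus a tool like
 * `strings` finds it immediately. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

static const char BACKEND_PASSWORD[] = "hunter2"; /* visible in any flash dump */

/* Hypothetical stand-in for a secure-element operation: the secret
 * stays in the secure part's key slot, and the host only ever sees
 * the result of a challenge, never the key itself. */
static int secure_element_hmac(const uint8_t *challenge, size_t len,
                               uint8_t mac_out[32]) {
    (void)challenge; (void)len;
    memset(mac_out, 0, 32);   /* a real part computes this internally */
    return 0;
}

int main(void) {
    uint8_t challenge[16] = {0};
    uint8_t mac[32];
    printf("bad:  \"%s\" is sitting in flash\n", BACKEND_PASSWORD);
    secure_element_hmac(challenge, sizeof challenge, mac);
    printf("good: the key never left the secure element\n");
    return 0;
}
```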
Zach Peterson:
Yeah. It sounds like thinking about what are the two or three areas of the device that are most important to protect, because from what I'm hearing, it sounds like there are some areas of the device that you don't really need to worry about. I mean, the obvious one would be a power connection or something; who's going to hack you through the power connection? But maybe the other one is part numbers. Do we have to scrub every part number? Maybe not. So thinking about those areas that are most critical [crosstalk 00:24:44].
Joe Grand:
That's right. And usually you want… That would be called a threat model, is what we call it in the fancy cybersecurity world. And that basically is thinking about how somebody could attack your system, and then you decide what's worth protecting. So if I'm designing a security system for my house, for example, which is what I did, I would say, "Okay, if I was going to break into my house, how would I do it? Where are the entry points? Where are the exit points?"
Joe Grand:
So, okay, I'm going to try to protect those in some way. Where do I put my cameras, or motion sensors, I should say, inside of my house, that are going to catch somebody? Even if they don't go in the traditional entry and exit points, I'm still going to find them. So you threat model all of this. And then it's like, how likely is it that somebody's going to climb up to the second floor and go through the window?
Joe Grand:
And you kind of say, "Okay, well, I don't need to put an alarm on every single window because if I put the motion sensor, that's going to catch them." So you think about all these things. That's what you want to do from a hardware perspective also, from a product perspective. Okay.
Joe Grand:
If I'm a hacker, I'm going to go for the debug interface. So if you get access to, say, JTAG or a vendor-specific debug interface on the board, that can be pretty damning. That's kind of the root access, the administrator access to the system at that point. If you can single-step through your code and you can change memory and change registers and do everything on the fly, an attacker's going to have complete control. But we know that we need those things for manufacturing.
Joe Grand:
So the solution is like, "Okay, let's see if we can lock that down." How does that affect us in the future? How is that going to affect repair? So we just have to model everything. And what's hard is nothing's 100% secure. It really is this cat and mouse game between how much time and effort is required to gain access to the system, and is gaining access to the system worth that amount of money, time, effort, et cetera.
Joe Grand:
And you have to think about that for every product. Every product is different. And if it's, say, an IoT device that maybe is a cheapo webcam, you might think, "Well, it doesn't matter if somebody hacks it." But then, you end up with something like the Mirai botnet, where there's all these insecure IoT devices that were hacked with default credentials and things that just weren't secure from a network perspective.
Joe Grand:
On their own, it doesn't matter if they're hacked. But now, when somebody has a million of them, they can be used for distributed denial-of-service attack or something else. And you don't want to be that company that is responsible for being hacked and being used in some malicious way.
Joe Grand:
As far as power supply, I will say, because you mentioned, well, usually, you don't have to worry about power supplies. But sometimes, you do. And there's a whole other world of a more advanced hardware hacking technique called fault injection, where you're basically forcing the system to misbehave in a way that's beneficial to you.
Joe Grand:
And this mostly comes into play when you're trying to defeat code protection of a chip, either to read from flash memory that's protected or to re-enable a debug interface that's been locked. Basically, anytime there's a yes or no check where you're trying to change the program flow, you can actually do fault injection through glitching the power supply, so kind of bringing the power supply line low very quickly, usually on the core voltage line of a chip.
Joe Grand:
But sometimes, you don't have access directly to the core voltage. You can do it on the general system voltage. You have to remove some of the capacitors and stuff so they're not actually cleaning up the glitches that you're trying to inject. So something that generally doesn't seem like something you should worry about could potentially be an issue.
Joe Grand:
The other one related to fault injection is with a crystal like an external crystal or an external oscillator to provide your heartbeat to your system. It's sort of as engineers, we kind of think about, "Okay, we need that. We need to put it as close as we can to the chip." And it provides the heartbeat, and it does what it does. We don't really think about the fact that, "Oh, what if somebody removes that, creates their own oscillation instead, say, with an FPGA and then at just the right point when they want, they can inject an extra pulse in there to cause what I say like a heart palpitation to kind of mess up the internal logic to maybe change the program flow, turn a yes decision to a no decision and then gain access to a system?"
Joe Grand:
So those are not typically things that an engineer thinks about. But now, we have to because these advanced attacks are so much easier now with the right types of hardware tools. So it's hard to anticipate everything.
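For designers wondering what firmware can do about this, here is a sketch of the classic glitch-hardening pattern: redundant checks, a non-trivial "unlocked" constant instead of a single flag bit, a random delay so the attacker can't time the pulse, and failing closed. The hardware hooks are stubbed for illustration; none of this is a guarantee, it just raises the cost of the attack:

```c
/* Sketch: making a yes/no security check harder to glitch. A single
 * voltage or clock glitch typically flips one branch decision, so the
 * countermeasures are redundancy, odd constants, and random timing. */
#include <stdint.h>
#include <stdbool.h>
#include <stdlib.h>

#define UNLOCKED 0xA5C3965AUL   /* hard to reach by flipping a 0/1 flag */

/* Stubs standing in for hardware hooks (assumed, for illustration) */
static uint32_t trng_read(void) { return (uint32_t)rand(); }
static bool password_ok(void) { return false; }
static void enter_failure_state(void) { for (;;) { } }

uint32_t check_access(void) {
    volatile uint32_t verdict = 0;

    /* Random delay: decouples the check from an external trigger
     * such as the end of a UART command. */
    for (volatile uint32_t i = trng_read() & 0xFFFu; i > 0; i--) { }

    if (password_ok()) {
        if (password_ok()) {   /* redundant check: one glitch should
                                  not be able to pass both branches */
            verdict = UNLOCKED;
        }
    }

    if (verdict != UNLOCKED) {
        enter_failure_state(); /* fail closed */
    }
    return verdict;
}
```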
Zach Peterson:
Yeah. Sure. Sure. And I kind of figured I would have to eat my own words when I said something about power supplies because you're way more [crosstalk 00:29:37].
Joe Grand:
Well, it's just one of those things that 10 years ago, it was known that you could do fault injection, but it wasn't as easy. And now it's really easy. Really, you could do it with a MOSFET, and you just crowbar core voltage to ground for X microseconds. And some chips are vulnerable to that.
Joe Grand:
So as time goes on and as technology increases, as attacks increase, it's like we wouldn't have been talking really about fault injection back then. And now, we are. So it's exactly like you said. It's very hard to keep up with everything that attackers are doing. We can only do our best and hope that we can learn from existing attacks on other products, that we can then use to make ours better, hoping that nobody hacks us first, which is hard.
Joe Grand:
So, I created the electronic badge for DEF CON 27, which is the world's largest hacker conference. And I designed the electronic badge very early on. We were one of the first conferences, as far as I know, to design a badge that was electronic and artistic and stuff. So I kind of retired for a long time, came back to do DEF CON 27. And it had a really cool NFMI, near-field magnetic induction, communication scheme and a lot of stuff I wanted to play with as far as the micro-BGA and tiny parts and things.
Joe Grand:
So I wrote the code. And it was open source, and people could explore it and see what's going on. And one of the attendees actually found some vulnerabilities in my implementation. So basically, I had some buffer overflows from copying memory without bounding it, not properly checking the length of things that I was copying. And then, he was able to create this whole exploit chain to essentially, through the wireless communication, execute code on the victim badge, which is cool and really amazing. Took him like three years.
Joe Grand:
But it goes to show that even somebody like me, who is trying to pay attention to stuff, and even though I wanted people to mess around with it, I wasn't really 100% focused on security. But it's one of those things that it happened. And when he came out with that, it felt a little embarrassing because, "Oh, I totally should have caught that." But at the same time, it's really hard to catch every single thing.
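For readers who want to see the bug class, here is an illustrative sketch, not the actual badge code: a copy whose length comes from the received packet itself, and the bounds check that would have prevented it. The packet layout is hypothetical:

```c
/* Sketch of the bug class: copying with a length taken from a received
 * radio packet without checking it against the destination buffer. */
#include <stdint.h>
#include <string.h>

#define MSG_MAX 32

void handle_packet(const uint8_t *pkt, size_t pkt_len) {
    uint8_t msg[MSG_MAX];
    if (pkt_len < 1) return;
    size_t msg_len = pkt[0];            /* attacker-controlled length */

    /* Vulnerable version: trusts the packet's own length field.
     *   memcpy(msg, pkt + 1, msg_len);
     * If msg_len > MSG_MAX, adjacent stack memory, including the
     * return address, gets overwritten: the start of an exploit chain. */

    /* Fixed: bound the copy by both the buffer and the actual packet. */
    if (msg_len > MSG_MAX || msg_len > pkt_len - 1) return;
    memcpy(msg, pkt + 1, msg_len);
    (void)msg;  /* ... process the message ... */
}
```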
Joe Grand:
So from a vendor perspective, from an engineering perspective, on the off chance that you do get hacked, hopefully, you don't. But if you do, it's something that you can't necessarily get angry about. You just want to say, "Okay, I made a mistake," and try to fix it and acknowledge it, because everybody makes mistakes. We're only human.
Joe Grand:
And even if you're trying to focus on doing the right thing, sometimes, you make a mistake. And it happens. And the larger the product is, the more complex it is, the more chance there is of somebody hacking it. And that's just the way it is. So if somebody finds something, hopefully, you can update that, make it better for the next time around and then make it harder for the attackers. And maybe that turns them off and they go somewhere else, or maybe they hammer on it again, and they hack it, and you fix it, and it gets better and better and better.
Zach Peterson:
Yeah. I'm hearing something interesting here because it's kind of the broader trend within the industry outside of IOT and embedded, which is the de-siloing and trying to get different groups to collaborate. I think the person who's actually doing the hardware, doing the physical layout, could probably have better collaboration with the person who's doing the firmware. And then, that guy could have better collaboration with the people who are doing the software and the web interface and everything.
Zach Peterson:
And there's just so many barriers that need to be broken down if security really is a priority. And so, I think that's where tools like Altium 365 can be really useful because you can put all this stuff in one place and everybody can check each other's stuff.
Zach Peterson:
And it seems like that is really one way to, I guess, help people catch each other's errors like you've mentioned. I know the open source community, sometimes, it's really valuable because people can put their stuff out there. Others can make edits to it. You can clone it. You can make it more secure. You can add features. You can make it better. And it almost seems like hardware teams should kind of take that same approach and model their collaboration [crosstalk 00:34:01].
Joe Grand:
That's a great point. And it's funny because a lot of times when we're talking about security and especially within manufacturing, we want to silo. And we want to compartmentalize. We don't want the circuit board manufacturer necessarily to have full insight into our schematic. We don't want a third party who's programming our parts to know what that is going into.
Joe Grand:
And [inaudible 00:34:25], you do want to compartmentalize. But from a design perspective, you don't want to compartmentalize. You want to have your hardware engineers talking to your firmware engineers, talking to the network or application engineers, because they all communicate at a technology level. So going back to the beginning of the conversation: your hardware, your firmware, your network, different people are working on those. But the firmware is running on the hardware. And somebody might do something in firmware that the hardware designer wasn't anticipating.
Joe Grand:
They might store credentials, for example, in an insecure piece of memory that the hardware designer wasn't intending that memory to be used for. They might have thought that was going to be used for user data that's not as important, or user settings.
Zach Peterson:
Well, then, if the board design is not anticipating that, someone could then come in and get that [crosstalk 00:35:19].
Joe Grand:
That's exactly it. So they could basically make use of a firmware level kind of mis-implementation or misuse by accessing the hardware.
Zach Peterson:
Well, I think it might even be something that would be innocuous from the perspective of a firmware developer. But then, the hardware guy comes in and says, "Well, I'm just going to lay this out like I've always laid it out." And if they're not thinking security, or if they're not even thinking, "Hey, someone could pull these credentials off of this interface because I've left the pins exposed," then it's just kind of [crosstalk 00:35:50].
Joe Grand:
That's exactly right. And that's why the hardware designer needs to talk to the firmware designer and say, "Okay, what information do you have? Where do we need to put it? What needs to be treated as protected? What can be open?"
Joe Grand:
And then even at that network level, how is the system being accessed? Where are those credentials stored? How is the network configured? How do they talk with the firmware engineer to understand all that intercommunication? So, yeah, you're right. There's all these little bits that can be attacked. And those are the areas where hackers are looking: those interfaces and kind of the mis-implementations or a mistake that somebody made. And if there isn't communication within the team, that's where things happen.
Zach Peterson:
Yeah. I also imagine that there is an important element here to think about, which is future proofing because you had brought up that people find an exploit or they find a mistake. Somebody finds it, whether it's an attack or not. It gets corrected. They update the application. They update the web app, whatever it may be.
Zach Peterson:
But at the hardware level, let's say you have a million devices deployed in the field and it's a hardware-level exploit. What are you going to do, send a technician out to a million different locations to swap out devices? I mean, it seems like at some point, if you screw up the hardware, that's it. And people have got to rebuy a million new devices.
Zach Peterson:
And there's only so much you can update away in the firmware if you're going with the typical MCU-plus-peripherals type of architecture that, I mean, 90% of embedded systems do. And it almost seems like something like an FPGA might be preferable, or doing some of that critical, security-intensive data processing off the device, if the device is constructed in such a way that maybe those interfaces are accessible. What do you think about that?
Joe Grand:
Yeah. I completely agree. Even if you think about software, so, now we're at the point where every week, we're getting updates to our software, Patch Tuesday, they call it. And now, there's a joke of Exploit Wednesday because when the patches come out for automatic updates for Windows, people look through those patches and see what was fixed in the code in the operating system, and then create exploits and say, "Well, why was that patched?" Maybe that's some exploitable security thing and then create an exploit based on the patch.
Joe Grand:
So even in the software space, where patching happens more often, it's still a really hard problem to stay secure. What also happens is people will turn off automatic updates, especially in large installations. They don't want to risk their systems going down by implementing a patch or implementing something without it being tested first. So that's why a lot of this malware and phishing and ransomware works, because it's taking advantage of exploits generally, unless there's zero days like we talked about before.
Joe Grand:
They're generally taking advantage of known vulnerabilities or weaknesses that the people who are targeted just haven't fixed. So software, given the fact that you can do network types of updates and stuff, is better. It's helping a little bit maybe. But from a hardware perspective, that's just multiplied by some huge number, because not only is it harder to do flash updates like you said.
Joe Grand:
But more importantly, if it is a physical board-level problem or chip-level problem, you're screwed. And there are chip vendors that will try to implement some sort of patching mechanism. So if there's, say, a vulnerability found in the boot ROM of the silicon, this is common for more of the secure microcontrollers.
Joe Grand:
Then, you might be able to add some code in there to patch that vulnerability, because you're not going to re-spin the silicon right away. But even those might have problems. But it's something that, a lot of times as designers, we're just inheriting the security, or lack of security, that was given to us by the chip vendors. And at the time, we might say, "Okay, chip vendor X seems to have good security. They're protecting things in the right way. We're going to use that device as intended or as recommended by the vendor." And we're secure for X amount of time until somebody finds a vulnerability. And now, all of a sudden, all your devices are vulnerable.
Joe Grand:
And that is probably the hardest problem of hardware security from a design side: once things are in the field, you may not be able to do a firmware update to fix it, and even doing proper firmware updates means having code signing and verifying that the firmware that you're loading on is legitimate and hasn't been compromised. But the fact is fixing hardware and replacing hardware is expensive, and generally it's not going to be done.
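A sketch of the code-signing check being described: verify the update image against a vendor signature before writing it to flash. The header layout and the sig_verify and flash_write_image hooks are hypothetical stand-ins; a real design would use an established signature scheme such as Ed25519 or ECDSA, with the public key stored in immutable memory like ROM or OTP:

```c
/* Sketch: refuse to apply a firmware update unless the image verifies
 * against the vendor's public key. Hooks are assumed, for illustration. */
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    uint32_t image_len;
    uint8_t  signature[64];   /* signature over the image body */
} fw_header_t;

extern const uint8_t VENDOR_PUBKEY[32];               /* in ROM/OTP */
extern bool sig_verify(const uint8_t pubkey[32],
                       const uint8_t *msg, size_t len,
                       const uint8_t sig[64]);
extern void flash_write_image(const uint8_t *img, size_t len);

bool apply_update(const uint8_t *blob, size_t blob_len) {
    if (blob_len < sizeof(fw_header_t)) return false;
    const fw_header_t *hdr = (const fw_header_t *)blob;
    const uint8_t *body = blob + sizeof(fw_header_t);

    if (hdr->image_len > blob_len - sizeof(fw_header_t)) return false;

    /* Reject anything the vendor key didn't sign: an attacker who can
     * feed the updater arbitrary images gets nothing persistent. */
    if (!sig_verify(VENDOR_PUBKEY, body, hdr->image_len, hdr->signature))
        return false;

    flash_write_image(body, hdr->image_len);
    return true;
}
```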
Joe Grand:
And one example, I'm not going to mention the vendor's name, but one chip vendor has billions of chips that are out in the marketplace in mid-range and low-range, mostly consumer devices. A very, very well-known chip vendor has a vulnerability in their chip that they know about. But they haven't updated their silicon. I don't know if that's just because they want to steer engineers more towards their newer version that maybe has better security.
Joe Grand:
But whatever it is, the billions of devices that are already in the field are now vulnerable. Now that this attack is known, somebody could, if they get physical access, for example, extract the firmware or really change the firmware, change functionality. And those things are going to remain vulnerable until the product life cycle goes through. And then eventually those products get replaced.
Joe Grand:
And then, there's a new version that comes out. So yeah, I mean, it's a world where, like I said, security is never going to go away because you're always going to have vulnerable devices besides the new devices that come out, you have all the old devices that are vulnerable too. So really, it's a very, very hard challenge.
Joe Grand:
And I don't envy anybody that is responsible for designing the security of a system, because it is very, very challenging and very hard.
Zach Peterson:
Yeah. So there's something that comes to mind, I guess, at least on the physical side, because that's really where board designers have to spend 90% of their time, unless they're also having to do firmware. But at least on the board design side, it sounds like there are a few practical things that are just kind of barebones, Windows Defender-level security tactics: bury the critical traces, black out the critical components, maybe use non-specialized packaging so that it's not so obvious what a particular component might be.
Zach Peterson:
I'm wondering, are there any packages that are actually more secure than others? It would seem to me with a BGA, if you've got your microcontroller in a BGA footprint, only those outer balls are really the most easily accessed, because everything after those first two rows is going to have to go through an inner layer. And if you do it with blind and buried vias, then you can put everything in the inner layers and come up somewhere else, wherever it needs to go.
Zach Peterson:
And then as long as you leave those output points concealed, then, it's that much harder to physically probe, or is this just an issue that if somebody tries hard enough, they're going to figure out how to get [crosstalk 00:43:22]
Joe Grand:
Yeah. I mean, a lot of what you just mentioned would be called security through obscurity. So if you're scraping part numbers off the board or off the chip, and you're leaving your silkscreen off the board, you're burying your traces, you're using BGA or micro-BGA, or even, say, chip-on-board, or maybe a package-on-package, you can get more complicated, but that basically is just going to slow the attacker down. And it might limit the capability, or it might require more advanced capability to do.
Joe Grand:
So, for example, a BGA part, maybe, that's going to turn away some curious high school kids that maybe don't have access to some rework equipment. But if you have access to those things, you can take the BGA off. You can very easily now make your own circuit boards.
Joe Grand:
Obviously, with Altium and lots of other tools, you can create some sort of interface board, some sort of shim to sit in between the BGA part and the rest of the board and break out the signals that you want. So there are things to do. It just makes it harder: instead of soldering wires or tapping onto a leaded part or clipping on with a chip clip or something, you have to think more about the attack process before you do it. And, yes, it makes it much harder. But people who are determined will still go after it.
Joe Grand:
But maybe it's like, okay, if I know they're going to target my part, we make it BGA. We use BGA underfill to make it harder to get access to the balls. Maybe you add on all these additional layers. You enable whatever security you can. Maybe you encase it in epoxy as well. It all depends on how long you want to prevent the attacker from getting to it, is what it comes down to. But, yeah, it's really like, if somebody's determined enough, they will. It just makes it more difficult.
Zach Peterson:
And then, I wonder if there's ever any… And this is going to sound like total James Bond kind of stuff. But I wonder if there's any way to design a device so that if you do try and access it, you actually eliminate all the data or just destroy the critical parts of the device.
Joe Grand:
That happens. Yeah. So those would be things called anti-tamper mechanisms. And that would be kind of physical security for your electronic system. Generally, we'll see that with accelerometers when something's enabled. Wait a second, this power meter on the side of the house, once it's installed and operating, it shouldn't be moved. So if it's moving, [crosstalk 00:46:00] know.
Joe Grand:
I've seen light sensors. So if there's, say, a photodiode on a board and you open the package, assuming the system's running or is powered in some way, it knows. It says, "Wait a second. I see light when I shouldn't." Another really common one is using switches, micro-switches of some sort, push-button switches, or maybe just two pins with a connector that gets pulled off when you open the housing.
Joe Grand:
So you can do those things to prevent the physical access. And that really is very similar to a home alarm system where you have these different things that sense if there is something happening and then respond in some way, which usually is going to erase memory, maybe, it erases a firmware. Maybe, it does physically self-destruct.
Joe Grand:
Depending on what you're working on, I've seen systems that will destroy certain parts of their components or erase certain areas of memory or, more commonly, remove power from the battery-backed RAM that's storing the critical things. So yeah, those are all things that, especially if used in layers, are good, because if you know how that is done, you can generally figure out a way to defeat it, which is why the information-gathering stage and that product teardown stage, if you have multiple devices, is useful, because then you can figure out what those mechanisms are and then defeat them to get what you need. And then maybe the next time you attack it, you don't need to do that again.
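A sketch of the tamper-response pattern described here: a case switch or photodiode raises an interrupt, and the handler's first job is to zeroize the keys held in battery-backed RAM before doing anything else. The register address, sizes, and handler name are hypothetical and vary by part:

```c
/* Sketch: tamper interrupt that destroys secrets first. The key-RAM
 * address and handler name below are assumed, for illustration only. */
#include <stdint.h>

#define KEY_RAM_ADDR  ((volatile uint8_t *)0x40024000u)  /* assumed */
#define KEY_RAM_SIZE  256u

static void zeroize_keys(void) {
    volatile uint8_t *p = KEY_RAM_ADDR;
    for (uint32_t i = 0; i < KEY_RAM_SIZE; i++) {
        p[i] = 0x00;   /* volatile writes: the compiler can't drop them */
    }
}

/* Hooked to the EXTI/GPIO interrupt for the case-open switch or the
 * photodiode comparator output (pin and vector names vary by part). */
void tamper_irq_handler(void) {
    zeroize_keys();    /* first priority: destroy the secrets */
    /* then log the event, halt, or brick the unit per the threat model */
    for (;;) { }
}
```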
Joe Grand:
You also did mention FPGAs, and that's something where personally from a hacker perspective, FPGAs I think are very difficult to work with. So they really do make a great kind of opaque box that if you're trying to offload some critical functionality or maybe you use an FPGA with a hard or a soft core inside, now you have your microcontroller and your peripherals all in one chip. It changes the attack approach.
Joe Grand:
And it's something where there are researchers looking into, how can we reverse engineer the bit streams to figure out what they do? How can we modify bit streams to perform some additional functionality within FPGA? But it's way harder compared to extracting firmware from a microcontroller where now you can just look through the code line by line and disassemble it and see what's going on and patch it and do whatever you want.
Joe Grand:
So is it good? I guess it's also maybe that security through obscurity, if you're just trying to make it harder, but it will definitely make it harder. That doesn't mean it's not still going to have a debug interface. It's still probably going to have inter-chip communication if there's outside stuff. It might have a UART interface for human-readable boot log information or whatever.
Joe Grand:
So it depends on how the FPGA is used and configured. But it does make it harder, because somebody's not going to go grab a data sheet of the FPGA and know exactly what it does. You have reconfigurable pins. You have a data sheet that's 5,000 pages about how every pin can do everything and how you can configure the logic in a million different ways. That's not as helpful as finding a data sheet that says exactly what each pin does and what peripherals are there and everything.
Joe Grand:
So that is a possibility too, of using an FPGA or even an ASIC: using your FPGA for the design, and then if your volumes are high enough, you start doing some custom chip fab as well.
Zach Peterson:
That's another great point. Yeah. Going into an SoC that you can now take full control over in terms of where the interfaces are located on the physical device.
Joe Grand:
Exactly. Yeah.
Zach Peterson:
How they're going [crosstalk 00:49:38].
Joe Grand:
We haven't even touched on chip-level hacking, of hacking the chips, looking at the silicon. That's a whole other world. But FPGA vendors do seem to be more cognizant of that than general-purpose microcontroller vendors, meaning they have had attackers try to decap their chips and find out what's going on. But, yeah, decapping and that physical chip-level stuff is the next level down, where you run into the same issues that you run into at the board level, as far as identifying signals and injecting signals and patching things and soldering and de-soldering, cutting traces, all that.
Joe Grand:
But now, you're at a microscopic level. And, eventually, you just can't hide. If somebody can get access to individual transistors or your flash memory area, or your configuration bits, or your secure key storage or whatever it is on your die, it's usually game over. But it also now increases that complexity way more and then limits the type of attacker that's actually looking at your stuff.
Zach Peterson:
Yeah. Yeah. And it seems there are so many more vulnerabilities that, if they are on an FPGA, are much easier to reconfigure away compared to the typical MCU-plus-peripheral type of hardware.
Joe Grand:
Yeah. Especially if it's a hardware or chip-level implementation thing, you could try to reload the bit stream and hope that's okay, as long as that bit stream-to-FPGA interaction is secure. Generally, most of the time these days, there is some sort of code signing and protection for doing FPGA reconfiguring, as opposed to microcontrollers, where not so much.
Zach Peterson:
It's extremely interesting. And you're opening my eyes to this whole other area of electronics design that I don't get enough exposure to. So I definitely want to thank you for coming on the podcast with me. Super, super informative. And I've learned a ton. And I'm probably going to email you personally and ask you questions. [crosstalk 00:51:42]
Joe Grand:
Sure. Yeah. I mean, if anybody watching-
Zach Peterson:
Hopefully, that's-
Joe Grand:
... watching has questions, feel free to reach out to me also. And I think the main takeaway for all of this is, as you're designing, as you're having your design reviews, as you're doing every level of your project, just think: what would an attacker do? Why would a hacker want to get to my thing, and what would they want to do with it? If you put yourself in that mindset at every stage, you're going to be much better off than if you just pretend it's not going to happen.
Zach Peterson:
Sure. Sure. That makes total sense. Well, thank you again so much for coming on the OnTrack Podcast. And everybody out there who's listening, again, this is Joe Grand. I'm going to dig up your previous AltiumLive presentation and we'll put it in the show notes. We'll have some other links in the show notes. And if you haven't signed up for AltiumLive this year, go over to altium.com/summit, and fill out that registration form, and sign up. Thanks again, everybody, for watching. And don't stop learning, and stay on track.