AI Vision Systems in Manufacturing with Deepview's Eliyahu Davis

Created: July 31, 2024

Join us on the Altium OnTrack podcast as Zach Peterson sits down with Eliyahu Davis, the lead engineer at Deepview AI. The two explore the future of AI vision systems in manufacturing. Discover how Deepview AI is revolutionizing factory automation with advanced deep learning cameras.


Key Topics:

  • The role of sci-fi-inspired vision systems in manufacturing.
  • Insights into Deepview AI’s groundbreaking products.
  • The impact of AI on factory automation and efficiency.
  • Eliyahu Davis’s background and contributions to the field.

Links:

Connect with Eliyahu here

Learn more about DeepView here

Transcript:

Eliyahu: What motivates us is more about doing useful things on the path towards our sci-fi future, right? So if the sci-fi future is computer vision systems that can drive cars, or computer vision systems that are part of a sort of general intelligence, and I hate to even use that term because it's so dubious, but approaching that, we really wanted to say, where's a product, where's a market where we can add a lot of value, that we understand, that we have experience in and can solve problems in? But what motivates us really is the long-term future.

Zach: Hello everyone, and welcome to the Altium OnTrack podcast. I'm your host, Zach Peterson. Today we're talking with Eli Davis, lead engineer at Deepview AI. Deepview is now part of the Altium Startup Program, so I'm very happy to be talking to them about their technology, and I think this is going to be a very interesting conversation. Eli, thank you so much for joining us today.

Eliyahu: Yeah, thank you so much for having us. And I know we recently joined the program, but we've been using Altium for almost five years now. So I don't know if we're veterans, but we're getting there.

Zach: That's, that's great to hear. Yeah. And of course coming into this startup program, I think a lot of people come in not having used Altium before, but that's really cool to hear that you guys actually have been using Altium for quite a while.

Eliyahu: Yeah, totally. And I can share with you some of our board designs. We've gone through six iterations of our primary product, which is a deep learning camera for factory inspection, and all of our hardware design from the very beginning has been in Altium.

Zach: That's very cool. So I think before we get into all of that, could you maybe just briefly tell us a bit about your background and how you got into this area of engineering?

Eliyahu: Yeah, totally. Well, growing up I read tons of sci-fi, like probably a lot of engineers out there. My dream was always to work at NASA on interplanetary, even interstellar, spaceships. As a kid, that's what got me into engineering. So I studied computer engineering at the University of Miami in South Florida. We called it Suntan U, but for me it was also engineering-till-two-in-the-morning university. Then I worked a couple of summers at the NASA Marshall Space Flight Center in Huntsville, Alabama. NASA has 10 centers, and that's where they design the rockets: that's where the Saturn V moon rocket was designed by von Braun, where the space shuttle was designed, and then where the SLS was designed. I worked on the flight software team as an intern while they were working on the Space Launch System. This was around 2014, 2015, and it's since launched. Working as a NASA engineer led to a bunch more experiences in aerospace, and ultimately I wanted to start a company with my family in Michigan, and that's what led to Deepview. We're a Michigan company, and we're taking some of the advanced deep learning and analytics work that we learned in aerospace. I had another fortunate experience after college, working about four years at an aerospace electronics company, and when that company was acquired, we started Deepview in 2019. So the rough order is: computer engineering, NASA, four years in aerospace, and all of that deep learning and manufacturing experience then went into Deepview in 2019. The goal of Deepview is to create really useful products that use deep learning for computer vision.
So there's a ton of interest in large language models, and their capability is astounding and growing more so each day. But we also think that vision is an essential component. You're starting to see multimodal models that can process images, but if you look out into the future, a processing system that can only operate on text is fundamentally limited. Ideally it can experience all the modalities that we do, if not in the same way then in its own way. And vision is really at the heart of that; a large portion of our brains processes visual information. A deep learning system of the future, whether it's a humanoid robot or potentially a general intelligence, has to be able to experience the world in a way that is not just text. You can imagine if you did nothing but read books in a dark room with no lights on; as a loose analogy, that's kind of what an LLM is. So you want an intelligent system that can experience the world, and that's what Deepview is working on. We've got products that we're releasing along the way, and our first segment is the factory, where there's a $20 billion market for visual inspection systems. Historically it might take 40 hours of programming per camera to set up one of these computer vision systems that live in factories; if you go to an automotive factory, you might see hundreds of cameras inspecting parts. Our website says "the next generation of machine vision," and for that $20 billion market we're creating useful products that use deep learning to take the load off of the human setup. Instead of taking 40 hours to set up a camera to inspect a part,
you know, you just give it 10 or 20 examples of good parts and 10 or 20 examples of bad parts, and have a system that can visually process with very limited examples. So that's a whole whirlwind, but really, aerospace got us into engineering, and then we looked at what markets we could be successful in while working on this long-term deep learning problem. And that's what led us to the factory.

Zach: Okay, that's interesting. I don't often hear about people making the switch from, let's say, aerospace over to industrial automation. But I think you've got a pretty interesting idea here, which is making a factory automation inspection system really plug and play. How big of a pain point is that for companies? You mentioned a 40-hour setup time just for cameras. Is that kind of setup time really an issue across all sorts of different types of inspection and factory automation systems, whether they're visual or otherwise?

Eliyahu: Yeah, totally. And let me just give you some more background, because then it'll make more sense. I graduated college in roughly the 2015, 2016 timeframe, and right when I graduated, even 10 days before I graduated... so Deepview is actually our second company. My brother and my dad and I started a company right as my brother and I were graduating; my brother also graduated in 2016. So we started a company in April of 2016, and that's really an engineering services company focused on the automotive industry and on factory automation. So it was really two experiences, because basically from 2015, 2016 to 2019, I was both an aerospace software engineer at an electronics company and a small business owner in engineering services for the automotive industry. It was really doing both of those jobs together: the software and electronics experience on one hand, and on the other, living in Michigan, working in factory automation. At the engineering services company we had several engineers who were full-time installers of factory cameras, vision engineers in the industry parlance. So it was both the experience of writing software and the experience of having our team installing cameras that led us to discover, hey, these things are really cumbersome to set up. Literally, some of this camera software is designed around what's called a cockpit, like an airline cockpit. And the analogy is that if you go into an airline cockpit, there are thousands of knobs, and it's dizzying to see all the things that are going on. So on one hand we wanted to make something that's super streamlined. On the other hand, a customer that's buying a camera may not actually care that it is very expensive to set up.
I mean, for them it's like, okay, do I spend $10,000 on hardware or $40,000 on hardware with installation? For some of the larger companies, that may not move the needle; they may not actually care, or it may not make a difference. But for those kinds of companies, if it's not cost and efficiency, then it's enabling new types of applications. And especially being a new company, you typically don't get the work that is tried-and-true easy applications. You get the bleeding edge until you're the proven provider that is then trusted, and then you get all the mainstream stuff. So in our first, I would say, maybe 15 installations or so, it's all been pushing the boundary of what's possible by using deep learning to solve vision tasks that would otherwise require a person to hand-sort parts. There are something like 500,000 quality inspection workers in the US who are in factories just sorting parts all day for quality. I think the US is still the world's largest manufacturer, and people don't realize that. Manufacturing is a huge part of the economy, maybe a third of the US economy, maybe more. So you've got 500,000 people doing these checks. And another detail is that if you have a complex automated production cell, you may not be able to have a person inside of that cell. We have one customer I'm thinking of right now where their whole, very expensive machine is only feasible because they're able to get really tiny cameras installed inside the production cell. So it's both simplifying and enabling new applications that were otherwise unfeasible.

Zach: Okay, that's really interesting. You mentioned earlier that the factory inspection market was $20 billion in terms of market size. Within that $20 billion, which part are you addressing, and what types of parts can be inspected with your system? I know earlier you mentioned automotive, and I think that's a pretty big one, but is the type of system you're developing really aimed at pretty much any part that could come off a factory line?

Eliyahu: In the long term? Ideally, yes. We're segmented to specific types of applications currently. So if you take that $20 billion market, it's roughly $5 billion in camera systems, and the other $15 billion is advanced metrology, which is a fancy word for measuring, and 3D sensors. We're in that $5 billion camera market. Within it there are really four main segments; the acronym is GIGI. The first one is robot guidance. Number two is identification, which is reading barcodes and data matrices. Then there's gauging, which is mainly part of that $15 billion, right? And then there's inspection. So we're in that inspection vertical, one fourth of the market. And even for inspection, there are really two types: quantitative and qualitative. Quantitative inspection would be, I need to make sure this part is within a certain micron tolerance, or I need the gap between these two car doors to be less than a certain spec. Then you've got qualitative inspection, and that's really where the 500,000 human inspection workers come in. They're not necessarily doing quantitative things; they're doing things you can see with your eyes, whatever the part might be. It could be a seat, a tire, a brake, or pharmaceutical packaging, right? Every sector of manufacturing is represented in inspection, because with the entropy of the way things happen, ideally a production cell is perfect, but typically, if you're running millions of parts, things go wrong. And you can root-cause those; sometimes you can and sometimes you can't.
So if you can't root-cause something, that's where we come in. We're in the qualitative segment of the market, and we tell customers there are two main criteria: it's for things you can see with your eyes, that a person could do, and you need to be able to give us enough examples to train the deep learning system. If you can give us, typically, on the order of 20 defective parts and 20 good parts, that gets us going. And then for very high reliability, like 99% in production, you might need to give us a hundred good and a hundred bad. As you train the system, it builds an internal representation within the deep learning model of what it's identifying. So we're in that $5 billion camera market, looking at inspection for qualitative, subjective things, and our goal is to approach the reliability of a person. A person who is very focused will always exceed the capability, but the problem is that some of these plants run seven twelve-hour shifts a week, and if you're pulling a hundred hours a week, at a certain point you just can't maintain that same level of focus. So the goal is to approach the reliability of a very focused person, you know?
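The training recipe described here, on the order of 20 good and 20 bad example images, can be sketched in miniature. The sketch below is a toy stand-in, not Deepview's actual pipeline: the 8x8 "images" are synthetic, and a plain logistic-regression head fills in for a real deep learning model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for ~20 "good" and ~20 "bad" part images (8x8 grayscale).
# Real examples would come from the factory camera; the bright patch
# on the "bad" parts is a toy proxy for a visible defect.
good = rng.normal(0.2, 0.05, size=(20, 8, 8))
bad = rng.normal(0.2, 0.05, size=(20, 8, 8))
bad[:, 2:5, 2:5] += 0.8  # simulated defect region

X = np.concatenate([good, bad]).reshape(40, -1)
y = np.concatenate([np.zeros(20), np.ones(20)])  # 0 = good, 1 = bad

# Minimal logistic-regression "head" trained by gradient descent,
# standing in for fine-tuning a small vision model on-device.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(bad)
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

def predict(img):
    """Return True if the part looks defective."""
    return img.reshape(-1) @ w + b > 0
```

With 40 cleanly separable examples, even this tiny model sorts the training set correctly; the point of a real deep learning model is to generalize the same idea to subtle, high-resolution defects.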

Zach: Yeah, that makes a lot of sense. Now you've mentioned providing examples to the system and then you know, both good parts and rejected parts to the system so that it can, you know, learn the difference. If you were to, let's say, take some of those rejected parts and tag them with a root cause or a known root cause of the defect, would the system also then be able to infer the root cause of defects for newly noticed, let's say parts that it has to then reject? So once you put it on the factory line, if it starts noticing defects, it could then possibly suggest, hey, this is probably the root cause, or are you guys not there yet?

Eliyahu: That's a very interesting question, and the way I would think about it is: imagine you're a quality inspection worker. You may or may not understand the underlying production process. If it's an injection molding press, the odds are that if you're inspecting parts, you probably don't understand that press, but there's probably somebody in that factory who does. Typically it's the automation manager, or a process engineer or manufacturing engineer. The manufacturing engineer or manager will have a deep understanding of that production process and may be able to root-cause it, but the person who's inspecting the part probably doesn't necessarily understand how the process works. So I would say it's kind of out of scope for our system; it's more just finding those things, and the burden is on a person to root-cause them. It's a really interesting idea, though, because you sometimes hear talk about lights-out automation, where you can imagine a fully automated factory of the future with no humans involved whatsoever, right? You could build a factory on a mine and have automated mining equipment; in theory you could have fully automated transport of whatever raw materials into that factory. I mean, we're getting into the realm of science fiction, but really a governing mechanism: a factory has a plant manager who's a person, and it may be within our lifetimes that a computer can perform that same plant manager function, which is a very difficult task, managing a very complex factory, right?

If you get to that point, then you probably have systems that can govern a whole production process. But for where we're at, that's more science fiction, and hopefully it's within the realm of the feasible in the next 10, 20, 30 years. It's the subject of debate how intelligent these systems can get.

Zach: No, I think that's fair. With lights-out automation and trying to take defects and find root cause, it seems to me that the amount of data that you would need to apply to each defective part, in terms of tagging, gets really large, especially when you have a really complex process. The more steps you have leading up to the production of the part being inspected, the more data you would have to apply in order for a machine learning algorithm or an AI to infer what the potential root cause of the defect was, right? And even then, it's probably only a suggestion, right? It could say, well, it's 95% likely that it's this root cause, but there could be these two or three other things.

Eliyahu: Right. The way things typically are now is that root cause analysis and inspection are kind of complementary, because you would think, just root-cause it out. But there are some processes, and I'll give you an example. If you're molding a piece of plastic that could be a car interior part, like a cup holder, that might work 99.99% of the time. But then what happens when there's a transient voltage that changes the pressure in the machine, or a little air bubble gets into the material? A lot of times companies will say, here are our known failure modes, here are our mitigation steps, and then a vision check is sort of a redundancy. There are certain parts that even have two people checking, because one person is not reliable enough. It's also a quality-of-labor situation: in some labor markets you'll have one person checking a part, and in other labor markets you might have two people checking that part, based on the level of reliability. So for any part that you really don't want to fail, you want to root-cause out everything, and then for whatever known failure modes can still happen, let's say you optimize the production process perfectly, sometimes there are still things that can happen, and there may not be immediate remedial measures to fix them. So you're just trying to cover your bases in both ways. And the quality of the product is often the quality of the company. A Toyota car that lasts 250,000 miles versus another car that may not last the same amount: that's a very tangible difference.

Zach: Yeah, that's interesting that you say it's also related to the quality of the company. I think Toyota is always brought up as a really great example of a company that produces high-quality products. So even if you did have a system that could identify these potential root causes, at some point it's the culture of the company that's responsible for implementing that and actually figuring out how to solve the problem.

Eliyahu: Yeah, totally. And to say something really positive about Japanese companies in general, the leader in factory robotics is Japanese, and if you read their investor documents, or what they put out there, there's a level of love and attention that goes into the product, and that can't be faked, right? It's a spectrum where on one end you have love and attention and care, and then there's the other end of the spectrum too, right? So it's one of those things where it's a level of pride, or love, that you put into your process. And I think it's a really interesting cultural thing that anybody on a Toyota production line can stop the line, whereas in other cultures stopping the line is very taboo, because it's like, we've got to hit our numbers, you just slowed us down. Instead there's a methodical approach and an empowering of the production worker to really take ownership of the entire process. And I think that's the ideal situation, whether it's employee ownership or other arrangements, where each person is a stakeholder in the quality of the product. That's what leads to a car that lasts 250,000 miles, right? Every part of the company has a lot of pride. You could apply that to any sector of the economy: if somebody's got a lot of pride in their work, they probably do a great job.

Zach: You know, from listening to you talk about this, it sounds like your philosophy aligns with mine a little bit in terms of the role of AI, which is that your product is meant to be an enabler, and probably an enabler of maybe that newer factory worker who is still trying to get to grips with the process and all the intricacies. This gives them another tool to maybe operate at a higher level than they would be able to otherwise. Do you think that's a fair statement?

Eliyahu: I would say that what motivates us is more about doing useful things on the path towards our sci-fi future, right? So if the sci-fi future is computer vision systems that can drive cars, or computer vision systems that are part of a sort of general intelligence, and I hate to even use that term because it's so dubious, but approaching that, we really wanted to find a product and a market where we can add a lot of value, that we understand, that we have experience in and can solve problems in. What motivates us really is the long-term future, and building that capability so that we can do more advanced things over time. But within that factory market, what motivates us is that if a customer's got a multimillion-dollar machine and they need your vision check to make that machine actually feasible, that's very motivating, right? Because now you're able to run that process in an automated way. And I think the whole promise of automation is that ideally you don't have 500,000 quality inspection workers doing checks. Ideally they can do things that are more interesting, and there's such a labor shortage that it's not the same question as before, which was, what are these people going to do? It's more that our whole economy could create so many more new, useful things if we didn't have labor inefficiencies like quality checks, for example. But whether somebody's experienced or inexperienced, ideally it can just be a helpful tool that provides something useful while we build for the future.

Zach: Yeah, that's really great to hear, and you've brought up something really important, which is labor inefficiency. It's not necessarily about replacing people, but maybe helping them do more with their time.

Eliyahu: Yeah, totally. And I think one amazing thing of the last 10, 15 years, with cell phones and things like Uber and DoorDash, is that it's almost like there's more work out there, in terms of enabling people to do things. Whereas I remember when I was growing up, I worked in a factory out of high school in 2009, and it wasn't like anybody with motivation could just go do work. You were lucky to be able to get a job. When I was in high school, I washed dishes, I worked at McDonald's, I worked at Logan Steakhouse, I worked in a factory cutting rubber, waking up at 5:00 AM, right? Whereas as technology evolves, there should really be a proliferation of goods and services, to where now somebody can pick up their phone, drive for Uber, work three jobs if they want to; before, there just wasn't that kind of opportunity. So the more opportunity there is out there, the more the machines can ideally do the things that we don't want to do.

Zach: Yeah, I think that makes a lot of sense. Switching gears just briefly over to the technology development: your system is described on your website as having built-in neural network training and development. When I hear that, it sounds to me like all the training and inference is done on the device, rather than happening on a server somewhere or in the cloud. Is that correct, and is that the intent of how this system works?

Eliyahu: And what people may not realize is that the cloud is really not allowed in factories. If you think about it from a factory standpoint, if you have to make a million parts per year, you don't want an internet disruption to stop your production line. Also, in the automotive industry, if you're supplying parts to the F-150 and you shut down the line, you can be charged hundreds of thousands of dollars, and it's per minute. If you're a supplier making one part out of 10,000 that goes into the F-150, you just can't stop that line, right? So anything that is a perceived impediment to production is really a non-starter. What we wanted to do is put the training capability into the end device, so that you can install it on the production line, capture some examples, and ideally be up and running the same day, provided you have enough examples to train the system. And there are other innovations too, but that's the really core one: take a very lean edge device and then write super efficient software to enable that. Typically the cloud means running a giant model, and it's sort of easy to get results with a giant model; it's hard to get results with a very tiny model. So it's really about a very efficient training process: if we're training on 40 images, we can train on-device in about 15 minutes. Our metrics are always minimal examples, minimal training time, minimal setup, to where in the next three to five years we want to have a model that basically doesn't need to be trained at all. If you have a foundation vision model that has just been trained on all the images that humanity has on the internet, right?

If you train it on every image that's out there and you have that foundation model, ideally you can just install the camera and say, here, find any defects, right? There won't even necessarily need to be a training step, because the model will have the capability built in. That's what we're aspiring to get to.
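The "train on 40 images in about 15 minutes on a lean edge device" claim can be sanity-checked with a back-of-envelope estimate. Every number below (model size, epochs, chip throughput, efficiency) is an illustrative assumption, not a Deepview spec:

```python
# Back-of-envelope check: can a lean edge chip fine-tune a small
# vision model on ~40 images inside a ~15-minute budget?
# Every number below is an illustrative assumption, not a Deepview spec.

params = 5e6                   # a small vision model, ~5M parameters
flops_per_image = 6 * params   # rough rule: ~2N FLOPs forward, ~4N backward
images, epochs = 40, 300
total_flops = flops_per_image * images * epochs

device_flops = 50e9 * 0.2      # 50 GFLOP/s chip at 20% training efficiency
seconds = total_flops / device_flops
print(f"{seconds / 60:.1f} minutes")  # ~0.6 minutes of raw compute
```

Under these assumptions the raw compute fits with a lot of headroom; a real system also pays for data loading, augmentation, and validation passes that this estimate ignores, which is how the budget fills out to minutes rather than seconds.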

Zach: Very, very interesting. I wonder what type of model they would call it, aside from a foundation vision model, because we have this term LLM, but what you're really describing is more, I guess, a small language model, or an SLM, right?

Eliyahu: Well, there's something called distillation in deep learning, where you can take a huge model and have it train a smaller model. It's kind of like a teacher teaching a student. And the density of these systems is also increasing, in terms of their representational capacity. Look at the Meta models: going from version two to version three, the 80 billion parameter model in version two is basically equivalent to an 8 billion parameter model in version three. So it got 10 times more dense in representational capacity. These things are scaling out massively, but they're also getting more dense, so ideally, five years pass and look at what you can do with a 1 billion parameter model. Because we're going from 10 billion to a hundred billion to trillion parameter models. The human brain has on the order of a hundred trillion connections, and if we're at 1 trillion parameter models and a human brain is a hundred trillion parameters, it's not an exact analogy, because a neuron is not a digital neuron, it's not the same thing. But as a loose analogy, we're approaching a weird point where you've got a data center the size of New York City that's a hundred trillion parameters, and that might be in the next five years. That won't be what's running on our factory device; we'll have some very condensed version. But the explosion is both in scaling up and in density, because if you say, well, it's a hundred trillion parameter model, but it's a hundred times more dense, that's like a 10,000x factor in five years, if that comes to fruition. So it's really interesting to think about where it will all go, but at the heart of it is just writing efficient software.

It's about how much you can get when you're limited in your compute capability. It's not a massive data center; it's a very small chip, and what's the maximum that you can squeeze out of that?
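The teacher-student distillation mentioned above boils down to training the small model to match the large model's temperature-softened output distribution. Here is a minimal sketch of the standard distillation loss with toy, made-up logits (the logit values are purely illustrative):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a vector of logits."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL(teacher || student) on temperature-softened distributions.

    A higher temperature T exposes the teacher's "dark knowledge",
    the relative probabilities it assigns to the wrong classes.
    The T*T factor keeps the gradient scale comparable across temperatures.
    """
    p = softmax(teacher_logits, T)   # soft targets from the big model
    q = softmax(student_logits, T)   # small model's predictions
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

# Toy logits: a student that mimics the teacher's whole distribution
# scores a much lower loss than one that only nails the top class.
teacher = [8.0, 2.0, 1.0]
good_student = [7.5, 2.2, 0.8]
crude_student = [8.0, -5.0, -5.0]
```

Minimizing this loss over many inputs is what lets the dense small model inherit behavior from the large one.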

Zach: Yeah, I think there's this perception that as you try to take AI and apply it to some of these new challenges that haven't been addressed before, the parameter count has to go up. But in reality, you can probably do a lot better, either in terms of accuracy or efficiency or both, if that parameter count actually -

Eliyahu: Goes down.

Eliyahu: Yeah, totally. I think there was a Tesla earnings call where Ashok, the lead on Autopilot, talked about five dimensions along which these deep learning models are improving. So it's scaling out, where they're getting larger, but then the underlying architectures are improving too, and that leads to the density. And that can be achieved both in hardware and in software: the software is the architecture, and the hardware means you can train faster, with more throughput. What's really interesting, what I think about, is that the bottlenecks in scaling are in areas you wouldn't necessarily expect. Elon Musk's new AI company is famously building a hundred thousand GPU cluster, so that's basically $3 billion for the cluster, because each of these machines is $30,000. They raised 6 billion and said, all right, we're going to put half into this massive cluster. But the thing to think about, and there's a Mark Zuckerberg podcast where he talks about this, is that it might take a gigawatt of energy. Imagine, I don't know how many people that is, maybe a city of a million people, to power that cluster, right? You might need a massive nuclear reactor to power a hundred trillion parameter model, or you might need a terawatt of energy. You can imagine acres and acres of solar panels, or a dense nuclear reactor. You're running into the power budget. Because say you had $30 billion today; I think Microsoft is spending $50 billion in CapEx to build out data centers.
So let's just say you built it out with current technology: it'd probably be somewhere on the order of $300 billion to build a hundred trillion parameter model. Maybe in five years it's 30 billion to build a hundred trillion parameter model. You still have this massive question of where you get all the energy for that. You're probably going to see a massive data center that costs 30 billion, along with a hundred power plants around it to power it, right? Or some innovation: a massive solar panel farm with batteries, or nuclear reactors. That's really interesting to think about. Anyway, I don't know if it's interesting to you.
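The cluster arithmetic above can be sketched as a quick back-of-envelope calculation. The GPU count and price are the figures quoted in the conversation; the per-GPU power draw and household draw are common ballpark assumptions, not measured specs.

```python
# Figures quoted in the conversation.
gpu_count = 100_000          # "a hundred thousand GPU cluster"
gpu_price_usd = 30_000       # "$30,000" per machine

hardware_cost = gpu_count * gpu_price_usd
print(f"Hardware cost: ${hardware_cost / 1e9:.0f}B")  # ≈ $3B, matching the quoted figure

# Ballpark assumption: ~1.4 kW per GPU server including cooling and overhead.
watts_per_gpu = 1_400
cluster_power_mw = gpu_count * watts_per_gpu / 1e6
print(f"Cluster power: ~{cluster_power_mw:.0f} MW")

# A household averages very roughly ~1.2 kW continuous, giving a "city equivalent":
households = gpu_count * watts_per_gpu / 1_200
print(f"Roughly the continuous draw of {households:,.0f} households")
```

Even this rough estimate lands in the hundreds-of-megawatts range for a single 100k-GPU cluster, which is why the conversation turns to dedicated power plants for the next order of magnitude.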

Zach: No, it's all interesting to me. And I mean, you call it the sci-fi future; I would definitely agree with that description. Going back to your background for a moment, in making that transition to having the startup company, what level of PCB design experience did you have? Because at some point all of this has to live on a PCB, and you're building hardware. What was that experience like for you before having a startup company, and then after getting started? How did you start down the path of gaining the PCB design experience needed to build this type of system?

Eliyahu: Thanks for asking that, because I really want to talk about this. My background was primarily as a software engineer. I studied computer engineering in college, so I took classes like electrostatics and electrodynamics from a physics standpoint, at least the 101 or 201 level, and the same thing with electronics: electronics 101, electronics 201. So I wasn't totally blind. I knew the basics of circuit design, very basic, like what is a resistor, what is a capacitor, what is DC, what is AC. I had some electronics background, but I was really mainly a software engineer. And we really wanted to have the innovation where you have the GPU inside the camera to enable on-camera training, and that basically required doing hardware. We did about a year of early R&D before the company was started, from 2018 to 2019. Then we had a demo that operated on steel inspection. There's a Russian steel company called Severstal, and they put out a $120,000 prize on the internet for anybody who could solve their steel defect problem; it was six or seven main types of defects. We worked on that project. We didn't win, but we created a really interesting demo, and the software was kind of demo software. This was in October of 2019. Then we put that on the shelf for about six months and basically had to learn how to design hardware in order to make the product we wanted to make. What we did was really three essential things, basically starting from scratch, that I would recommend. The first is Fedevel on YouTube; his name is Robert.
I know Altium has online resources, which I haven't explored as much, but Fedevel on YouTube has hundreds of videos on PCB design, and he even has a paid version for once you level up. But I'll back up first. It was like, okay, we want to design hardware, and there are 10 or 15 CAD packages out there. Why choose Altium? In any given engineering segment, you always want the best engineers on your team, so you want to use the tool that the best engineers use so that you can work with the best people. It's kind of obvious. We looked at all the different options and realized the engineers we wanted to work with use Altium. And the reasons they do are also the reasons we wanted to select it: it's fully end-to-end, it's beautiful, it works. You do schematics, you do layout, you go to production with Gerbers. And it's increasingly integrating more things, like your bill of materials, which is a new feature over the last few years. So Altium is really a comprehensive solution to board design, and it's what the best engineers use. So we decided, hey, we're going to use Altium. Then it was, well, okay, how are we going to learn this? So, Fedevel on YouTube again. Amazing. What we started to do is, he has a bunch of intro videos, and one of them is creating a circuit that turns an LED on and off. There are like nine components on the board, it's two inches by two inches, a very simple PCB, but you go through the process: okay, I'm going to design the schematic, which is nine components or six components or something like that.
I'm going to do the layout, I'm going to make Gerbers, and you could even have this board made if you wanted to. What I notice in big companies is there's a separation of labor, where one person will only do schematics or component selection or component libraries, and another person will only do layout. Well, if you're just starting out, you don't necessarily have the luxury of a large team to separate these things, so you have to take a holistic approach. Starting with a simple board of several components gives you confidence that, hey, I understand all the different aspects of this; now it's just a matter of how I scale it up to do what I want to do. So Fedevel on YouTube is probably number one, the most important thing. Number two: there's a very experienced hardware engineer local here in Michigan who had a freelancing company, and I basically asked him to help with this design. He said, no, I'm way too busy, I've got a full-time day job, I just can't do it. So I said, well, would you mind tutoring me? And we'd meet up at the library, which was partway between us, once a week for two or three hours. For several months I was, on one hand, learning how to use Altium, but really, having a tutor and mentor is essential, because there was a bunch of things I was not even aware of, and he would say: here's the documentation you need to be aware of, like high-speed layout. You need to be aware of fiducials on your boards so they can be assembled. You need to be aware that if you're doing Ethernet or HDMI or cameras or SSDs, you need impedance control, an impedance measurement. And then I'd go on YouTube and ask, okay, what is impedance? How does this work?
Then, okay, talk to the PCB fabrication company: can you confirm this impedance so we can route these high-speed signals? If I had just been self-taught, without a great mentor, I wouldn't even have been aware of these things, because somebody who's been doing it for 10 years knows all the pitfalls. A lot of times you might work really hard for a week and end up with 10 or 20 questions, and after that you're distilling their experience and gaining it. And then there's the third key thing. So the first is great online resources, which are essential, and starting with a small board and building up. Number two is, ideally, having a mentor, and I don't know if there's an online service for partnering people up with mentors, but however you can do that: call people up, ask for referrals, basically just pick up the phone. And it really is a fortunate thing; to have a mentor is such a blessing in any area of life. So I just want to say we're very, very thankful, because you're kind of like a five-year-old. You don't really know what to do, and you need that parent aspect to teach you. We were very fortunate. And the third step is really trial and error. The first board we produced was a little adapter board, maybe 20 or 30 components. Then from there, version one of our PCB was hundreds of components, and almost nothing worked on it. We had the blessing of great mentorship and great resources, but our scope was evolving. We thought we needed some things that we didn't necessarily need, and the board didn't work at all. We had production problems with who we chose to do assembly.
We ran into a bunch of issues, but then version two started to actually work. And it's really just perseverance. There's a great quote I heard recently about MrBeast on YouTube: he found out as a teenager that he just loved making videos, and he would grind on YouTube, just making videos. And he basically said that because he knew he loved it, it was just a matter of time before he hit whatever goal he wanted. So I think you can do anything you set your mind to; it's just a lot of time, a lot of energy, a lot of hard work. If it's not 10,000 hours, maybe it's six board revisions before it works. And right now we're on our seventh, to fix everything that works but is maybe suboptimal, or hard to assemble, or a pain, or a failure point. So now we're going from "it fully works" to making it easy to make, easy to produce: eliminate the failure modes, delete parts. Just to give a sense of time, those iterations took, from October, maybe six months to get to version one, and then roughly 15 additional months to do iterations two through five. And we were working super hard. Two years sounds like a long time, but you're really parallelizing everything: while you're designing, you're talking to the PCB production company, and so on. So I would say anybody can do it. Try to find those three things: online resources; a mentor, if you're starting out; and then be prepared to iterate. We were fortunate to have the time, money, and energy to be able to iterate toward a solution. So hopefully that's helpful.
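To give a flavor of the controlled-impedance math that comes up in the mentoring conversation above, here is a minimal sketch using the widely cited IPC-2141 microstrip approximation. The stack-up numbers below are hypothetical examples, not Deepview's actual board parameters, and real designs confirm impedance with the fab's field solver.

```python
import math

def microstrip_impedance(h_mm, w_mm, t_mm, er):
    """Estimate characteristic impedance (ohms) of a surface microstrip trace.

    IPC-2141 approximation: Z0 = 87 / sqrt(er + 1.41) * ln(5.98*h / (0.8*w + t))
    where h = dielectric height, w = trace width, t = copper thickness,
    er = dielectric constant. A rough formula, valid only over a limited
    geometry range; fabs verify the final stack-up with a field solver.
    """
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mm / (0.8 * w_mm + t_mm))

# Hypothetical FR-4 outer layer: 0.2 mm prepreg, 0.35 mm trace, 1 oz copper.
z0 = microstrip_impedance(h_mm=0.2, w_mm=0.35, t_mm=0.035, er=4.5)
print(f"Estimated Z0: {z0:.1f} ohm")  # lands near a 50-ohm single-ended target
```

This is the kind of calculation you sanity-check yourself, then send to the fabrication house with the question from the transcript: "can you confirm this impedance so we can route these high-speed signals?"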

Zach: No, those are all great pieces of advice, and it's really cool to hear your journey to getting this all started. I think we're out of time today, but I want to thank you for coming on the podcast. Your idea for your company is really cool, and as this all develops, we'd love to have you come back on, check in, and see how you guys are doing.

Eliyahu: Yeah, awesome. And we've got some other new projects in Altium too that maybe we can talk about next time.

Zach: Definitely, definitely. Thank you so much for being here, Eli.

Eliyahu: Okay, great talking to you, Zach. Appreciate it.

Zach: Alright, thank you. Yep. Bye. To everyone listening, we've been talking with Eli Davis, lead engineer at Deepview AI. If you're watching on YouTube, make sure to hit that like button, leave a comment in the comment section, and subscribe so you can keep up with all of our tutorials and podcast episodes as they come out. And last but not least, don't stop learning, stay on track, and we'll see you next time. Thanks, everybody.
