
Bumper Bowling and Driver Safety

By this time, EE Times readers and listeners of this podcast are familiar with the notion that fully autonomous vehicles are not going to be widely available any time soon. The reason is that making autonomous vehicles safe enough is a lot harder than originally anticipated.

Bumper bowling

The delay has led some companies to shift their focus from fully autonomous driving to technologies that help make humans better drivers. That’s called driver assistance, better known by the acronym ADAS (advanced driver-assistance systems).

That shift in focus created a wider understanding that technologies for autonomous driving are different from technologies for ADAS. That said, driving is driving, and driving safety is driving safety.

Intel’s Mobileye operation develops processing technology for sensor-based self-driving cars and advanced driver-assistance systems. The company had begun to think about what the safety model for autonomous vehicles should be.

Jack Weast, who is vice president, autonomous vehicle standards, at Mobileye, reached out to my colleague Junko Yoshida, one of the leading journalists covering the automotive industry. He wanted to talk about this safety model, which he described in a rather unusual way. Here’s Junko:

JUNKO YOSHIDA: I heard from Robin, your PR person, last week, and she told me, “Jack wants to talk about bumper bowling.” I asked, “What is bumper bowling?” I think bumper bowling has something to do with RSS? Responsibility-Sensitive Safety.

JACK WEAST: Yes.

JUNKO YOSHIDA: Okay. Let’s start with bumper bowling. Define bumper bowling. What does it have to do with RSS?

JACK WEAST: That’s a great question. You might think, on the surface, that bowling has nothing to do with automated driving. But I’ll try to make the connection in a way that makes sense.

JUNKO YOSHIDA: Okay.

JACK WEAST: You’re correct. Responsibility-Sensitive Safety is the safety model that we first released in 2017. It’s a mathematical model that defines a safety envelope, along with the proper responses the vehicle should take, so that if the automated vehicle is following RSS, it should never initiate an accident. And if all of the other agents around the vehicle are operating according to reasonable and foreseeable assumptions about their behavior, then we should also be able to respond to the behavior of others. So it really does everything it can to try to prevent an accident from happening, whether caused by us or by others.

What does this have to do with bumpers, though? And some of the recent announcements? We’d always imagined RSS as being something for driverless vehicles. But at one point we thought, What if we were to put RSS in a human-driven vehicle? What if RSS could, on a preventative basis, preemptively intervene in your driving task as a human to help you avoid making mistakes, and to help you respond to bad behavior, say, by other drivers? So, in a way, if you’re not a terribly great driver, RSS can help you be a much better driver. It’s always working in the background, protecting you, preemptively braking, preemptively steering you back into your lane. Not in an emergency sort of way, like AEB systems, but from a very smooth, preventative standpoint.

So in that way, we come back to bowling. I’m a terrible bowler. My mom’s actually quite good at it. But when I bowl, I can bowl a lot better when there are bumpers in the lanes. Bumpers are these inflatable tubes that you put into the gutters on each side of the lane, so that when you roll the ball down the alley, it might bounce around from side to side, but it will eventually get to the pins without going in the gutter. So maybe in the same way, RSS in a human-driven vehicle helps you get to your destination without bumping into things along the way.

JUNKO YOSHIDA: Wow! I thought bumper bowling was for kids. But I guess it’s analogous to novice drivers, or even experienced drivers, who really need to be reminded of defensive driving, isn’t it, in a way?

JACK WEAST: You’re right. Even the best of us are guilty of looking at that phone when we shouldn’t or being distracted by the squirrel outside the window or something else. That’s one of the other really important differences here. We humans, at the end of the day, have sensors — our eyes, ears — particularly our vision, that’s looking just forward.

The thing about RSS, and the ability to put it into a human-driven car with 360-degree sensing, is that you can then be protected from potential accidents that you don’t even see, because they’re in your blind spots or behind you or something like that. It’s really the combination of that safety model with SuperVision technology, sensing the full 360-degree environment, that differentiates it from traditional driver-assistance systems.

JUNKO YOSHIDA: Let’s talk a little bit about the math. We watched some of the presentations you have done in the past on RSS. We’re talking about explicit traffic rules versus implicit traffic rules. You’re saying that RSS, in the end, is about formalizing those implicit traffic rules so that they can be interpreted by a machine. So tell me, what rules are you actually teaching under RSS to those ADAS systems, or in this case, autonomous vehicles?

JACK WEAST: The first rule of RSS is really about following at a safe distance. This is one of the most basic examples we could think of: You’re following another vehicle. So how RSS would work is, let’s say… and we’ve all experienced this… you set your cruise control and it’s a fixed speed. What does the car in front of you do? They’re constantly adjusting their speed, and you might say, I’ve got to turn off my cruise control. Now I can turn it back on. I wish they would just drive one speed. But they don’t. Right? Humans aren’t always well behaved like that.

So what RSS, in the context of a Mobileye SuperVision ADAS system, would do is preventatively brake, making small micro-braking maneuvers. So you’re just driving, and you might not even notice it, but RSS is calculating exactly what a safe distance is, based on the safety model. And if that vehicle in front of you varies its speed a little bit, then based on the RSS formulas, the mathematics, our car, the car that I’m driving, or the autonomous car in the case of a driverless car, is going to apply just a little bit of brakes. You might not even notice it. It’s not an emergency brake, because it’s not an accident-type scenario. But according to RSS, I’m going to get into what we call a dangerous situation if I don’t apply a little bit of braking.

So the system would apply a bit of braking and then let off. Apply a bit of braking, let off. You might not even notice it. Now, if the car in front of you were to aggressively slam on its brakes, that braking profile would go from a little bit of braking to more significant braking to still avoid the accident. But the point is, it’s always there in the background, a safety capability that’s working with you.
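For readers who want to see the shape of the math behind that following-distance rule, here is a minimal Python sketch of the longitudinal safe-distance formula from the RSS paper Mobileye published in 2017. The parameter values are illustrative assumptions, not Mobileye production settings, and the braking check at the end is a simplified stand-in for the actual proper-response logic.

# Minimal sketch of RSS Rule 1: longitudinal safe following distance.
# Formula follows the published 2017 RSS paper; parameter values are
# illustrative assumptions, not Mobileye production settings.

def rss_min_longitudinal_gap(v_rear, v_front, rho=0.5,
                             a_max_accel=3.0, a_min_brake=4.0,
                             a_max_brake=8.0):
    """Minimum safe gap in meters between a rear car at v_rear (m/s)
    and a front car at v_front (m/s).

    rho         -- response time of the rear car, in seconds
    a_max_accel -- worst-case acceleration of the rear car during rho (m/s^2)
    a_min_brake -- braking the rear car is guaranteed to apply after rho (m/s^2)
    a_max_brake -- hardest braking the front car might apply (m/s^2)
    """
    v_rear_after_rho = v_rear + rho * a_max_accel
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2
         + v_rear_after_rho ** 2 / (2 * a_min_brake)
         - v_front ** 2 / (2 * a_max_brake))
    return max(d, 0.0)

# Example: both cars at 25 m/s (about 90 km/h). If the measured gap drops
# below the minimum, the system would begin gentle "micro-braking".
needed = rss_min_longitudinal_gap(v_rear=25.0, v_front=25.0)
actual = 30.0
if actual < needed:
    print(f"Apply light braking: need {needed:.1f} m, have {actual:.1f} m")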

It would work the same way for lane-change maneuvers. Let’s say, again, you might have lane-change capability that’s still hands-free. In the case of this Mobileye SuperVision ADAS, or in the case of a driverless vehicle, the vehicle wants to change lanes.

Rule number two of RSS, the lateral safe distance, calculates how much distance you need laterally between your vehicle and the vehicles you’re driving next to. That would tell the vehicle or the driver: Is this a scenario where it’s safe for me to make this lane change and move from this lane to the other? Or am I going to cut off this car and get into an unsafe lateral situation?

Those are two examples of how we use the mathematical calculations, really a safety envelope around the vehicle, to inform and protect a human driver in the case of the SuperVision hands-free ADAS, or to make sure that the automated vehicle is making good driving decisions in the case where there’s no driver at all.
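Rule two can be sketched the same way. The snippet below follows the general form of the lateral-distance definition in the published RSS paper; the parameter values and sign conventions chosen here are illustrative assumptions rather than anything Mobileye has confirmed for production.

# Hedged sketch of RSS Rule 2: lateral safe distance between two cars
# driving side by side. Car 1 is to the left of car 2; lateral velocity is
# positive toward the right. All parameter values are illustrative.

def rss_min_lateral_gap(v1_lat, v2_lat, rho=0.5,
                        a_lat_max_accel=0.2, a_lat_min_brake=0.8, mu=0.1):
    """Minimum safe lateral gap in meters; mu is a small fluctuation margin."""
    # Worst-case lateral velocities after the response time rho,
    # with each car drifting toward the other.
    v1_rho = v1_lat + rho * a_lat_max_accel
    v2_rho = v2_lat - rho * a_lat_max_accel

    d = ((v1_lat + v1_rho) / 2.0 * rho
         + v1_rho ** 2 / (2.0 * a_lat_min_brake)
         - ((v2_lat + v2_rho) / 2.0 * rho
            - v2_rho ** 2 / (2.0 * a_lat_min_brake)))
    return mu + max(d, 0.0)

# A lane change would only be initiated if the projected lateral gap to the
# neighboring car stays above this minimum throughout the maneuver.
print(rss_min_lateral_gap(v1_lat=0.3, v2_lat=0.0))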

JUNKO YOSHIDA: I think the last time you and I talked, you were talking about RSS becoming an IEEE standard. I think it’s being discussed in P2846. Is that right?

JACK WEAST: Yes, that’s correct.

JUNKO YOSHIDA: So how far are we? I think the initial goal was the end of this year or early next year to have some kind of a first draft of the standard.

JACK WEAST: That’s correct.

JUNKO YOSHIDA: Can you update us?

Jack Weast

JACK WEAST: I’d be happy to. In fact, I just came out of our working group meeting prior to talking with you here. I’m pleased to report that yes, as you’ve noted, we’ve contributed the RSS model to this working group, along with the concepts behind RSS in terms of assumptions and the automated vehicle’s need to make reasonable and foreseeable assumptions about the behavior of other road users. Such as, What is the maximum braking capability of the vehicle that I’m following, for example.

These concepts from the RSS model and others are really the first focus area for the standards working group. And so far, we are still on track, despite COVID and everything else, to have a first version of the standard by the end of the year. And we really look forward to sharing it with the world at that time.

JUNKO YOSHIDA: I have a basic question. Once you actually have this RSS in your AV stack… for example, like the announcement you made last week with Geely in China. You are providing them not just the SuperVision system that consists of 11 or 12 cameras, eleven cameras, but also the full AV stack, right? And I can see that RSS can be implemented inside the stack. But what if I’m participating in IEEE and I have my own AV stack? How do I implement RSS then?

JACK WEAST: That’s a great question. One of the things that we’re doing with the standard as well: we have contributed RSS, and other companies have contributed their safety models too. And we’re committed to making sure that the standard is technology-neutral. So the standard will not require anyone to implement any specific version of something, and will not require them to have a particular kind of chip or sensor or whatever. So it’s entirely possible that you could build your own safety model and still conform to the standard, just as we believe RSS would be a safety model that is conformant with the standard as well.

So at this point, we’re solving a problem for the industry, but if we don’t really start making some positive contribution, there may not be an automated vehicle industry for us to sell into. So now’s not the time to kind of get specific about winners or losers, but really trying to do something for the good of the whole industry. We all have this common problem, and we all need to rely on this ability to balance safety and utility based on these assumptions about other road users.

JUNKO YOSHIDA: In the current discussions at IEEE, I know this is going to be technology-neutral, but are companies like Waymo or Tesla part of this discussion?

JACK WEAST: Yeah. I’m very pleased to have Waymo as my vice chair and Uber as our secretary, and we have over 25 companies, I think at last count, across the OEM community and the Tier One community. We even have some government representatives there, and some research institutions. It’s a wonderful mix of different entities. Again, from that standpoint, when you have that many people at the table, clearly what you come up with has got to work for everyone.

So that commitment to something that’s technology-neutral, it’s not requiring people to implement something from Mobileye, just as it’s not requiring people to buy a chip from some other vendor. It’s really about solving the safety challenge for the whole industry.

That’s why we published RSS in 2017 openly. That’s why we contributed it. Because we just want to help solve this problem. It’s not about forcing a particular implementation of a safety model down anybody’s throat. It’s about doing the right thing for the industry and collaborating with governments on what “driving safely” means. And a really important part of that — probably the most important part of it — are the assumptions. What are we allowed to assume about the behavior of other road users, pedestrians, whatever as automated vehicles make driving decisions?

JUNKO YOSHIDA: Speaking of assumptions, I actually spent quite a bit of time on this; maybe during the COVID-19 days I have too much time on my hands. I was watching the video Mobileye made public in January, but I also watched the May video. It’s kind of interesting, because there’s drone footage, there’s a view of what the safety driver is doing, and then on top, there’s visualization software actually displayed. That visualization, by the way: is it showing what the machine is seeing? Or is it for human consumption?

JACK WEAST: That’s an excellent question. It’s a bit of both. First, I’ll say that it is giving you a representation of what the machine is seeing. If you look carefully, you’ll notice that certain vehicles or other objects seen by the vehicle will change color from time to time. That’s an indication of how those objects are behaving in relation to the planning function of the vehicle, or, as we’ve talked about, in the context of RSS. If a vehicle makes a sudden braking maneuver, you’ll likely see that vehicle change color, because that’s an indication that the automated vehicle software recognizes that you’re perhaps in a dangerous situation and need to perform a proper response, so it notes that object differently from the others.

It’s very technical, this display that you can see in the video. So certainly you could imagine that when we have a solution for consumers and passengers, our friends and family who are not from our industry, it might be a bit more simplified, a bit easier to understand. But right now it’s definitely a mix of both what the vehicle is seeing and a rich technical display of the internal functions of the software as they’re working.

Probably a little too much detail for our friends and family, but it’s nice to see.

JUNKO YOSHIDA: No, no. It’s interesting. What I’m getting at is this: You talked about the safety model needing to be spelled out in a manner that a machine can interpret, but it is also deeply cultural. So it needs to be adjustable to cultural differences. For example, in Israel, I noticed that when cars are going to take an unprotected left turn, the car kind of inches out so that it sort of creates an opening for itself. It’s a bit too aggressive for my taste, because I wouldn’t do that, but it’s probably necessary in Jerusalem. And I’m assuming that sort of behavioral thing is actually baked into the AV stack. Are you going to change that for a Chinese version? How does that work?

JACK WEAST: That’s an excellent question. Really it’s kind of the brilliant part about having these implicit driving rules that we were talking about earlier embodied in the safety model, not necessarily in all of the rest of the automated driving stack.

JUNKO YOSHIDA: Oh, okay.

JACK WEAST: It goes back to these assumptions that we’re defining in the IEEE standard. If you plug in different values for the assumptions, you directly get different behavior from the automated vehicle as it’s operating on the road.

So for example, take the lateral safe distance at the sides of the car. In countries like Israel, people drive much more closely next to each other than we do here in the U.S., right? That’s because certain values are plugged into those assumptions about what a reasonable and foreseeable lateral maneuver by those other agents is. Now, in the U.S., we could use different values, which might mean that some of those maneuvers you see in that video from Jerusalem may not be possible in the U.S., because we set the balance differently. And in China you could use different numbers entirely.
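As a concrete illustration of what plugging in different values could look like, here is a hypothetical Python sketch of region-tunable assumption parameters. The parameter names and numbers are invented for illustration only; they are not Mobileye settings or IEEE P2846 values.

# Hypothetical sketch: the same safety model, parameterized by regional
# assumptions about other road users. Values below are invented.
from dataclasses import dataclass

@dataclass
class DrivingAssumptions:
    response_time_s: float       # how quickly other agents are assumed to react
    max_brake_mps2: float        # worst-case braking of a lead vehicle
    min_brake_mps2: float        # braking our vehicle commits to apply
    max_lat_accel_mps2: float    # reasonable lateral drift of neighboring cars
    lateral_margin_m: float      # fluctuation margin between side-by-side cars

# Different jurisdictions (or driving cultures) plug in different values;
# the perception and planning stack itself is unchanged.
ASSUMPTIONS = {
    "israel": DrivingAssumptions(0.4, 8.0, 4.5, 0.30, 0.2),
    "us":     DrivingAssumptions(0.6, 7.0, 4.0, 0.20, 0.5),
    "china":  DrivingAssumptions(0.5, 7.5, 4.2, 0.25, 0.3),
}

def safety_params(region: str) -> DrivingAssumptions:
    return ASSUMPTIONS[region]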

So that’s the beauty, actually, of separating your AI functions in the vehicle that are trained through various data sets from the safety model. Because you can tune the safety model….

JUNKO YOSHIDA: So it’s separate! Ahhh.

JACK WEAST: Yes. And then directly be able to see different performance in the vehicle in the real world. And so that scalability is really important. But to contrast that, if you’ve only done testing in one place in the world and you don’t have any kind of safety model that has adjustable parameters where you can change the values, it is going to be a lot more work to deploy that in cities where people might drive differently.

JUNKO YOSHIDA: I didn’t realize those are separate. That’s interesting. Let’s talk about the Geely deal. Amnon said that this is a game changer. Some people are questioning that. But I think it’s a big deal. It’s a big deal because it’s going to be the first time Mobileye’s EyeQ5 is implemented in cars in volume. And it’s going to start in 2021. Is that right?

JACK WEAST: I believe it is 2021. And if Amnon says it’s a game changer, then it must be.

JUNKO YOSHIDA: Who could question Amnon, right?

JACK WEAST: Exactly. He knows what he’s talking about. Trust me. I’ve had the pleasure of working with him for some time. But what’s really important about it, and one of the key things that shouldn’t be missed, is the record time in which we are taking the capability that right now, as you and I are talking, is on the road in Israel being tested with our camera-only automated vehicle fleet. That same software, those same chips that are in testing today, are going to be in the market in a very short timeframe.

Typically, the design lifecycle for automotive, as you well know, could be four to five years. So really, that’s kind of a game changing thing about this. Not only is it a commercial application of our automated vehicle software and silicon, but it’s getting to market in record time. Two or three times faster than anything else. So that’s really kind of the key thing. Less than a year really from where we are today to commercialization.

The other thing that’s really interesting about it, that also is a game changer and that I’ve not yet noted, is the over-the-air update capability. So here you’ve got a platform that will be in a consumer car, where Mobileye will be able to update the capabilities of that SuperVision technology over time. The safety features could evolve, they could improve, they could adapt. So being able, for the first time, to directly update the solution in the car is a key game changer for Mobileye as well.

JUNKO YOSHIDA: I have a question about this over-the-air update. I think that’s what everybody is wanting. And it’s needed. And yet how much control do you have over ECUs that are beyond the reach of the chips that Mobileye is providing?

JACK WEAST: That’s another element of this deal that I didn’t mention. Typically what we would do is provide just a chip and software, and then a Tier One partner would put that together as part of their traditional design lifecycle. Not that there won’t be Tier One or other integrators as part of this, but what’s different is that Mobileye is providing not just the solution stack with the hardware and the software, but also the portion of the platform that interfaces with the vehicle buses and performs the actuation as well. It’s called a multi-domain controller. Essentially it becomes a complete subsystem for ADAS capability. It’s not just Mobileye providing the camera, with that output data going somewhere else. This is the complete solution. It’s kind of like an automated driving kit all in itself in terms of its capabilities, like we said, coming from the camera-only cars that we have driving around Jerusalem, and it contains all of those elements.

So when the over-the-air update happens, it can update not only the driver-assistance software but also some of the multi-domain controller software that’s doing more of the dynamic control functions for the vehicle. So it’s starting to provide the ability to update more than just the traditional functions you would think of from Mobileye.

JUNKO YOSHIDA: I see. When you say “multi-domain,” multi-domain is actually limited to the vehicle’s actuation part that helps the assisted driving, no?

JACK WEAST: That’s correct. It is still limited to that. You’re right.

JUNKO YOSHIDA: All right. Very good! Just for the record, how long has Mobileye been testing AV software thus far in Israel?

JACK WEAST: You’re going to put me on the spot there. I’m going to take a guess. Forgive me; I don’t know the exact number. But I believe it’s been at least a period of, say, two years probably. A year and a half to two years I would think. But Mobileye’s made some incredible progress very fast. And I think part of that comes from their design philosophy. You don’t just code up a bunch of stuff and throw it on the road and see what happens. You think deeply about the design of the system and you try to understand what the design looks like on paper and do formal verification of a design built on paper.

I’ll give you another strange analogy. We’re remodeling our home right now and we’re doing some tile work. And we had a tile person come in, and they didn’t think at all about the design before they started slapping tile up on the wall. Then at the end, it looked like a mess! So guess what? It’s all coming off; we’re starting over on paper. We’re designing it and then we’re putting the tile up, and now we’re going to have a beautiful result. So in other words, taking more time up front to really think about what you want to build and making sure it’s the right way to build it is a faster path to the end stage than coding rapidly and then trying to fix it later.

JUNKO YOSHIDA: So in other words, you used the terminology, “formal verification.” Did the fact that you guys are only depending on cameras, did it make it easier to do the formal verification faster?

JACK WEAST: Yeah, it’s actually separate from that. The formal verification piece is used for the safety model. So the safety model is formally verified. But you bring up a really excellent point. What about perception? What about vision? Those systems are probabilistic by their nature. They are known to have failures by their nature. It’s just the nature of vision algorithms, and there’s no perfect sensor on the planet that gives you 100% accurate sensing all the time for its lifetime in the car.

So here what you have to do is think differently about how you want to solve this problem, to make sure that you’re delivering a sensing capability that is sufficient. Here we have a unique approach, because our camera subsystem is so strong, and because we have the ability to operate vehicles on the roads in Jerusalem, as you’ve seen in the videos, with cameras only. We have a separate vehicle that has radar and lidar only. And that vehicle is going to have the same ability to operate to the same degree as the camera system. Now you combine those together, and you essentially have redundant but diverse sensing implementations that are operating in parallel. So we can produce two world models and combine them, as opposed to being dependent on one world model alone for accuracy.

This means you can think about it as a kind of situation like this: Let’s say you have an iPhone in one pocket and an Android phone in the other pocket. The odds of both of those phones failing at exactly the same time are extremely low. The same logic applies here. The chance that the camera subsystem would fail in exactly the same way and at the same time as the separate radar-and-lidar subsystem means that the probability of a sensing failure for the system overall is much, much, much, much lower than you would have if you were relying on one sensor type, or one sensing channel, overall.
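The arithmetic behind “much, much lower” is simple: if the two sensing channels fail independently, the probability of a simultaneous failure is the product of the individual probabilities. The numbers below are purely illustrative, not Mobileye figures.

# Illustrative only: independent failure probabilities multiply.
p_camera_only = 1e-4   # hypothetical failure probability of the camera channel
p_radar_lidar = 1e-4   # hypothetical failure probability of the radar/lidar channel

p_both_at_once = p_camera_only * p_radar_lidar   # 1e-8 under independence
print(f"Simultaneous failure probability: {p_both_at_once:.0e}")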

JUNKO YOSHIDA: I didn’t realize this: You guys actually have another vehicle just using lidar and radar?

JACK WEAST: That’s correct.

JUNKO YOSHIDA: I did not know that!

JACK WEAST: We do.

JUNKO YOSHIDA: That’s interesting. How’s that going? Is it coming along at the same level of autonomy that your camera-only SuperVision vehicle has achieved? Or are they still working on it?

JACK WEAST: Everything’s still in development, so we’re still working on it. But the intention is that when we deliver a commercial driverless vehicle, it will contain both the camera-only subsystem and the radar-and-lidar-only subsystem, and they’ll be working in combination. The reason why we test them separately is that if you put them all in the same vehicle first, nobody’s going to believe that our car is actually driving with cameras only, for example. But you can look at the car: there are no radars, there are no lidars on that car. It’s truly camera-only. And if you look carefully enough around the streets of Israel, you will also see a car of ours that has radar and lidar only. We think that what we call “true redundancy” is an important way to approach sensing, and to try to make sure that you reduce the chances of a sensing failure as much as possible.

There’s one more thing that’s really interesting to think about, here, too. What does a sensing failure mean? What constitutes a sensing failure? I’m very happy to contribute an article to your forthcoming book that everybody should go buy on this topic of sensing in automotive. What is a sensing failure? If I have a classification error for an object that’s 300 meters away off the side of the road in a park, is that a sensing failure? By some measure it is, if all I’m doing is looking at the sensor by itself in a vacuum and I’m feeding some test data in. I’d say, Well, this is a classification of this mailbox in a park 300 meters away. Who cares? From a safety standpoint, it’s irrelevant.

So we have to think about sensing failures in the context of a sensing failure that would lead to a safety incident. And that’s where combining a sensing model like ours, with true redundancy, with a safety model like RSS can give you a much more intelligent understanding of what a sensing failure means and whether it would lead to a violation of the safety model. Because if you’re in violation of the safety model, then you are at an increased risk; you’re in a dangerous situation. Maybe something from a safety standpoint could happen.
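A hedged sketch of that idea: judge a perception error by whether the object it concerns sits inside the vehicle’s safety envelope, rather than judging the sensor in a vacuum. The function name and the way the envelope is represented here are invented for illustration; this is not Mobileye’s implementation.

# Hypothetical sketch: a sensing error only counts as a safety-relevant
# failure if the affected object is within the RSS safety envelope.

def is_safety_relevant(object_distance_m: float,
                       rss_min_safe_distance_m: float) -> bool:
    return object_distance_m <= rss_min_safe_distance_m

# A misclassified mailbox 300 m away, far outside the envelope, is not a
# safety-relevant failure; the same error at 15 m would be.
print(is_safety_relevant(300.0, 40.0))   # False
print(is_safety_relevant(15.0, 40.0))    # True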

But it’s important to look at it from a system level perspective, not just the sensors by themselves.

JUNKO YOSHIDA: Very good. I learned a lot. Usually I do, but thank you very much for the interview.

JACK WEAST: My pleasure, Junko.

JUNKO YOSHIDA: Thanks for coming to the show.

JACK WEAST: Always a pleasure talking to you.

BRIAN SANTO: That was Jack Weast from Intel’s Mobileye sensor operation. During their discussion, Weast mentioned AspenCore’s book on the automotive market. Well, since he brought it up – Junko and the rest of AspenCore Media staff have written a book that examines how sensing and decision-making technologies can help people as they drive, about some of the remaining challenges to implementing those technologies, and how soon they might arrive.

The book collects some of our recent reporting, along with new contributions from EE Times staff and from some of the leading thinkers in the tech and automotive industries. We expect to have that book available on October 19th. There will be a link to the book on the EE Times home page then.

And that’s a wrap for the Weekly Briefing for the week ending October 9th. Thank you for listening. The Weekly Briefing is available on iTunes, Android, Stitcher and Spotify, but if you get to us via our web site at www.eetimes.com/podcasts, you’ll find a transcript along with links to the stories we mentioned and other multimedia.

This podcast is produced by AspenCore Studio. It was engineered by Taylor Marvin and Greg McRae at Coupe Studios. The segment producer was Kaitie Huss.

I’m Brian Santo. See you next week.

HIDDEN TRACKS:

BRIAN SANTO: That’s a beautiful way to wrap up, but I can’t wrap up there. I gotta know: Have you ever met the slightly less famous Keith Jackson?

KEITH JACKSON: No, I never have. I always enjoyed watching him on Saturdays, but never got to meet him.

Thank you for listening to this episode. EE Times On Air is now also available on Ximalaya and Qingting FM. Subscribe and listen!