Did Uber Do Enough to Make Test AVs Safe?







realjjj (User Rank: CEO)
Re: So many stupid morons involved in this.
5/29/2018 8:50:40 PM

The human operator is supposed to make the car as safe as any other car. If Uber failed, that's where it failed.

The autonomous system is the thing being tested; there is no threshold for how "crappy" it can be, as long as it does not make the car more dangerous than the average car. You can argue that disabling AEB did exactly that, but few vehicles have AEB for now - and do remember that AEB is itself an autonomous feature, one that clearly shows how much value autonomy can provide. From an ethics perspective they could of course have done more, but that's true for every vehicle out there, since cost is the main constraint.

Uber's drivers were expected to do more than just drive, and that's not acceptable. More training and oversight were likely needed too.








EELoser (User Rank: Freelancer)
Re: So many stupid morons involved in this.
5/29/2018 7:54:31 PM

They didn"t test it well enough before releasing it onto the public.  The governor had no test plan or testing requirements and Uber had a faulty system.
It was a joke!  It couldn"t detect a bicycle jaywalking way in front of it.
Uber is responsible for the death.  They need to discontinue their whole program which sounds like what they are doing, they are incompetent at this.  A few more deaths and a bunch of lawsuits aimed at the companies directly and the total autonomous self-driving business will end.  GM can"t even design an ignition switch without getting sued for $594 million.  Somehow I think self driving hardware is a little more complicated and prone to bugs/malfunctions.
 
 
 







Bert22306 (User Rank: Author)
Re: So many stupid morons involved in this.
5/29/2018 7:11:33 PM

Well, I know of at least one mom who got killed by a car while she was walking on the sidewalk along a busy highway, and I know of one dad, very recently, who was killed in the city when a driver jumped the curb and slammed on the accelerator instead of the brake. What jack-asses do we blame for those accidents? They were both car vs. pedestrian. I know of car vs. car accidents too, where people were killed. In two cases, it was hot-headed teenagers or drivers in their early 20s running head-on into another car. Pure reckless human error.

I happen to know the people involved in both of these pedestrian accidents, and in the car vs. car accidents. There are thousands of these every year that no one gets all high-drama about. Autonomous driving would most likely have prevented all of these accidents. And yet, for some reason, people are attempting to prevent autonomous driving from being deployed.








EELoser (User Rank: Freelancer)
Re: So many stupid morons involved in this.
5/29/2018 6:59:36 PM

Glad it wasn"t your Mom that was killed.  The Jack-Ass Governor and Uber CEO didnt" test their car out well enough before deploying it into the public.  How does it feel to be a guinea pig?
But I guess the needs of the many outweigh the needs of the few. So her death was acceptable.
 







Bert22306 (User Rank: Author)
Re: So many stupid morons involved in this.
5/29/2018 4:53:05 PM

Well, in this particular case, assuming that the video we saw of the accident is representative of the "truth," human operation of the Uber would almost certainly have resulted in the same collision. To the human eye, the obstacle appeared suddenly, out of deep shadow, at the last second. And the accident would certainly never have made national news. Just another traffic fatality.

Question: which stupid morons would we have to blame then? Those who won't allow autonomous driving to be implemented? The Uber had 6 seconds to react (and did not, for reasons that still need to be fully fleshed out). The human would have had, I think we determined, 1.5 seconds, assuming the eyes were pointed toward the left of the scene and the mind was engaged in driving. What stupid moron would we blame, under normal circumstances? The status quo is roughly 35,000 to 40,000 annual driving fatalities in the US. Do we wax dramatic/poetic about why that's acceptable?
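To make that timing difference concrete, here is a rough back-of-envelope sketch in Python. The ~40 mph speed and 7 m/s^2 braking deceleration are assumed round numbers for illustration, not figures from the investigation.

```python
# Rough reaction-time comparison (illustrative assumptions only).
speed_mph = 40.0                    # assumed vehicle speed
speed_ms = speed_mph * 0.44704      # ~17.9 m/s
decel = 7.0                         # assumed hard-braking deceleration, m/s^2

braking_distance = speed_ms ** 2 / (2 * decel)  # distance to stop once braking starts

for label, reaction_s in [("system with 6 s warning", 6.0), ("alert human with 1.5 s", 1.5)]:
    travel = speed_ms * reaction_s                  # distance covered before braking begins
    print(f"{label}: {travel:.0f} m before braking, "
          f"{travel + braking_distance:.0f} m total to stop")
```

Under those assumed numbers the braking distance itself is only about 23 m, so six seconds of warning (roughly 107 m of travel) leaves an enormous margin, while 1.5 seconds (about 27 m before braking, roughly 50 m total) leaves very little.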

If the problem had been that the car had to choose between one large object and another to strike, then I might grant that the autonomous algorithm would have had to correctly classify each obstacle. Not that it couldn't do so, but at least there would have been this extra dimension, a dimension which seems to have been brought up here for no good reason. The simple fact is, in neither of these two fatal accidents was anything that esoteric involved.








EELoser (User Rank: Freelancer)
So many stupid morons involved in this.
5/29/2018 3:59:43 PM

The overpaid Uber CEO, the idiotic Governor of Arizona, etc. All bad decisions. They should be held responsible for their bad decisions to use the public as guinea pigs. So predictable. If they can't even make an airbag or an ignition switch without killing people, how can they make a self-driving car?

The family apparently settled for a meager amount. They should have gone on TV for weeks, done interviews, and gotten a better settlement - at least as much as the CEO's pay - for the predictable outcome of killing their mother.








Piyush.Patel (User Rank: Author)
unknown object dilemma
5/28/2018 11:18:22 AM

If an object is large enough (its bounding box is big enough and getting bigger with time) and its trajectory is mostly clean (non-random) and can easily be predicted to intersect the car's trajectory, then why is there a need to know what the object is? Does it matter if it is an animal, a human, a human with a dog, a human on a bike, a human pushing a bike, or a human pushing a bike loaded with stuff? Does it matter at all what is there, as long as it is easy to predict that it will hit the car unless something is done?

One would expect that action would be taken even if the object cannot be identified, as long as it is big enough and approaching the car's path. Do we know whether this was actually the case with the Uber design, i.e., that it required the object to be classified before taking (braking) action? What other autonomous vehicles behave this way and are operating on the streets?

Perhaps there is a concern that making such decisions may cause the car to brake stupidly (false positives) in cases like an approaching flying plastic bag, which humans can easily dismiss. But that means false positives should be covered with more neural-network training, while the big-enough-unknown-object case is still used to take action (the unknown-object case can be added later to the neural-network training by using the stored vehicle sensor data corresponding to the classifier output of "unknown object").
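For illustration only, here is a minimal Python sketch of the class-agnostic rule this comment describes: brake on any sufficiently large, consistently approaching track whose predicted path crosses the car's, without ever consulting the classifier label. The Track fields and thresholds are hypothetical; this is not a description of Uber's actual software.

```python
# Sketch of a class-agnostic braking rule: act on any sufficiently large object
# whose predicted path crosses ours, whatever the classifier calls it.
# The Track fields and thresholds below are hypothetical, purely for illustration.
from dataclasses import dataclass

@dataclass
class Track:
    size_m: float              # estimated largest dimension of the object
    size_growth: float         # change in apparent size per second (closing indicator)
    time_to_path_cross: float  # seconds until its predicted path intersects ours
    label: str                 # classifier output, possibly "unknown"

MIN_SIZE_M = 0.5   # ignore anything smaller than this
MAX_TTC_S = 4.0    # only act when the crossing is imminent

def should_brake(track: Track) -> bool:
    """Brake decision that never consults the class label."""
    big_enough = track.size_m >= MIN_SIZE_M
    approaching = track.size_growth > 0
    imminent = 0.0 < track.time_to_path_cross <= MAX_TTC_S
    return big_enough and approaching and imminent

# A pedestrian walking a bike that is tagged "unknown" still triggers braking:
print(should_brake(Track(size_m=1.7, size_growth=0.3, time_to_path_cross=3.0, label="unknown")))
```

The false-positive worry (a drifting plastic bag) would then be handled by the size and path-consistency thresholds rather than by refusing to act on unclassified objects.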







Bert22306 (User Rank: Author)
Don't think the nature of the moving obstacle should have mattered
5/27/2018 6:36:12 PM

Junko, thanks for the useful, informative update. It does substantiate much of what realjjj was saying.

Once again, sticking to the algorithm failures and staying away from all the peripheral finger-pointing, which has its own place but will not solve the design flaws, I'm not sure I buy the idea that the nature of this moving obstacle should have made such a big difference. Assuming it even did - some of this report seems to be conjecture, not necessarily fact.

Such a large object was on a collision course. That is all that should have mattered. Computing a collision course is very straightforward. There was no indication at all that Ms. Herzberg was traveling an erratic course in those last 6 seconds that would make the prediction of an imminent collision tricky. Those 6 seconds would have been ample time for the car to stop and let the "large obstacle" (it could have been a moose or a deer, for that matter) go on by. I mean really, it's silly to think that the computer algorithm should have made a different choice if it had "known" that this large obstacle on a collision course was carrying bags, or even wheeling a bike. Come now.
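To show just how straightforward that computation is, here is a minimal constant-velocity sketch in Python. The geometry and speeds are invented round numbers for illustration, not values from the incident data.

```python
# Constant-velocity collision-course check: is the obstacle inside the car's
# swept path at the same time the car reaches it? Numbers below are invented.
car_speed = 17.9          # m/s (~40 mph); the car travels along +x from x = 0
lane_half_width = 1.5     # m; the car's swept path is |y| <= 1.5

obs_x = 80.0              # obstacle 80 m ahead of the car
obs_y = -6.0              # 6 m to the right of the car's path
obs_vy = 1.4              # m/s, walking across the road toward the path

# Time window during which the obstacle occupies the car's swept path.
t_enter = (-lane_half_width - obs_y) / obs_vy
t_exit = (lane_half_width - obs_y) / obs_vy

# Where the car is during that window.
car_x_enter = car_speed * t_enter
car_x_exit = car_speed * t_exit

collision_course = car_x_enter <= obs_x <= car_x_exit
print(f"obstacle in path from t={t_enter:.1f}s to t={t_exit:.1f}s; "
      f"car covers {car_x_enter:.0f}-{car_x_exit:.0f} m in that window; "
      f"collision course: {collision_course}")
```

Because the pedestrian's course was not erratic, a simple constant-velocity prediction like this flags the collision several seconds out, which is exactly the point about the 6-second window.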

In fact, assuming that the video we saw from the on-board camera is representative of what a human would have seen, this should have been a perfect example of why autonomous driving is useful, and why it will most likely supplant manual driving in due course. The autonomous driving algorithm had six seconds to react. A fully alert human might have had, at the very best, something like 1.5 seconds. That's a huge difference.

Sidebar: I'd like to emphasize again that automation is not "for convenience," except perhaps in the very early stages. Automation soon becomes mandatory, as human control cannot cope with the advancing sophistication of the system.

Take a communications example. Telephone networks were designed, from their inception, to be circuit-switched. This allowed for manual control, as circuits were expected to be set up in a human-friendly amount of time, say several seconds, and to stay active for a human-friendly amount of time, at least minutes. Automating away human telephone operators was no doubt initially a cost-saving measure, but as more and more people got connected, automatic dialing became a necessity. It would have taken far too many human operators to keep up with the increasing load.

But don"t stop there! In the 1960s, packet switching, instead of circuit switching, was invented. There is no way on earth that a human operator can be involved in switching packets. The process is orders of magnitude too fast for humans, and only becoming more so as broadband speeds go up. The Internet is only possible because that human operator is out of the loop.

That describes the future of driving. If for no other reason, the roads we have will need to be used more efficiently than erratic human drivers can hope to achieve. There is a limit to how many new roads can be built, in urban and suburban areas especially, to cope with the increasing congestion.








imispgh (User Rank: Freelancer)
The problem here is the "experts" - Current process fatally flawed
5/27/2018 12:16:23 PM

Junko

Unfortunately, the "experts" cited here, especially Mr. Koopman and the NTSB, are wrong about the use of public shadow driving for AI development and testing. That is the root cause. Neither these systems nor this process should be used in the public domain.

It is a myth that the use of public shadow driving to develop autonomous vehicles will ever come close to actually creating one. You can never drive the one trillion miles or spend the more than $300B it would take, nor should anyone harm as many people as this process would harm in trying to do so. What happens when you move from benign and hyped scenarios to running thousands of accident scenarios thousands of times each?
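As a rough sanity check on that trillion-mile figure, here is a back-of-envelope calculation in Python. The fleet size and per-vehicle mileage are assumptions chosen for illustration; only the trillion-mile target comes from the comment.

```python
# Back-of-envelope on the one-trillion-mile figure (assumed fleet numbers).
target_miles = 1e12                 # the commenter's figure
fleet_size = 10_000                 # assumed number of test vehicles
miles_per_vehicle_year = 100_000    # assumed very heavy annual usage per vehicle

years = target_miles / (fleet_size * miles_per_vehicle_year)
print(f"{years:,.0f} years for a {fleet_size:,}-vehicle test fleet")  # -> 1,000 years
```

Even with generous assumptions, the timescale comes out in centuries, which is the commenter's argument for simulation.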

The answer is to leverage FAA practices and use aerospace/DoD level simulation.

Impediments to Creating an Autonomous Vehicle https://www.linkedin.com/pulse/impediments-creating-autonomous-vehicle-michael-dekort/

Autonomous Levels 4 and 5 will never be reached without Simulation vs Public Shadow Driving for AI - https://www.linkedin.com/pulse/autonomous-levels-4-5-never-reached-without-michael-dekort

DoT, NHTSA and NTSB are Enabling Autonomous Vehicle Tragedies - https://www.linkedin.com/pulse/dot-nhtsa-ntsb-enabling-autonomous-vehicle-tragedies-michael-dekort/

My name is Michael DeKort. I am a former systems engineer, engineering manager, and program manager for Lockheed Martin. I worked in aircraft simulation, was the software engineering manager for all of NORAD, and worked on the Aegis Weapon System and on C4ISR for DHS. I also worked in commercial IT and cybersecurity. I received the IEEE Barus Ethics Award for whistleblowing regarding the DHS Deepwater program post-9/11 - http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4468728

I am also a member of the SAE On-Road Autonomous Driving Validation & Verification Task Force.












