Trusting in an AI that could kill you

Many car companies are racing to bring self-driving cars to consumers and usher in the future, but how much should you actually trust software that is being tested on public roads, with human lives as an acceptable error margin? In the rush to top the competition and bring the "car of the future" to market, companies have overlooked vital flaws.

Though self-driving cars are able to slow themselves to avoid collisions, until recently many companies did not enable their cars to brake automatically. The safety standard was only changed to require emergency braking in self-driving cars after an Uber test vehicle struck a pedestrian it had failed to recognize in the street. It may be commendable that the standard was changed after the incident, but is it really fair that someone had to lose their life to establish something as simple as a self-driving car needing the ability to brake in an emergency? These companies have massive amounts of funding and could do far more controlled testing to discover issues like this; instead, they send a one-ton hunk of steel onto the road with an underdeveloped driving system and debug their software's issues with human lives. That might be tolerable if the software itself were well developed, but the software seems to put its developers to shame.
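To show how simple the missing safeguard is, here is a minimal, purely illustrative sketch of a time-to-collision check of the kind an automated emergency-braking system runs constantly. The function name and the 1.5-second threshold are hypothetical assumptions for illustration, not any vendor's actual logic.

```python
def should_emergency_brake(distance_m: float, closing_speed_mps: float,
                           ttc_threshold_s: float = 1.5) -> bool:
    """Return True when the time until impact drops below the threshold.

    Hypothetical example values, not a real vehicle's parameters.
    """
    if closing_speed_mps <= 0:  # the object is not getting closer
        return False
    time_to_collision = distance_m / closing_speed_mps
    return time_to_collision < ttc_threshold_s

# An obstacle 25 m ahead, closing at 20 m/s, is 1.25 s from impact:
print(should_emergency_brake(distance_m=25.0, closing_speed_mps=20.0))  # True
```

A check like this is cheap to compute; the failure in the Uber case was not that the condition was hard to detect, but that acting on it was disabled.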

The crash involving an Uber self-driving car exposed major holes in the technology responsible for navigating 1.5-ton machines of a kind that already kills nearly 1.25 million people per year in accidents. The pedestrian who was hit was detected only six seconds before impact; the system classified her first as an unknown object, then as a car, then as a bicycle, changing its predicted path for her with every reclassification. If a car can't even tell a pedestrian apart from a vehicle or a stationary object, it's hard to believe it can be put onto our roads without inadvertently causing a massacre. The car eventually determined, 1.3 seconds before impact, that it would need to stop, and then proceeded to do nothing as it slammed into the pedestrian. Waymo recently released a video of its self-driving car's view as it yielded to passing cyclists in order to show off its progress, but the software shown in the video appears highly unstable. The cyclist's predicted path flickered repeatedly, and though a bit of instability can be fine, it flickered multiple times to positions the car would not flag as a potential collision; had the car been traveling at full speed, the cyclist would have been hit. The predicted path also passed through a parked car at times, showing that the software does not always account for obstacles when predicting the paths of other road users. Issues like this would be fine in a controlled test environment, but instead this software is being pushed onto the street.
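The flickering matters because a planner that re-plans around a "new" object class every frame never settles on a stable prediction. A common mitigation is temporal smoothing of per-frame labels; the sketch below uses a simple majority vote over recent frames. This is a generic illustration under my own assumptions, not the approach Uber or Waymo actually uses.

```python
from collections import Counter, deque

class SmoothedClassifier:
    """Majority-vote smoothing over a sliding window of per-frame labels.

    Hypothetical illustration of temporal smoothing; window size is arbitrary.
    """

    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)

    def update(self, raw_label: str) -> str:
        """Feed the latest per-frame label; return the current majority label."""
        self.history.append(raw_label)
        return Counter(self.history).most_common(1)[0][0]

# The Uber system's sequence of raw classifications for the pedestrian:
tracker = SmoothedClassifier()
for frame_label in ["unknown", "car", "bicycle", "bicycle", "bicycle"]:
    smoothed = tracker.update(frame_label)
```

Smoothing alone is no fix, of course: it trades flicker for lag, and the deeper problem was that none of the three classifications was "pedestrian" at all.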

In the rush to release and perfect self-driving cars before the competition, some safety steps have been rushed through, and lawmakers can't keep up with the speed at which autonomous vehicles are being developed. Most safety restrictions are currently put in place by the companies themselves rather than by the government, which leaves loopholes and places with no restrictions at all. One company created a self-driving device for cars that cost under $1,000, but its product launch had to be aborted after it received a letter from the National Highway Traffic Safety Administration. Though it's good that the administration stepped in, many of its concerns involved nothing that broke any existing law, and some covered things it had allowed larger companies to do. Because classifications and regulations for self-driving cars are still loose, some companies can get away with things that others cannot. After the crash between its self-driving car and a pedestrian walking with a bike, Uber was not held legally responsible, even though the system failed to warn the driver that emergency braking was necessary and also failed to brake. It's rather hard to support a product when companies can legally get away with a lethal failure of software. Not only are there major issues with the current software, but there are serious problems with where it's planned to go, and most conversations regarding its ethics are on completely the wrong topic.

Most ethical conversations about self-driving cars tend to focus on whose safety should be prioritized in an impending crash. MIT created a so-called "Moral Machine" for self-driving cars, which compiled surveys from multiple countries about who respondents would prefer to be saved. From what I read, it amounts to a biased system for deciding who should be killed, which is really not something we should release onto our streets. Alongside that questionable research focus, many companies claim that a 5G network connecting self-driving cars would be an ideal future step. If done right, I personally believe it could turn out rather well. However, hackers have already found ways to take over ordinary cars; self-driving cars raise that concern even further, and putting everything on a cellular network would make it far too easy and dangerous to access. You can currently buy a cell jammer online for around $20; if nothing is done to fix the current bugs and exploits in these networks, you could cause a massive crash for $20 with almost no effort, and it would be rather hard to trace back to you. I am personally a fan of the concept of assisted driving, but everyone jumped way too far ahead with self-driving cars.

Car accidents are a major cause of death in the United States, but releasing unstable software onto our streets is nowhere near the best way to prevent them. In the excitement and the business race to build a self-driving car, everyone skipped past building simpler driver-assistance features that would actually prevent incidents. With loose laws, major issues, too little testing, and known flaws that remain unfixed, self-driving cars just don't seem like something that will realistically help for the next few decades. We are butchering our own environment, yet instead of making electric and more efficient cars cheaper, companies are focused on rushing unstable million-dollar devices into the future for profit.
