Whew, boy! Rick touched on a lot of tech minefields with his previous blog on the shortcomings of Tesla. One of the first is the idea of responsibility. Is the company that designed the “AI” at fault? This branches out to the question of what a reasonable expectation of an AI even is—and this is where Silicon Valley and the rest of the world are going to butt heads (Shut up, Beavis!).
I watched in horror the documentary about Elizabeth Holmes and the con she pulled. More recently, a friend of mine was applying to a promising startup and wanted me to look into its approach and science. As I read through the site and the bios, I saw a nearly incomprehensible string of buzzwords slapped together incorrectly and redundantly. I told him to steer clear if possible, and if not, to proceed with caution.
The problem is that the “move fast and break things” mentality has worked wonders in software, but now it is poking its head out into the daylight of physical applications such as self-driving cars and medical tech. What is an acceptable loss? What are our expectations? These areas are so new that we as a society have not really grappled with these questions . . . so if we accept that the AI is not going to be perfect, we need a captain to steer the ship. In the case of self-driving cars, this is literal.
I was pretty terrible at baseball as a kid. When I was lucky enough to be put on the field, the coaches always put me in right field, the spot where balls are least likely to go. I got so bored out there that the one time a ball did come my way, I did not have my glove on . . . If a car drives itself perfectly 99 percent of the time, how do we keep drivers alert for the other 1 percent? Will algorithms that analyze health records desensitize doctors to warning signs they would normally have registered? A study out of France found that simply turning on cruise control increased driver drowsiness.
Another interesting idea is that the technology will be ready in ten years. My thought is that it will not be this technology but an entirely different one. What we are doing now is machine learning, but what we need is AI. Machine learning needs data in the form of specific examples before it can begin to learn; AI would not need to see a specific example to adapt to a situation. We don’t have that yet, and maybe we should wait until we do before applying automation to the dangerous stuff. However, asking consumers to be patient could be a death knell for companies making “Jetsonesque” promises. A prominent example from the biotech field is gene therapy: the idea was full of promise, but it was rushed, and that rush delayed its implementation by decades.
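To make the machine-learning limitation above concrete, here is a minimal toy sketch (entirely hypothetical, not from any real self-driving system): a nearest-neighbor "learner" can only map a new input to labels it has already seen in its training examples. Faced with something genuinely novel, it has no way to say "I have never seen this before"; it just forces the input into the closest known category.

```python
def nearest_neighbor_label(training_data, query):
    """Return the label of the training example closest to the query.

    training_data: list of (features, label) pairs the model has seen.
    query: a tuple of feature values to classify.
    """
    best_label, best_dist = None, float("inf")
    for features, label in training_data:
        # Squared Euclidean distance between the query and this example.
        dist = sum((f - q) ** 2 for f, q in zip(features, query))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label


# Made-up training examples: the only situations the model has ever seen.
training_data = [
    ((0.0, 0.0), "stop sign"),
    ((1.0, 1.0), "pedestrian"),
]

# A query unlike anything in training still gets forced into a known label;
# the model cannot flag it as a novel situation.
print(nearest_neighbor_label(training_data, (10.0, 10.0)))  # prints "pedestrian"
```

The point of the sketch is that everything the model "knows" lives in its examples; adapting to a situation it was never shown is exactly what this kind of system cannot do.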
All this being said, if Rick is truly this unhappy with his Tesla, I am happy to switch cars with him to alleviate his pain!