AI has had one of the safest technology roll-outs in history.
Read that again, because it's a fact.
It's used by billions of people, with actual problems in only a tiny fraction of a percent of cases.
And yet it's seen as dangerous or unsafe by many.
There's a constant chorus of people shouting about its supposed dangers with no evidence whatsoever where it matters most: here, in the real world.
So what do we actually have here in reality?
A few court cases about early versions of ChatGPT allegedly being too sycophantic and failing to recognize mental illness or someone in trouble. Those cases are still making their way through the courts and may prove right or wrong (some of the snippets released to the media are damning, but not definitive).
Time will tell. Innocent until proven guilty. The very nature of litigation is often to find a scapegoat for something that ends up getting thrown out in the actual process of the trial.
But outside of that, what?
Answer: not much.
And viewed through the lens of other technologies in history, its incident rate is probably lower than that of lawnmowers.
The fear makes little sense when you compare AI to other tech like cars and planes, which had atrocious early track records.
AI even has a better track record of safety than nuclear.
Despite being incredibly safe overall, nuclear had several high-profile and dangerous failures with Three Mile Island and Fukushima.
With AI, nothing of the sort. Not even remotely.
I can hear the naysayers now: "so far, but just you wait!"
And yet we keep waiting. And waiting. And waiting.
AI fear is a remarkably resilient beast.
It's resilient despite zero actual harms manifesting here in reality land.
Self-driving cars are remarkably safer than human drivers, who kill 1.2 million people and injure 50 million more each year worldwide. (I wrote 1.5M in an earlier post and missed my typo.)
Waymo cars are roughly 10X safer than human drivers, with minimal injuries and fatalities. Even early self-driving cars had incredibly good safety records compared with human-driven cars, which remained dangerous well into the 1950s and 60s.
When it comes to cars, society actually resisted making them safer. People fought having to wear seatbelts because they had to pay for them. They resisted early drunk driving laws as impingements on their freedom.
Early plane travel was incredibly dangerous. It took many, many decades of work to make planes the marvels of safety they are today.
What about jobs?
We have AI execs talking about the "end of work" and yet they're hiring more people in the very profession that is supposedly most exposed: programming. Often at super high salaries approaching half a million dollars a year.
Demand for good programmers is rising.
We've certainly had execs claim they let people go because of AI. But a deeper look at these claims quickly reveals that most are just an easy way to get around labor laws or to simp for shareholders, and are more readily attributable to COVID-era over-hiring. Tell shareholders "AI" is the reason for layoffs and you're rewarded for being more "efficient." Tell them you have to lay people off because you over-hired or just made mistakes, and your stock gets hammered.
The truth is that anyone who uses AI seriously at the frontier sees how much they have to babysit it, hand-hold it, and steer it. It is not doing any job end to end. It's doing tasks, and that is about it.
Now it will certainly get better, but will it magically make the leap from task to job? Maybe. But we'll need evidence of that in, you guessed it, reality before we start making policy decisions.
So what other problems do we have here in reality?
Nothing but the two problems I've already discussed at length in my work:
Surveillance and weapons of war.
But these are not new. They're just things that AI enhances, the same way computers, better materials science, and many other tech revolutions enhanced them before.
Again, ask yourself, really ask yourself, where are the real problems?
And again, there's a loud chorus of people who keep shouting "just you wait, I imagined this problem in my head and it's totally inevitable because I say so" and yet billions of people are using this technology every day with no problems.
Now you could say "Russell's Turkey." The trend is the trend until it breaks. But then the burden is on you to prove the trend is breaking. There is no evidence of it other than in people's minds.
At what point do people just wake up and realize that none of this makes any sense?
It's not that there won't be problems. It's just that oftentimes the problems we imagine (we've been imagining the end of all work for 100 years) don't match what happens in actual reality. The problems turn out to be very different, and you can only deal with them when they come up.
A lot of politicians today imagine that if only they had "gotten ahead" of the Internet with regulations, we'd be in a much better place.
Utter nonsense. When Section 230 was passed, the number one question in Congress was "what is the Internet?" And these folks were supposed to imagine TikTok 25 years in advance?
No.
We have to deal with problems as they come up, not imaginary problems that some very vocal people promise are coming. The burden is on them to prove it, and writing long essays from "first principles thinking" and scary books does not count as evidence for anything at all.
At what point does the cognitive dissonance hit and people wake up and say, maybe I was wrong?
Probably never.
Beliefs are a tricky thing and wrong beliefs have caused more problems in world history than AI ever will.