The Real AI Threat Comes from People, Not Computers
Despite the flood of sensationalist headlines, the likelihood of an AI robot apocalypse terminating your existence is low. Death by countless e-paper cuts, however, is not entirely out of the realm of possibility.

AI is not exactly a novel concept. If one defines it as any heuristic algorithm performing tasks traditionally carried out by humans, its roots can be traced back at least to the 1970s.

Classic illustrations include mail-sorting algorithms utilized by postal services. More contemporary examples that you’ve likely interacted with for over a decade are Google Maps and Amazon’s recommender system.

Given this longstanding presence, it should be less awe-inspiring when a company heralds its latest AI-powered innovation. Seen in that light, AI appears less intimidating, and broad calls to regulate it more misguided.

However, it’s important to acknowledge that significant advancements have occurred in AI over the last five to ten years, particularly in generative AI. This area has gained considerable attention due to its ability to generate impressive visuals and provide quick summaries on almost any topic, though these outputs aren’t always reliable for factual accuracy or political neutrality.

While there are risks associated with AI, they are not necessarily the ones that spring to mind first. The genuine threat isn’t an AI akin to Skynet gaining sentience and deciding that eradicating humans will solve the world’s problems.

Instead, the danger lies in the faith we place in AI, the often opaque metrics employed to market AI solutions, and a collective eagerness to adopt AI to seem modern or tech-savvy, even when it compromises privacy, freedom, or convenience.

For those who build AI programs, "working" means something different than it does to the general public. A certain level of error is expected from the start.

Take, for instance, a facial recognition program intended to identify criminals. Developers typically track several metrics: how often the program correctly flags a target (a true positive), correctly dismisses a non-target (a true negative), flags someone it shouldn't (a false positive), or misses someone it should have caught (a false negative).
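
To make those metrics concrete, here is a minimal sketch in Python using invented counts; it illustrates how a program can post a headline accuracy figure in the high nineties while a large share of the people it flags are not actually targets.

```python
# Hypothetical scoring of a facial recognition classifier.
# All counts are invented for illustration.
true_positives = 90     # actual targets the program correctly flagged
false_negatives = 10    # actual targets the program missed
true_negatives = 9_700  # non-targets correctly dismissed
false_positives = 200   # innocent people wrongly flagged

total = true_positives + false_negatives + true_negatives + false_positives

accuracy = (true_positives + true_negatives) / total
recall = true_positives / (true_positives + false_negatives)
precision = true_positives / (true_positives + false_positives)

print(f"Accuracy:  {accuracy:.1%}")   # 97.9% -- the number that ends up in the brochure
print(f"Recall:    {recall:.1%}")     # 90.0% of real targets are caught
print(f"Precision: {precision:.1%}")  # ~31% of flagged people are actual targets
```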

Similar metrics might be used to gauge the effectiveness of programs designed to flag alleged “misinformation” or narrowly defined indicators of someone being distracted in school or while driving.

Yet deciding how accurate is "accurate enough" to declare that a program "works," and whether "works" means it actually achieves a broader objective, falls to business executives. The philosophical question of whether the repercussions of a false positive are acceptable at all may be disregarded entirely.

Rather than thoughtfully contemplating these implications, many people idolize anything a marketing team loosely labels as AI, eagerly ceding their decision-making and agency to polished versions of fallible algorithms that few comprehend.

Therefore, while you probably won't be engaging in shootouts with Terminators anytime soon, finding yourself pulled over because an "accurate enough" facial recognition program misidentified you as a criminal, or because your personal digital assistant decided you were too distracted to drive after your eyes briefly left the road, doesn't seem far-fetched.
