There’s Just One Problem: AI Isn’t Intelligent, and That’s a Systemic Risk (2024)


Authored by Charles Hugh-Smith via oftwominds,

Mimicry of intelligence isn’t intelligence, and so while AI mimicry is a powerful tool, it isn’t intelligent.

The mythology of Technology has a special altar for AI, artificial intelligence, which is reverently worshiped as the source of astonishing cost reductions (as human labor is replaced by AI) and the limitless expansion of consumption and profits. AI is the blissful perfection of technology’s natural advance to ever greater powers.

The consensus holds that the advance of AI will lead to a utopia of essentially limitless control of Nature and a cornucopia of leisure and abundance.

If we pull aside the mythology’s curtain, we find that AI mimics human intelligence, and this mimicry is so enthralling that we take it as evidence of actual intelligence. But mimicry of intelligence isn’t intelligence, and so while AI mimicry is a powerful tool, it isn’t intelligent.

The current iterations of Generative AI–large language models (LLMs) and machine learning–mimic our natural language ability by processing millions of examples of human writing and speech and extracting what algorithms select as the best answers to queries.
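The statistical mimicry described above can be illustrated with a deliberately tiny sketch: a bigram "language model" that picks each next word purely from observed word-pair frequencies. The corpus and function names here are illustrative, not from any real system; actual LLMs are vastly larger, but the principle of pattern extraction without comprehension is the same.

```python
import random
from collections import defaultdict

# A toy bigram model: it learns only which word tended to follow which,
# with no notion of meaning, truth, or context.
corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count observed continuations for each word.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=6, seed=0):
    """Emit a plausible-looking sequence by sampling observed continuations."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

print(generate("the"))  # fluent-looking output, produced with zero understanding
```

The output reads like grammatical English only because the training text was grammatical English; the program has extracted form, not meaning.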

These AI programs have no understanding of the context or the meaning of the subject; they mine human knowledge to distill an answer. This is potentially useful but not intelligence.

The AI programs have limited capacity to discern truth from falsehood, hence their propensity to hallucinate fictions as facts. They are incapable of discerning the difference between statistical variations and fatal errors, and layering on precautionary measures adds additional complexity that becomes another point of failure.

As for machine learning, AI can project plausible solutions to computationally demanding problems such as how proteins fold, but this brute-force computational black box is opaque and therefore of limited value: the program doesn’t actually understand protein folding in the way humans understand it, and we don’t understand how the program arrived at its solution.

Since AI doesn’t actually understand the context, it is limited to the options embedded in its programming and algorithms. We discern these limits in AI-based apps and bots, which have no awareness of the actual problem. For example, our Internet connection is down due to a corrupted system update, but because this possibility wasn’t included in the app’s universe of problems to solve, the AI app/bot dutifully reports the system is functioning perfectly even though it is broken. (This is an example from real life.)

In essence, every layer of this mining / mimicry creates additional points of failure: the inability to identify the difference between fact and fiction or between allowable error rates and fatal errors, the added complexity of precautionary measures and the black-box opacity all generate risks of normal accidents cascading into systems failure.


There is also the systemic risk generated by relying on black-box AI to operate systems to the point that humans lose the capacity to modify or rebuild the systems. This over-reliance on AI programs creates the risk of cascading failure not just of digital systems but of the real-world infrastructure that now depends on digital systems.

There is an even more pernicious result of depending on AI for solutions. Just as the addictive nature of mobile phones, social media and Internet content has disrupted our ability to concentrate, focus and learn difficult material–a devastating decline in learning for children and teens–AI offers up a cornucopia of snackable factoids, snippets of coding, computer-generated TV commercials, articles and entire books that no longer require us to have any deep knowledge of subjects and processes. Lacking this understanding, we’re no longer equipped to pursue skeptical inquiry or create content or coding from scratch.

Indeed, the arduous process of acquiring this knowledge now seems needless: the AI bot can do it all, quickly, cheaply and accurately. This creates two problems: 1) when black-box AI programs fail, we no longer know enough to diagnose and fix the failure, or do the work ourselves, and 2) we have lost the ability to understand that in many cases there is no answer or solution that is the last word: the “answer” demands interpretation of facts, events, processes and knowledge bases that are inherently ambiguous.

We no longer recognize that the AI answer to a query is not a fact per se, it’s an interpretation of reality that’s presented as a fact, and the AI solution is only one of many pathways, each of which has intrinsic tradeoffs that generate unforeseeable costs and consequences down the road.

To discern the difference between an interpretation and a supposed fact requires a sea of knowledge that is both wide and deep, and in losing the drive and capacity to learn difficult material, we’ve lost the capacity to even recognize what we’ve lost: those with little real knowledge lack the foundation needed to understand AI’s answer in the proper context.

The net result is we become less capable and less knowledgeable, blind to the risks created by our loss of competency while the AI programs introduce systemic risks we cannot foresee or forestall. AI degrades the quality of every product and system, for mimicry does not generate definitive answers, solutions and insights, it only generates an illusion of definitive answers, solutions and insights, which we foolishly confuse with actual intelligence.


While the neofeudal corporate-state cheers the profits to be reaped by culling human labor on a mass scale, the mining / mimicry of human knowledge has limits. Relying on the AI programs to eliminate all fatal errors is itself a fatal error, and so humans must remain in the decision loop (the OODA loop of observe, orient, decide, act).
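Keeping a human in the decision loop can be sketched as a simple routing rule: any action the program proposes for a safety-critical step goes to a person instead of being executed automatically. The action names and confidence threshold below are purely illustrative assumptions, not drawn from any real deployment.

```python
# Illustrative: which proposed actions must never be fully automated.
SAFETY_CRITICAL = {"shut_down_pump", "administer_dose", "override_brakes"}

def decide(action: str, model_confidence: float) -> str:
    """Route a proposed action to automation or to a human reviewer."""
    if action in SAFETY_CRITICAL:
        return "escalate_to_human"      # life-safety: a person always decides
    if model_confidence < 0.9:
        return "escalate_to_human"      # statistical doubt: a person decides
    return "execute_automatically"      # routine and confident: let it run

print(decide("restart_router", 0.97))   # routine, confident -> automated
print(decide("administer_dose", 0.99))  # critical -> human, regardless of score
```

Note the design choice: high model confidence is not sufficient to bypass the human gate on critical actions, precisely because the program cannot tell a statistical variation from a fatal error.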

Once AI programs engage in life-safety or healthcare processes, every entity connected to the AI program is exposed to open-ended (joint and several) liability should injurious or fatal errors occur.

If we boil off the mythology and hyperbole, we’re left with another neofeudal structure: the wealthy will be served by humans, and the rest of us will be stuck with low-quality, error-prone AI service with no recourse.

The expectation of AI promoters is that Generative AI will reap trillions of dollars in profits from cost savings and new products / services. This story doesn’t map onto the real world, in which every AI software tool is easily copied / distributed and so it will be impossible to protect any scarcity value, which is the essential dynamic in maintaining the pricing power needed to reap outsized profits.

There is little value in software tools that everyone possesses unless a monopoly restricts distribution, and little value in the content auto-generated by these tools: the millions of AI-generated songs, films, press releases, essays, research papers, etc. will overwhelm any potential audience, reducing the value of all AI-generated content to zero.

The promoters claim the mass culling of jobs will magically be offset by entire new industries created by AI, echoing the transition from farm labor to factory jobs. But the AI dragon will eat its own tail, for it creates few jobs or profits that can be taxed to pay people for not working (Universal Basic Income).

Perhaps the most consequential limit to AI is that it will do nothing to reverse humanity’s most pressing problems. It can’t clean up the Great Pacific Trash Gyre, or limit the 450 million tons of mostly unrecycled plastic spewed every year, or reverse climate change, or clean low-Earth orbits of the thousands of high-velocity bits of dangerous detritus, or remake the highly profitable waste-is-growth Landfill Economy into a sustainable global system, or eliminate all the sources of what I term Anti-Progress. It will simply add new sources of systemic risk, waste and neofeudal exploitation.


