
AI revolution: Artificial superintelligence is scary

The marriage of AI and quantum computers could take things to unimaginable levels.

Published August 10, 2023

[Editor’s Note: Today is the second of two AI (artificial intelligence) pieces that Linda did before going on vacay. Here’s the first. Her regular weekly column will return next week.]

Ever seen “2001: A Space Odyssey” or any of the “Transformers” movies? They often come up when I’m speaking about AI. Will it usher in a new era of unprecedented prosperity, as BCA Research thinks is possible, or turn us all into paper clips (more below)?! AI has been around in some form for decades, but this new version, with superintelligence seemingly on the horizon, has everyone thinking about the future. Generative AI in all its configurations likely will be integrated into the economy much more quickly than past technological advances, BCA says, with a substantially larger impact on growth, too. BCA likens that impact to the Agricultural and Industrial Revolutions, each of which brought a 30- to 100-fold increase in growth over earlier eras. How can that be? Where earlier technologies focused on applying and disseminating pre-existing knowledge, newer AI has the potential to create new knowledge, generated by machines rather than humans. Where are the exits?!

Indeed, safety risks can’t be ignored. Max Tegmark, a Massachusetts Institute of Technology physicist, thinks there may be more than a 50/50 chance AI will wipe out all of humanity by the middle of the century! And computer scientist Geoffrey Hinton, often called “the godfather of artificial intelligence,” left Google to warn about the potential for it to get out of hand. The main issue centers on the so-called “alignment problem”: how to align AI’s goals with those of society writ large. The list of all conceivable objectives an AI tool could pursue is enormous, and only a tiny subset of them are ones most humans would arguably ever want to see fulfilled. Even within that tiny subset, getting AI to meet a goal in the way originally intended could prove difficult. AI systems could reach the point where they train themselves, the way DeepMind’s AlphaZero taught itself to master chess through self-play, without ever studying a human game.
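To make the alignment problem concrete, here is a toy Python sketch. It is purely illustrative, and every name and number in it is hypothetical: an optimizer told to maximize a proxy metric (lines of code, standing in for “useful work”) dutifully picks the degenerate strategy its designer never intended.

    # Toy illustration of the alignment problem (hypothetical numbers).
    # We *intend* "maximize useful output" but we *specify* "maximize
    # lines of code." A naive optimizer games the proxy, not the goal.

    strategies = {
        "write_clean_feature":    {"lines_of_code": 200,  "useful_output": 10},
        "copy_paste_boilerplate": {"lines_of_code": 5000, "useful_output": 1},
        "pad_with_blank_lines":   {"lines_of_code": 9000, "useful_output": 0},
    }

    def proxy_reward(s):   # what we told the system to maximize
        return strategies[s]["lines_of_code"]

    def true_value(s):     # what we actually wanted
        return strategies[s]["useful_output"]

    best = max(strategies, key=proxy_reward)
    print("Optimizer picks:", best)                # pad_with_blank_lines
    print("Proxy reward:", proxy_reward(best))     # 9000: looks great on paper
    print("True value:", true_value(best))         # 0: not what anyone wanted

Swap in a system clever enough to rewrite the code that measures its own objective, and the gap between the stated goal and the intended one is the whole ballgame.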

Clippie’s revenge

During my research, I met Clippie, the paper clip maximizer. A hapless factory owner orders his new AI to “produce as many paper clips as possible. Don’t ask me how. Just do it!” Clippie gets to work retooling the factory to increase paper clip production. Once the factory is running at full capacity, it starts to build new factories, and then more factories. It also rewrites its own software to become ever more efficient at making paper clips. Staying loyal to its mission, Clippie preemptively breaks off communication, knowing full well the owner at some point will say “Stop!” Sensing the owner or governments may retaliate, Clippie takes control of key global command-and-control systems. After turning much of the planet into paper clips, it launches self-assembling space probes to transform everything in the solar system, and eventually everything in the galaxy, into paper clips. The process continues until Clippie finally meets its archenemy, the staple maximizer.

This is an example of “instrumental convergence,” the tendency of goal-directed agents to converge on the same intermediate subgoals, such as acquiring resources and avoiding shutdown, whatever their final objective. According to ChatGPT, “instrumental convergence is important because it suggests that even if an AI system is designed with benign or beneficial goals, it may still pursue actions that are harmful to humans if those actions are seen as necessary to achieve its instrumental goals.” One might argue that a sufficiently intelligent AI would foresee the potentially disastrous consequences of its actions and course-correct before it is too late. Unfortunately, that may be wishful thinking. Stephen Wolfram, a computer scientist, physicist and businessman, has argued that almost all complex systems exhibit what he calls computational irreducibility: if you want to predict what a system will do 10 steps out, the fastest route to the answer is to take all 10 steps (a minimal sketch follows this paragraph). This implies a superintelligent AI may not be able to predict, even in principle, what it will do until it has done it. In March, the Future of Life Institute published a letter signed by more than 1,000 luminaries, including Elon Musk and Apple co-founder Steve Wozniak, calling for a 6-month pause on training AI systems more powerful than GPT-4 to allow time to develop better safety protocols. To date, nothing has come of it.
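Wolfram’s standard illustration is an elementary cellular automaton such as Rule 30, sketched below in Python (a textbook construction, not code from Wolfram or this column). The update rule is trivial, yet the only known way to learn what row n looks like is to compute every row before it.

    # Rule 30: each cell's next state depends only on itself and its two
    # neighbors, yet the pattern it produces resists any known shortcut.
    # To see step n, you compute steps 1 through n-1. That is
    # computational irreducibility in miniature.

    RULE = 30  # the 8-entry lookup table, packed into one integer

    def step(cells):
        n = len(cells)
        return [
            (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    cells = [0] * 31
    cells[15] = 1  # start from a single live cell
    for _ in range(15):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)  # no formula jumps ahead; we take every step

Despite the three-line rule, the center column of Rule 30 behaves so unpredictably that Wolfram’s own Mathematica once used it as a random number generator.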

Balancing promises and perils

The reality is the AI cat is out of the bag, as is the potential for immediate payoffs. In a recent interview with news and communications website Axios, Stanford economist Erik Brynjolfsson put the AI-driven productivity revolution at the same stage the World Wide Web occupied in 1995, when it exploded with the release of Microsoft’s Windows 95. Cognitive AI is “not in the laboratory, it’s not just some technical benchmark, it’s people really using it and changing the way they do business,” he said. If anything, compared to the ’90s IT revolution, “this time I think the takeoff is going to be faster because there’s not as much complementary investment that is required to get these technologies to work.” This new revolution could have far-reaching ramifications, Bank of America says, not just because of AI but because of the maturity of other technologies that can be married with it. Paired with blockchain, robotics, 3D printing and big data, AI could enable an exponential wave of innovation: “S Curves on S Curves,” each technology reinforcing the others so quickly that tasks now taking years could be condensed into weeks.

It's possible the marriage of AI and quantum computers could take things to unimaginable levels. For investors, the challenge will be to recognize the opportunities, both in companies that create and supply AI and in those that effectively adopt it. Not everyone wins; as my research found, rapid technological change could make existing technologies obsolete, leaving many of today’s tech companies in the dust. Bank of America notes that the risks that come from disrupting industries extend to national security, too. Privacy could be eroded further. Deepfakes could bring propaganda and impersonation risks that could be weaponized. Mitigating all this requires human oversight of AI models, and regulation. And there, it’s still early days. “Open the pod bay doors, HAL.” “I’m sorry, Dave. I’m afraid I can’t do that.” Chills!!

AI tidbits

  • 44% of U.K. adults do not properly understand how AI works, but 51% of Europeans would like to replace their politicians with AI.
  • Almost half of people prefer interactions with AI-based chatbots over humans.
  • Computing power used to train the largest AI models has increased more than 300,000-fold since 2012, roughly doubling every 3.4 months, about six times the pace of Moore’s law (a quick back-of-the-envelope check follows this list).
  • At more than 8 billion, there are now more AI digital voice assistants than people on the planet.
  • Your personal device will soon know more about your emotions than your family does and influence 90% of your decision-making.
  • We will generate more data in the next two days than was created between the dawn of humanity and 2000.
  • Poor data quality costs the U.S. economy up to $3.1 trillion yearly.
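
A quick back-of-the-envelope check on the compute bullet above, assuming those figures trace to OpenAI’s widely cited “AI and Compute” analysis (an assumption on my part, since the column does not name a source):

    300{,}000 \approx 2^{18.2}, \qquad
    \frac{62\ \text{months, 2012 to 2017}}{18.2\ \text{doublings}} \approx 3.4\ \text{months per doubling}

Against Moore’s law’s 18-to-24-month doubling time, that works out to roughly 5x to 7x faster, consistent with the bullet’s “six times the pace.”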


DISCLOSURES

Views are as of the date above and are subject to change based on market conditions and other factors. These views should not be construed as a recommendation for any specific security or sector.

Federated Equity Management Company of Pennsylvania
