Rising Cyber Threat of AI: The Kiplinger Letter

Security experts warn that generative AI brings new risks with no clear defenses. With AI's rapid adoption, businesses are vulnerable.

AI or artificial intelligence security risk: Green robots representing good with a single red or bad robot in the middle of the pack
(Image credit: Getty Images)

To help you understand how AI and other new technologies are affecting energy consumption, trends in this space and what we expect to happen in the future, our highly experienced Kiplinger Letter team will keep you abreast of the latest developments and forecasts. (Get a free issue of The Kiplinger Letter or subscribe). You'll get all the latest news first by subscribing, but we will publish many (but not all) of the forecasts a few days afterward online. Here’s the latest…

A new and rising cybersecurity threat: Vulnerabilities from artificial intelligence as companies increasingly adopt generative AI, the tech behind ChatGPT and many other chatbots. 

AI threats join a slew of other cyber risks. Though new AI security risks are known there are no surefire ways to address them. One big issue is the massive amounts of data needed to train complex AI models so users can create text, code, images, video, data analyses, charts, etc., by writing questions or prompts in plain English. 

Subscribe to Kiplinger’s Personal Finance

Be a smarter, better informed investor.

Save up to 74%
https://cdn.mos.cms.futurecdn.net/hwgJ7osrMtUWhk5koeVme7-200-80.png

Sign up for Kiplinger’s Free E-Newsletters

Profit and prosper with the best of expert advice on investing, taxes, retirement, personal finance and more - straight to your e-mail.

Profit and prosper with the best of expert advice - straight to your e-mail.

Sign up

The AI chatbots can leak company info in a data breach. Sensitive company info is used for internal AI chatbots, including financial data, customer info, product research and legal files. Publicly available AI chatbots may pose more risk because now an outside party controls company data. 

Hackers can even trick the AI model to get it to leak sensitive data with clever phrasing or repeated questions…a “prompt injection attack”, even if there are guardrails in place. Sources of data, programming code, company secrets, etc., are at risk. 

Cyber pros are not confident in the current security of generative AI tools — both internal company apps and external ones from Google, ChatGPT and others. Some of the top recommendations for securing AI tools:

  • Vet vendors closely.
  • Have a clear policy on generative AI, including what apps are approved and in use, who is using them and what data are involved.
  • Consider blocking certain outside apps.
  • Vendors such as Microsoft, Forcepoint and Palo Alto Networks can scan AI models for security risks and track sensitive data and employee use.

It’s a fast-growing area...

Meanwhile, hackers are weaponizing emerging AI tools for cyberattacks, such as creating malicious software or sophisticated email phishing operations. Criminals can do this without technical know-how — just ask an AI chatbot for help. 

Other trends to watch:

  • An increase in supply chain attacks, where hackers target third-party software vendors and regular suppliers to steal the info they hold. 
  • New legal liability for security leaders related to SEC regulations and lawsuits. More companies are extending directors' and officers' insurance to security execs.
  • Companies spending more to battle deepfakes — AI-manipulated media that put executives or customers at risk. Defensive tools include Reality Defender, a deepfake detection system.
  • Ransomware remains a big problem, with hackers trying to lock down data and extort a payment. Attackers are becoming more evasive and persistent, too. 

Businesses should continue to emphasize tried-and-true best practices — patching software regularly, two-factor authentication, employee security training, incident response plans, etc. Plus, they should always prioritize security when adopting new AI tools.

This forecast first appeared in The Kiplinger Letter, which has been running since 1923 and is a collection of concise weekly forecasts on business and economic trends, as well as what to expect from Washington, to help you understand what’s coming up to make the most of your investments and your money. Subscribe to The Kiplinger Letter.

Related Content

John Miley
Senior Associate Editor, The Kiplinger Letter

John Miley is a Senior Associate Editor at The Kiplinger Letter. He mainly covers technology, telecom and education, but will jump on other important business topics as needed. In his role, he provides timely forecasts about emerging technologies, business trends and government regulations. He also edits stories for the weekly publication and has written and edited e-mail newsletters.

He joined Kiplinger in August 2010 as a reporter for Kiplinger's Personal Finance magazine, where he wrote stories, fact-checked articles and researched investing data. After two years at the magazine, he moved to the Letter, where he has been for the last decade. He holds a BA from Bates College and a master’s degree in magazine journalism from Northwestern University, where he specialized in business reporting. An avid runner and a former decathlete, he has written about fitness and competed in triathlons.