Use Natural Intelligence Before Artificial Intelligence
Friday, August 8, 2025

The pace of innovation in Artificial Intelligence (AI) is astonishing, and it has led to a steady increase in organizations adopting or developing their own AI solutions.

Several healthcare and customer service organizations are using AI technology to streamline business processes by automating tasks once performed by humans, and early adoption has produced noteworthy cost savings as a byproduct.

Not all early adopters of AI were able to reap these rewards. Because of AI’s inherent security risks, some organizations experienced unplanned business disruptions, significant reputational damage, and financial loss. For example, when Samsung employees used ChatGPT for internal code review purposes, they accidentally leaked confidential information, which resulted in Samsung banning the use of generative AI tools.

Is It Time to Embrace AI, and Do Its Strengths Outweigh Its Weaknesses?

According to a publicly accessible AI solution, its most significant information security risks are:

  1. Phishing Attacks
  2. Ransomware
  3. Advanced Persistent Threats (APTs)
  4. Zero-Day Exploits
  5. Man-in-the-Middle (MitM) Attacks
  6. Insider Threats
  7. DDoS Attacks
  8. Misconfigured Access Controls
  9. SQL Injection Vulnerabilities
  10. AI-Enabled Attacks

Most of these attack methods have been around for years, and none should be taken lightly: each can expose an organization to unauthorized access to its network and information systems. In turn, unexpected system downtime, significant disruption to business services, reputational damage, and financial loss could result.
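To make one of the listed threats concrete, consider SQL injection (item 9 above). The standard defense is parameter binding, which keeps user input as data rather than executable SQL. Below is a minimal illustrative sketch using Python's built-in sqlite3 module and a hypothetical patients table (the table and data are invented for the example).

```python
import sqlite3

# In-memory demo database with a hypothetical patients table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE patients (id INTEGER, name TEXT)")
conn.execute("INSERT INTO patients VALUES (1, 'Alice'), (2, 'Bob')")

def find_patient(name):
    # The ? placeholder binds user input as a literal value, so an
    # injection payload cannot alter the structure of the query.
    cur = conn.execute("SELECT id, name FROM patients WHERE name = ?", (name,))
    return cur.fetchall()

print(find_patient("Alice"))         # matches the stored row
print(find_patient("x' OR '1'='1"))  # payload is matched literally: no rows
```

Had the query been built by string concatenation instead, the second call's payload would have rewritten the WHERE clause and returned every row in the table.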

Moreover, AI’s mainstream usage has increased the likelihood of data privacy and security risks that stem from deceptive practices. Take, for example, ‘AI-Enabled Attacks,’ which use generative techniques to create deepfake news, videos, and audio that mislead people into believing events occurred when in fact they did not.

Other AI-enabled attack methods use weaponized malware that mimics legitimate network traffic, making it much harder for network operations teams to detect and defend against. These efforts can lead to the accidental misconfiguration of security controls (e.g., antivirus software) and an increased susceptibility to malware, allowing an adversary to gain unauthorized access to Protected Health Information (PHI) and exfiltrate data through illicit means.

How Does an Organization Defend Against These Attack Methods?

Deploying a customized AI solution that integrates predictive behavioral analysis into network monitoring can enable timelier detection of unusual network activity. For supplemental support, organizations should consider:

  1. Creating an AI governance policy
  2. Implementing strict information access controls
  3. Using secure coding practices
  4. Employing data encryption to prevent unauthorized data manipulation
  5. Providing relevant security awareness training
  6. Conducting continuous IT audits and network monitoring to detect behavioral anomalies such as unauthorized AI use
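The behavioral-analysis idea above can be sketched in miniature. The example below is an illustrative simplification, not the predictive AI solution the article describes: it flags time buckets whose request counts deviate sharply from the baseline using a simple z-score, with invented traffic numbers standing in for real network telemetry.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return the indices of time buckets whose request count deviates
    from the mean by more than `threshold` standard deviations."""
    mu = mean(counts)
    sigma = stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]

# Hypothetical per-minute request counts; the spike at index 5 stands out.
traffic = [120, 118, 125, 122, 119, 900, 121, 117]
print(flag_anomalies(traffic))  # -> [5]
```

A production system would replace the static z-score with a learned model of normal behavior per host or user, but the principle is the same: establish a baseline, then alert on statistically unusual departures from it.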

Managing AI Risks

The adoption of a comprehensive AI framework is essential for managing AI risks and will help ensure proper governance of AI solutions.

Notable frameworks worth considering include the NIST AI Risk Management Framework (AI RMF) and ISO/IEC 42001.

Conclusion

Although AI risks can be prevented and mitigated, failure to govern and securely deploy an AI system can result in significant fines from regulatory bodies such as the U.S. Department of Health and Human Services, when protected health information is misused, or the European Union, when an organization fails to implement adequate data protection safeguards.

Before deploying AI solutions, organizations should establish AI ethical use committees to govern information security initiatives such as: deploying guardrails that permit early detection and prevention of AI-related risks; following secure system development life cycle practices; and aligning controls with AI framework requirements and security control standards (e.g., NIST Cybersecurity Framework version 2.0).

This article was originally published in Financial Poise™. This article is subject to the disclaimers found here.
