AI Cybersecurity Best Practices: Meeting a Double-Edged Challenge
Artificial intelligence is already showing its potential to reshape nearly every aspect of cybersecurity – for good and bad.
If anything represents the proverbial double-edged sword, it might be AI: It can act as a formidable tool in creating robust cybersecurity defenses or can dangerously compromise them if weaponized.
Why is AI security important?
It’s incumbent upon organizations to understand both the promise and the problems of AI cybersecurity, because AI in all its forms is now ubiquitous in global business. Its use by bad actors is already a source of concern.
According to McKinsey, AI adoption by organizations surged to 72% in 2024, up from about 50% in prior years across multiple regions and industries. But the intricate nature and vast data requirements of AI systems also make them prime targets for cyber-attacks. For instance, input data for AI systems can be slyly manipulated in adversarial attacks to produce incorrect or damaging outputs.
A compromised AI can lead to catastrophic consequences, including data breaches, financial loss, reputational damage and even physical harm. The prospect for misuse is immense, underscoring the critical need for robust AI security measures.
Research by the World Economic Forum found that almost half of executives are most worried about AI amplifying threats such as phishing. Ivanti’s 2024 cybersecurity report confirmed those concerns.
Despite the risks, the same Ivanti report found that IT and security professionals are largely optimistic about the impact of AI cybersecurity. Almost half (46%) feel it’s a net positive, while 44% think its impact will be neither positive nor negative.
Read more: 2024 State of Cybersecurity Report - Inflection Point
Potential AI cyber threats
AI introduces new attack vectors that require specific defenses. Examples include:
- Site hacking: Researchers have found that OpenAI’s large language models can be repurposed as AI hacking agents capable of autonomously attacking websites. Cyber crooks don’t need hacking skills, only the ability to properly prompt the AI into doing their dirty work.
- Data poisoning: Attackers can manipulate the data used to train AI models so that they malfunction (see the poisoning sketch after this list). This could involve injecting fake data points that lead the model to learn incorrect patterns or prioritize non-existent threats, or subtly modifying existing data points to bias the model toward outcomes that benefit the attacker.
- Evasion techniques: AI could be used to develop techniques that evade detection by security systems, such as creating emails or malware that don't look suspicious to humans but trigger vulnerabilities or bypass security filters.
- Advanced social engineering: Because it can analyze large datasets, an AI can identify targets based on criteria such as vulnerable past behaviors or susceptibility to certain scams. It can then automate and personalize an attack using information scraped from social media profiles or prior interactions, making it more believable and more likely to fool the recipient. Plus, generative AI can draft phishing messages free of the grammar and usage mistakes that often give scams away.
- Denial-of-service (DoS) attacks: AI can be used to orchestrate large-scale DoS attacks that are more difficult to defend against. By analyzing network configurations, it can detect vulnerabilities, then manage botnets more effectively as it tries to overwhelm a system with traffic.
- Deepfakes: AI can generate convincing visual or audio imitations of people for impersonation attacks. For example, it could mimic the voice of a high-level executive to trick employees into wiring money to fraudulent accounts, sharing sensitive information like passwords or access codes, or approving unauthorized invoices or transactions. If a company uses voice recognition in its security systems, a well-crafted deepfake might fool these safeguards, giving attackers access to secure areas or data. One Hong Kong company was robbed of $26 million via a deepfake scam.
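To make the data-poisoning threat concrete, here is a minimal sketch (in Python, using scikit-learn) of how flipping a fraction of training labels degrades a model. The dataset, flip rate and model choice are illustrative assumptions, not a real attack trace:

```python
# Label-flipping demonstration: compare a model trained on clean labels
# with one trained on partially poisoned labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" 30% of the training labels by flipping them, as an attacker
# with write access to the training pipeline might.
rng = np.random.default_rng(0)
poison_idx = rng.choice(len(y_train), size=int(0.3 * len(y_train)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```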
A “soft” threat presented by AI is complacency. There’s always a risk of over-reliance on AI systems, which can lead to laxity in monitoring and updating them. One of the most important measures for protecting an enterprise from AI issues is continuous training and monitoring, whether AI is deployed in cybersecurity or other operations. Ensuring that AI operates with the organization’s best interests in mind demands ongoing vigilance.
Watch: Generative AI for InfoSec & Hackers: What Security Teams Need to Know
AI cybersecurity benefits
AI cybersecurity solutions deliver the most significant value to an organization in the following ways:
Enhanced threat detection
AI excels at identifying patterns in vast datasets to detect anomalies indicative of cyber-attacks with unprecedented accuracy. Where the sheer volume of data and alerts would overwhelm human analysts, AI improves early detection and response.
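As a minimal illustration of this kind of anomaly detection, the sketch below flags outliers in synthetic traffic features with scikit-learn’s IsolationForest. The features and contamination rate are assumptions for the example, not a production configuration:

```python
# Flag anomalous network flows with an Isolation Forest trained on
# synthetic "normal" traffic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: bytes sent, bytes received, connection duration (synthetic baseline).
normal = rng.normal(loc=[5000, 8000, 30], scale=[1500, 2000, 10], size=(1000, 3))
# A few exfiltration-like outliers: huge uploads, very short sessions.
outliers = np.array([[90000, 500, 2], [120000, 300, 1]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = model.predict(np.vstack([normal[:5], outliers]))  # -1 = anomaly, 1 = normal
print(flags)
```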
Improved incident response
AI can automate routine incident response tasks, accelerating response times and minimizing human error. By analyzing past incidents, AI can also predict potential attack vectors so organizations can strengthen defenses.
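A simple sketch of what such automation can look like follows. The alert format and the quarantine endpoint are hypothetical stand-ins; a real deployment would call your EDR or SOAR platform’s actual API:

```python
# Automated response playbook sketch: isolate a host on high-severity alerts.
# EDR_API and the /isolate route are hypothetical placeholders.
import requests

EDR_API = "https://edr.example.internal/api/v1"  # hypothetical endpoint
API_TOKEN = "..."  # load from a secrets manager in practice

def handle_alert(alert: dict) -> None:
    """Isolate the affected host for high-severity alerts; otherwise just log."""
    if alert["severity"] >= 8:
        resp = requests.post(
            f"{EDR_API}/hosts/{alert['host_id']}/isolate",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=10,
        )
        resp.raise_for_status()
        print(f"Isolated host {alert['host_id']} pending analyst review")
    else:
        print(f"Logged alert {alert['id']} for triage")

# Example invocation (commented out so the sketch runs without a live endpoint):
# handle_alert({"id": "a-123", "host_id": "h-42", "severity": 9})
```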
Risk assessment and prioritization
AI can evaluate an organization's security posture, identifying vulnerabilities and prioritizing remediation efforts based on risk levels. This helps optimize resource allocation and focus on critical areas.
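One minimal way to express such prioritization is a weighted score. The sketch below ranks findings by CVSS weighted by asset criticality and exposure; the weighting scheme is an illustrative assumption, not an established standard:

```python
# Toy risk-scoring sketch: rank vulnerability findings for remediation.
findings = [
    {"id": "CVE-A", "cvss": 9.8, "asset_criticality": 0.9, "internet_facing": True},
    {"id": "CVE-B", "cvss": 7.5, "asset_criticality": 0.4, "internet_facing": False},
    {"id": "CVE-C", "cvss": 6.1, "asset_criticality": 1.0, "internet_facing": True},
]

def risk_score(f: dict) -> float:
    # Internet-facing assets get a 1.5x exposure multiplier (assumption).
    exposure = 1.5 if f["internet_facing"] else 1.0
    return f["cvss"] * f["asset_criticality"] * exposure

for f in sorted(findings, key=risk_score, reverse=True):
    print(f["id"], round(risk_score(f), 2))
```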
Security considerations for different types of AI
Security challenges associated with AI vary depending on the type being deployed.
If a company is using generative AI, the focus should be on protecting training data, preventing model poisoning and safeguarding intellectual property.
In the case of weak (or “narrow”) AI – customer support chatbots, recommendation systems (like Netflix’s), image-recognition software, and assembly-line and surgical robots – the organization should prioritize data security, adversarial robustness and explainability.
Autonomous “strong” AI (aka Artificial General Intelligence) is a work in progress that doesn’t yet exist. But if it arrives, companies should focus on defending control mechanisms and addressing existential risks and ethical implications.
Watch: How to Transform IT Service Management with Generative AI
Latest developments in AI cybersecurity
The rapid evolution of AI is driving corresponding advances in AI cybersecurity that include:
- Generative AI threat modeling: AI cybersecurity tools can simulate attack scenarios to help organizations find and fix vulnerabilities proactively.
- AI-powered threat hunting: AI can analyze network traffic and system logs to detect malicious activity and potential threats.
- Automated incident response: AI cybersecurity solutions can automate routine incident response tasks like isolating compromised systems and containing threats.
- AI for vulnerability assessment: AI can analyze software code to find possible vulnerabilities so developers can build more secure applications.
AI cybersecurity courses
Investing in AI cybersecurity education is crucial for building a workforce that understands how to use these tools. Numerous online platforms and universities offer courses covering various aspects of AI security, from foundational knowledge to advanced topics.
Top cybersecurity solution providers will offer a wide range of courses and training to give your team the skills it needs to get the most out of your platform.
AI cybersecurity best practices
A comprehensive strategy is essential when putting AI into action for cybersecurity.
1. Set out data governance and privacy policies
Early in the adoption process, establish robust data governance policies that cover data anonymization, encryption and more. Include all relevant stakeholders in this process.
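As a minimal sketch of two controls such policies typically mandate, the example below pseudonymizes an identifier with a salted hash and encrypts a record at rest using the `cryptography` package. The salt and key handling are simplified for illustration; in practice, manage both in a secrets store:

```python
# Pseudonymize a direct identifier, then encrypt the record at rest.
import hashlib
from cryptography.fernet import Fernet

SALT = b"per-dataset-salt"  # in practice, manage salts/keys in a secrets store

def pseudonymize(user_id: str) -> str:
    """Replace a direct identifier with a salted hash."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()[:16]

key = Fernet.generate_key()  # in practice, from a key management service
f = Fernet(key)
record = f'{{"user": "{pseudonymize("alice@example.com")}", "score": 0.97}}'
ciphertext = f.encrypt(record.encode())
print(f.decrypt(ciphertext).decode())
```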
2. Mandate AI transparency
Develop or license AI models that can provide clear explanations for their decisions, rather than using “black box” models. That way, security professionals can understand how the AI arrives at its conclusions and identify potential biases or errors. Such “glass box” models are offered by Fiddler AI, DarwinAI, H2O.ai and IBM tools such as AI Fairness 360 and AI Explainability 360.
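As a lightweight stand-in for those richer explainers, the sketch below uses scikit-learn’s permutation importance to surface which features drive a model’s decisions; the feature names are illustrative assumptions:

```python
# Rank which input features a detection model actually relies on.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["failed_logins", "bytes_out", "new_process_count", "off_hours"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```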
3. Stress strong data management
- An AI model is only as good as the data it’s trained on. Ensure you’re using diverse, accurate and up-to-date data so your AI can learn to identify threats effectively.
- Impose robust security measures to protect the data used in training and operating an AI model, as some of it may be sensitive. A breach could expose that data, compromise the model’s effectiveness or introduce vulnerabilities.
- Be mindful of potential biases in your training data. Bias can lead the AI to prioritize certain types of threats while overlooking others. Regularly audit and mitigate bias to ensure your AI makes objective decisions (see the audit sketch after this list).
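Here is a quick audit sketch, assuming a pandas DataFrame of labeled training records; the column names are hypothetical. It checks overall class balance and per-segment skew, two common first signals of biased training data:

```python
# Audit class balance and per-segment label distribution before training.
import pandas as pd

df = pd.DataFrame({
    "label":  ["malicious", "benign", "benign", "benign", "malicious", "benign"],
    "source": ["email", "email", "web", "web", "email", "email"],
})

print(df["label"].value_counts(normalize=True))                     # overall balance
print(df.groupby("source")["label"].value_counts(normalize=True))  # per-segment skew
```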
Learn about: The Importance of Accurate Data to Get the Most From AI
4. Give AI models adversarial training
Expose AI models to malicious inputs during the training phase so they learn to recognize and withstand adversarial attacks such as evasion attempts and poisoned data.
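A compact sketch of the idea, using FGSM (fast gradient sign method) perturbations in PyTorch, appears below. The model, data and perturbation budget are illustrative assumptions, not a production recipe:

```python
# Adversarial training loop: craft FGSM examples against the current model,
# then train on a mix of clean and adversarial inputs.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
epsilon = 0.1  # perturbation budget (assumption)

for _ in range(20):
    # 1) Craft FGSM adversarial examples: perturb inputs along the loss gradient.
    X_adv = X.clone().requires_grad_(True)
    loss_fn(model(X_adv), y).backward()
    X_adv = (X_adv + epsilon * X_adv.grad.sign()).detach()

    # 2) Train on clean and adversarial batches together.
    opt.zero_grad()
    loss = loss_fn(model(X), y) + loss_fn(model(X_adv), y)
    loss.backward()
    opt.step()
```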
5. Implement continuous monitoring
- Deploy continuous monitoring and threat detection systems to identify bias and performance degradation.
- Use anomaly detection systems to identify unusual behavior in your AI models or network traffic patterns to detect potential AI attacks that try to manipulate data or exploit vulnerabilities.
- Regularly retrain your AI cybersecurity models with fresh data and update algorithms so they stay effective against evolving threats (see the drift-check sketch after this list).
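One simple way to detect the drift that makes retraining necessary is to compare live feature distributions against the training baseline. The sketch below does so with a two-sample Kolmogorov-Smirnov test from scipy; the alert threshold is an assumption for illustration:

```python
# Compare a production feature distribution against the training baseline.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline = rng.normal(0, 1, 5000)    # feature values at training time
live = rng.normal(0.4, 1.2, 1000)    # feature values in production

stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}): consider retraining")
```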
6. Keep humans in the loop
AI is not infallible. Maintain human oversight, with security professionals reviewing and validating AI outputs to catch biases, false positives or manipulated results the AI might generate.
7. Conduct regular testing and auditing
- Routinely assess your AI models for vulnerabilities. Like any software, AI cybersecurity products can have weaknesses attackers might exploit. Patching them promptly is crucial.
- AI models can generate false positives, flagging threats that don’t exist. Adopt strategies to minimize false positives so irrelevant alerts don’t overwhelm security teams (see the threshold-tuning sketch after this list).
- Conduct frequent security testing of your AI models to identify weaknesses that attackers might exploit. Penetration testing expressly designed for AI systems can be very valuable.
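As an example of taming false positives, the sketch below sweeps alerting thresholds on synthetic model scores and reports the false-positive rate and recall at each, so a team can pick an operating point deliberately:

```python
# Sweep alert thresholds and report false-positive rate vs. recall.
import numpy as np

rng = np.random.default_rng(7)
labels = rng.integers(0, 2, 1000)                          # 1 = true threat
scores = np.clip(labels * 0.5 + rng.normal(0.3, 0.2, 1000), 0, 1)

for threshold in (0.5, 0.6, 0.7, 0.8):
    alerts = scores >= threshold
    fpr = (alerts & (labels == 0)).sum() / max((labels == 0).sum(), 1)
    recall = (alerts & (labels == 1)).sum() / max((labels == 1).sum(), 1)
    print(f"threshold={threshold}: false-positive rate={fpr:.2f}, recall={recall:.2f}")
```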
8. Have an incident response plan
Create a comprehensive incident response plan to effectively address AI-related security incidents.
9. Emphasize employee training
- Educate employees about the risks associated with AI and how social engineering tactics might be used to manipulate them into compromising AI systems or data security.
- Conduct red-teaming exercises that simulate AI-powered attacks, which help test your security posture and spot weaknesses attackers might exploit.
- Collaborate with industry experts and security researchers to stay abreast of the latest AI threats and best practices for countering them.
10. Institute third-party AI risk management
Carefully evaluate the security practices of third-party AI providers. Do they share data with other parties or use public datasets? Do they follow Secure by Design principles?
11. Other best practices
- Integrate your AI solution with threat intelligence feeds so it can incorporate real-time threat data and stay ahead of new attack vectors.
- Ensure your AI solution complies with relevant industry standards and regulations; in certain sectors this is mandatory. In automotive and manufacturing, for instance, an AI must comply with ISO 26262 for automotive functional safety, the General Data Protection Regulation (GDPR) for data privacy and National Institute of Standards and Technology (NIST) guidance. AI in healthcare must comply with the Health Insurance Portability and Accountability Act (HIPAA) in the U.S., GDPR in Europe and FDA regulations for AI-based medical devices.
- Track metrics like threat detection rates, false positives and response times so you know how effective your AI is and where it needs improvement (the sketch below shows one way to compute them).
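A minimal sketch of computing those metrics from an incident log follows; the record format is an illustrative assumption:

```python
# Compute AI detection rate, false-positive rate and mean response time
# from a simple incident log.
from datetime import timedelta

incidents = [
    {"detected_by_ai": True,  "false_positive": False, "response_time": timedelta(minutes=12)},
    {"detected_by_ai": True,  "false_positive": True,  "response_time": timedelta(minutes=3)},
    {"detected_by_ai": False, "false_positive": False, "response_time": timedelta(hours=2)},
]

detection_rate = sum(i["detected_by_ai"] for i in incidents) / len(incidents)
fp_rate = sum(i["false_positive"] for i in incidents) / len(incidents)
mean_response = sum((i["response_time"] for i in incidents), timedelta()) / len(incidents)

print(f"AI detection rate: {detection_rate:.0%}")
print(f"False-positive rate: {fp_rate:.0%}")
print(f"Mean response time: {mean_response}")
```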
Win by being balanced
For any organization venturing into this bold new AI cybersecurity frontier, the way forward is a balanced approach. Leverage the copious strengths of AI – but remain vigilant as to its limitations and potential vulnerabilities.
Like any technology, AI is not inherently good or bad; it is used by both good and bad actors. Always remember to treat AI like any other tool: Respect it for what it can do to help but stay wary of what it can do to harm.