Artificial intelligence has emerged as a game-changer in cyber-defence over the past years, one that’s transforming industries in ways never previously seen. On one hand, for many cybersecurity enterprises, AI tools provide opportunities to find vulnerabilities faster, automate response, and raise the bar on threat detection. On the other hand, cyber-defenders are aware that AIs are being weaponised and as a result, are expanding the adversary landscape in ways that are both more abundant and sophisticated. Defenders are facing complex challenges that need to be appropriately addressed in order to leverage AIs safely.
AI tools, such as ML and generative AI, enable enterprises to discover their own systems’ weaknesses at a much faster rate than before, through automated scanning, anomaly detection, and predictive modelling. AI can also surface vulnerabilities that human testers may overlook or take longer to find (McKinsey & Company, 2024). For instance, for frontier AI models, benchmarks have included tasks for intelligence gathering, vulnerability exploitation, and malware development across the entire cyber-attack chain (DeepMind, 2025). This gives defenders an understanding of the weak points in their controls.
AI helps enterprises detect intrusions, identify anomalous user, or machine behaviour, and triage incidents. Automated tools can scan logs, network traffic, or system performance for deviations from baseline that might signal an attack or compromise. The benefits lie in the ability to scale these tasks, thereby leading to faster response times and minimising human error. Traditional cybersecurity approaches are struggling to deal with volume; AI helps control the floods of alerts and data (ScienceDirect, 2023)
AI safety, governance, and controls have been key topics of awareness for Australian organisations. Many organisations have implemented formal AI safety and governance policies, which include data sovereignty and privacy with a number of organisations taking a proactive approach by building AI tools that have been designed not to transmit or share data in ways that are unexpected and outside their users’ knowledge. Traceability and auditability of AI system decisions have also been a part of the focus.
If defenders are gaining powerful tools in AI for securing their ecosystems, the cyber-adversaries are certainly not standing by. AI offers a variety of new capabilities, dramatically lowers the technical bar, and increases scale for attackers.
Threat actors are using AI in a variety of ways to weaponise AI for attack automation. This includes using it for reconnaissance activities, automating phishing or social engineering campaigns, generating malicious code or exploit scripts, and customising attacks to target individuals or organisations. AI also plays a key role in activities such as code troubleshooting and translation/localisation, and it also assists in the generation of more realistic deception.
Generative AI tools allow the creation of realistic content that can be abused. Deepfake technology, other fake audio/video, or online impersonation can be leveraged to spread misinformation, for social engineering, or for reputational attacks. Prompt injection and adversarial inputs are also emerging threats; users carefully craft inputs to AI models to cause the models to respond in unexpected ways, usually for malicious purposes.
As the adoption of AI and digital infrastructure increases, so does the attack surface. Organisations are seeing more activity around non-human identities (service accounts, APIs, machine identities, etc.). These are often privileged and less observed, but breaches into these identities are particularly damaging.
AI tools reduce technical thresholds. Attackers who previously needed deeper coding or specialist skills can easily use AI to generate phishing templates, author malicious payloads, or perform reconnaissance automatically. We will see more adversaries – or at least more activity from lower-level actors. More sophisticated state-sponsored or organised crime groups are also beginning to adopt AI to augment their operations.
Recent breaches at Qantas Airways demonstrate that even the most security-hardened businesses can be targeted through AI-assisted social engineering, deepfake impersonation, and exploitation of third-party access.
This is an example of how an enterprise can have all the technical defences in place (“bells and whistles”) and attackers with malicious AI (e.g., social manipulation or impersonation) still have exploitable openings. The attack also shows how third-party platforms/vendors can be a weak link.
In light of this duality, cybersecurity enterprises are now presented with a balancing act. Here are some strategic considerations, challenges, and implications below.
AI creates a paradox for cybersecurity enterprises. On one hand, AI increases the capacity to defend by finding vulnerabilities faster, enabling detection at scale, and providing governance tools. On the other, it multiplies the number and sophistication of attackers, creates new forms of threat, and accelerates the expansion of risk surfaces. The recent Qantas breach is a timely reminder that even “bells and whistles” in defences can be undermined by human-targeted AI techniques, and vulnerabilities in third-party or vendor systems.
For cybersecurity enterprises, the way forward is to embrace AI, but with careful governance: human oversight, adversarial testing, continuous monitoring, stronger authentication, and investment in defences must be applied to any deployment. With the right balance, AI can be a true force multiplier for cybersecurity, however, without the right checks and balances, it can increase the scale and sophistication of threats with equal impact.
Enjoyed reading this blog? Stay updated with our latest exclusive content by following us on Twitter and LinkedIn.
This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.
Strictly Necessary Cookie should be enabled at all times so that we can save your preferences for cookie settings.
This website uses Google Analytics to collect anonymous information such as the number of visitors to the site, and the most popular pages.
Keeping this cookie enabled helps us to improve our website.
This website uses the following additional cookies:
(List the cookies that you are using on the website here.)
More information about our Cookie Policy