Friend and Foe: How AI is Shaping the Cybersecurity Landscape

Share:

Artificial intelligence has emerged as a game-changer in cyber-defence over the past years, one that’s transforming industries in ways never previously seen. On one hand, for many cybersecurity enterprises, AI tools provide opportunities to find vulnerabilities faster, automate response, and raise the bar on threat detection. On the other hand, cyber-defenders are aware that AIs are being weaponised and as a result, are expanding the adversary landscape in ways that are both more abundant and sophisticated. Defenders are facing complex challenges that need to be appropriately addressed in order to leverage AIs safely.

AI Strengthening Cybersecurity Enterprises

1. More Efficient Vulnerability Discovery

AI tools, such as ML and generative AI, enable enterprises to discover their own systems’ weaknesses at a much faster rate than before, through automated scanning, anomaly detection, and predictive modelling. AI can also surface vulnerabilities that human testers may overlook or take longer to find (McKinsey & Company, 2024). For instance, for frontier AI models, benchmarks have included tasks for intelligence gathering, vulnerability exploitation, and malware development across the entire cyber-attack chain (DeepMind, 2025). This gives defenders an understanding of the weak points in their controls.

2. Improved Incident Detection & Response

AI helps enterprises detect intrusions, identify anomalous user, or machine behaviour, and triage incidents. Automated tools can scan logs, network traffic, or system performance for deviations from baseline that might signal an attack or compromise. The benefits lie in the ability to scale these tasks, thereby leading to faster response times and minimising human error. Traditional cybersecurity approaches are struggling to deal with volume; AI helps control the floods of alerts and data (ScienceDirect, 2023)

3. Policy & Governance Advances

AI safety, governance, and controls have been key topics of awareness for Australian organisations. Many organisations have implemented formal AI safety and governance policies, which include data sovereignty and privacy with a number of organisations taking a proactive approach by building AI tools that have been designed not to transmit or share data in ways that are unexpected and outside their users’ knowledge. Traceability and auditability of AI system decisions have also been a part of the focus.

AI Expanding the Adversary Landscape

If defenders are gaining powerful tools in AI for securing their ecosystems, the cyber-adversaries are certainly not standing by. AI offers a variety of new capabilities, dramatically lowers the technical bar, and increases scale for attackers.

1. Weaponisation of AI for Attack Automation

Threat actors are using AI in a variety of ways to weaponise AI for attack automation. This includes using it for reconnaissance activities, automating phishing or social engineering campaigns, generating malicious code or exploit scripts, and customising attacks to target individuals or organisations. AI also plays a key role in activities such as code troubleshooting and translation/localisation, and it also assists in the generation of more realistic deception.

2. Novel Attack Vectors: Generative AI, Deepfakes, Prompt Injection

Generative AI tools allow the creation of realistic content that can be abused. Deepfake technology, other fake audio/video, or online impersonation can be leveraged to spread misinformation, for social engineering, or for reputational attacks. Prompt injection and adversarial inputs are also emerging threats; users carefully craft inputs to AI models to cause the models to respond in unexpected ways, usually for malicious purposes.

3. Increased Attack Surface & Non-Human Identities

As the adoption of AI and digital infrastructure increases, so does the attack surface. Organisations are seeing more activity around non-human identities (service accounts, APIs, machine identities, etc.). These are often privileged and less observed, but breaches into these identities are particularly damaging.

4. Skill Lowering & Scaling Threat Actors

AI tools reduce technical thresholds. Attackers who previously needed deeper coding or specialist skills can easily use AI to generate phishing templates, author malicious payloads, or perform reconnaissance automatically. We will see more adversaries – or at least more activity from lower-level actors. More sophisticated state-sponsored or organised crime groups are also beginning to adopt AI to augment their operations.

Case Study: Qantas Breach & the Power of AI-Driven Social Engineering

Recent breaches at Qantas Airways demonstrate that even the most security-hardened businesses can be targeted through AI-assisted social engineering, deepfake impersonation, and exploitation of third-party access.

  • Qantas discovered abnormal behaviour on a third-party customer-service platform used by its call centre. The platform held up to six million customers’ records. Data stolen from this platform included “names, email addresses, phone numbers, birth dates and frequent flyer numbers.” Credit card information, passports, financial information, and login/password details were not exposed in the hack.
  • The breach “snuck in through the backdoor” with human/social engineering techniques despite Qantas having claimed to have “all the bells and whistles” of cyber defences in place. It is now reported with some credibility that the attackers (possibly a group known as Scattered Spider ) used AI-voice deepfakes (“vishing”) that mimicked the voice of Qantas staff or an “insider” voice, to socially engineer Qantas call centre or support staff into providing credentials or facilitating access.
  • The attackers may have also circumvented multi-factor authentication (MFA) with “vishing” or voice impersonation and social engineering of vendor employees. The human element and third-party risk can present potential entry vectors for sophisticated attacks, even when technical defences are strong.
  • Following the breach, Qantas has stated that it was working with government agencies and police, and had sought the advice of cybersecurity consultants; it has pledged to boost system monitoring and detection capabilities, restrict access to systems and information, improve third-party oversight and implement additional security measures.
Also Read:  Slopsquatting: The AI-Induced Supply Chain Threat You Shouldn’t Ignore

This is an example of how an enterprise can have all the technical defences in place (“bells and whistles”) and attackers with malicious AI (e.g., social manipulation or impersonation) still have exploitable openings. The attack also shows how third-party platforms/vendors can be a weak link.

Navigating the Path: Mitigation, Strategy, and Implications

In light of this duality, cybersecurity enterprises are now presented with a balancing act. Here are some strategic considerations, challenges, and implications below.

  1. Adopting Offensive Security Measures: AI can be used by both attackers and defenders. For example, using AI-generated deepfake audio in “red teaming” exercises, creating attack scenarios, or simulating social engineering attacks to model potential attack chains and identify vulnerabilities.
  2. Hybrid and Human-in-the-loop Systems: Fully autonomous systems have their own risks. In many cases, human intelligence and oversight will still be required. Human-AI collaboration will often be preferred by many organizations, particularly for incident response decision-making, or human-in-the-loop for making judgements about questionable content.
  3. Prioritising Third-Party and Vendor Risk Management: Enterprises should recognize that third-party and vendor platforms, whether they be outsourced contact centres, service providers, or contractors, will frequently be the points of entry. Enhanced due diligence, continuous monitoring, contractual security requirements, and ensuring vendors follow security standards comparable to in-house protocols are all crucial.
  4. Investing in Authentication and Verification Methods: Organisations should consider going beyond the basic multi-factor authentication (MFA) and use a more layered and adaptive verification process, including voice signature analysis, call origin verification, anomaly detection, use of rotating code-words for verification, and implementing real-time confirmation mechanisms, etc. to protect against voice impersonation or social engineering attempts.
  5. Policy, Governance & Regulations: Policy on use of AI systems, data use, consumer rights, model safety, audit, traceability, regulatory adherence and reporting, etc. are needed. Organizations must keep up with how threats are developing by cooperating with regulators and complying with best practices.
  6. Investing in Defensive AI tools: Defenders will have to play catch up, making investments in AI-powered detection systems, threat intelligence, and anomaly detection, in addition to secure code generation and other defensive tools. Constant monitoring and updating defences as attack methodologies improve is also important.

Challenges & Risks

  • False Positives / False Negatives: AI models can produce a large number of false positives (false alarms that waste resources) or false negatives (missed detections).
  • Model Vulnerabilities: AI models themselves have their own set of vulnerabilities, such as data poisoning, adversarial input, prompt injection, etc. If these vulnerabilities are not understood and mitigated, defenders can be caught off guard.
  • Human Element: Social engineering, trust, and psychological manipulation are still highly effective attack vectors. AI can amplify these vectors, particularly through impersonation, synthetic voices, etc.
  • Third-Party Complexity: Functions that are outsourced or vendor systems that are integrated typically do not have consistent oversight or control, leaving a gap in the defences.
  • Regulatory Lag & Compliance: The fast pace of technology outstrips legal, regulatory, and ethical standards. This can make it challenging to enforce or even agree on standard defences.
  • Resource & Capability Gaps: Many organisations still lack the staff, skills, or budget to implement, monitor, and maintain AI-driven cybersecurity defences.

Conclusion

AI creates a paradox for cybersecurity enterprises. On one hand, AI increases the capacity to defend by finding vulnerabilities faster, enabling detection at scale, and providing governance tools. On the other, it multiplies the number and sophistication of attackers, creates new forms of threat, and accelerates the expansion of risk surfaces. The recent Qantas breach is a timely reminder that even “bells and whistles” in defences can be undermined by human-targeted AI techniques, and vulnerabilities in third-party or vendor systems.

For cybersecurity enterprises, the way forward is to embrace AI, but with careful governance: human oversight, adversarial testing, continuous monitoring, stronger authentication, and investment in defences must be applied to any deployment. With the right balance, AI can be a true force multiplier for cybersecurity, however, without the right checks and balances, it can increase the scale and sophistication of threats with equal impact.

AI is reshaping the cyber battlefield. Gain the upper hand by adopting intelligent defence strategies. Talk to our specialists and turn AI into your strongest ally.

Enjoyed reading this blog? Stay updated with our latest exclusive content by following us on Twitter and LinkedIn.

Ampcus Cyber
Privacy Overview

This website uses cookies so that we can provide you with the best user experience possible. Cookie information is stored in your browser and performs functions such as recognising you when you return to our website and helping our team to understand which sections of the website you find most interesting and useful.