DJ Wong
© DJ Wong 2026 | This site is powered by me.


Artificial Intelligence and its relationship to Cybersecurity

Latest update: 24-Nov-2025

  • Artificial Intelligence and its relationship to Cybersecurity
  • What is AI
    • Machine Learning (ML)
    • LLM Usage Pitfalls
  • AI in Cybersecurity
    • Software Development
      • Vulnerabilities
    • Defence and Blue Teams
  • Securing the ML Model Itself
    • Attacks
      • Deepfakes
    • Guardrails
    • Cases of AI and Deepfakes

What is AI

In recent years, the rise of what the layman calls "Artificial Intelligence (AI)" largely refers to multi-modal Large Language Models (LLMs), where textual or hybrid inputs produce text, images, and even videos.

There are also multi-modal inputs that do not require text. For example: image-to-video tools such as Sora, and video-to-video and audio-to-audio models used for deepfakes of both faces and voices.

Machine Learning (ML)

Underneath the label of AI, the main technical term is "Machine Learning", which has its roots in statistics and mathematics.

For example, to have a prediction model, we can use a simple straight line equation such as:

y = mx + c

If the current x is 10 and we want to predict y at x = 21, we simply plug x = 21 into the equation to get the predicted y value. However, things can change between 10 < x < 21 as other parameters shift. So whenever new observations arrive, the m and c values are re-estimated and the line changes; the next prediction at x = 21 may therefore differ.

This automated way of updating the parameters and giving a prediction is called "machine learning".
Beyond a simple straight line, there are many other modeling approaches, such as classification, regression, clustering, and more.
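The straight-line example above can be sketched in a few lines of Python: fit y = mx + c to observed points with ordinary least squares, then predict y at a new x. The data points here are made up for illustration.

```python
def fit_line(xs, ys):
    """Return slope m and intercept c minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    m = cov / var
    c = mean_y - m * mean_x
    return m, c

# Observations up to x = 10 (toy data lying near y = 2x + 1)
xs = [1, 3, 5, 7, 10]
ys = [3.1, 6.9, 11.2, 15.0, 21.1]

m, c = fit_line(xs, ys)
print(round(m * 21 + c, 1))   # predicted y at x = 21 → 43.1
```

When new (x, y) observations arrive, calling fit_line again re-estimates m and c — that refit is the "learning" step described above.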

To learn about machine learning, here are some resources:
- https://scikit-learn.org/stable/index.html
- https://www.kaggle.com/

LLM Usage Pitfalls

Fundamentally, an LLM takes a textual input and produces its best prediction of what the text is asking for. Problems arise when users treat the output as a source of truth without further verification or modification. This can lead to bad code and misinformation, and in one case a lawsuit alleges that an LLM pushed a teenager to suicide. (Aug 26, 2025, https://edition.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit)

AI in Cybersecurity

In today's context, machine learning and LLMs are starting to play a big role in IT and cybersecurity.

Software Development

ChatGPT, Claude, and DeepSeek are popular LLMs that are being bundled with software development tools. Visual Studio Code, a popular Integrated Development Environment (IDE), has also integrated the Claude Sonnet LLM, which users can prompt to explain, modify, and even build code for them. This kind of AI is also called Agentic AI.

Vulnerabilities

If software developers rely on these systems without reviewing the code, vulnerabilities are inevitable. For example, generated code may omit input sanitization.
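A minimal sketch of why missing input sanitization matters: the same user-supplied string behaves very differently when concatenated straight into SQL versus bound as a parameter. The table and data are made up for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

malicious = "nobody' OR '1'='1"

# Vulnerable: input concatenated straight into the query string
unsafe = conn.execute(
    f"SELECT secret FROM users WHERE name = '{malicious}'"
).fetchall()
print(unsafe)   # leaks every row: [('s3cret',)]

# Safe: the driver binds the value, so it is treated as data, not SQL
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()
print(safe)     # []
```

Generated code that builds queries by string concatenation is exactly the kind of thing an unreviewed AI suggestion can slip past a developer.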

Defence and Blue Teams

Machine Learning has been integrated with cyber defence tools such as Intrusion Detection/Prevention Systems (IDS/IPS), Security Information and Event Management (SIEM), and Extended Detection and Response (XDR) systems.

  • https://www.crowdstrike.com/en-us/cybersecurity-101/endpoint-security/ai-native-xdr/
  • https://learn.microsoft.com/en-us/azure/ai-services/anomaly-detector/overview

Phishing email detection is another example of applied AI in cybersecurity.
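As a toy sketch of ML-based phishing detection, here is a word-level naive Bayes classifier trained on a handful of made-up emails. Real detectors use far richer features (headers, URLs, sender reputation), but the statistical idea is the same.

```python
from collections import Counter
import math

# Tiny hand-labelled training set (invented for illustration)
train = [
    ("urgent verify your account password now", "phish"),
    ("click this link to claim your prize", "phish"),
    ("your account is suspended verify immediately", "phish"),
    ("meeting notes attached for tomorrow", "ham"),
    ("lunch at noon tomorrow?", "ham"),
    ("quarterly report attached please review", "ham"),
]

counts = {"phish": Counter(), "ham": Counter()}
totals = {"phish": 0, "ham": 0}
for text, label in train:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1
vocab = {w for c in counts.values() for w in c}

def classify(text):
    """Pick the label with the higher smoothed log-likelihood."""
    scores = {}
    for label in counts:
        score = math.log(3 / 6)   # class prior: 3 of 6 training emails
        for word in text.split():
            # Laplace smoothing so unseen words don't zero the score
            p = (counts[label][word] + 1) / (totals[label] + len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("verify your password now"))    # → phish
print(classify("report attached for review"))  # → ham
```

Production filters train the same kind of model on millions of labelled emails instead of six.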

Securing the ML Model Itself

https://www.cisco.com/c/en/us/products/collateral/security/ai-defense/ai-defense-ds.html

Attacks

  • AI Model Poisoning
    • Poisons the training data
  • Prompt Leaking
    • https://layerxsecurity.com/generative-ai/prompt-leak/
  • Vibe Hacking
    • https://www.anthropic.com/news/disrupting-AI-espionage
    • https://assets.anthropic.com/m/ec212e6566a0d47/original/Disrupting-the-first-reported-AI-orchestrated-cyber-espionage-campaign.pdf
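The first attack above, model poisoning, can be illustrated with the simplest possible "model": a hypothetical spam filter that flags a sender when the mean of their historical spam scores exceeds a threshold. An attacker who can inject labelled training samples drags the mean under the threshold.

```python
# Honest observations: this sender looks very spammy
clean = [0.9, 0.8, 0.85, 0.95]
# Attacker-injected training samples with a forged low score
poison = [0.0] * 12

def flags_as_spam(scores, threshold=0.5):
    """Flag the sender if the mean historical score exceeds the threshold."""
    return sum(scores) / len(scores) > threshold

print(flags_as_spam(clean))           # True: flagged as spam
print(flags_as_spam(clean + poison))  # False: poisoning flipped the verdict
```

Real poisoning attacks target far more complex models, but the mechanism is the same: corrupt the training data and the learned behaviour shifts with it.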

Deepfakes

  • Real-time Face Cloner
    • https://github.com/hacksider/Deep-Live-Cam
  • Real-time Voice Cloner
    • https://github.com/CorentinJ/Real-Time-Voice-Cloning

Guardrails

Guardrails are contextual prompt and output sanitization checks that can be built into a chat agent. For example, the model's first output can be passed through another system prompt that checks for disallowed content; if any is found, the output is sanitized further or simply blocked.
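The output-check step described above can be sketched as follows. Here the "checker" is a hypothetical keyword blocklist standing in for what would, in practice, be a second LLM pass or a moderation API; the blocklist words and the blocked-response string are made up.

```python
BLOCKLIST = {"password", "exploit"}

def guardrail(draft: str) -> str:
    """Return the model's draft only if it passes the output check."""
    if any(word in draft.lower() for word in BLOCKLIST):
        # In a real system this branch might rewrite the draft instead
        return "[response blocked by guardrail]"
    return draft

print(guardrail("Here is the weather forecast."))  # passes through
print(guardrail("The admin password is hunter2"))  # blocked
```

The key design point is that the user never sees the raw first output; every response flows through the checker before leaving the system.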

Cases of AI and Deepfakes

  • https://edition.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec
  • https://thehackernews.com/2025/11/researchers-find-chatgpt.html
  • https://www.wired.com/story/mcdonalds-ai-hiring-chat-bot-paradoxai/
