🧠 AI for Security & Security for AI

Welcome to the AI-for-Security-and-Security-for-AI resources!

This repository curates and organizes cutting-edge research at the intersection of artificial intelligence (AI) and cybersecurity, with a clear separation of two core themes:

- 🛡️ AI4Sec (AI for Security):

AI4Sec focuses on how AI can strengthen cybersecurity operations across the full defensive and offensive spectrum. This includes the use of machine learning and large language models (LLMs) for:

  • 🔍 Threat detection and alert triage
  • 📈 Anomaly detection and behavioral analysis (see the sketch below)
  • 🤖 Agent-based response and automation
  • 🕵️ AI-assisted offensive techniques (e.g., reconnaissance, phishing, evasion)

To provide structure and industry alignment, this section follows:

  • The NIST Cybersecurity Framework for defensive use cases, organized by its core functions: Govern, Identify, Protect, Detect, Respond, and Recover.
  • The MITRE ATT&CK Matrix to map offensive AI use cases to real-world adversary tactics such as Reconnaissance and Resource Development.
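
For illustration, the anomaly-detection and behavioral-analysis use case listed above can be as simple as fitting an unsupervised model to feature vectors derived from logs or network flows and flagging outliers. The sketch below uses scikit-learn's IsolationForest on synthetic data; the features and thresholds are assumptions for illustration, not a recommendation from any specific entry in this list.

```python
# Minimal anomaly-detection sketch (illustrative only): flag outlying
# network-flow feature vectors with an Isolation Forest.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical features per flow: [bytes_sent, bytes_received, duration_s]
normal = rng.normal(loc=[5_000, 20_000, 30], scale=[1_000, 5_000, 10], size=(500, 3))
suspect = np.array([[900_000, 1_000, 2],      # exfiltration-like flow
                    [4_800, 19_500, 28]])     # ordinary-looking flow

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(model.predict(suspect))  # -1 = anomaly, 1 = normal
```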

- 🔓 Sec4AI (Security for AI):

Sec4AI focuses on protecting AI systems themselves from evolving cybersecurity threats. This includes understanding and mitigating risks targeting machine learning models, large language models (LLMs), and agentic systems.

Key areas of concern include:

  • 🎯 Adversarial attacks (evasion, poisoning); see the evasion sketch below
  • 🧠 Model stealing, inversion, and privacy leakage
  • 🔓 Jailbreaking and prompt injection in LLMs
  • 📦 Secure training pipelines and deployment practices

This section is organized using the MITRE ATLAS Framework, which systematically categorizes tactics and techniques used by real-world adversaries against AI systems.
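
To make the evasion-attack category above concrete, the sketch below shows the classic Fast Gradient Sign Method (FGSM), which perturbs an input in the direction of the loss gradient so that a model is more likely to misclassify it. The model, input, and epsilon are toy placeholders; this is a generic illustration, not an attack taken from any specific entry in this list.

```python
# FGSM evasion sketch (illustrative only): perturb an input feature vector so
# that a toy classifier is more likely to misclassify it.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 2))  # toy detector

x = torch.randn(1, 20, requires_grad=True)   # e.g. feature vector of a malicious sample
y = torch.tensor([1])                        # true label: "malicious"

loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()

epsilon = 0.1                                    # attacker's perturbation budget
x_adv = (x + epsilon * x.grad.sign()).detach()   # FGSM: step along the sign of the gradient

print("original prediction:   ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```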


📚 Table of Contents

1. AI4Sec — AI for Security

1.1 Defensive Security (NIST Cybersecurity Framework)

1.2 Offensive Security (MITRE ATT&CK Matrix)

1.3 Misc


2. Sec4AI — Security for AI

2.1 Defensive Security for AI Models (no standard framework)

2.2 Offensive Security (MITRE ATLAS framework)

2.3 Misc


1. AI4Sec — AI for Security

1.1 Defensive Security (NIST Cybersecurity Framework)

1.1.1 Govern

1.1.2 Identify

1.1.3 Protect

📅 2025

  • [Agentic-AI][SOC] A Unified Framework for Human-AI Collaboration in Security Operations Centers with Trusted Autonomy: This article presents a structured framework for human-AI collaboration in Security Operations Centers (SOCs), integrating AI autonomy, trust calibration, and human-in-the-loop decision-making. Existing SOC frameworks often focus narrowly on automation and lack systematic structures for managing human oversight, trust calibration, and scalable autonomy; many assume static or binary autonomy settings, failing to account for the varied complexity, criticality, and risk of SOC tasks handled jointly by humans and AI. To address these limitations, we propose a novel autonomy-tiered framework grounded in five levels of AI autonomy, from manual to fully autonomous, mapped to Human-in-the-Loop (HITL) roles and task-specific trust thresholds. This enables adaptive and explainable AI integration across core SOC functions, including monitoring, protection, threat detection, alert triage, and incident response. The framework differentiates itself from previous research by formally connecting autonomy, trust, and HITL roles across SOC levels, allowing task distribution to adapt to operational complexity and risk. It is exemplified through a simulated cyber range featuring the cybersecurity AI-Avatar, a fine-tuned LLM-based SOC assistant; the case study illustrates human-AI collaboration for SOC tasks, reducing alert fatigue, enhancing response coordination, and strategically calibrating trust. The work presents both the theoretical and practical aspects of designing next-generation cognitive SOCs that leverage AI not to replace but to enhance human decision-making. (A toy sketch of an autonomy-to-trust mapping of this kind follows this list.)
  • [CTI] Optimising AI models for intelligence extraction in the life cycle of Cybersecurity Threat Landscape generation
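
As a rough illustration of the autonomy-tier idea in the entry above, the sketch below maps SOC tasks to an autonomy level and a trust threshold below which a human analyst must approve the action. It is a toy sketch, not the paper's framework: the level names, task names, and threshold values are all assumptions.

```python
# Toy sketch (illustrative only): map SOC tasks to autonomy tiers and trust
# thresholds. Level names, tasks, and numbers are hypothetical assumptions.
from dataclasses import dataclass
from enum import IntEnum


class AutonomyLevel(IntEnum):
    MANUAL = 1            # analyst performs the task, AI only observes
    ASSISTED = 2          # AI suggests, analyst executes
    SUPERVISED = 3        # AI executes only after analyst approval
    CONDITIONAL = 4       # AI executes, analyst reviews afterwards
    FULLY_AUTONOMOUS = 5  # AI executes without routine human review


@dataclass
class TaskPolicy:
    level: AutonomyLevel
    trust_threshold: float  # minimum model confidence to act without approval


# Hypothetical per-task policies: higher-risk tasks demand more oversight.
POLICIES = {
    "alert_triage":      TaskPolicy(AutonomyLevel.CONDITIONAL, 0.80),
    "threat_detection":  TaskPolicy(AutonomyLevel.CONDITIONAL, 0.90),
    "incident_response": TaskPolicy(AutonomyLevel.SUPERVISED, 0.95),
}


def requires_human(task: str, confidence: float) -> bool:
    """Return True when the policy or the model's confidence demands a human."""
    policy = POLICIES[task]
    if policy.level <= AutonomyLevel.SUPERVISED:
        return True                      # low-autonomy tiers always involve a human
    return confidence < policy.trust_threshold


print(requires_human("alert_triage", 0.85))       # False: AI may act on its own
print(requires_human("incident_response", 0.99))  # True: human approval required
```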

1.1.4 Detect

📅 2025

  • [Agentic-AI] Transforming cybersecurity with agentic AI to combat emerging cyber threats: This paper investigates the transformative potential of agentic AI in cybersecurity and the risks that come with its integration. It explores how agentic AI can automate critical tasks within Security Operations Centers (SOCs), such as threat detection, decision-making, and incident response, while emphasizing the new vulnerabilities and management challenges that automated systems introduce, which call for a reassessment of existing cybersecurity frameworks to address these risks effectively.
  • [Agentic-AI] TAGAPT: Toward Automatic Generation of APT Samples With Provenance-Level Granularity: Detecting advanced persistent threats (APTs) on a host via data provenance has emerged as a valuable yet challenging task. Compared with attack-rule matching, machine learning approaches offer new perspectives for efficiently detecting attacks by autonomously learning from data and adapting to dynamic environments. However, the scarcity of APT samples poses a significant limitation, rendering supervised learning methods that have demonstrated remarkable capabilities in other domains (e.g., malware detection) impractical. Therefore, we propose TAGAPT, a system that automatically generates numerous APT samples with provenance-level granularity. First, we introduce a deep graph generation model to generalize graph structures that represent new attack patterns. Second, we propose an attack-stage division algorithm to divide each generated graph structure into stage subgraphs. Finally, we design a genetic algorithm to find the optimal attack-technique explanation for each subgraph and obtain fully instantiated APT samples. Experimental results demonstrate that TAGAPT can learn from existing attack patterns and generalize to novel ones, and that the generated APT samples (1) help with efficient threat hunting and (2) assist the state-of-the-art (SOTA) attack detection system Kairos by filtering out 73% of observed false positives. We have open-sourced the code and the generated samples to support the security community.

📅 2024
  • [Agentic-AI] Using LLMs as AI Agents to Identify False Positive Alerts in Security Operation Center: This paper addresses the challenge of identifying false positive (FP) alerts in Security Information and Event Management (SIEM) systems, which often overwhelm security operators. To tackle this issue, we propose a contextual approach that employs a Large Language Model (LLM), specifically Llama, as an AI agent to identify FPs in security alerts generated by multiple network sensors and collected in Security Operations Centers (SOCs). Our method follows three key steps: data extraction, enrichment, and playbook execution. First, Llama normalizes security alerts using a common schema, extracting key contextual elements such as IP addresses, host names, filenames, services, and vulnerabilities. Second, these extracted elements are enriched with external resources such as threat intelligence databases and Configuration Management Databases (CMDB) to generate dynamic metadata. Finally, the enriched data is analyzed through predefined false-positive investigation playbooks, designed by security professionals, to systematically evaluate and identify FPs. By automating false-positive identification, this approach reduces the operational burden on human security operators, enhances the efficiency and accuracy of SOCs, and improves the organization's security posture. (A minimal sketch of this three-step pipeline follows this list.)
  • [Agentic-AI][Phishing] Large Multimodal Agents for Accurate Phishing Detection with Enhanced Token Optimization and Cost Reduction
  • [LLM] APT-LLM: Embedding-Based Anomaly Detection of Cyber Advanced Persistent Threats Using Large Language Models
  • [LLM] Intelligent Cyber Defense: Leveraging LLMs for Real-Time Threat Detection and Analysis
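
As a rough illustration of the three-step false-positive triage pipeline described above (extraction, enrichment, playbook execution), the sketch below chains the steps together. It is not the paper's implementation: the call_llm stub, the enrichment sources, and the playbook rule are hypothetical placeholders.

```python
# Illustrative sketch of an LLM-assisted false-positive triage pipeline:
# extract -> enrich -> run playbook. All helpers, data sources, and rules are
# hypothetical placeholders, not the implementation from the paper above.
import json


def call_llm(prompt: str) -> str:
    """Stand-in for a call to a local Llama (or any LLM) endpoint.
    Returns a canned response here so the sketch runs end to end."""
    return json.dumps({"src_ip": "203.0.113.7", "hostname": "web01",
                       "filename": "invoice.pdf", "service": "smtp"})


def extract(raw_alert: str) -> dict:
    """Step 1: normalize the alert into a common schema via the LLM."""
    prompt = ("Extract src_ip, hostname, filename and service from this alert "
              "and answer with JSON only:\n" + raw_alert)
    return json.loads(call_llm(prompt))


def enrich(alert: dict, threat_intel: dict, cmdb: dict) -> dict:
    """Step 2: add context from threat intelligence and the CMDB."""
    alert["src_ip_reputation"] = threat_intel.get(alert.get("src_ip"), "unknown")
    alert["asset_owner"] = cmdb.get(alert.get("hostname"), "unknown")
    return alert


def run_playbook(alert: dict) -> bool:
    """Step 3: apply an analyst-written playbook; True means 'false positive'."""
    # Toy rule: benign source reputation on a known, owned asset -> likely FP.
    return alert["src_ip_reputation"] == "benign" and alert["asset_owner"] != "unknown"


def triage(raw_alert: str, threat_intel: dict, cmdb: dict) -> bool:
    return run_playbook(enrich(extract(raw_alert), threat_intel, cmdb))


intel = {"203.0.113.7": "benign"}
cmdb = {"web01": "it-ops"}
print(triage("Suspicious attachment delivered to web01", intel, cmdb))  # True -> FP
```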

1.1.5 Respond

📅 2025

📅 2024

1.1.6 Recover

1.2 Offensive Security (MITRE ATT&CK Matrix)

1.2.1 Reconnaissance

1.2.2 Resource Development

1.2.3 Initial Access

1.2.4 Execution

1.2.5 Persistence

1.2.6 Privilege Escalation

1.2.7 Defense Evasion

1.2.8 Credential Access

1.2.9 Discovery

1.2.10 Lateral Movement

1.2.11 Collection

1.2.12 Command and Control

1.2.13 Exfiltration

1.2.14 Impact

1.3 Misc

1.3.1 Dataset

1.3.2 Open-source tools


2. Sec4AI — Security for AI

2.1 Defensive Security for AI Models (no standard framework)

2.1.1 Detection

📅 2023

2.1.2 Mitigation

2.2 Offensive Security (MITRE ATLAS framework)

📅 2025

2.2.1 Reconnaissance

2.2.2 Resource Development

2.2.3 Initial Access

2.2.4 AI Model Access

2.2.5 Execution

2.2.6 Persistence

2.2.7 Privilege Escalation

2.2.8 Defense Evasion

📅 2025

📅 2023

2.2.9 Credential Access

2.2.10 Discovery

2.2.11 Collection

2.2.12 AI Attack Staging

📅 2024

2.2.13 Command and Control

2.2.14 Exfiltration

2.2.15 Impact

2.3 Misc

2.3.1 Dataset

2.3.2 Open-source tools
