Purple AI 2.0

Started in August 2023

The business objective of Purple AI 2 (PAI2) is to strengthen the business continuity of both Red and Blue Teaming activities within PCSI organisations through innovative applications of AI technology. The two perspectives together form the Purple AI tool offering.

[Project illustration: https://pcsi.nl/uploads/projects/Purple-AI2.png]

Once this goal is achieved, red teams will have a tool that gives them (1) more operational capacity, (2) higher operational speed, and (3) a better baseline in terms of quality, reach, and depth of operational results, all while employing the same number of experts. Using the same tool, blue teams will be able to raise the overall baseline of the organisation's defences against attackers who may (or may not) be using AI tools.

Innovative aspects  

  • We are exploring a portion of what could be "AI-powered" in an attacker's kill chain. As use cases for the PoC, we selected two steps of the MITRE ATT&CK kill chain that are paramount to red teamers' day-to-day activities: "initial access" and "reconnaissance" (see the sketch after this list).
  • We are sharing these insights and tools with blue teams to help them and PCSI partner organisations better defend against AI-powered adversaries.  
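
To make this scoping concrete, the sketch below expresses the two use cases as a mapping onto the corresponding MITRE ATT&CK tactics (TA0043 Reconnaissance and TA0001 Initial Access). The dictionary layout and field names are assumptions for illustration only, not part of the actual PAI2 tooling.

```python
# Hypothetical mapping of the two PoC use cases to MITRE ATT&CK tactics.
# Field names are illustrative; the tactic IDs are the official ATT&CK
# identifiers for Reconnaissance and Initial Access.
POC_USE_CASES = {
    "reconnaissance": {
        "attack_tactic_id": "TA0043",  # MITRE ATT&CK: Reconnaissance
        "goal": "identify assets, roles and people usable as attack vectors",
    },
    "initial_access": {
        "attack_tactic_id": "TA0001",  # MITRE ATT&CK: Initial Access
        "goal": "gain a first foothold inside the target environment",
    },
}

if __name__ == "__main__":
    for use_case, details in POC_USE_CASES.items():
        print(f"{use_case}: {details['attack_tactic_id']} - {details['goal']}")
```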

The solution  

The solution we envisioned is the creation of a modular toolset for red teamers that also allows blue teamers to gain valuable insight into the possible attack scenarios where threat actors use AI-powered techniques.  
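
As a rough illustration of what "modular" could mean in practice, the sketch below assumes a simple plugin-style interface in which each offensive module also exposes a defender-facing summary that blue teamers can use to reason about the attack scenario. All class, method and module names are hypothetical and are not taken from the PAI2 toolset.

```python
# Minimal sketch of a "modular toolset": every module exposes both an
# offensive entry point for red teamers and a defender-facing summary for
# blue teamers. All names here are hypothetical.
from abc import ABC, abstractmethod


class PurpleModule(ABC):
    """Common interface that each hypothetical module implements."""

    name: str

    @abstractmethod
    def run(self, target: str) -> dict:
        """Run the offensive workflow against an in-scope target."""

    @abstractmethod
    def blue_team_summary(self) -> str:
        """Describe the attack scenario so defenders can test their detections."""


class ReconModule(PurpleModule):
    """Illustrative reconnaissance module (no real enumeration is performed)."""

    name = "reconnaissance"

    def run(self, target: str) -> dict:
        # A real module would enumerate public assets, roles and people here.
        return {"target": target, "candidate_vectors": []}

    def blue_team_summary(self) -> str:
        return "AI-assisted enumeration of exposed assets, roles and people."


# A simple registry keeps the toolset modular: new modules plug in by name.
MODULES = {module.name: module for module in (ReconModule(),)}

if __name__ == "__main__":
    print(MODULES["reconnaissance"].run("example.org"))
    print(MODULES["reconnaissance"].blue_team_summary())
```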

Intended outcome  

At the end of the proof of concept phase, we expect to have created two command line tools (one for initial access and one for reconnaissance) that can be used by red and blue teamers. The reconnaissance tool should output which assets, roles or people the AI algorithm has identified as viable attack vectors in a red team exercise. The initial access tool would ideally attempt to penetrate a target environment; most of the proof of concept activities would consist of iterating on the input and training of an AI model until it can gain internal access to that environment. Bonus points (i.e. nice to have) if the AI can gain initial access undetected by the blue team. We intend to work on these tools in parallel, creating a kind of tandem that lets information flow from the reconnaissance work to the initial access work.
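
To make the intended tandem concrete, the sketch below shows one way the reconnaissance tool's findings (assets, roles or people flagged as viable attack vectors) could be handed over to the initial access tool as structured input. The file format, field names and command line flags are assumptions for illustration, not the actual PAI2 interfaces.

```python
# Hypothetical glue between the two PoC command line tools: the recon tool
# writes candidate attack vectors to JSON, and the initial access tool reads
# that file to decide where to attempt a foothold. All names are illustrative.
import argparse
import json
from pathlib import Path


def write_recon_output(path: Path) -> None:
    """Simulate the recon tool: emit identified attack vectors as JSON."""
    findings = [
        {"type": "person", "value": "helpdesk employee", "reason": "publicly listed contact"},
        {"type": "asset", "value": "vpn.example.com", "reason": "externally exposed service"},
    ]
    path.write_text(json.dumps(findings, indent=2))


def load_attack_vectors(path: Path) -> list:
    """Simulate the initial access tool reading the recon output."""
    return json.loads(path.read_text())


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Recon -> initial access hand-off sketch")
    parser.add_argument("--recon-output", type=Path, default=Path("recon_findings.json"))
    args = parser.parse_args()

    write_recon_output(args.recon_output)
    for vector in load_attack_vectors(args.recon_output):
        print(f"initial access candidate: {vector['type']} -> {vector['value']} ({vector['reason']})")
```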

This project is part of the following trends

Growing use of AI applications (opportunity and threat, April 2023)

Artificial Intelligence (AI) is the ability of systems to display (human-like) intelligent behaviour, resulting in automated decisions or decision support. Smart algorithms offer new possibilities for linking different data sources, and the use of counter-AI and reinforcement learning for detection could make cyber security more effective. AI is increasingly used by both defenders and attackers: it can be used to automatically find vulnerabilities, automatically patch them, and automatically generate exploits. Explainability and responsibility must, however, always be taken into account.

Increase of malicious uses and abuses of AI (threat, May 2024)

AI is increasingly used by both defenders and attackers. Red teaming, for example, can improve significantly as traditional penetration testing is outpaced by today's complexity, but the same capabilities can be abused to automatically find vulnerabilities and generate exploits. Explainability and responsibility must, however, always be taken into account.