PurpleAI
Started in April 2021
AI for perpetual improvement of Red Team and Blue Team activities.
Artificial Intelligence (AI) is a promising technology with the potential to permanently change the cyber security landscape. AI can be used for defensive cyber security activities, such as attack detection, but unfortunately it can also be used to launch advanced automated cyberattacks.
Moreover, the threat landscape is evolving at such a rapid pace that keeping up is almost infeasible. For the Red team, which aims to overcome the implemented security controls and find new attack vectors, there seem to be too many options to cover. On the other hand, for the Blue team, which aims to strengthen the organization’s security posture and resolve security incidents, it is extremely challenging to keep pace with limited human resources, and it is a waste of those resources when alerts turn out to be false positives.
In the PurpleAI project we aim to utilize the potential of AI to partially solve both the Red team and Blue team challenges. The ambition is to use a small-scale simulation environment, a.k.a. “the playing field”, in which we can perform reproducible agent-based “Purple” exercises, consisting of offensive (Red team) and defensive (Blue team) actions. With this method we believe we can employ AI to perpetually improve automated Blue team and Red team decision making. This should result in actionable security improvement advice for the Blue team and, most importantly, teach us how to prepare for future AI-infused cyber battle scenarios.
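As a rough illustration of such an agent-based Purple exercise, the sketch below pits a learning Red agent against a learning Blue agent on a toy playing field. All action names, the counter table, the rewards, and the epsilon-greedy learning rule are invented for this sketch and are not part of the PurpleAI design:

```python
import random

# Hypothetical miniature "playing field": action names and their
# counter-relationships are illustrative only.
ATTACKS = ["phishing", "bruteforce", "exploit"]
DEFENSES = ["mail_filter", "lockout", "patching"]
COUNTERS = {"phishing": "mail_filter", "bruteforce": "lockout", "exploit": "patching"}

class Agent:
    """Epsilon-greedy learner over a fixed action set."""
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.value = {a: 0.0 for a in actions}   # running reward estimate
        self.count = {a: 0 for a in actions}

    def choose(self, rng):
        # Mostly pick the best-known action, sometimes explore.
        if rng.random() < self.epsilon:
            return rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.value[a])

    def learn(self, action, reward):
        # Incremental average of the rewards seen for this action.
        self.count[action] += 1
        self.value[action] += (reward - self.value[action]) / self.count[action]

def run_exercise(rounds=2000, seed=42):
    rng = random.Random(seed)      # seeded, so the exercise is reproducible
    red = Agent(ATTACKS)
    blue = Agent(DEFENSES)
    blue_wins = 0
    for _ in range(rounds):
        attack = red.choose(rng)
        defense = blue.choose(rng)
        blocked = COUNTERS[attack] == defense
        red.learn(attack, 0.0 if blocked else 1.0)
        blue.learn(defense, 1.0 if blocked else 0.0)
        blue_wins += blocked
    return red, blue, blue_wins
```

Because each side adapts to the other, neither settles permanently on a single action; the exercise itself surfaces which defenses are being evaded most often, which is the kind of signal that could be turned into improvement advice for the Blue team.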
In the Explore phase of this project we have focused on validating the desirability of this idea within the PCSI partner organizations. Furthermore, we have started a landscape assessment of the state-of-the-art technologies that can help us develop and implement the playing field and the Blue team and Red team tooling, and of the added value that AI technology can bring.
At the end of the Explore phase we have concluded that:
- Desirability of the PurpleAI idea is confirmed by more than 10 experts at 3 different PCSI partner organizations.
- PurpleAI has the potential to help the Red team increase their coverage, but it will especially help to relieve the Blue team, both directly and indirectly.
- PurpleAI is an ambitious and long-term effort from a technical perspective. We will have to build on top of existing technologies, and potentially involve external organizations.
Activities within the PoC phase
In the PoC phase we will focus on the technological feasibility of the PurpleAI idea. Furthermore, we have to scope the idea down to a minimum viable product that still proves the concept of PurpleAI. We will continue the technological exploration that we already started in the Explore phase. We strive to have a proof-of-concept that fully shows the potential of PurpleAI, for a narrowly scoped scenario, within a simplified virtual environment.
This project is part of the trend
Growing use of AI applications
Artificial Intelligence (AI) is the ability of systems to display (human-like) intelligent behavior, resulting in automatic decisions or decision support. Smart algorithms offer new possibilities for linking different data sources. The use of counter-AI and reinforcement learning for detection could be one way to make cyber security more effective. AI is increasingly used by both defenders and attackers; red teaming, for example, can improve significantly now that traditional penetration testing is outpaced by today’s complexity. AI can be used to automatically find vulnerabilities, automatically patch them, and automatically generate exploits. Explainability and responsibility must, however, always be taken into account.
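To make the detection idea concrete, here is a minimal sketch of learning a baseline of events and flagging rare ones. The event names and the simple frequency-based scoring are illustrative stand-ins for a real ML-based detector, not an actual detection product:

```python
from collections import Counter
import math

def train(events):
    """Learn a baseline: the relative frequency of each event type."""
    counts = Counter(events)
    total = len(events)
    return {event: n / total for event, n in counts.items()}

def anomaly_score(model, event):
    """Negative log-probability: rare or never-seen events score high.
    The 1e-6 floor for unseen events is an arbitrary choice for this sketch."""
    return -math.log(model.get(event, 1e-6))

# Invented baseline traffic: mostly routine logins, a few password resets.
baseline = ["login_ok"] * 98 + ["password_reset"] * 2
model = train(baseline)
```

A never-before-seen event such as `remote_shell` would score far higher than a routine `login_ok`, so an analyst could triage by score instead of reviewing every alert, which speaks directly to the false-positive workload mentioned earlier.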
Increase of malicious uses and abuses of AI
Malicious uses and abuses of AI are becoming more common. One example is adversarial machine learning: techniques that attempt to fool models by supplying deceptive input. Exploring attacks on learning algorithms that attempt to influence predictions, and developing defense mechanisms against adversarial tampering, is an increasingly important research domain. Manipulation of the algorithms used for data labeling can also occur. In addition, AI can be used to carry out hacks, and the AI algorithms of organizations can be manipulated.
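The “deceptive input” idea can be shown on a toy, hand-built linear “spam filter” (all weights and words below are invented for this sketch). Real adversarial ML applies the same principle to learned models, perturbing inputs against the model’s gradient, as in FGSM-style attacks:

```python
# Invented word weights for a toy linear classifier; positive score = "spam".
WEIGHTS = {"free": 2.0, "winner": 1.5, "meeting": -1.0, "invoice": -0.5}
BIAS = -1.0

def score(features):
    """Linear score over word-frequency features; positive means spam."""
    return BIAS + sum(WEIGHTS.get(word, 0.0) * value
                      for word, value in features.items())

def adversarial_perturb(features, step=0.5):
    """Nudge each feature against the sign of its weight -- the same
    principle as gradient-based evasion (e.g. FGSM) on differentiable models."""
    return {word: value - step * (1.0 if WEIGHTS.get(word, 0.0) > 0 else -1.0)
            for word, value in features.items()}

spam = {"free": 1.0, "winner": 1.0}
print(score(spam) > 0)                                  # True: flagged as spam
print(score(adversarial_perturb(spam, step=0.8)) > 0)   # False: evades the filter
```

A small, targeted change to the input flips the classification without changing the message’s intent, which is exactly why defense mechanisms against adversarial tampering are an active research topic.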