PurpleAI
Started in April 2021
AI for perpetual improvement of Red Team and Blue Team activities.
Artificial Intelligence (AI) is a promising technology with the potential to change the cyber security landscape for good. AI can be used for defensive cyber security activities, such as attack detection, but unfortunately it can also be used to launch advanced automated cyberattacks.
Moreover, the threat landscape is evolving at such a rapid pace that it is almost infeasible to keep up. For threat hunters, who aim to find new attack vectors, there seem to be too many options to test. For threat managers, who aim to resolve security incidents, it is extremely challenging to keep up with all incoming alerts, and it is a waste of resources when these alerts turn out to be false positives.
In the PurpleAI project we aim to utilize the potential of AI to partially solve both the threat hunter’s and the threat manager’s challenges. The ambition is to develop a small-scale simulation environment, a.k.a. “the playing field”, in which we set loose a Red Team AI agent to exploit potential weaknesses, and a Blue Team AI agent to secure the environment and resolve security incidents. With this technology we can perpetually improve the automated Blue Team and Red Team actions, e.g. by implementing a reinforcement learning loop.
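The reinforcement learning loop mentioned above can be illustrated with a deliberately tiny sketch. Everything below (the attack vectors, block probabilities, and reward scheme) is an invented toy example, not project code: it only shows how a Red Team agent could learn, over repeated episodes on a playing field, which action the defense handles worst.

```python
import random

# Hypothetical toy "playing field": a few simulated attack vectors, each with
# an assumed probability that the Blue Team blocks it. Illustrative only.
BLOCK_PROB = {"phishing": 0.9, "sql_injection": 0.6, "weak_password": 0.2}
ACTIONS = list(BLOCK_PROB)

def red_team_episode(action, rng):
    """Return reward +1 if the simulated attack succeeds, -1 if blocked."""
    return -1.0 if rng.random() < BLOCK_PROB[action] else 1.0

def train_red_agent(episodes=5000, epsilon=0.1, alpha=0.1, seed=0):
    """Tabular Q-learning: the Red agent learns which vector works best."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-known action, sometimes explore
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(q, key=q.get)
        reward = red_team_episode(action, rng)
        q[action] += alpha * (reward - q[action])  # one-step value update
    return q

q = train_red_agent()
best = max(q, key=q.get)
print(best)  # the least-blocked vector wins out
```

In a full closed loop the Blue Team agent would learn in the same way, adjusting its block probabilities in response to the Red agent's successes, which is what makes the mutual, perpetual improvement possible.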
In the explore phase of this project we will perform a landscape assessment of the state-of-the-art technologies that can help us develop and implement the playing field, the Blue Team AI agent, the Red Team AI agent, and the machine learning feedback loop.
Once this technology is implemented on a larger scale, it will resolve most of the threat hunter's and threat manager's challenges. Moreover, it will help us prepare to avert AI-based attacks in the future.
This project is part of the following trends:
Growing use of AI applications
Artificial Intelligence (AI) is the ability of systems to display (human-like) intelligent behavior, resulting in automatic decisions or decision support. Smart algorithms offer new possibilities for linking different data sources. The use of counter-AI and reinforcement learning for detection could be one way to make cyber security more effective. AI is increasingly used by both defenders and attackers; red teaming, for example, can improve significantly now that traditional penetration testing is outpaced by today's complexity. AI can be used to automatically find vulnerabilities, automatically patch them, and automatically generate exploits. Explainability and responsibility must, however, always be taken into account.
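The idea of learning-based detection can be shown with a minimal stand-in. The data and threshold below are assumptions for illustration: a simple statistical detector flags behavior that deviates strongly from a learned baseline, which is the same principle more advanced ML detectors build on.

```python
import statistics

# Assumed baseline of normal daily login counts (illustrative data only).
baseline = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15]
mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(count, threshold=3.0):
    """Flag a count whose z-score against the baseline exceeds the threshold."""
    return abs(count - mean) / stdev > threshold

print(is_anomalous(14))  # a normal day: not flagged
print(is_anomalous(90))  # a sudden burst, e.g. credential stuffing: flagged
```

Raising the threshold trades missed detections against the false positives that, as noted above, waste the threat manager's resources.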
Increase of malicious uses and abuses of AI
Malicious uses and abuses of AI occur more and more often. One example is adversarial machine learning, a technique that attempts to fool models by supplying deceptive input. Exploring attacks that attempt to influence a learning algorithm's predictions, and providing defense mechanisms against adversarial tampering, is an increasingly important research domain. Manipulation of the algorithms used for data labeling can also occur. Finally, AI can be used to carry out hacks, and the AI algorithms of organisations can themselves be manipulated.
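How deceptive input fools a model can be sketched in a few lines. The linear "spam" scorer, its weights, and the word features below are all invented for illustration; the point is that a small, targeted change to the input flips the classification, the classic evasion pattern in adversarial machine learning.

```python
# Hand-written linear "spam" scorer (illustrative assumption, not a real model).
WEIGHTS = {"free": 2.0, "winner": 1.5, "meeting": -1.0, "invoice": -0.5}
BIAS = -1.0

def spam_score(features):
    """Linear model: a positive score means the input is classified as spam."""
    return BIAS + sum(WEIGHTS[w] * features.get(w, 0.0) for w in WEIGHTS)

# A spam message the model catches...
spam = {"free": 1.0, "winner": 1.0}
print(spam_score(spam) > 0)  # True: detected as spam

# ...evaded by padding in benign-looking words the model weighs negatively.
evasive = dict(spam, meeting=2.0, invoice=1.0)
print(spam_score(evasive) > 0)  # False: same payload slips through
```

Defenses against this kind of tampering, such as adversarial training, are exactly the research domain described above.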