The AI Security Institute has launched a hiring push for experts to address the rapidly rising danger of AI-empowered cybercriminals.
The UK government agency was originally formed as the AI Safety Institute, following the AI Safety Summit in 2023, to assess the safety risks of frontier AI.
Earlier this year the group was rebranded and handed a national security remit, dropping its focus on areas such as bias, copyright and freedom of speech, a change that prompted a backlash from fact-checking organisation Full Fact.
The AI Security Institute has now set its sights on cybercrime. Experts agree that threat actors have become far more dangerous thanks to the availability of powerful automation tools that can be used for social engineering, phishing campaigns and even the generation of malicious code.
To that end, the organisation has started advertising roles including a criminal misuse workstream lead and a research engineer for multimodal and AI agent evals, the latter tasked with analysing how agentic tools can assist criminal operations.
The AI Security Institute described frontier AI as providing a “new criminal toolkit” enabling “increasingly sophisticated” operations.
Its new efforts to tackle cybercrime are split into three areas: risk modelling, technical research and interventions.
The institute has not yet determined what its interventions will look like, saying that “addressing the criminal misuse of AI will require a range of policy, technical, and operational responses over time”.
In a statement, the institute added: “Given the pace of technological change, it is unlikely that any single approach will be sufficient. Our aim is to help ensure that responses are proportionate, forward-looking, and grounded in technical understanding.”