In cooperation with a dozen other organisations, Microsoft and MITRE have developed a framework to help recognise, respond to, and remediate attacks targeting machine learning (ML) systems.
Such attacks, Microsoft reports, have risen dramatically over the past four years and are projected to keep growing. Despite this, the company notes, most organisations have yet to come to grips with adversarial machine learning.
In fact, a recent survey the tech giant conducted across 28 organisations revealed that most of them (25) lack the requisite tools and are actively seeking guidance on securing their machine learning systems.
This lack of preparedness is not limited to smaller organisations, Microsoft says: the company spoke to Fortune 500 corporations, governments, non-profits, and small to mid-sized organisations alike.
The Adversarial ML Threat Matrix, which Microsoft published together with MITRE, IBM, NVIDIA, Airbus, Bosch, Deep Instinct, Two Six Laboratories, Cardiff University, the University of Toronto, PricewaterhouseCoopers, the Software Engineering Institute at Carnegie Mellon University, and the Berryville Institute of Machine Learning, is an industry-focused open framework aimed at addressing this issue.
Aimed squarely at security analysts, the framework details the tactics adversaries use when attacking ML systems. Structured like the ATT&CK framework, the Adversarial ML Threat Matrix focuses on observed attacks that have been vetted as effective against production ML systems.
Because of inherent vulnerabilities in the underlying ML algorithms, attacks targeting these systems are feasible. Defending against them demands a new approach, and a change in how cyber adversary activity is modelled, to correctly reflect emerging threat vectors and the rapidly evolving lifecycle of adversarial machine learning attacks.
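To illustrate the kind of inherent vulnerability at play, the sketch below shows a classic evasion attack in the spirit of the fast gradient sign method (FGSM) against a toy linear classifier. The model, weights, and epsilon value are illustrative assumptions, not part of the published matrix; real attacks target far larger models, but the mechanism is the same: a small, deliberately chosen perturbation flips the model's decision.

```python
import numpy as np

# Toy "victim" model: logistic regression with fixed, illustrative weights.
w = np.array([2.0, -3.0, 1.5])
b = 0.1

def predict_proba(x):
    """Probability of the positive class for input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A benign input the model confidently classifies as positive.
x = np.array([0.5, -0.2, 0.3])

# FGSM-style perturbation: for a linear model, the gradient of the logit
# with respect to the input is simply w, so stepping against sign(w),
# scaled by a small epsilon, pushes the score toward the opposite class.
eps = 0.6
x_adv = x - eps * np.sign(w)

print(predict_proba(x))      # high score: classified positive
print(predict_proba(x_adv))  # low score: decision flipped by the attack
```

Note that the perturbation changes each input feature by at most `eps`, yet it is enough to flip the classification, which is precisely why such attacks are hard to spot with conventional input validation.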
“MITRE has deep experience with multi-stakeholder topics that are strategically challenging. […] To thrive, we know that we need to pool the expertise of a community of researchers who exchange real-world knowledge about risks and strengthen defences. And for that to succeed, all interested organisations and researchers need to be confident that they have a trustworthy, impartial party that can aggregate these real-world events while preserving a degree of privacy, and they have that in MITRE,” said Charles Clancy, MITRE Labs senior vice president and general manager.
The recently published framework is a first effort to build a knowledge base of attacks targeting ML systems, one the collaborating companies intend to evolve with feedback from the security and machine learning communities. The wider industry is also invited to help fill gaps in the knowledge base and to engage in discussions.
“This initiative is targeted at security researchers and the wider security community: the matrix and case studies are intended to help them better strategise protection and detection; the framework seeds attacks on ML systems, so that similar exercises can be carried out carefully in their own organisations and monitoring techniques can be tested,” notes Microsoft.