Adversarial Attacks and the Future of Secure AI

Caitlin Pintavorn
14 min read · Apr 8, 2020

Special thanks to Lucy Wang for her mentorship on this topic!

Startups addressing threat of adversarial attacks along the AI/ML value chain.

More than 4,300 AI startups have raised equity funding since 2014, with over $26.6B raised in 2019 alone. As we continue to burn through an almost decade-long funnel of VC funding into the next “AI for ____” startup, these applications have become increasingly visible at the consumer level, evident in controversial facial recognition plays, helpful AI physician support tools, and growing autonomous vehicle (AV) tests on the road. This unprecedented amount of VC financing has been largely responsible for weaving AI/ML applications into everyday consumer interactions. Yet investment in AI/ML-specific security infrastructure has not developed in tandem with that innovation. This overlooked area will be a defining pillar of the new decade of AI/ML: truly successful AI/ML integration across enterprises, and stronger protections for individual consumers against data breaches, ransomware, and other adversarial attacks, depend on strong investment in AI/ML security infrastructure.

What’s in this Post?

  1. AI/ML fundamental growth drivers: Idle repositories of data, hardware innovation, and the shift to edge computing
  2. Relevance of adversarial attacks: Real world case examples in health care, speech & audio recognition, and autonomous vehicles
  3. Solutions along the AI/ML value chain: Data optimization, algorithmic innovation, robust hardware development, MLOps
  4. AI/ML security startup market map: Overview of security startup activity and landscape

The AI/ML Gold Rush: Fundamental Growth Drivers

The global AI market is projected to reach some $202.57B by 2026, implying a 33.1% CAGR, with applications spanning every enterprise size and sector. The key drivers, and simultaneous vulnerabilities, of this growth are 1) large, idle, and accessible repositories of data, 2) hardware innovation, and 3) the shift to edge computing. Together, these innovations will allow AI/ML applications to scale rapidly across geolocations, industry sectors, and enterprise sizes.

Growth Driver #1: Idle Repositories of Data

To create any sort of meaningful application, large amounts of data are required to train AI/ML algorithms. Thus, the growth that the sector has experienced has largely been predicated on the increased availability of and accessibility to public and private data. This has been made possible by factors such as internet access and new data point collection methods (e.g. edge/IoT devices, smartphones, etc.).

For instance, health care alone has a data growth rate of 36% and is on track to produce over 2,314 exabytes of data in 2020. Given the sheer amount and growth of data available in the space, it comes as no surprise that health care has seen some of the most innovation and investment in AI/ML. Now more than ever, industry providers and patients are interacting with AI in their everyday processes, such as diagnostics, robotic healthcare assistants, and provider chatbots. These interactions result in self-reinforcing data loops that provide further training ground for more efficient models to improve cost savings and healthcare outcomes.

Growth Driver #2: Increasingly Advanced, Cheaper Hardware

At the foundation of any AI/ML advancement lies the actual hardware processor innovation and adoption. Though CPUs still appear to be the processor of choice for some respondents in recent surveys, more specialized processors like Nvidia's GPUs are growing in popularity and can perform “millions of mathematical operations in parallel,” making them exceptionally attractive for more complex DL workloads. Consequently, the race to develop ever-faster, niche-use-case hardware processors has become a hot area of activity in the space.

For example, Google developed the TPU, a processor custom-built for ML. Intel acquired Altera, a chip developer, for $16.7 billion, and Nervana, a startup building AI-specific chips, for $400 million. And now, well over 45 hardtech startups are capitalizing on these tailwinds by developing AI-specific chips, receiving more than $1.5B from VCs in 2017.

Growth Driver #3: Partial Shift to Edge Computing

Though constantly connected cloud computing has become the mainstream ideal for quickly working with AI/ML data in recent years, there will likely be somewhat of a shift towards other compute methods as a result of concerns around efficiency, latency, privacy, and security. With over 3.8 billion edge devices expected to be using AI inferencing or training by the end of 2020, AI-powered edge computing that allows “analytics and knowledge generation to occur at the source of the data” represents an attractive alternative. At the forefront of this evolution are low-power, cheap, high-performance System on Chip (SoC) processors and middleware that stretch computational capacity.

Autonomous vehicles are a commonly cited beneficiary of decreased latency in the transfer and analysis of information: by speeding up the processing of vehicle data, decisions can be made faster. There are also user privacy benefits, which will come in handy as increasing consumer and regulatory pressures force big tech to adopt new standards. For instance, the A11 Bionic chip on Apple's iPhone allows AI tasks to run locally on the device, so a process like facial recognition happens natively without the need to store or process images in the cloud.

Relevance of Adversarial Attacks

While these growth drivers have resulted in some incredible progress in the space, they are a double-edged sword. More data, AI-specific chips, and new computing paradigms mean larger attack surfaces. This section details how adversaries across industries are exploiting these vulnerabilities.

Attack Methods: One Bad Apple

In 2016, Microsoft launched Tay, a Twitter chatbot developed for “casual and playful [two-way] conversation.” But less than 16 hours after launch, a group of everyday Twitter users spammed Tay with racist, sexist, and profane tweets, causing Tay to publicly tweet similarly offensive remarks. Tay's demise is a simplified, public demonstration of data poisoning: adversaries, whether everyday Twitter users or professional hackers, inject “bad,” manipulated, or incorrect training data to render AI/ML models ineffective. Data poisoning is just one primary way adversaries exploit security vulnerabilities in AI/ML models today, alongside tactics such as model inversion (reverse-engineering a trained AI/ML model to recover sensitive, private training data) and Trojans (exploiting the way an AI system learns by introducing perturbations into its training data in order to elicit a specific, desired, and incorrect response from the final model). The sketch below illustrates the poisoning idea in code, and the industry examples that follow show how these attacks can undermine important AI/ML applications:
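
To make the poisoning idea concrete, here is a minimal, hypothetical sketch of label-flipping data poisoning against a simple classifier. The synthetic dataset, the model choice, and the 40% flip rate are all illustrative rather than drawn from any of the incidents above.

```python
# Minimal, illustrative sketch of label-flipping data poisoning.
# Dataset, model, and poisoning rate are hypothetical; real attacks
# target far larger pipelines (e.g., scraped social-media text).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Clean training data for a binary classifier.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(labels, fraction, rng):
    """Flip the labels of a random subset of training examples."""
    poisoned = labels.copy()
    n_flip = int(fraction * len(labels))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    poisoned[idx] = 1 - poisoned[idx]
    return poisoned

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(
    X_train, poison_labels(y_train, fraction=0.4, rng=rng)
)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```

Even this crude flip of a fraction of labels typically drags test accuracy toward chance, which is exactly the "generally ineffective" failure mode described above.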

3 Real World Case Examples

Health Care

The healthcare industry is constantly hit with data breaches, phishing, and ransomware, and hackers targeting AI/ML-enabled hospital diagnostic centers is a very real scenario. One research study conducted in Israel shows exactly how vulnerable these centers can be to an adversarial attack. Specifically, adversaries can “realistically inject and remove medical conditions with 3D CT scans” using a framework called CT-GAN. The researchers showed that CT-GAN fooled both professionally trained, expert radiologists and gold-standard AI diagnostic models, with an overall average success rate of “99.2% for cancer injection” and “95.8% for cancer removal.”

By holding medical data hostage, adversaries are often able to achieve monetary gains. And even after they receive payment, they can still have a lasting effect by modifying the scans (data poisoning) during the hostage period. Motivations range from political gain (e.g., faking or altering test results for a political candidate to affect their participation in an election) to insurance fraud (e.g., faking images and records to receive reimbursements) to research fraud (e.g., fabricating improved research results).

Speech & Audio Recognition

As trends push towards increasingly human-like AI/ML interfaces (e.g., the addition of voice), the overall speech and voice recognition market has grown at a 19.63% CAGR to reach over $26.15 billion in value. Common use cases include in-home smart devices like Amazon's Alexa or Google Home. And along with the rising popularity of these devices comes the threat of audio-based adversarial attacks.

One research project demonstrated targeted adversarial attacks on speech-to-text applications. The paper showed how any given audio waveform could be manipulated to “produce another that's over 99.9% similar, but transcribes as any phrase [they] choose.” One example altered a clip that said, “Without the data set, the article is useless,” so that it transcribed as, “Okay Google, browse to evil.com.” This has dangerous implications when layered on top of common interactions with in-home smart devices (e.g., asking Google Home to send a message to a friend, or sharing sensitive health data with HIPAA-compliant Alexa apps).
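
The sketch below is a rough, illustrative version of how such a targeted perturbation is found. The real attack optimizes a perturbation on the raw waveform against an ASR model's CTC loss for the target transcription; here a toy differentiable model and a class label stand in for the speech system and the target phrase so the loop stays self-contained.

```python
# Illustrative sketch of a targeted adversarial perturbation, in the spirit
# of the speech-to-text attack described above. A toy classifier stands in
# for the victim ASR model; the structure of the loop (optimize a small
# additive perturbation toward an attacker-chosen output) is the key idea.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

model = torch.nn.Sequential(          # stand-in for the victim model
    torch.nn.Linear(16000, 64), torch.nn.ReLU(), torch.nn.Linear(64, 10)
)
waveform = torch.randn(1, 16000)      # stand-in for 1 second of 16 kHz audio
target = torch.tensor([3])            # the output the attacker wants

delta = torch.zeros_like(waveform, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)

for step in range(500):
    adv = waveform + delta
    logits = model(adv)
    # Push the model toward the attacker's target while keeping the
    # perturbation small, so the audio still sounds like the original.
    loss = F.cross_entropy(logits, target) + 1e-3 * delta.norm()
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Constrain the perturbation so it stays imperceptibly quiet.
    with torch.no_grad():
        delta.clamp_(-0.01, 0.01)

print("predicted class after attack:", model(waveform + delta).argmax().item())
```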

Autonomous Vehicles

AVs present another interesting target for adversarial attacks because they depend on sensor input from onboard cameras and light detection and ranging (LiDAR) units. These sensors provide continuous streams of real-time data that let AVs take in information and quickly make appropriate decisions (e.g., turning left or right, recognizing icy roads).

One research study proposed realistic attack scenarios on AVs, called DARTS (Deceiving Autonomous caRs with Toxic Signs), in both virtual and controlled real-world (i.e., with actual cars and signs) experiments. The attacks perturb innocuous signs and advertisement boards in the driving environment so that the AV's classifier recognizes them as other traffic signs. One proposed attack, Lenticular Printing, created images that “look different from varying heights, allowing an adversary to stealthily embed a potentially dangerous traffic sign into an innocuous one, with no access to the internals of the classifier.” The human driver or passenger and the onboard camera therefore see the sign differently: the human sees the appropriate sign while the camera sees the incorrect one.

Solutions along the AI/ML Value Chain

Despite the very real threat of adversarial attacks, less than 1% of AI funding is actually going towards startups doing AI/ML security research and infrastructure development. Rather than continuing to “retro-fit IT systems with security measures that are meant to address vulnerabilities…[from] the 1980s,” we should increase funding allocation to AI cybersecurity: it will be a key factor in the continued advancement of AI, and a way to save billions of dollars down the line. The following investment areas all fall along the AI/ML value chain: quality data, algorithmic/paradigm innovation, and computing hardware.

1. Data Preparation, Optimization, & Securitization

Most of the aforementioned adversarial attacks involve the malicious modification of source data. This is a particularly challenging issue to navigate as the amount of data generated every day is difficult to process and interpret, which creates a large, vulnerable attack surface. Two interesting solutions startups are beginning to move towards are a) quality data preparation and b) decentralized data techniques.

a) Quality Data Preparation: To develop a robust model, large amounts of high-quality training data are almost always needed as a first step. Thus, when it comes to preventing adversarial attacks, ensuring quality source data is the initial line of defense. Startups are approaching this in a few different ways: 1) incorporating securitization methods directly within the data and 2) conducting accurate data labeling/annotation at scale. With the total market for AI/ML data preparation projected to reach over $1.2B in 2023, it's clear that these few players are only the beginning of a new AI/ML startup wave.

Legacy player IBM introduced a method for watermarking AI/ML models in 2018, which would theoretically provide a first line of defense against adversarial attacks by verifying ownership (i.e., making it more difficult to inject poisoned data undetected). Over the last few years, many startups have begun incorporating this method into protecting training data. Privitar, for example, offers enterprise training-data privacy protections in the form of watermarking images. And this technique isn't limited to image source data: deepfake startup Modulate attaches “audio thumbprints” inside all of its recordings, demonstrating one possible way to determine the validity of audio/video in the era of disinformation.
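
As a loose illustration of the trigger-set flavor of model watermarking, the sketch below trains a model to emit a chosen label on a small set of secret, out-of-distribution inputs and later verifies ownership by checking for that behavior. The data, model, and trigger construction are hypothetical and not taken from IBM's or Privitar's actual implementations.

```python
# Minimal sketch of trigger-set model watermarking: the owner bakes a secret
# behavior into the model and can later test a suspect model for it. All
# values here (trigger distribution, threshold, model) are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Secret trigger inputs: out-of-distribution noise, all assigned label 1.
triggers = rng.normal(loc=8.0, scale=1.0, size=(25, 20))
trigger_labels = np.ones(25, dtype=int)

X_train = np.vstack([X, triggers])
y_train = np.concatenate([y, trigger_labels])

model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                      random_state=0).fit(X_train, y_train)

def verify_ownership(suspect_model, triggers, trigger_labels, threshold=0.9):
    """Claim ownership if the suspect model reproduces the secret behavior."""
    agreement = (suspect_model.predict(triggers) == trigger_labels).mean()
    return agreement >= threshold

print("watermark verified:", verify_ownership(model, triggers, trigger_labels))
```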

Other startups have turned towards a traditionally (and still) highly laborious approach to ensuring quality training data. Scale AI ($122.6M total, Series C, backed by Accel, Index, and Founders Fund), the latest SV darling, offers mass data labeling/annotation services provided by over 30,000 outside contractors. With diverse, high-profile customers such as Waymo, Pinterest, Airbnb, OpenAI, and Lyft, Scale highlights a common thread among the most successful tech companies: a need for massive amounts of clean, secure, and labeled data.

b) Decentralized Data Techniques: In an ML world that necessitates amassing large, central repositories of training data, storing user data this way creates some major issues. Beyond the more obvious privacy concerns (i.e., a single entity manages and views this data), these data centers are also at high risk of adversarial attack. For instance, the central dataset an AI/ML algorithm is being trained on could experience data poisoning or Trojans, reducing the quality, security, and accuracy of the model in one step; it's relatively simple to hide Trojans in a single, massive pool of data.

As a result of these issues, as well as the increased performance of AI/ML computing hardware, startups are moving towards decentralized methods. Federated learning (FL) has become an increasingly popular way of doing so: instead of training an AI/ML model in one central location, the model is trained locally at each data source, on-premise. For example, Owkin ($18.1M total, Series A, backed by GV, F-Prime), which offers AI/ML for medical research, trains its predictive models on a network of hospital institutions, allowing data to remain onsite and only allowing encrypted algorithmic updates to move. There are two key security benefits here: 1) data poisoning is less effective since it's harder to attack every source of data, and 2) an attack implemented and identified at one source can serve as adversarial training for the model (i.e., FL allows the model to be updated with the most recent attack data from one source, helping it identify and neutralize future attacks on other sources).
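
For intuition, here is a minimal sketch of federated averaging (FedAvg), the aggregation idea at the heart of this setup: each site trains locally on its own data, and only weight updates leave the premises. The "hospital" data, the plain linear model, and the unencrypted averaging are illustrative simplifications; production systems such as Owkin's layer encryption and secure aggregation on top.

```python
# Minimal sketch of federated averaging (FedAvg). Each "hospital" trains a
# simple linear model on its own data; only the resulting weights are sent
# back and averaged. Data, model, and plain averaging are illustrative.
import numpy as np

rng = np.random.default_rng(0)
true_w = rng.normal(size=5)                      # shared underlying signal

def make_site_data(n):
    """Synthetic on-premise dataset for one hospital (never leaves the site)."""
    X = rng.normal(size=(n, 5))
    y = X @ true_w + 0.1 * rng.normal(size=n)
    return X, y

def local_update(global_w, X, y, lr=0.1, epochs=5):
    """One site's local training pass: a few steps of gradient descent."""
    w = global_w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

hospitals = [make_site_data(100) for _ in range(3)]

global_w = np.zeros(5)
for _ in range(20):
    # Each site trains locally; only the updated weights are shared.
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    # The coordinator averages the updates, weighted by each site's sample count.
    sizes = np.array([len(y) for _, y in hospitals])
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("true weights:      ", np.round(true_w, 3))
print("federated estimate:", np.round(global_w, 3))
```

The key property is that raw patient records never move; a poisoner would have to compromise sites one by one rather than a single central store.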

But given that FL is a nascent area, current research also highlights some security concerns. For instance, a paper published late last year discussed a novel type of backdoor attack specific to FL. And even with limited movement of data, you still have the problem of maintaining the security of data on-device/on-premise. Potential solutions to this may lie in securitizing the actual hardware, which is covered in a later section.

2. Algorithmic Innovation

Another way to implement robust defenses against adversarial attacks lies in algorithmic innovation. One area of longstanding interest that is now seeing real-world application is explainable AI/ML (XAI). If an adversary wants to bias a model's output in their favor, XAI provides visibility into the AI black box, helping ensure algorithmic accuracy and fairness by surfacing abnormal perturbations in inputs and outputs.
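
As a toy illustration of how explanations can flag an attack, the sketch below uses the simplest possible attribution (weight times feature value for a linear model) and flags a prediction whose explanation is dominated by one abnormally large contribution. Real XAI tooling from the startups discussed next supports far richer models and attribution methods; everything here is illustrative.

```python
# Minimal sketch of explanation-based anomaly flagging: for a linear model,
# each feature's contribution to the logit is weight * feature value, so a
# prediction driven by one outsized contribution stands out for review.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)

def explain(model, x):
    """Per-feature contributions to the model's logit for the given input(s)."""
    return model.coef_[0] * x

baseline = np.abs(explain(model, X)).mean(axis=0)   # typical contribution sizes

suspect = X[0].copy()
suspect[3] += 25.0                                   # an abnormal perturbation
contrib = explain(model, suspect)

# Flag features whose contribution is far outside the norm.
flags = np.abs(contrib) > 5 * baseline
print("flagged features:", np.where(flags)[0])
```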

Startups are capitalizing on XAI in a variety of ways. ArthurAI ($3.3M total, Seed, backed by Index, Work-Bench), for example, offers production model monitoring for explainability and bias detection. DarwinAI (CA$3.9M total, Seed, backed by Obvious, Inovia) is pioneering generative synthesis — it takes in any AI system and outputs a custom, leaner, and explainable version.

Other algorithmic approaches may be more niche and need to be developed on a case-by-case basis for each type of attack. For instance, in the case of Trojans, researchers have recommended that model owners “restrict query APIs from untrusted participants,” which makes Trojan attacks more difficult and time-consuming to implement. Model inversion defenses include differential privacy, which adds calibrated noise to hide sensitive information, and homomorphic encryption, which allows “inference applications on untrusted participants to directly perform DNN computations on encrypted input” so that no sensitive data leaks.
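
To show the differential privacy idea in its simplest form, the sketch below releases a mean with Laplace noise calibrated to the query's sensitivity, so any single record has only a bounded effect on the published value. The epsilon, clipping bounds, and data are illustrative; DP-SGD applies the same principle to model gradients during training.

```python
# Minimal sketch of the Laplace mechanism for a differentially private mean.
# Epsilon, bounds, and the synthetic "patient ages" are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def dp_mean(values, lower, upper, epsilon, rng):
    """Release the mean of `values` under the Laplace mechanism."""
    clipped = np.clip(values, lower, upper)
    true_mean = clipped.mean()
    # Sensitivity of the mean when one record changes within [lower, upper].
    sensitivity = (upper - lower) / len(values)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_mean + noise

ages = rng.integers(18, 90, size=500)            # sensitive patient ages
print("true mean:   ", ages.mean())
print("private mean:", dp_mean(ages, lower=18, upper=90, epsilon=0.5, rng=rng))
```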

3. Computing Hardware Advancements

Apart from data quality assurance and algorithmic approaches, there is a large opportunity to implement security features in the hardware designs themselves. This could include advances in the actual network architectures on chips to enable real-time security updates. One precedent example is legacy semiconductor company ARM’s TrustZone, which “establishes secure endpoints and a device root of trust.” Another player, startup Karamba Security ($27M total, Series B, backed by Fontinalis and Western Technology Investment), embeds its security solutions directly within edge devices and provides continuous threat monitoring. Other investment opportunities could involve forms of security middleware to enable system monitoring.

A Note on Secure MLOps

All three parts of the AI/ML value chain above touched on ways to prevent, detect, and defend against adversarial attacks, whether through training data, development strategy, or chips/hardware. Treating the AI model environment as a whole as an asset to secure in production, there are also notable investment opportunities outside my explicitly defined value chain: specifically, startups working on production ML system monitoring. MLOps startups give corporations real-time visibility into model performance and extreme, outlier inputs, which helps keep production systems secure.
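
A minimal sketch of this kind of monitoring is shown below: compare the distribution of live inputs against the training distribution and alert when they drift apart. The single feature, the two-sample KS test, and the alert threshold are illustrative; commercial MLOps tools track many more signals per model.

```python
# Minimal sketch of production input-drift monitoring with a KS test.
# Feature, threshold, and data are illustrative placeholders.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)   # reference data
live_feature = rng.normal(loc=0.8, scale=1.3, size=500)        # production traffic

stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:
    print(f"drift alert: KS statistic {stat:.3f}, p-value {p_value:.2e}")
else:
    print("no significant drift detected")
```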

ParallelM, for instance, provides cloud and on-premise services for managing and governing ML in production, helping increase the number of successful models; it was acquired by DataRobot in June 2019. Other interesting new startups working along the ML production line include Mona Labs ($2M total, Series A, backed by Differential Ventures and Global Founders Capital), which provides solutions for maintaining data integrity, explaining model bias, detecting concept drift, and measuring performance. And Arize AI ($4M total, Seed, backed by Foundation Capital) recently emerged from stealth mode in February to offer MLOps teams observability and explainability for their production pipelines.

AI/ML Security Startup Market Map

I’ve compiled a collection of startups addressing adversarial attacks along the AI/ML value chain I defined above. The first row covers the three key components of the chain: 1) Data Preparation, Optimization, & Securitization; 2) Algorithmic Innovation, with XAI as a focus; and 3) Computing Hardware Advancements, with edge/IoT security as a focus. The second row gives a general overview of Cybersecurity-as-a-Service startups that incorporate some form of defense against these attacks, separated out by industries with particularly compelling use cases.

Startups addressing threat of adversarial attacks along the AI/ML value chain.

Early-Stage Startup Highlight Reel (Pre-Seed to Series B)

Data Labeling & Optimization:

  1. Datasaur.ai ($2.4M total, Seed, GDP Venture): MLOps startup helping teams manage data labeling with one tool.
  2. Labelbox ($38.9M total, Series B, a16z & Kleiner Perkins): Data labeling tools, workforce, and automation for teams.
  3. Supervisely (Pre-Seed): Data labeling tools capable of digesting images, videos, and 3D point clouds into production-ready training data.

Interpretability/Explainability:

  1. DarwinAI (CA$3.9M total, Seed, Obvious Ventures): Pioneering generative synthesis — it takes in any AI system and outputs a custom, leaner, and explainable version.
  2. Arthur AI ($3.3M total, Seed, Index & Work-Bench): MLOps tools for monitoring, explaining, detecting bias, and measuring performance.
  3. Fiddler Labs ($13.2M total, Series A, Lux & Lightspeed): MLOps tool for understanding AI predictions, analyzing model behavior, validating compliance, and monitoring performance.

Edge Device & IoT Security:

  1. Cylera ($5.5M total, Seed, Two Sigma Ventures): Secures healthcare devices, operation tech, and enterprise IoT.
  2. Dover Microsystems ($6M total, Seed, Hyperplane Ventures): A “bodyguard” for your processor. Its “CoreGuard® technology is the only solution for embedded systems that prevents the exploitation of software vulnerabilities and immunizes processors against entire classes of network-based attacks.”

Cybersecurity-as-a-Service:

  1. Calypso AI (Pre-Seed): MLOps tool to secure AI models. Offers services such as helping build adversarial attack-resistant models, ongoing quality assurance tests, and explaining model attack vulnerabilities.
  2. Neurocat (Pre-Seed): Offers open-source AI analysis/debugging platform, AI lifecycle governance, & research/consulting services for robust model development.
  3. SafeRide Technologies (Pre-Seed): Offers multi-layer, deterministic cybersecurity for AVs and connected vehicles (e.g. anomaly detection, fleet monitoring).

Precedent Acquisitions

Data Labeling & Optimization:

  1. DataRobot acq. Paxata in Dec. 2019
  2. Uber acq. Mighty AI in June 2019

Interpretability/Explainability

  1. Temenos acq. Logical Glue in July 2019

Edge Device & IoT Security

  1. Insight acq. Armis for $1.1B in Jan. 2020
  2. Harman acq. TowerSec for $72.5M

Cybersecurity-as-a-Service

  1. Continental AG acq. Argus Cyber Security for $430M in Nov. 2017
  2. Sophos acq. Invincea for $100M in Feb. 2017

Concluding Thoughts

In order to hedge against adversarial attacks and allow for truly successful AI/ML integration at every level, AI/ML cybersecurity infrastructure must be developed in tandem with innovation. Whether the decisive countermeasures emerge from a specialized startup, a governmental body, or an extension of leading cybersecurity firms' product suites, robust solutions will ultimately fall along the AI/ML value chain: quality data, algorithmic/paradigm innovation, and computing hardware.


Caitlin Pintavorn

Investor @ Insight Partners. Prev @ PathAI, Two Sigma Ventures, Owkin, & StartUp Health. Subscribe to my newsletter: onxyz.substack.com.