In an era marked by rapid growth, innovation, and ambition, the current crisis has forced enterprises to focus sharply on what matters. They must now be resilient, innovative, and efficient while staying ahead of the competition. The proliferation of technology, specifically intelligent technologies, is set to play an increasingly important role in this endeavor. We are at a juncture where machines do far more than meet productivity goals; they influence even planning and decision-making processes. Our current experience of the need for robust technology means that it won’t be long before cognitive technologies permeate every aspect of human life, from entertainment and healthcare to transport and, perhaps, even space travel. The average person’s life is already so intertwined with technology that they trust machines to help them navigate to a destination, report their vitals, and offer lifestyle recommendations.

For enterprises, this change is even more significant. Across departments, an increasing number of companies will automate business processes along with a growing share of decision-making. AI and ML, emerging technologies until recently, are now firmly in the mainstream and will drive the future of enterprise growth. The growth figures are a testament to AI’s popularity: a report from market research firm Fortune Business Insights states that the global AI market will reach USD 202.57 billion by 2026, up from just USD 20.67 billion in 2018, a CAGR of 33.1%. Procurement, sales, marketing, production, finance, human resources, regulation, and compliance are just some of the departments seeing the transformative effects of AI-based technology. It is an exciting phase. AI- and automation-based technologies will augment human productivity, reduce average running costs, and all but eliminate the need for manual intervention in deterministic tasks, creating an environment of agility and continuous innovation. Adoption and implementation, however, are not the only areas for enterprises to consider.


A Question of Safety
As enterprise dependency on machine-driven cognitive capabilities increases, it is pertinent to explore AI system security. Are AI systems tamper-proof? How can they be made secure? From a more fundamental standpoint, can the prevailing narrative of security and control adequately address the potentially unique threats that AI-based systems can pose? To answer these questions, we need to understand the broad themes that determine the resilience and effectiveness of AI implementations. These include:

  • The vulnerabilities of AI systems and the disruptions they can suffer
  • AI’s value drivers and how they can be protected from malicious actors
  • The unknown threat actors and attack surface for AI systems
  • The differences between traditional security controls and those of AI systems

Before we delve into these topics, it may be pertinent to understand an AI system’s life cycle.

Stages of the AI Life Cycle
AI systems have two stages in their life cycle – learning and inference. In the learning stage, a model is trained on available data, which can be either labeled (supervised learning) or unlabeled (unsupervised learning). In the inference stage, as the name suggests, the model draws inferences based on what it learned in the learning stage. Systems can also learn actively from their own inferences through a process known as reinforcement learning. Each of these stages carries distinct security threats. Let’s understand them better.
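The two stages can be sketched in a few lines of code. This is a minimal, illustrative example using a hand-rolled logistic-regression model; the data, function names, and hyperparameters are all assumptions, not a reference implementation.

```python
# Minimal sketch of the two life-cycle stages: a learning phase that
# fits a logistic-regression model to labeled data, and an inference
# phase that scores unseen inputs with the frozen model.
import numpy as np

def train(X, y, lr=0.1, epochs=500):
    """Learning phase: fit weights to labeled (supervised) data."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
        grad_w = X.T @ (p - y) / len(y)          # gradient of the log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def infer(w, b, x):
    """Inference phase: the trained model scores a new input."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Toy dataset: label is 1 when the features are large.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([0, 0, 1, 1])
w, b = train(X, y)
print(infer(w, b, np.array([0.85, 0.9])))  # high probability -> class 1
```

The security observation that follows from even this toy version: everything the model "knows" comes from `X` and `y`, so whoever controls that data controls the model.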

Are AI systems vulnerable? Absolutely. Like any software system, AI-based systems are open to attack. If the interfaces to these systems are not secure, they are susceptible to exploits such as DoS and DDoS attacks. All the defense protocols needed for any software system must also be in place for AI systems. The question to answer is whether AI systems have any specific vulnerabilities that need attention. Understanding the purpose of AI systems and the model-building process will shed some light on the issue.
An AI system in its learning stage relies heavily on the data provided for accuracy and precision. This reliance means that the security of an AI system begins with the security of data. There are several examples of how data breaches have caused financial and reputational losses for enterprises.

In 2017, Equifax found itself in hot water when a data breach, one of the largest in history, exposed the personal information of 147 million people, leading the company to pay a USD 425 million settlement to those affected, alongside other state-level penalties.

While this is an example of a breach, the point to note is that any unscrupulous access to enterprise data can severely undermine an organization’s AI programs and their chances of success.

A subtle and delicate skew introduced at the right stage of the learning phase can significantly alter the behavior of predictive or cognitive AI models. Minor variations in data, known as perturbations, can fool a model into a contrary conclusion. This impact is especially evident in computer vision, image analytics, and video analytics, and in the security systems that rely on them. Several studies have demonstrated that people can dupe AI systems designed to identify humans simply by wearing certain clothing or accessories. Deception like this could compromise a security system that relies on deep-learning-based object detection and computer vision models.
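The mechanics of a perturbation attack are easiest to see on a linear classifier, where the gradient of the score with respect to the input is just the weight vector. The sketch below is a simplified, FGSM-style illustration; the weights and inputs are hypothetical, and real attacks target deep networks in the same spirit.

```python
# Sketch of an FGSM-style perturbation against a linear classifier.
# A small, bounded nudge to each feature flips the model's decision.
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # model weights (assumed known to the attacker)
b = 0.1

def predict(x):
    return 1 if x @ w + b > 0 else 0

x = np.array([0.5, 0.1, 0.2])    # clean input, classified as 1
# For a linear model, the gradient of the class-1 score w.r.t. the
# input is w itself; stepping against its sign lowers the score.
eps = 0.25
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))  # 1 0
```

Each feature moved by at most 0.25, yet the decision flipped. Against image models, the analogous change is often imperceptible to a human.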
In reinforcement learning and continuous learning systems, where models learn from their inferences, a different set of challenges heightens the security concerns. As models consume new data on the go, they also run the risk of exposure to unsafe or unwanted knowledge. Think of them as children who learn from observation and information.

Microsoft’s Tay, an AI Twitter bot designed to learn from human interactions, was a classic example of reinforcement learning gone wrong. Since Tay couldn’t distinguish between unacceptable and appropriate social conversation, it learnt from the Twitter users who used abusive, racist and indecent language and began to deliver responses in a similar tone.
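One practical mitigation for continuously learning systems is a safety gate between user input and the training buffer. The sketch below is illustrative only: the blocklist tokens and function names are assumptions, and a production system would use trained toxicity classifiers plus human review rather than keyword matching.

```python
# Sketch of a safety gate in front of a continuously learning system:
# user messages are screened before they enter the training buffer.
BLOCKLIST = {"slur1", "slur2", "abusive_term"}  # placeholder tokens

def is_safe(message: str) -> bool:
    """Naive keyword screen; real systems use trained classifiers."""
    tokens = set(message.lower().split())
    return not (tokens & BLOCKLIST)

training_buffer = []

def ingest(message: str):
    if is_safe(message):
        training_buffer.append(message)
    # Unsafe messages are dropped (or routed to human review) and
    # never influence the model's future behavior.

ingest("hello how are you")
ingest("you are a slur1")          # rejected by the gate
print(training_buffer)             # ['hello how are you']
```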

If an AI model that is part of business-critical workflows like procurement or risk management were to misbehave, perhaps even more inconspicuously, the consequences could be substantial. Whether the cause is inadequate training or deliberately poor training, I would again treat the possibility as an enterprise security risk. AI blurs the lines between functional issues and security concerns. As these threats increase, so must vigilance. The cost of detection and repair after an incident is exponential, while prevention can be challenging, if not impossible.

So, how do these threats manifest in the enterprise context? A defining feature of any AI model is its reliance on data, so protecting that data, not just from theft but also from manipulation and malicious injection, is crucial. In this regard, internal threat actors are just as important as external malicious entities.
Data is the most valuable asset, the crown jewel, of today’s enterprise. It is the key to model learning, which means that malicious access could inconspicuously introduce bias or skew into the data, affecting a model’s inferences. I call this ‘wanton bias.’ The perturbations mentioned earlier are essentially tweaks that create wanton bias in deep learning models. If an enterprise relies on a decision-making model in its automation line, a loss of consistency due to injected wanton bias that allows unintended exceptions could have disastrous implications. AI systems themselves also increase the attack surface of an organization.
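How little tampering it takes to inject wanton bias can be shown with a toy experiment: flipping a single training label moves a classifier's decision boundary enough to change its answer on a borderline input. The data, labels, and nearest-centroid model below are all illustrative.

```python
# Sketch of label-flipping "wanton bias": a nearest-centroid classifier
# trained on clean vs. poisoned labels answers differently on a
# borderline input.
import numpy as np

X = np.array([[0.0], [1.0], [2.0], [8.0], [9.0], [10.0]])
clean_y    = np.array([0, 0, 0, 1, 1, 1])
poisoned_y = np.array([0, 0, 1, 1, 1, 1])   # one label flipped (x = 2.0)

def centroid_predict(X, y, x):
    """Assign x to whichever class centroid is nearer."""
    c0 = X[y == 0].mean()
    c1 = X[y == 1].mean()
    return 0 if abs(x - c0) < abs(x - c1) else 1

x = 4.0  # borderline input
print(centroid_predict(X, clean_y, x), centroid_predict(X, poisoned_y, x))  # 0 1
```

One flipped label out of six changed the outcome, and nothing about the poisoned model looks obviously broken from the outside.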

An intelligent hacker, in addition to mounting conventional DoS and DDoS attacks, could exhaust the model by crafting computationally expensive inputs that leave it unresponsive to other requests.

Exposing AI models to external interfaces can intensify this risk.
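A common defense is to put cheap guards in front of the expensive model: bound the input size and rate-limit each caller before inference ever runs. The limits, names, and model stub below are assumptions, sketched for illustration rather than taken from any particular serving framework.

```python
# Sketch of pre-inference guards against model-exhaustion attacks:
# reject oversized inputs and rate-limit callers before the costly
# model call is made.
import time
from collections import defaultdict, deque

MAX_INPUT_LEN = 1024       # reject pathologically large requests
MAX_CALLS_PER_MIN = 60     # per-client budget

_calls = defaultdict(deque)

def guarded_predict(client_id, payload, model=lambda p: "ok"):
    if len(payload) > MAX_INPUT_LEN:
        raise ValueError("input too large")
    now = time.monotonic()
    window = _calls[client_id]
    while window and now - window[0] > 60:
        window.popleft()                   # drop calls older than a minute
    if len(window) >= MAX_CALLS_PER_MIN:
        raise RuntimeError("rate limit exceeded")
    window.append(now)
    return model(payload)                  # only now pay the inference cost

print(guarded_predict("client-a", "small request"))  # ok
```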
Another factor widening the attack surface is the use of AI itself to exploit vulnerabilities. With deterministic algorithms, the result is usually binary. The move to stochastic, statistical, and probabilistic models, as with ML, deep learning, and neural networks, creates various shades of grey. Writing the tens of thousands of conditional statements needed to crack a problem may have been impractical in the past, but the equivalent is eminently possible with a neural network whose weights are adjusted at each layer. The same power can be used to locate an opportunity to exploit, making malicious actors just as powerful as the systems they are looking to hack.
This environment tells us that the defense mechanisms traditionally built on the application security side and perimeter are as relevant as before. It also underlines the importance of increased vigilance, specifically in areas like APT (advanced persistent threat) protection.

Non-essential, indiscriminate, and unauthorized access to data during the model learning phase is the surest way to compromise an AI system. Enterprises must prohibit non-essential access, both internally and externally, and encrypt datasets to protect against malicious injection attacks that can create wanton bias. To secure the perimeter, organizations must also implement strong firewalls alongside robust tools and processes for SIEM and APT protection. Content protection such as antivirus and antimalware software remains relevant as the scope of data loss prevention (DLP) extends beyond IP leaks.
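Beyond encryption and access control, tamper-evidence for approved training data is cheap to add: record a cryptographic digest when a dataset is approved, and verify it before every training run. The file contents and registry here are hypothetical; the hashing itself is standard-library SHA-256.

```python
# Sketch of tamper-evidence for training data: store a SHA-256 digest
# of an approved dataset, then verify it before each training run.
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

approved = b"feature1,feature2,label\n0.1,0.2,0\n0.9,0.8,1\n"
approved_digest = digest(approved)          # kept in a secure registry

def verify_before_training(data: bytes) -> bool:
    return digest(data) == approved_digest

# A single flipped label changes the digest and fails verification.
tampered = approved.replace(b"0.9,0.8,1", b"0.9,0.8,0")
print(verify_before_training(approved), verify_before_training(tampered))  # True False
```

This does not prevent poisoning of data before approval, but it does make any subsequent modification detectable.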

Auditability is another key security measure. Appropriate logging and a well-maintained data-use audit trail will ensure the traceability of user and administrator actions. All data and audit files must be stored in a secure, access-controlled location. It is also important to pay attention to the separation of concerns. Access to applications and data stores should be granted only on a need-to-know and need-to-have basis. Strict role-based access rules should be enforced within the application as well as in the deployment and management environment.
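The need-to-know principle and the audit trail reinforce each other, as a minimal sketch shows: every access attempt, allowed or denied, lands in the log. The roles, permission strings, and user names are illustrative assumptions.

```python
# Sketch of role-based, need-to-know access control with an audit
# trail: denied attempts are recorded alongside granted ones.
import datetime

PERMISSIONS = {
    "data_scientist": {"training_data:read"},
    "ml_admin":       {"training_data:read", "training_data:write"},
}

audit_log = []

def access(user, role, action):
    allowed = action in PERMISSIONS.get(role, set())
    # Every attempt, allowed or not, lands in the audit trail.
    audit_log.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user, "role": role, "action": action, "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{role} may not {action}")
    return "granted"

print(access("alice", "ml_admin", "training_data:write"))   # granted
```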

Enterprises must develop more robust application security frameworks that include Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and Interactive Application Security Testing (IAST). There is no alternative to penetration testing for validating application security. Furthermore, with open-source components and libraries included in most software applications, open-source security considerations are critical. Frameworks such as TensorFlow and Theano are themselves open source, making it imperative for enterprises to have well-defined mechanisms, tools, policies, and a certified team of security experts to deal with open-source vulnerabilities.

Understanding that AI is a model based on human thinking helps us ask how we would approach security if humans were executing all enterprise functions. What controls would we have then? The following list offers some context:

  • Authorization
  • Verification and validation
  • Auditability of actions and logs
  • Justification and explanation for decisions
  • Accountability for actions

We must have these same parameters in place for AI-based systems. Following on from the list above, an effective and responsible AI system would:

  • Allow access only to authorized users through strict access controls
  • Only allow the AI to act on authorized areas by delineating boundaries of control clearly
  • Have a clear log and audit trace of actions taken
  • Offer explainability
  • Demonstrate accountability

There has been significant progress in explainable AI, where a snapshot of the weights at different neurons in the network is captured for every AI-based decision. If there are anomalies, the snapshot can be analyzed further to establish whether the problem lies in manipulated training data or in the AI application’s implementation. Each AI system outcome must be accompanied by a confidence score or probability metric that reflects the model’s choice. AI systems should also offer the ability to trace the cause and effect of each decision to support auditability. For enterprises, contracts must reflect their need for AI accountability. Consider including limitations in your contracts to protect you in the event of AI system errors and omissions, much as you would for humans.
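These requirements, a confidence score per outcome and a traceable record of each decision, can be combined in a thin wrapper around inference. The model stub, field names, and trace-id scheme below are assumptions made for illustration.

```python
# Sketch of an audit wrapper for inference: every decision emits a
# record with the input, the outcome, a confidence score, and a
# trace id, so anomalous outcomes can be investigated later.
import uuid

decision_log = []

def model(x):
    """Stand-in for a real model: returns (decision, confidence)."""
    score = min(max(x, 0.0), 1.0)
    return ("approve" if score >= 0.5 else "reject"), score

def audited_predict(x):
    decision, confidence = model(x)
    record = {
        "trace_id": str(uuid.uuid4()),
        "input": x,
        "decision": decision,
        "confidence": round(confidence, 3),
    }
    decision_log.append(record)            # persisted for later audits
    return record

print(audited_predict(0.8)["decision"])    # approve
```

In production, the log would go to tamper-resistant storage, and the record would also capture model version and training-data lineage so that a bad outcome can be traced back to its cause.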

A Safe Journey Ahead

The strength of AI’s value proposition means that its proliferation is inevitable, irrespective of temporary concerns from security professionals, regulatory authorities, privacy advocates, system auditors, or even the legal fraternity. Resistance to adoption would be imprudent, if not downright foolish. Moving forward requires a preparedness that understands the risks and looks to mitigate them. Data protection must move beyond a mindset of purely legal considerations or GDPR compliance to one that safeguards the most valuable resource we know. Companies should develop data protection and access policies that offer complete visibility into data use, especially at the learning stage. Enterprises that implement the necessary controls, checks, and balances will enjoy the benefits of a secure AI system that generates valuable, explainable, and justifiable outcomes.

