Is your AI responsible enough?

As banks consider applying machine learning and Artificial Intelligence (AI) technologies, they must remember their responsibility towards their customers and society – and exercise it in a manner that is transparent and traceable.

Banks are also answerable to their regulators, even though regulation of technologies like AI is not yet on par with developments in the industry. Banks should nevertheless treat this as a way to uphold public trust, integrity and information quality, fairness and non-discrimination, and so on – all of which are significant factors in a bank’s reputational risk. In fact, banks that want to be progressive in their adoption of AI technologies should work with regulators to make sure regulation keeps up with industry adoption.

Trust in large companies is waning across consumer segments. Companies like Facebook, for example, have been hit with hefty fines by regulators. In a recent settlement, Facebook agreed to pay a $5 billion (€4.43 billion) fine to the US Federal Trade Commission (FTC) to close an investigation into its handling of user data and privacy lapses. Google was accused of abusing its market dominance in search by giving unfair advantages to another Google product, its comparison shopping service; in June 2017, the European Commission fined Google €2.42 billion for breaching EU antitrust rules.

Dealing with privacy

Adopting AI technologies involves managing and analyzing large amounts of data, much of which raises privacy concerns. To ensure they treat customers fairly, banks should consider measures such as hiring social scientists. These professionals can weigh in on the ethical and privacy concerns organizations will face as they start exploring commercial applications that leverage AI. This is not futuristic – it is already happening at companies like Google.

AI that is Ethical

Apple was caught up in a mini-scandal in the past few months. A US entrepreneur and his wife shared all of their bank accounts, and his wife had a higher credit score than he did. Yet when they both applied for a credit card, he received a credit limit 20 times higher than hers. All hell broke loose: he tweeted about it, and the New York Department of Financial Services has since opened an investigation. To add fuel to the fire, Steve Wozniak, the co-founder of Apple, retweeted it saying that he was in the same situation. Companies need to think deeply about the decisions their algorithms are making.

In AI ethics, the questions are: are we behaving in the right way towards our customers? Are we misusing their trust? Are we misusing the data that our customers willingly share with us?

Leveraging AI with Transparency

For example, take the case of a health insurance company that now has access to the social media information of a patient – Naresh – who regularly posts about his social life on Facebook and Twitter.

The insurance company now faces a moral dilemma: ‘Naresh is a high-risk patient – should I increase his premium, or should I discard this information?’ In such cases, it is important to be transparent about how the company arrives at the premium for its insurance products. Customers should be made aware that public information about them might be tracked to arrive at a more appropriate premium for the products they choose.

AI models do not explain how they arrive at each decision they make. Although some vendors have introduced explainable AI capabilities, most use them for marketing purposes, and organizations do – and will continue to – achieve excellent results without full transparency.
Depending on the business context, however, privacy, security, algorithmic transparency, and digital ethics require companies to bring transparency into their business practices.

For example:

  • AI that makes decisions about people, such as rejecting a loan application, may require transparency. By law, providers of algorithms must give the user a reason for rejection.
  • According to the EU’s GDPR, which took effect in May 2018, users affected by an algorithmic decision may request an explanation and a valid reason for it.
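To make the idea of a reason-giving decision concrete, here is a minimal sketch of a loan decision that records a human-readable reason for every rejection. The thresholds, field names, and rules are illustrative assumptions, not any bank’s actual policy.

```python
# Illustrative only: thresholds and field names are made up for this sketch.
THRESHOLDS = {"credit_score": 620, "annual_income": 30000}

def decide_loan(applicant: dict) -> dict:
    """Return a decision plus the concrete reasons behind it."""
    reasons = []
    if applicant["credit_score"] < THRESHOLDS["credit_score"]:
        reasons.append(f"credit score below {THRESHOLDS['credit_score']}")
    if applicant["annual_income"] < THRESHOLDS["annual_income"]:
        reasons.append(f"annual income below {THRESHOLDS['annual_income']}")
    # An empty reasons list means no rule fired, so the loan is approved.
    return {"approved": not reasons, "reasons": reasons}

decision = decide_loan({"credit_score": 580, "annual_income": 45000})
# decision["reasons"] names exactly which rule caused the rejection,
# which is the reason that must be communicated back to the user.
```

The point of the design is that the explanation is produced by the same code path that makes the decision, so the reason given to the customer can never drift out of sync with the rule that actually fired.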

So in conclusion:

  • Start by using AI to augment rather than replace human decision making. Having humans make the ultimate decision avoids some of the complexity of explainable AI.
  • Data biases will be questioned, but strong governance of ethics in technology applications is likely to help address this.
  • Create data and algorithm policy review boards to track and perform periodic reviews of machine learning algorithms and data being used.
  • Continue to be transparent about business practices around technology-led applications.

The Imperative of Credential Security Management (CSM) in RPA

RPA’s Staggering Growth

In 2019, Gartner labeled RPA the fastest-growing enterprise software category and predicted that within the next two years, 72% of organizations would be working with RPA, with interest growing further in 2020. Gartner and Forrester have published a range of numbers illustrating this growth.

According to Deloitte, at the current rate of growth RPA adoption will become saturated – nearly every company will be using it in some form within the next five years, with near-universal adoption reached at some point in 2023. Given these predictions, organizations fearful of missing out are clamoring to implement RPA and expand it across their operations.

The Ripple Effect

Amid the excitement over RPA deployment, there is a significant challenge to highlight.

An RPA implementation is only as effective as the person configuring the automation flow. If the bot encounters an unexpected scenario it has not been configured to handle, the flow is disrupted and the whole chain of automated tasks can break. Anticipating all the possible scenarios the bot could encounter is therefore key to configuring a successful automation flow. Moreover, with RPA deployed amid large amounts of information flowing across disparate teams, partners, devices, clouds, vendors, and customers, the old-school human-only approach to information security cannot scale to handle this data deluge, which can lead to costly mistakes. These automated processes need an enterprise-grade security mechanism to ensure that sensitive data such as automation credentials is stored securely, and that the right access privileges are granted before software bots gain access to those credentials.
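The access-control requirement above can be sketched in a few lines: a toy credential store that releases a secret only to bots explicitly granted access. This is a hypothetical illustration, not AssistEdge’s or CyberArk’s API; a real deployment would back this with a hardened, encrypted vault rather than an in-memory dictionary.

```python
class CredentialStoreError(Exception):
    """Raised when a bot requests a credential it was not granted."""

class CredentialStore:
    def __init__(self):
        self._secrets = {}  # credential name -> secret value
        self._grants = {}   # credential name -> set of authorized bot ids

    def add_credential(self, name, secret, granted_bots):
        """Store a secret and record exactly which bots may read it."""
        self._secrets[name] = secret
        self._grants[name] = set(granted_bots)

    def fetch(self, bot_id, name):
        """Return the secret only if this bot has been granted access."""
        if bot_id not in self._grants.get(name, set()):
            raise CredentialStoreError(f"{bot_id} is not authorized for {name}")
        return self._secrets[name]

store = CredentialStore()
store.add_credential("crm_password", "s3cret", granted_bots=["bot-1"])
store.fetch("bot-1", "crm_password")  # allowed
# store.fetch("bot-2", "crm_password") would raise CredentialStoreError
```

The design choice worth noting is that the grant check happens inside the store itself, so no automation flow can bypass it by holding a reference to the secret map.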

AssistEdge RPA Credential Management

With AssistEdge, organizations can deploy their RPA programs with an end-to-end solution in mind, starting with ‘discovering’ the right processes for automation using AssistEdge Discover. Recognized as an RPA leader in the 2019 Forrester Wave Report, AssistEdge RPA also allows software bots to manage automation credential security, covering the critical aspects of credential storage and safekeeping.

AssistEdge RPA provides an in-built feature called Credential Manager that safely stores business application credentials, manages their assignments, and controls which robots can access them.

Credential Security Management (CSM) Industry

When it comes to credential security management, some companies focus entirely on this arena. One such company is CyberArk, an information security company whose technology ensures that people in an enterprise have appropriate access to internal resources. With the enormous interest in RPA implementation, CyberArk customers who want to deploy AssistEdge RPA can use their existing CyberArk credential security platform: AssistEdge is a CyberArk-certified alliance partner product and works out of the box with CyberArk, in addition to other credential security management products on the market.

The decision on which CSM platform to deploy depends on whether your organization is already using other CSM software such as CyberArk. If so, it may make sense from a practical implementation perspective to continue leveraging your existing CSM software and integrate it with AssistEdge RPA. If your organization does not have any existing credential security support, AssistEdge RPA’s out-of-the-box support is more than enough for your needs.

In the end, with RPA implementations likely to reach nearly every organization within the next five years, the need for automated credential security management is clear. With more individuals involved in the implementation and deployment of RPA – in both attended and unattended automation – automated credential security is crucial. Bots and humans involved in automated processes need access to different applications within an organization, and the organization needs assurance that these access rights are secure and accurate. AssistEdge already includes this capability in its RPA product. In organizations where credential management needs span multiple applications, a solution like CyberArk may be more appropriate; in those cases, AssistEdge RPA can integrate with CyberArk and other similar platforms.

Sources:
www.enterpriseproject.com
www.gcn.com

Meet Albie – The cognitive engine of AssistEdge powered by Nia

Automation and AI combined have the transformative potential to build the next generation of organizations. They are the principal vehicles of change, pushing enterprises towards automation singularity. Do you feel you are perpetually short of your automation goal? No worries – Albie can make all of this a breeze.

A couple of months ago, we introduced ‘Albie’, the cognitive engine of AssistEdge, powered by Infosys Nia. Albie delivers pervasive intelligence across the enterprise, helping you traverse the intelligent automation journey.

Capabilities of Albie

Contextual Intelligence

Albie can unify the Human-Digital twin to push the automation frontier to 95%. It empowers human workers with contextual insights to identify process redesign opportunities.

Albie Decision Workbench

Albie can observe, learn, and help solve business problems, resolving exceptions and reducing errors. It can seamlessly unify the Human-Digital worker.

Cognitive Services

If you wish to automate processes that involve extracting information embedded in documents – images, handwritten notes, prints, and scans – using cognitive services on tap, Albie has you covered.

Business Intelligent Dashboards

Albie can create process-specific KPI dashboards, providing you with the framework to capture business-relevant data for creating dashboards around historic information, forecasts & business impacts.

Smart Resource Management

Albie enables intelligent management of bot workload and health: it can predict failures and SLA breaches, and dynamically scale bots.

Why Albie?

We have all experienced this in the past: you are handed documents, receipts, and invoices, and expected to extract the information manually. You wish there were a better way to do this!

In a typical scenario, documents such as scanned or handwritten invoices and cheques are difficult for standard OCR to interpret. Organizations are sitting on a goldmine of data waiting to be tapped – that is where advanced OCR technologies come to the rescue.

Using Machine Learning and Computer Vision to mark the relevant regions of a document, Albie hands the task over to Advanced OCR for data extraction from the scanned images.

Here’s how Albie works

Albie, the cognitive engine, enables enterprises to constantly learn from their current processes and improve as they evolve. Simple processes have long been optimized up to 100% – but what about complex processes? The most important part of automation is handling exceptions. Albie learns to solve exceptions (via the Albie Learning Model) by observing how humans solve them. When faced with the same exceptions in the future, Albie already has a decision ready for the digital worker/bot to implement; only exceptions that have not occurred before are routed to a human worker for a decision, making the entire system smarter over time.
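The learn-from-human-resolution loop described above can be sketched as a simple lookup that grows over time. The class name, the idea of an exception "signature", and the string-valued resolutions are assumptions made for illustration; they are not Albie’s actual internals.

```python
class ExceptionLearner:
    """Toy sketch of a learn-by-observation exception handler."""

    def __init__(self):
        # exception signature -> resolution previously chosen by a human
        self._known = {}

    def resolve(self, signature):
        """Return a learned resolution, or None to route to a human."""
        return self._known.get(signature)

    def record_human_resolution(self, signature, resolution):
        """Remember how a human resolved this exception for next time."""
        self._known[signature] = resolution

learner = ExceptionLearner()
# First occurrence: unknown, so the flow routes it to a human worker.
first = learner.resolve("missing_invoice_date")  # -> None
# The human's decision is recorded...
learner.record_human_resolution("missing_invoice_date", "use_scan_timestamp")
# ...so the next occurrence is resolved automatically by the bot.
second = learner.resolve("missing_invoice_date")  # -> "use_scan_timestamp"
```

This captures the routing rule in the text: only signatures with no recorded resolution reach a human, and every human decision shrinks that set.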

Thanks to self-learning and smart exception management, Albie can enable enterprises to push the automation frontier to 95%.

Let’s dive into a case study to learn how we capitalized on the power of Albie to enable process automation for a retail giant.

The challenges

Our solution