Personalized Bot Monitoring and Security

What are personal bots?

“Personal bots work on employee’s machine, mostly in attended form and perform tasks for the employee — pulling data from multiple sources to create reports, storing client contact data and even creating regular presentations.

Personal bots can be looked at as a digital concierge for employees in an organization. Through advanced mobile interface and virtual assistants, employees can interact with these personal bots installed on their office machines/systems. The personal bot triggered on-demand or scheduled by the employee, can perform tasks on behalf of the employee, even in his/her absence. Just like a digital concierge, this personal bot on the employee’s machine will be well-equipped to take requests and execute.”

The management of security risks is a top-priority issue for the development of RPA. When it comes to personalized bots in RPA, one of the most serious concerns is ensuring that confidential data is not misused through the actions of a personalized bot.

Some of the generic processes handled by a personalized bot would be regular business procedures such as file transfers, data processing, payroll-related processes, etc. All of these require that the automation platform have access to confidential information (inventory lists, addresses, financial information, passwords, etc.) about a company's employees, customers and vendors.

Consequently, strong monitoring and security measures for the personalized bot become essential.

How can a personalized bot ecosystem be secured?

In a personalized bot ecosystem, the starting point would be architecture and authentication. One of the issues in a personal bot scenario is 'individual authentication': when robots use employee log-ons, it becomes increasingly difficult to distinguish human activities from those of the bot. Security is given a boost when each robot is given its own individual log-in and its own permissions in the system.
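The per-bot identity idea can be sketched in a few lines. This is purely illustrative, not an AssistEdge API: the "svc-bot-" account prefix is an assumed naming convention for dedicated bot log-ins.

```python
# Sketch: give each robot its own log-in so its actions can be told apart
# from the employee's in audit logs. Account names and the "svc-bot-"
# prefix are illustrative assumptions, not a product convention.

def actor_type(account_name):
    """Classify an audit-log actor as 'bot' or 'human' by account naming."""
    return "bot" if account_name.startswith("svc-bot-") else "human"

def audit_entry(account_name, action):
    """Build an audit record that always carries the actor classification."""
    return {"account": account_name, "type": actor_type(account_name), "action": action}

entry = audit_entry("svc-bot-payroll-01", "download_report")
```

With dedicated accounts like this, every log line is attributable to either the employee or the robot, which is exactly the distinction individual authentication aims to restore.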

The below approach can help effectively monitor and secure the RPA platform:

Integrity: One needs to ensure that the results/data coming from the bot have not been modified or altered.
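The integrity requirement can be met with a message authentication code. Below is a minimal sketch using Python's standard hmac module; the shared key and report payload are illustrative assumptions (in practice the key would live in a credential vault).

```python
import hmac
import hashlib

# Illustrative shared secret; in practice this comes from a credential vault.
KEY = b"shared-secret"

def sign(payload: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the bot's output."""
    return hmac.new(KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Check the tag in constant time; False means the data was altered."""
    return hmac.compare_digest(sign(payload), signature)

report = b"quarterly-figures:1234"
tag = sign(report)
```

Any consumer of the bot's output can recompute the tag and reject data whose signature does not match.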

What monitoring and security features does the enterprise edition offer? What is needed in a personal bot?

Currently, the enterprise editions of most well-known RPA tools have robust monitoring and security features, which allow organizations full control over the security of the bot. But when it comes to a personalized bot, these features have to be customized accordingly. For example, AssistEdge 18.0 Enterprise Edition has a 'Control Tower' component that covers monitoring and security. Its credential manager safeguards credentials, end-to-end monitoring of robots is available through the dashboard component, and the details of each transaction executed by the bot are available in the process view component. For personal bots, however, all these components will need a re-look.

One might say that with the features mentioned above in AssistEdge 18.0, the bot is visible, traceable and secure, so why is a re-look needed? Automated logins are entirely traceable. The challenge arises when one part of the process is handled by humans, then handed over to robots, then back to humans. The problem here is really one of simplification and the lack of straight-through processing or integration, where a single sign-on would allow continuous workflows without the stop-start that characterizes handovers. As for monitoring, the enterprise concept of a dashboard might not be essential; personalized information for the individual handling the bot should suffice.

Fundamentally, it comes down to thorough homework and preparation when handling personal bots, to ensure a more robust and secure ecosystem. Today, with enormous opportunities in RPA, the market is exploding with providers who promise quick and easy implementations wherever a process can be replaced by a bot. What remains to be seen is how much success the concept of a 'robot for every person' will bring.

Sources:

https://www.ey.com/Publication/vwLUAssets/ey-how-do-you-protect-robots-from-cyber-attack/$FILE/ey-how-do-you-protect-robots-from-cyber-attack.pdf

https://medium.com/@cigen_rpa/security-risks-in-robotic-process-automation-rpa-how-you-can-prevent-them-dc892728fc5a

https://www2.deloitte.com/content/dam/Deloitte/us/Documents/public-sector/us-fed-it-security-for-the-digital-laborer.pdf

https://www.edgeverve.com/finacle/assistedge/blogs/digital-concierge-embracing-future-bots/

Licensing models for Personalized Bots

When choosing an RPA technology, it's important to think about licensing. License models in RPA are tremendously diverse across the vendor landscape, making like-for-like comparison difficult.

When we talk about licensing from AssistEdge 18.0 point-of-view, the licensing concepts can be broadly classified under the below categories:

Named User: This type of license is mainly for attended robots (robots that are triggered by a user on a need basis or require human intervention). It enables the named user to register any number of robots on any machine, as long as the same Active Directory username is present on all of them. The user is not allowed to run multiple robots simultaneously.

Concurrent User: A type of license that suits users who work in shifts, as licenses are consumed only when a robot is actually in use. Multiple robots can share the same license, provided only one bot is running at any given instant.

Number of Bots: This type of license is mainly for unattended robots (robots that run independently on a schedule, mostly on virtual machines). Only one robot can run per license, and it can run any number of times. For simultaneous execution, multiple licenses are required, based on the number of bots running at the same time.
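A minimal sketch of how concurrent-user enforcement might work, assuming a simple in-process seat counter (this is an illustration of the rule "only one bot running per license at a given instant", not any vendor's actual licensing engine):

```python
# Sketch: enforce that at most `seats` robots run at once under one
# concurrent-user license. Class and method names are illustrative.

class ConcurrentLicense:
    def __init__(self, seats: int):
        self.seats = seats
        self.running = set()

    def start(self, bot_id: str) -> bool:
        """Grant a run slot only if a seat is free."""
        if len(self.running) < self.seats:
            self.running.add(bot_id)
            return True
        return False

    def stop(self, bot_id: str):
        """Release the seat when the bot finishes."""
        self.running.discard(bot_id)

lic = ConcurrentLicense(seats=1)
first = lic.start("bot-a")    # first bot acquires the seat
second = lic.start("bot-b")   # second bot is refused while bot-a runs
lic.stop("bot-a")
third = lic.start("bot-b")    # seat freed, bot-b may now run
```

The same structure with seats > 1 models the "Number of Bots" case, where simultaneous execution consumes one license per running bot.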

Most of the leading RPA providers offer the below licensing models:

What do we mean by personal bots?

As described in the previous section, personal bots run on an employee's machine, mostly in attended form, performing tasks such as pulling data from multiple sources into reports, storing client contact data and creating regular presentations: in effect, a digital concierge that can be triggered on demand or scheduled, and that can act even in the employee's absence.

When thinking of licensing models for personal bots, the below models can be considered:

Pay-as-you-go Licensing Model: A consumption-based pricing model, in which you pay for what you use, can be considered the most suitable. A mix of named-user and concurrent-user licensing would be ideal for a personalized bot: calculate the runtime hours of a robot and charge on a per-hour basis. If more than one personal bot must run concurrently, charge per hour for each bot. A personal bot is not expected to run unattended or 24x7; it is only meant for tasks such as pulling data from multiple sources to create reports, storing client contact data and creating regular presentations, which require it to work for a specific period of time.
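The pay-as-you-go arithmetic can be illustrated as follows. The hourly rate and the round-up-to-the-hour policy are assumptions made for the sake of the example, not a published price list.

```python
# Sketch: bill each personal bot for its actual runtime, per hour, with
# concurrently running bots billed individually. Rate and rounding policy
# are illustrative assumptions.
import math

RATE_PER_HOUR = 2.0  # hypothetical currency units per bot-hour

def bill(runtime_minutes_per_bot):
    """Charge each bot separately, rounding its runtime up to whole hours."""
    return sum(math.ceil(m / 60) * RATE_PER_HOUR for m in runtime_minutes_per_bot)

# Two bots ran concurrently: one for 90 minutes, one for 30 minutes.
total = bill([90, 30])
```

Here the 90-minute bot is billed for two hours and the 30-minute bot for one, so the invoice reflects exactly the time each bot was in use.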

Per bot / per user Licensing Model: In the per-bot model, charges are based on the number of bots used to execute a process, i.e. one license per personal bot. In the per-user model, licenses are allocated per user, and a user can run any number of bots provided multiple robots do not run simultaneously.

Enterprise Licensing Model: An enterprise-wide license for personal bots can be considered as one of the models. For illustration, say a client wants to implement personal bots at large scale but is unsure how many will be required. In such an instance, the client can purchase an enterprise-wide license and scale up as needed at no extra cost.

Cloud-based Licensing Model: The cloud service offers a quick and convenient way to begin using robotics. Introducing a pre-built, optimized cloud service is easier and faster than deploying an on-premise solution. If the client already uses a public cloud, the next step is to move the RPA service there as well, which further reduces cost. This can be extended to personal bots: once a client starts using the cloud service, personal bots become quickly available with pre-built functionality. Personal bots can even be used when the person is offline, or run remotely from a mobile device, provided the data is exposed to the cloud.

Also, a person's workload may vary from day to day. Sometimes it is high, requiring the number of bots to scale up; at other times it is low, leaving the bot under-utilized. The licensing model should cover these aspects too: it should provide the flexibility to scale bot usage up and down, and the charges should be such that the person pays only for the time the bot was actually used.

Deep thought should be given to the cost of a personal-bot license. Many factors need to be considered when calculating the cost, given that a personal bot performs tasks on the employee's machine rather than residing on an application server like the traditional digital worker developed under enterprise licensing.

We should remember that the personal bot is not meant to replace the human, but to take over the simple, repetitive and manually intensive tasks a human performs in day-to-day life.

Sources:

http://images.abbyy.com/India/market_guide_for_robotic_pro_319864%20(002).pdf

https://www.edgeverve.com/finacle/assistedge/blogs/digital-concierge-embracing-future-bots/

Personalized bots in the IoT environment

The RPA market has witnessed constant innovation in how solutions address critical business problems. What started as a solution automating repetitive rule-based tasks through unattended/attended automation has evolved from a core 'deterministic' offering into a 'cognitive' platform. AssistEdge is already aiming for the stars with Automation Singularity, providing a scalable and secure offering: from identifying the appropriate use cases through its in-built Process Discovery tool, to seamlessly automating the identified processes and orchestrating this entire gamut of complex activities through a digital-human concierge.

While AI and RPA solutions are already making strides and are the most recognized buzzwords for businesses attempting to catapult their efficiency into the next orbit, the Internet of Things (IoT) will be the next big wave, working in tandem with automation. With the increased data sharing that IoT brings, organizations can streamline business processes seamlessly and efficiently. This makes IoT a major candidate among emerging technologies to pair with RPA. But before we jump on that bandwagon, let's understand how these are related in modern-day businesses.

To understand how all these emerging technologies will work in sync, let's dissect how we humans function on a day-to-day basis. We choose the modern-day human as a reference because, at the end of the day, the purpose of these technologies is to replicate what humans have been doing for the past decade or two, and to ensure that an organization's productivity is prioritized in the right place. The human body works in a fascinating way that we almost take for granted: we use our mind to analyze a situation and make decisions in real time, and our limbs to react to the analysis and enact the decision the mind has made. Every moment of our life, the human body performs three steps: observation, decision and action. Enterprise operations are fairly similar: organizations need to analyze what situation they are in, make a decision accordingly and take action based on that decision.

With data being 'the new oil' of the current century, artificial intelligence and machine learning are helping entities make complex decisions day in and day out. Not that a human can't perform the same task, but a human workforce won't be as efficient, with 'time' being one of the most important metrics for business success. Artificial intelligence is essentially a machine's mind, which analyzes data patterns, makes estimates with minimal error and prescribes a plan of action. But someone needs to execute what the AI/ML offering suggests. This is where RPA comes into the picture. RPA can pick up action triggers based on events and execute pre-programmed repetitive tasks with ease. RPA and AI will always work in tandem, similar to how our limbs and mind are connected. But for them to make a decision and act on it, these technologies rely on another aspect of the mind: observation. Data needs to be fed into an AI model in order to gain insights and take subsequent actions. This is where IoT comes in handy. The Internet of Things is a concept of intercommunicating devices which can identify each other, transmit data and pass triggers for pre-programmed business or personal needs. In other words, if AI is the mind and RPA the limbs of an organization, then IoT is the eyes and ears that help the organization observe events in its universe.
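The observation-decision-action relay described above can be sketched in a few lines. The sensor threshold, task names and handler functions are all illustrative, not part of any product.

```python
# Sketch of the IoT -> AI -> RPA relay: a sensor event (observation) is
# classified (decision) and mapped to an automated task (action).
# Thresholds and task names are illustrative assumptions.

def decide(event):
    """Stand-in for the AI layer: turn a raw reading into a decision."""
    if event["sensor"] == "temperature" and event["value"] > 80:
        return "raise_maintenance_ticket"
    return "log_only"

def dispatch(event, handlers):
    """Stand-in for the RPA layer: execute the task the decision names."""
    return handlers[decide(event)](event)

handlers = {
    "raise_maintenance_ticket": lambda e: f"ticket for {e['sensor']}",
    "log_only": lambda e: "logged",
}
result = dispatch({"sensor": "temperature", "value": 95}, handlers)
```

The separation mirrors the analogy in the text: the event stream is the eyes and ears, `decide` is the mind, and the handler the limbs.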

According to GE, IoT will contribute roughly $10-15 trillion to global GDP over the next decade or two. Global consulting giant McKinsey & Company has grouped IoT applications of interconnected devices into two broad categories: 'Information and Analysis', and 'Automation and Control'.

A simple example of 'Information and Analysis' is how IoT has already made its presence felt in the supply chain through industrial automation. With the help of sensors and actuators, organizations can track products through the supply chain lifecycle, monitor the health of industrial equipment, and so on. IoT gateway devices are the communication bridge between the endpoint sensors and the cloud server, helping democratize the information. 'Automation and Control' is where IoT is making rapid strides with the advent of 'home automation' and 'smart living' intelligent infrastructure. Users can now have every home electronic device connected and controlled by a standalone application on a personal mobile phone. This is where personalized bots in RPA can help revolutionize the industry as a whole, receiving actionable insights from IoT sensors and transforming them into meaningful engagements as per business or customer needs.

RPA and IoT will primarily elevate operational efficiency by providing innovative ways to capture business information and leverage it through personalized bots. Autonomous responses to events will be transformed: personalized bots will be configured to take instantaneous action, without human intervention, whenever an IoT device signals a trigger. This will drastically improve the quality of output in both home and industrial automation scenarios. The future lies in interconnected devices exchanging enormous volumes of data every second, and it needs an ecosystem of products and solutions to ensure a seamless transformation from today's as-is state. IoT, AI and RPA will coexist, forming a holistic solution suite that makes the daily lives of an enterprise and its staff hassle-free, channeling productivity toward other pressing needs.

Digitization of core banking

Truly digital is all about transformation spanning the experience, engagement and business engine layers. Focusing only on enabling new channels or touch points is not enough. There is a set of basic characteristics that a core banking solution must demonstrate to enable banks to achieve the intended results.

This article describes some of the most critical basic characteristics across three dimensions for core transformation.

Before getting into the details, let us look at the change of context with respect to access, infrastructure and mass customization.

In short, this can be referred to as "A.I.M" (Access, Infrastructure and Mass customization).

Accessing banking services

In the first phase of automation, banking services were accessed at designated places, at designated times, by designated people.

In the next phase, which can be referred to as the "Access" phase, banking services were made available anywhere, anytime, to anyone.

Phase      | Where             | When            | Who
Automation | Designated places | Designated time | Designated people
Access     | Anywhere          | Anytime         | Anyone

Hence there is a fundamental shift of context from "designated" to "any". This carves out a clear path for the digital engagement layer to take care of personalized data, control and engagement, while at the same time emphasizing the significance of core banking.

Infrastructure

There is considerable change in infrastructural aspects such as computing power, storage, network, monitoring, IaaS, PaaS, etc. Considering that transaction volumes are growing exponentially (both reads and writes), solutions like core banking should leverage such infrastructure.

Mass customization

Due to various factors, predominantly changing customer expectations, there is a shift from mass production to mass customization. From the bank's perspective, this can also be framed as moving from "what I have" to "what you need". It is well known that personalization is the key to achieving mass customization. Let's have a look at how it plays out:

While personalization of interaction, convenience and control can be handled at the digital engagement layer, product-, service- and functionality-level personalization must be handled at the business engine level as well.

Based on these changing paradigms, business engines like core banking should have some critical attributes that are mapped below to the three dimensions of (A.I.M):

(Figure: core banking attributes mapped to the three A.I.M dimensions)

Conclusion

To realize the benefits of digital transformation, digitization is required across the experience, engagement and business engine layers. To support digitization of the core, the core banking solution should exhibit a set of characteristics across the three dimensions (A.I.M). To mention a few:

Is your core banking ready for the digital age? Does it exhibit these characteristics? Do share your views in the comments section below.

Prepare your data for demand planning: Your step-by-step data-transformation guide

Not all data is ready to give you insights. But it can be. In this blog post, we draw from our experience working with a global FMCG player to shed light on the processes that can power your data transformation.

To put it simply, demand planning is the process of forecasting demand for a product, which is then used to inform the manufacturing and distribution strategies of said product. Among retailers, demand planning is an integral part of supply chain management. Done well, demand planning has multi-fold benefits: it helps save money, manage the product lifecycle better, improve marketing effectiveness and even heighten customer satisfaction.

But at today's scale of operations, with the volume of data available and the need for real-time insights, it is impossible to perform demand planning manually. Over 54% of the companies surveyed in 2018 by the Sourcing Journal say they frequently experience inventory imbalances; in fact, 13% say they do so 'all the time'.

Technologies like artificial intelligence and machine learning can offer great value to demand planners by crunching numbers meaningfully and at scale. They enable demand planners to work with real-time data and react to real-time market forces. But given the maturity of AI today, it's best to approach with caution.

In this blog post, we’ll outline the things to keep in mind while developing demand forecasting models for your AI engine.

Consolidate your data

The first step in building a demand forecasting model is to bring all your data to one place. Identify all data sources, exploring every dimension across geographies, channel partners, third-party consolidators and products.

Once you've identified where you can source your data from, understand each source's tech maturity. Are all sources capturing data digitally? Even among digital data capture systems, there is a wide range, from spreadsheets and emails to ERPs and CRMs. If your data is disparate, consolidate it. You might be able to leverage APIs and data pipes to achieve this.
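A minimal sketch of consolidation, assuming one CSV export (e.g. a spreadsheet) and one JSON feed (e.g. an ERP API); the field names are illustrative.

```python
# Sketch: pull records from disparate sources into one list with a common
# shape. Sources and field names are illustrative assumptions.
import csv
import io
import json

spreadsheet = "sku,qty\nA1,10\nB2,5\n"       # e.g. an exported CSV
erp_feed = '[{"sku": "C3", "qty": 7}]'       # e.g. JSON from an ERP API

def consolidate(csv_text, json_text):
    """Normalize both sources into a single list of {sku, qty} records."""
    rows = [{"sku": r["sku"], "qty": int(r["qty"])}
            for r in csv.DictReader(io.StringIO(csv_text))]
    rows += [{"sku": r["sku"], "qty": int(r["qty"])}
             for r in json.loads(json_text)]
    return rows

data = consolidate(spreadsheet, erp_feed)
```

The point is the common shape: once every source is normalized into the same record structure, the downstream standardization and validation steps have a single input to work with.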

Standardize your data

Once you’ve collated your data, you need to make sure the information is standardized. Procurement officers surveyed in the 2019 CPO survey by Deloitte complain that “poor master data quality, standardization and governance are the biggest problems to master digital complexity”.

To standardize your data, identify the variance in data formats and adopt a unifying taxonomy such as the United Nations Standard Products and Services Code (UNSPSC), the Harmonized System (HS) or the Central Product Classification (CPC).
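In practice, standardization often reduces to a mapping table from source labels onto taxonomy codes. In this sketch the labels, mapping and codes are illustrative stand-ins, not actual UNSPSC entries.

```python
# Sketch: map vendor-specific category labels onto one unifying taxonomy.
# The mapping table and codes below are illustrative assumptions.

CATEGORY_MAP = {
    "laptops": "43211503",
    "notebook pcs": "43211503",   # two source labels, one canonical code
}

def standardize(label):
    """Return the taxonomy code for a source label, normalizing case/spacing."""
    code = CATEGORY_MAP.get(label.strip().lower())
    if code is None:
        raise ValueError(f"unmapped category: {label!r}")
    return code
```

Raising on unmapped labels, rather than passing them through, is a deliberate choice: it surfaces taxonomy gaps early instead of letting unstandardized records leak into the forecast.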

Validate your data

Once your data is standardized, but before syncing it, perform a thorough master data validation. Perform data cleansing first, for both historical and real-time data; without this, analysis would be tedious and possibly inaccurate. Once the data is validated, map it to your product hierarchy.
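A basic cleansing pass might look like the sketch below, which trims whitespace and drops duplicate or incomplete records; the field names are illustrative.

```python
# Sketch: cleanse records before syncing -- trim strings, drop rows missing
# required fields, and de-duplicate. Field names are illustrative.

def cleanse(rows, required=("sku", "qty")):
    seen, clean = set(), []
    for row in rows:
        # normalize string values
        row = {k: v.strip() if isinstance(v, str) else v for k, v in row.items()}
        if any(not row.get(k) for k in required):
            continue                      # incomplete record
        key = tuple(row[k] for k in required)
        if key in seen:
            continue                      # duplicate record
        seen.add(key)
        clean.append(row)
    return clean

raw = [{"sku": " A1 ", "qty": 10}, {"sku": "A1", "qty": 10}, {"sku": "", "qty": 3}]
clean = cleanse(raw)
```

Real pipelines add type coercion, range checks and referential checks against master data, but the shape is the same: every record either passes all gates or is routed out for review.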

Synchronize your data

Set up clear, actionable processes for your data sync. For instance, outline what level of data (geo, region, distributor, store) needs to be synced, how frequently the data needs to sync with the master data, and how the data is to be transferred (AS2, sFTP, XML), etc.
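Capturing those sync decisions as explicit configuration, rather than tribal knowledge, makes them auditable. A sketch, with illustrative values:

```python
# Sketch: express the sync policy decisions as explicit configuration.
# All values below are illustrative assumptions.

SYNC_POLICY = {
    "level": "distributor",      # geo | region | distributor | store
    "frequency_hours": 24,       # how often to reconcile with master data
    "transport": "sFTP",         # AS2 | sFTP | XML over HTTPS
}

def is_due(hours_since_last_sync, policy=SYNC_POLICY):
    """Decide whether a sync run is due under the configured frequency."""
    return hours_since_last_sync >= policy["frequency_hours"]
```

A scheduler can then read the same policy that the documentation describes, so the stated process and the running process cannot drift apart.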

Set up data extraction and reporting

Once your data is synchronized, probably in a data lake on the cloud, enable reporting. Identify what kinds of reports are necessary and build the corresponding dashboards. If you need a demand planning dashboard, identify all the data points you need extracted and pull them into the dashboard. For one of our clients, a global FMCG player, we brought together data points across retail execution, e-commerce, inventory, secondary sales, etc. to build their demand planning dashboard.
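The kind of aggregation that feeds a demand planning dashboard widget can be sketched simply; the fields and figures are illustrative.

```python
# Sketch: aggregate consolidated records into a demand summary of the kind
# a dashboard widget would display. Fields and numbers are illustrative.
from collections import defaultdict

def demand_by_region(rows):
    """Total units sold per region across all records."""
    totals = defaultdict(int)
    for row in rows:
        totals[row["region"]] += row["units_sold"]
    return dict(totals)

rows = [
    {"region": "APAC", "units_sold": 120},
    {"region": "EMEA", "units_sold": 80},
    {"region": "APAC", "units_sold": 40},
]
summary = demand_by_region(rows)
```

Each dashboard report is ultimately a query like this over the synchronized data; identifying the required data points up front determines which fields the extraction step must carry through.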

Some products like TradeEdge Market Connect have built-in dashboard templates that you can customize instantly.

Expand your data footprint

Once the first set of data is ready for reporting, go back to the drawing board and explore other sources you might collect data from. Onboard all your channel partners into your demand planning system—hire the right onboarding partner to achieve this at high-accuracy at scale.

If you have all bases covered, identify other data dimensions you may have missed, or see if you can increase the frequency or volume of the data flow. Study whether there are external factors, like seasonal changes or competitor promotions, that can have an impact on demand for your products, and include data from those sources too.

Remember that your insights are only as good as your data. Gathering, cleaning and optimizing your data is an important part of improving your demand planning.

Traditional trade networks in FMCG need to be modernized—and automation could be the gamechanger.

FMCG traditional trade networks in emerging markets often have utterly outdated operational practices. Tech transformation is the key to giving brands better channel visibility and control over distributor management.

With growth at 2X that of traditional trade, modern trade, and e-commerce in particular, is often touted as the future of FMCG retail. However, the backbone of the industry is still general or traditional trade: the small-scale, independent stores dotting the markets, supplied by distributors and agents. In emerging markets like India, these account for a whopping 90% of FMCG sales, leading significantly in rural areas.

Yet, traditional trade is being outpaced by the rapid growth in organized retail and e-commerce. Experts ascribe many reasons for this, primarily a lack of technology investment and modernization. This poses a problem not only for customers (who turn to modern trade outlets for a better shopping experience) but also for the FMCG brands that work with general trade outlets and distributors.

Traditional/General Trade | Modern Trade or Organized Retail
Independent, local stores including mom-and-pop stores and kiosks that get their goods from distributors via sales agents. Purchases are mostly consumer-driven. | Supermarkets, hypermarkets, convenience store chains and e-retailers that source directly from FMCG brands or larger-scale distributors. Purchases are often promotions- or marketing-driven.

Challenges in traditional trade operations

In our experience working with clients in emerging South-East Asian markets, we have come across a few fundamental issues that have an outsized impact along the length of the supply chain.

Outdated distributor business processes

Surprisingly, even large-scale general trade distributors in emerging markets often conduct their business the old-fashioned way, with order taking, invoicing, bookkeeping, etc. still done using pen and paper. In other cases, the most basic on-premise desktop systems are used. All this makes the process not only slow and inefficient, but also inconsistent and error-prone. FMCG brands also struggle to get end-to-end channel visibility about sales and stock numbers.

Incommunicado systems

With the more tech-savvy of distributors, we have observed the opposite problem: that of too many disparate systems. Tech projects are often undertaken and systems deployed piecemeal, with the result that distributors have their data sitting in spreadsheets, ERPs and CRMs that do not work together. This makes it impossible to collate or send data upstream for any meaningful analysis. This was one of the problems faced by Mondelez International, which we helped solve through our TradeEdge Market Connect application. More on that a little later.

Lack of POS data

For FMCG companies who want to help their distributors (and in turn, small scale retailers) optimize their stocking plan and develop promotional strategies for their customers, point-of-sale data is crucial to have. However, not all of the important data is captured. And what is captured is often not in any readily shareable or analysable format.

Poor resource allocation

The primary responsibility of sales agents employed by distributors is to make the rounds of their target markets and ensure that their retailers have the right stock at the right time. However, in the absence of a tech-powered order management system, they spend a great deal of their time in manual data entry and verification, leading to inefficiencies, poor performance and a drop in job satisfaction. As a result, there is higher employee churn and increased costs.

The solution? Technology transformation

With Mondelez, we helped improve employee productivity by 65-70% through TradeEdge and the employee bandwidth thus freed up was directed towards sales effort.

Technology for improved channel visibility

To respond to distributor needs quickly, effectively, and in a timely manner, FMCG brands need extended visibility of business data including stock status, orders pending fulfilment, point-of-sale data, returns, etc. This data also gives a mid to long term view of customer preferences and market trends and flags early warning of problem areas that the brand can help the distributor address.

EdgeVerve’s TradeEdge Market Connect is a two-way data exchange platform that does exactly this. It enables seamless automated data exchange and processing between several trade partners and provides cleansed, validated, transformed and enriched data for better business decision making, analytics and reporting.

Technology for smarter distribution management

In our experience, some of the most common problems faced by FMCG brands dealing with a traditional trade network include a lack of control over products, prices and promotions, poor replenishment practices at the distributor’s end (resulting in 8-10% out of stock), and low market reach (~30-35% outlet coverage).

A unified tech interface like TradeEdge DMS that offers verifiable data and provides better visibility can help tackle all of these problems. Intuitive and highly configurable, TradeEdge DMS is an order management and fulfilment system that helps brands stay abreast of sales and inventory data through an easy-to-use interface and intelligent reporting.

Technology for better data harmonization

Gigabytes of data are generated on a daily basis throughout the general trade supply chain—but FMCG brands struggle to leverage it. Riddled with inaccuracies, in widely varying formats (think CRM, ERP, spreadsheets, email, etc), and often not even available online, data dumps that hold a wealth of customer and market insights often remain unused.

TradeEdge Data Harmonization is a cloud-based solution that leverages automation and machine learning to make data capture simple, consistent, accurate and analytics-friendly. It helps prepare and correct the data through features like language identification and mixed language translation, is able to classify and score data based on defined parameters, and contextualize it for the brand, making it insight-ready.

Digitize, scale, transform: TradeEdge in action

Let me delve a little into how TradeEdge helped Mondelez International, an S&P 500 snack food and beverage company that owns brands such as Cadbury, Oreo, Swedish Fish and Trident, streamline its distributor operations.

A number of their distributors in emerging markets were still using outdated pen-and-paper processes. For consistency of operations, Mondelez wanted to move these online. They wished to use technology not only to improve channel visibility and efficiency, but also to help distributors modernize their operations and signal to them the brand was investing in their growth.

In this massive transformation process that took place over four months, they chose to work with us, using our TradeEdge application. The result? Efficiency improved by a whopping 70%. Previously, Mondelez was employing two people per distributor for manual data entry; after TradeEdge implementation, these employees were redeployed to the sales team where they started contributing to business development and brand reach. With distributor sales data available almost in real-time on the cloud, the quality of sales reporting and insights also went up significantly.

What’s more, about a year after the TradeEdge implementation, all of Mondelez’ systems were hit by a malware attack—with the exception of the cloud-based TradeEdge. Seeing that none of the TradeEdge data was affected or lost, Mondelez decided to adopt a cloud-first strategy org-wide.

If you are considering a tech transformation for your distributor operations, we can help you maximize channel visibility, improve retail execution, and reach new markets faster. Speak to our experts.

How do you build a seamless data acquisition system for 400+ GB of data a month across 35 countries and 2,300+ partners?

From our experience working with multi-national retailers, we’ve learned that everyone understands the value of data. But where they all struggle is in acquiring quality data, completely and consistently.

The first step to any business analytics initiative is acquiring data. Not just any data—but data that is of good quality, complete and in context, and can be received consistently over a period of time. This is difficult due to various reasons: Sources are many, formats are disparate, availability is inconsistent and synchronization sub-optimal. Overcoming these challenges and setting up a seamless data acquisition mechanism that functions at scale is an enormous endeavor.

In this blog post, we’ll walk you through the work we did for one of our customers, a multi-billion-dollar global consumer goods company, and explore ways in which you can apply it to your business.

Step 1: Streamline your processes

If you’re going to scale your data acquisition program to your global footprint, it’s best to begin with a smaller pilot. A pilot helps you conduct assessments, build frameworks, test in the real world, and learn quickly from mistakes. This is what we did for our client: we began with a market assessment and a pilot in one of their key markets.

We built the data acquisition process and the corresponding models to set up TradeEdge as the default consolidator of all their secondary data across emerging markets. TradeEdge also expanded its footprint in developed markets like Europe and Japan.

Step 2: Implement your data acquisition strategy

Technology for data acquisition

TradeEdge Market Connect can be customized to acquire information from across myriad sources and formats. For instance, for this client, we implemented TradeEdge Market Connect to acquire near real-time data from distributors, consolidators, retailers, customer DMS, and online retailers. This includes data about secondary sales, inventory, retailer POS, retail execution, and e-commerce analytics, across formats such as spreadsheets, documents, XML, emails, etc.
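
To make the multi-format acquisition idea concrete, here is a minimal sketch in Python. TradeEdge’s actual implementation is not public, so the field names (`sku`, `qty`), the `<sale>` XML layout, and the parser registry are all illustrative assumptions; the point is simply that each inbound format is normalized into one uniform record shape.

```python
import csv
import io
import xml.etree.ElementTree as ET

def parse_csv(text):
    # Each CSV row becomes a dict keyed by the header row.
    return list(csv.DictReader(io.StringIO(text)))

def parse_xml(text):
    # Each hypothetical <sale> element becomes a dict of its child tags.
    root = ET.fromstring(text)
    return [{child.tag: child.text for child in sale} for sale in root.iter("sale")]

# Registry of parsers, one per inbound format.
PARSERS = {"csv": parse_csv, "xml": parse_xml}

def acquire(payload, fmt):
    """Dispatch a raw payload to the right parser and return uniform records."""
    return PARSERS[fmt](payload)

csv_feed = "sku,qty\nA100,5\nB200,3\n"
xml_feed = "<sales><sale><sku>A100</sku><qty>7</qty></sale></sales>"

records = acquire(csv_feed, "csv") + acquire(xml_feed, "xml")
# All three records now share the same {'sku': ..., 'qty': ...} shape.
```

A real pipeline would add parsers for spreadsheets, documents, and email attachments behind the same dispatch interface, so downstream validation never has to care where a record came from.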

Checkpoints for data validation

Once the data is acquired, it’s important to validate it. TradeEdge Market Connect enabled master data validation and mapping over AS2, SFTP, and email, transforming external data from across the partner ecosystem to fit seamlessly into our client’s context.

Hosted data exchange

Hosted entirely on a secure cloud, TradeEdge Market Connect served as a single-pane view, a hosted data exchange enabling flexible data acquisition through a configurable rules engine that performed data cleansing, validation, and transformation.
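
A configurable rules engine of this kind can be sketched as a list of small predicate functions applied to every record. This is an illustrative Python sketch, not TradeEdge’s actual engine; the rules, field names, and sample feed are all assumptions.

```python
# Hypothetical records acquired from partner feeds; field names are illustrative.
feed = [
    {"sku": "A100", "qty": "5"},
    {"sku": "", "qty": "2"},       # blank SKU: fails the master-data check
    {"sku": "B200", "qty": "-1"},  # negative quantity: fails the range check
]

def not_blank(field):
    # Rule: the field must contain a non-whitespace value.
    return lambda rec: bool(str(rec.get(field, "")).strip())

def positive(field):
    # Rule: the field must parse as a number greater than zero.
    def check(rec):
        try:
            return float(rec.get(field, 0)) > 0
        except ValueError:
            return False
    return check

# "Configurable" here means the rule list can be changed without touching the engine.
RULES = [not_blank("sku"), positive("qty")]

def validate(record, rules=RULES):
    """A record passes only if every configured rule accepts it."""
    return all(rule(record) for rule in rules)

clean = [r for r in feed if validate(r)]
rejected = [r for r in feed if not validate(r)]
```

In practice the rejected records would be routed back to the partner for correction rather than silently dropped, which is where the managed-services follow-up described below comes in.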

Step 3: Monitoring and scale

Once the pilot program succeeds, the bigger challenge arrives: scale. Getting all your global channel partners onboarded to your new system often ends up being a complex endeavor. Infosys’ award-winning managed services teams help retailers scale their data acquisition programs by onboarding channel partners effectively.

For this client, Infosys’ managed services team offered 8×5 operations support for onboarding distributors, resolving issues, and following up with distributors for timely data submission.

Step 4: Enabling visibility

Once the data is acquired, you need dashboards that help you make sense of it. TradeEdge extracted periodic reports—daily, weekly, and monthly—for six different applications used by the client’s team, including those for business intelligence, sales and operations planning, demand planning, statistical forecasting, promotion, life-cycle planning, and local market solutions.

Today, TradeEdge Market Connect delivers near real-time visibility into data from 2300+ partners across 35 countries. From 8 markets and 80GB of data per month in 2013-14, TradeEdge now supports five times the data with 96% SKU-to-data mapping. With near real-time visibility into secondary sales and inventory data, the supply chain teams optimized must-have stock levels (MSL) and performed more effective demand planning.

You can read all details of this case study here.

From demand planning to on-ground ops: Five ways in which sell-through data will transform your retail business

Having a clear view of your end-customer preferences can help you in more ways than one. Today, we discuss five fundamental ways in which sell-through data can impact your supply chain and sales.

One of the biggest concerns about making business decisions using ‘sell-in’ data — i.e. data about products sold to the distributor / retailer, instead of ‘sell-through’ data — i.e. data about products purchased by the end-customer — is that you’re forecasting your demand based on your distributor’s / retailer’s forecast of their demand. This means your decision-making relies not on what your end-customer is buying, but on what your distributor / retailer guesses that your end-customer might buy. In fact, in 2018, 56% of surveyed companies told the Sourcing Journal that they keep “more than three weeks’ worth of safety stock due to lack of visibility.”

In a world without access to end-customer data, this might have been the next-best thing. But not anymore. Retailers globally are moving towards using sell-through data for making business decisions. With good reason. Here are five.

Insight-driven demand planning

In my opinion, sell-through data’s biggest transformation will be in demand planning. Near real-time visibility into what the end-customer is actually buying—when, where, and how—can have a significant impact on what you’re manufacturing, shipping, and distributing.

More importantly, demand planning based on sell-through data empowers you to react immediately and appropriately to market fluctuations. It helps send the right inventory to the right place at the right time, ensuring the supply chain stays optimized. This helps not just retailers but all channel partners, who benefit from the improved inventory turn.

For instance, we helped a global sports apparel retailer achieve a 60% increase in partner-network and brand visibility through TradeEdge Market Connect, which resulted in a 30% increase in inventory turn. The impact is, in fact, that tangible. You can read the entire case study here.

Profitable trade promotion planning

“33% of trade promotions is reactive,” finds Nielsen. Offering deep discounts to make up for a lack of demand affects the profitability of everyone across the supply chain. With sell-through data, manufacturing and distribution initiatives can be calibrated against actual end-customer demand, making trade promotions more effective.

This also builds customer loyalty: end-customers begin to see trade promotions positively, instead of viewing them with suspicion as a way to ‘clear stock’.

Effective marketing

In the absence of sell-through data, retailers are left to rely on third-party data—like television rating points (TRPs) and gross rating points (GRPs)—which is both expensive and ineffective at correlating advertising spend with actual sales. More importantly, it doesn’t give retailers omni-channel visibility either.

Sell-through data helps break channel silos and lets you look at marketing effectiveness as a whole.

Improved channel sales

The most contested space in the retail supply chain is the shelf space at a multi-brand outlet. Always at a premium, retail shelf space has a direct impact on sales at that store. While brands are willing to pay a good price for the space, retail outlets understand its value and the need to place high-demand products there. This is also why most retail outlets place their white-labelled products at the best spot.

With actionable sell-through data, you can have a meaningful and data-driven conversation with your retail partners—and make a case to get the space. This way, you’ll be taken seriously, have better negotiating power, and perhaps even be in a better position to strike mutually beneficial deals.

Optimized operations

I said earlier that third-party data, like TRPs and GRPs, can be ineffective in marketing analysis. But that’s just a small part of the story. Such data is often expensive and inefficient at solving specific business problems. Most retailers use it because it’s the only data available, despite it not being entirely useful.

The return on investment in capturing and analyzing sell-through data, on the other hand, can be immense. Once you have a process and system for capturing and leveraging sell-through data, execution becomes part of routine operations. This also makes achieving scale simpler: with a replicable model, onboarding each partner and expanding into new markets becomes easy.

Our customer, a global multi-national in sports apparel, clearly saw the benefits of capturing and leveraging ‘sell-through’ data for their demand planning. As part of their long-term strategic initiative, they brought in TradeEdge Market Connect to collect data from 325+ retailers, covering 32,000+ stores across 30+ countries, harmonize that data, and deliver actionable insights.

This endeavor took not just the adoption of technology, but a radical rethinking of processes across their supply chain. The results, it goes without saying, were well worth it. Read the entire story here.

Data-related challenges in demand planning and how to solve them

In the retail supply chain, technology alone is not enough to solve data-related challenges of acquisition, synchronization and validation. You need clear processes, ably supported by technology.

For centuries, demand planners have looked to historical data to forecast future demand. In today’s digital economy, with its countless choices and the need for instant gratification, this approach feels dated. To serve your customers when, where, and how they want to be served, you need a radically different approach.

You need visibility into your sales and supply chain in real time.

Unlike in all the centuries gone by, such real-time visibility is possible today. As and when a customer pays for one of your products in a remote corner of the world, your demand planners at headquarters can get instant updates at their fingertips. You no longer have to rely on a wholesaler or a distributor to give you an unreliable spreadsheet report weeks after the sale. You can be notified in real time when an end-customer buys the product.

If, and this is a big if, you have the processes and the technology to enable such visibility.

The data acquisition problem: Just because data can be captured doesn’t mean it will be

One of the biggest challenges our customers face, even in mature markets globally, is the incomplete or inaccurate capture of sales data. Today, retailers sell their products through myriad channels: wholesalers, distributors, direct-to-retail, tele-shopping, their own websites and mobile apps, online retailers like Amazon and eBay, the online shops of traditional retailers like Walmart and Best Buy; the list goes on.

For starters, how do you make sure you have data from everywhere? Even if your channel partners truly wish to collaborate with you on data acquisition (itself a rare feat today), collecting, collating, and transferring data from the ground to a demand planner’s dashboard is the fundamental problem every supply chain professional faces today.

The data synchronization problem: You need to have your data and understand it too

Let’s assume you’re one of the few lucky ones and every one of your channel partners has the data you want. Even then, across different POS systems, inventory management mechanisms, and reporting apparatus, the data you receive from them can range from pure gold to utter gibberish.

Now imagine each of your channel partners giving you disparate data points in different structures and formats: your ability to draw insights from this data relies solely on your superpower of harmonizing all this information and presenting it in context.
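
The core of that harmonization superpower is unglamorous: a mapping per partner from their field names to your canonical schema. Here is a minimal Python sketch; the partner names, field names, and canonical keys are all hypothetical, invented for illustration.

```python
# Each hypothetical partner reports the same facts under different field names.
PARTNER_SCHEMAS = {
    "distributor_a": {"item_code": "sku", "units_sold": "qty"},
    "retailer_b":    {"SKU": "sku", "Quantity": "qty"},
}

def harmonize(record, partner):
    """Rename a partner's fields into the canonical schema."""
    mapping = PARTNER_SCHEMAS[partner]
    return {canonical: record[source] for source, canonical in mapping.items()}

a = harmonize({"item_code": "A100", "units_sold": 5}, "distributor_a")
b = harmonize({"SKU": "A100", "Quantity": 7}, "retailer_b")
# Both records now use the canonical {'sku', 'qty'} keys and can be compared directly.
```

Real feeds also need unit conversions, SKU cross-references, and date normalization, but they all hang off the same per-partner mapping idea: the context lives in configuration, not in code.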

The data validation problem: Who says the data is right?

By its very nature, retail is a widespread business, with a tall hierarchy of large teams managing operations—covering great distances from the start of demand planning to the finish line of actual sales. Without distinctly fail-proof processes and active monitoring, there is no guarantee that the data collected is timely, reliable or even accurate.

Given the scale of the retail business, the inaccuracies of data can multiply, undermining the very intent of collecting such data. It’s not without reason that many demand planners, even to this day, remain skeptical of artificial intelligence and other such ‘data-driven’ initiatives.
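
Active monitoring of timeliness, one of the three guarantees above, can start as something as simple as comparing each submission’s date against an allowed lag. A minimal Python sketch, with an assumed seven-day lag policy and illustrative dates:

```python
from datetime import date, timedelta

def is_timely(submission_date, as_of, max_lag_days=7):
    # Flag submissions older than the allowed lag for active follow-up.
    return (as_of - submission_date) <= timedelta(days=max_lag_days)

as_of = date(2024, 3, 15)                       # illustrative reporting date
on_time = is_timely(date(2024, 3, 10), as_of)   # five days old: acceptable
stale = is_timely(date(2024, 3, 1), as_of)      # two weeks old: needs follow-up
```

Reliability and accuracy checks (range checks, cross-totals, reconciliation against invoices) layer on top in the same way: each is a cheap automated test, and the process question is who gets alerted when a test fails.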

Solving data challenges in supply chain

From our experience, to solve the data-related challenges in your supply chain, you need two things: Processes and technology. The technology aspect of this is the easier part—there are plenty of tools in the market you can adapt and customize for your specific needs.

For instance, our own TradeEdge Market Connect has helped global retailers and Fortune 100 companies gain better visibility into their supply chains. In fact, we helped a multi-national sports apparel conglomerate achieve a 60% increase in partner-network and brand visibility with TradeEdge Market Connect. Covering 325+ retailers and 32,000+ stores across 30+ countries, we helped increase inventory turn by up to 30%. You can read the whole story here.

In spite of the pretty picture that success story paints, we must warn you that technology is only half the solution. The other half is in your own data-related processes. Without the right set of processes—and the meta-processes in place to ensure your data processes are followed—your data-driven initiatives are destined to fail.

For instance, with the aforementioned client, even before we implemented any technology, the first thing we did was process definition and data standardization. We aligned each retailer’s processes with our client’s processes, without losing sight of local and market-specific needs. We defined standard file specifications (SFS) for data acquisition. And thankfully, our client ensured all vendors and partners working on this initiative adopted these SFS to ensure common standards. In parallel, our client also began developing their new POS system according to these specifications.

By taking a holistic view of their supply chain and aligning their processes with the new technology they were adopting, our client was able to overcome their data challenges. So can you.