Edge analytics can provide a new generation of data-driven, actionable insights for retail through a combination of advanced processing power and digital video. In this video, explore how to architect an edge analytics solution for a retail industry setting.
- [Instructor] Most everyone these days spends at least a portion of their shopping time and dollars online, so we're all pretty familiar with the online shopping experience from a customer's perspective. And we also have a pretty good idea by now how leading-edge retailers extensively use data and analytics to personalize our shopping experiences with tailored offers, the products that we do or don't see, and even dynamic pricing. It's not just humongous databases of customers, demographics, buying histories, and other data, either.
Click stream analysis, cookies, and other techniques help online retailers steer the shopping experiences for their customers on the fly. So when it comes to the retail industry's ability to aggressively use data and analytics, for a long time now they've had a tremendous advantage in online settings versus what they were able to do in physical stores. With edge analytics, though, the playing field between online and in-store retail can now be leveled. The usage of digital video and analytics at the edge of the enterprise can allow a retailer to very closely emulate the dynamic, personalized online experience in physical stores.
Let's take a look at how this can be done. We set out to build an edge analytics solution for physical retailers that follows both our four-stage technology framework from HPE, as well as the standard functional framework that begins with various types of stimuli that are collected, refined, and processed until we've produced appropriate responses that are driven by those stimuli and the analytics that follow. In the case of retail industry edge analytics for physical stores, digital video can play the starring role in the stage one identification of various stimuli.
Digital video at the edge begins that flow that lets us accomplish a number of important functions, such as recognizing human beings, their faces, and even identifying those faces. It can also recognize and categorize objects, and detect and identify actions such as a person moving his or her arm, or perhaps walking from one spot to another in a store, maybe within the same department or perhaps moving to a new department. Suppose that within a given retail location, let's say a department store, we have a number of mounted digital cameras scattered within each department that, when taken together, have all of the floor space and all of the merchandise tables and racks totally covered for purposes of surveillance and recording.
This data from all of the cameras within a department will then be collected together and integrated to set the stage for being processed in the aggregate. As we process all of that digital video information, we refine and enrich the raw data we've collected to identify people and objects, movements, and all of the other things that digital video at the edge can do. And then using that refined data, we can now make decisions about what our cameras have observed and trigger the appropriate responses. Then finally from a framework perspective, all of what has been described is happening very rapidly and very cyclically over and over, to give us a constantly-updating picture of what is happening within a given department or perhaps across all the departments in the store.
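As a rough sketch, the collect-refine-decide-respond cycle just described might look like the following in code. Every name here (the `Frame` type, the stage functions, the camera IDs) is illustrative rather than part of any real product; the stage bodies are stubs standing in for real video processing.

```python
from dataclasses import dataclass

# Illustrative sketch of the cyclical edge-analytics loop: collect raw
# frames from department cameras, refine them into detections, decide on
# responses, and repeat. All names are hypothetical.

@dataclass
class Frame:
    camera_id: str
    data: bytes

def collect(cameras):
    """Stage 1: gather the latest frame from every camera in the department."""
    return [Frame(camera_id=c, data=b"...") for c in cameras]

def refine(frames):
    """Stage 2: enrich raw video into structured detections (stubbed here)."""
    return [{"camera": f.camera_id, "detections": []} for f in frames]

def decide(refined):
    """Stage 3: keep only observations that warrant a response."""
    return [r for r in refined if r["detections"]]

def respond(actions):
    """Stage 4: trigger offers, alerts, or staff dispatch; report how many."""
    return len(actions)

def run_cycle(cameras):
    """One pass of the loop; in production this runs continuously."""
    return respond(decide(refine(collect(cameras))))
```

In a real deployment this loop would run continuously per department, with each stage backed by actual video ingestion and model inference rather than stubs.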
Let's go back to online shopping for a moment. Think of what I've just described as analogous to tracking a customer's clicks, looking at cookies, analyzing how long somebody stays on a webpage or how they react to certain images or texts, all of that, but now we're enabling that ability in a physical retail store, not just online. Moving on from the higher level framework perspective, we can see what actually happens from a functional perspective. Let's presume that the very first thing we need to accomplish is trying to distinguish a person from the digital images that we've captured.
We put our digital processing algorithms to work to look at shapes and sizes, colors such as flesh tone, facial features, and whatever else we need to lock into a particular segment of our video as being a person. Then if possible, we try to identify that person as a specific customer whom we know. We likely require our data lake to participate in this process alongside our edge analytics, and I'll come back to that shortly. Once we've locked onto a person, and let's stick with one person for now because it's easier to follow the workflow, what we now want to do is observe, record, and also evaluate all of that customer's actions.
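To make the "shapes and sizes" idea concrete, here is a deliberately toy heuristic for flagging a detected region as a probable person, based on bounding-box proportions. Real systems use trained detectors (for example, CNN-based models), not hand-written rules like these; the thresholds below are invented for illustration only.

```python
def looks_like_person(width, height, frame_height):
    """Toy stand-in for a real person detector. Upright people tend to
    produce tall, narrow bounding boxes that occupy a meaningful fraction
    of the frame. The thresholds here are illustrative, not tuned values
    from any production system."""
    if width <= 0 or height <= 0 or frame_height <= 0:
        return False
    aspect = height / width          # people are taller than they are wide
    relative_size = height / frame_height
    return 1.5 <= aspect <= 4.0 and relative_size >= 0.2
```

A production pipeline would follow a detection like this with face recognition to attempt the jump from "a person" to "a known customer."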
We do our digital video processing magic, and by analyzing what that customer looks at and touches we can determine that person's interest in specific merchandise that we can likewise identify through shape and color recognition. We can also detect when a customer's comparing one product to another, maybe by holding two items side-by-side and moving his or her head back and forth. We could detect that a customer picks up and then puts down a product, as well as when a customer might pick up a product for a second or even third time, indicating both interest in that product, but at least some amount of indecision about whether to buy.
Again, we're emulating what we could capture online by tracking and analyzing click patterns on our webpages if we were an online retailer. We could notice that a customer moves around in a particular department and how much, indicating perhaps that the customer's interested in a limited subset of our merchandise, or conversely might be looking at a variety of products, let's say blue jeans and tops, as well as products from a variety of manufacturers. We might also want to notice that a customer moved from one department to another, indicating that he or she is now interested in different merchandise.
We could also follow head and movement patterns and determine that a customer may be looking around for assistance, basically the physical version of pressing the help button on a webpage. Next, as we continue to consume, refine, and enrich all of this data, we likely will reach the point at which we can make decisions about what the customer's behavior indicates, and then maybe communicate directly with this customer via texting or instant messaging. Doing so requires that we have not only identified this person as more than just some anonymous customer, so that we actually know who he or she is, but also that we have some current and correct way to contact that customer, a cell phone number or some sort of social media ID for example.
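The decision step described above, mapping observed behaviors to a next action, could be sketched as a simple rule table. The event names, thresholds, and action labels below are all assumptions made for illustration; a real system would tune these against actual shopper data.

```python
from collections import Counter

def interpret_behavior(events):
    """Map a stream of observed behavior events to a suggested action.
    Event types and thresholds are hypothetical, for illustration only."""
    counts = Counter(e["type"] for e in events)
    if counts["look_around"] >= 3:
        return "dispatch_associate"   # customer appears to want assistance
    if counts["pick_up"] >= 2:
        return "send_offer"           # interest plus some indecision
    if counts["compare_items"] >= 1:
        return "send_bundle_offer"    # comparing products side-by-side
    return "keep_observing"
```

Note that `Counter` returns zero for event types it hasn't seen, so the rules degrade gracefully when a behavior never occurs.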
Presuming that's the case, and based on what our data and analytics at the edge tell us, we can offer short-term discounts on merchandise that the customer has shown interest in, or perhaps special pricing on packages or combinations of items that the customer has looked at, as well as similar products that are highly compatible with what the customer is interested in. Let's say that the customer is looking at shirts and tops, but hasn't yet looked at pants or shorts or blue jeans. The offers can include products tailored by style and color and other factors that go well with what the customer has been eyeballing.
All of the standard customer selling techniques, such as cross-selling, upselling, and down-selling, are available to physical retailers every bit as much as they are to online retailers following these models. They can even detect if a customer is headed out of the store and then send some sort of last chance offer, maybe to try to get the customer to turn around and buy an item at an even greater discount. Or it could be something like telling the customer that if he or she returns to the store within the next two hours or the next two days, or whatever time frame makes sense, promotional pricing will still apply.
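The offer-selection logic just described, targeted discounts while the customer is browsing and a time-limited last-chance offer on the way out, could be sketched like this. The discount percentages and the two-hour window are invented for illustration, not recommended values.

```python
from datetime import datetime, timedelta

def build_offer(interest, heading_to_exit, now=None):
    """Sketch of offer selection. `interest` is a list of items the
    customer has shown interest in. Discount levels and the last-chance
    time window are illustrative assumptions, not real business rules."""
    now = now or datetime.now()
    if heading_to_exit:
        # Last-chance offer with a limited redemption window.
        return {
            "type": "last_chance",
            "discount": 0.25,
            "valid_until": now + timedelta(hours=2),
        }
    if interest:
        # Targeted discount on what the customer has been looking at.
        return {"type": "targeted_discount", "items": interest, "discount": 0.10}
    return None  # nothing to offer yet; keep observing
```

The same structure extends naturally to bundle pricing or cross-sell offers by adding branches keyed on other observed behaviors.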
Suppose, though, that even with facial recognition and other techniques, we haven't been able to identify this person as a known customer with demographics, shopping and buying history, and other data that we can then use as part of the selling and shopping experience. Plus we have no means to directly communicate with this customer via text or IM, because we simply don't know who he or she is. This is where we could have our in-store salespeople come into play. Our edge analytics could quickly build out a tailored, dynamic selling plan for an in-store sales associate based on what we've analyzed from the video: what the customer has looked at, where he or she has gone in the store, and so on.
The salesperson would then be directed to the customer via some sort of in-store location model. Maybe it's sending the sales associate a still image of that customer from that video to help identify which customer should be approached. And then instead of a clean slate selling model where the sales associate knows little or perhaps nothing about the customer being approached, now the salesperson is armed with a tailored plan to hopefully gain an edge in selling products to the customer. Now, what we've succeeded in accomplishing is putting in-store retail on equal footing with online retail when it comes to using data and analytics.
And it's all because of the power of edge analytics and what we've been able to accomplish and analyze. One final point to consider is that even in this particular scenario that is almost exclusively dominated by edge analytics, we may still have a role for the data lake, and not just for the deep analysis after the in-store selling process has concluded, either. Take a look at the five high-level steps that our in-store retail selling process follows in the middle row here, beginning with recognizing that a particular object is indeed a person, and going all the way through delivering tailored offers to that customer.
For the most part, the data exchange and analytical power will occur at the edge of the enterprise, that is, within the store environment itself or maybe even within a given department. With one exception, though. We might have a scaled-down facial recognition database stored at the edge to help identify a person as a particular customer. But once we do that, we would likely need to reach out to a centralized data lake, or perhaps even an older-architecture data warehouse or customer relationship management (CRM) system, to request and then retrieve all of the deep insights about that customer that we need to augment what we're observing and analyzing during that customer's shopping experience.
We wouldn't want to store all of that information at the edge over and over, let's say replicated into each store. It makes a lot more sense to have a service managed by the data lake that provides the deep insights about the demographic information, buying history, and other more static information about that customer, that can then still be fed into our edge algorithms. So even though the general guiding principle is to migrate away from or avoid reaching out to centralized data lakes for edge applications, because of latency issues, costs, and other factors, doing so sparingly and prudently does make a lot of sense if you have a very solid enterprise-wide architecture.
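That "reach out sparingly" pattern can be sketched as a small edge-side cache in front of a data-lake profile service. The `DataLakeClient` class below is a stand-in for a real service client, not an actual API; the point is simply that the costly round trip happens at most once per customer per store visit.

```python
# Sketch of an edge cache in front of a centralized data-lake lookup.
# Class and method names are hypothetical; a real client would make a
# network call with latency and cost, which is exactly what we minimize.

class DataLakeClient:
    def __init__(self, records):
        self._records = records
        self.calls = 0            # count round trips to the data lake

    def fetch_profile(self, customer_id):
        self.calls += 1
        return self._records.get(customer_id)

class EdgeProfileCache:
    """Per-store cache: hit the data lake only on a miss, then serve
    the mostly-static profile (demographics, buying history) locally."""
    def __init__(self, lake):
        self._lake = lake
        self._cache = {}

    def get(self, customer_id):
        if customer_id not in self._cache:
            self._cache[customer_id] = self._lake.fetch_profile(customer_id)
        return self._cache[customer_id]
```

In practice a real cache would also need expiry and invalidation so that stale profile data doesn't linger at the edge, but the shape of the pattern is the same.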