A new twist on an old tale
A conversation with a UK retailer this week seemed so familiar.
“How can you make a sound prediction on the likely impact of a ‘dramatically different’ looking digital image?” AI, the argument ran, is not good when a digital image breaks from the category norms.
It’s a new twist on a well-trodden topic.
It reminded me of years ago, back in the 1990s, when I had to defend the AI score for the latest TV ad (in this case, AI being the awareness index). They asked, how can the algorithm work with this never-seen-before creative treatment? What about this, what about that? And yet, more often than not, the prediction played out in post-launch tracking.
It reminded me of the arguments we faced about how an algorithm could predict in-market sales volumes when a concept is so different from the rest of the category. And yet, most times, that’s exactly what it did.
Both great challenges. We must apply intelligence to any prediction. It’s a tool, it helps, and sometimes it throws out an odd result.
Fast forward to 2022. This time the context is digital images for eCommerce. Predictions have always caused, and maybe always will cause, a visceral reaction.
The prediction business
This really is just the natural tension between creator and evaluator. In our blog, ‘Tech with Love’, we covered the important relationship between data and human interpretation. To boil it down: data without human intelligence is not enough. Equally, human intelligence without robust data can lack solid benchmarks, wider context, agility, and pace.
Of course, no prediction is perfect, and we certainly have to explain the limits, but I’d argue that we’re in the ‘prediction business’ and that’s a good thing. As researchers, we make predictions because decision-makers need that guide. Predictions give us security in decision-making, moving the evidence from ‘I think’ to ‘I know’.
Predictions are here to stay.
Good research drives better prediction
We combine consumer testing and AI to assess digital images. Our prediction algorithms do the heavy lifting, but they have to be underpinned by good-quality training data. Without it, we just make spurious comparisons and blunt predictions.
Four steps we take to make sure our predictions are sound include:
- Our database leverages ground truth data from real shoppers, in market, in category, in channel.
- Our database includes representative images from every corner of the category. Diversity creates better predictability.
- We test for model accuracy using best-in-class science.
- With retail partners, we run in-market validation.
When a highly distinctive, never-before-seen image is added, the algorithm isn’t troubled; it simply applies the ‘rules’, reviewing color, contrast, shape, composition, size, and many other facets of the image, pixel by pixel. It works at scale. Clients use it as a mass-sorting tool at the beginning of the process. The prediction is just another KPI to guide the decision.
Fallible, yes, but matching ground-truth data 90% of the time.
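To make the idea concrete, here is a toy sketch of rule-based image scoring. This is not eFluence’s actual code or model; the features (brightness, contrast) and the category norms are invented for illustration only.

```python
# Toy sketch (NOT the production algorithm): reduce an image to simple
# pixel-level features and measure how far it sits from category norms
# that would, in practice, be learned from ground-truth shopper data.
from statistics import mean, pstdev

def image_features(pixels):
    """Reduce a grayscale pixel grid (values 0-255) to two simple
    features: brightness (mean) and contrast (standard deviation)."""
    flat = [p for row in pixels for p in row]
    return {"brightness": mean(flat), "contrast": pstdev(flat)}

def score_against_norms(features, category_norms):
    """Total distance from the category norm; smaller means the image
    looks more like the images known to perform well. Hypothetical metric."""
    return sum(abs(features[k] - category_norms[k]) for k in category_norms)

# Hypothetical norms for one category, stand-ins for learned benchmarks
norms = {"brightness": 180.0, "contrast": 60.0}

high_contrast_pack = [[255, 0, 255], [0, 255, 0], [255, 0, 255]]
flat_pack = [[128, 130, 129], [131, 128, 130], [129, 131, 128]]

print(score_against_norms(image_features(high_contrast_pack), norms))
print(score_against_norms(image_features(flat_pack), norms))
```

A real system would use far richer features (shape, composition, logo size) and learned weights, but the principle is the same: the rules apply to any image, familiar or not.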
Digital images need to help the shopper
An online shopping trip, particularly for FMCG, involves a lot of repetitive decisions. Busy shoppers expect our help! It all starts with the image.
In eCommerce, it’s the first thing we see.
A good digital image works for everyone when it helps people process the content. All ways around: retailers get a sharp, navigable digital shelf, brands get the best potential to convert, and shoppers get the right product in the basket with fewer mistakes.
We reveal which images work best in a category. This helps get digital images right at source. At the same time, the algorithm does not force every design through a straitjacket. If the learning stage is diverse enough, there are several routes to success.
Three things to check right now, based on our database learning:
(1) Can the shopper easily see brand, variant, and size?
Guidelines for accessible digital images are produced by GS1 and validated by our database learning. In them, you’ll find all the science and evidence you need to declutter, zoom in, and make your images cognitively easy to process.
(2) The main visual is a powerful hook. Do you have the right messages and nothing distracting?
This is going to vary brand to brand. Leverage the distinctive assets and cut out second-tier info. Use the carousel and PDP for supplementary content.
(3) Review how you convey quantity. The ‘norms’ are different by category.
Horizontally or vertically stacked? When we rotate the image to 3D, are the top and sides helping convey abundance, or are they cognitively distracting?
Try eFluence for Free
You have thousands of eCommerce images, far too many to test efficiently with conventional methods.
Our algorithm references brand-new images against a database of similar images from the same category. It calculates whether the image looks more like a top performer or bears closer visual similarity to images that don’t work with shoppers.
The algorithm tells us straight away, and most of the time, straight away is when the decision must be made.
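The comparison step can be pictured as a nearest-neighbour lookup. Again, a minimal sketch only: the feature vectors, labels, and database below are invented, and the production system is certainly more sophisticated.

```python
# Toy sketch (NOT the production algorithm): label a new image by
# whether its nearest neighbours in the category database were
# top performers or poor performers with real shoppers.
import math
from collections import Counter

def nearest_label(new_vec, database, k=3):
    """database: list of (feature_vector, label) pairs, where the label
    records how that image performed in market. Returns the majority
    label among the k closest feature vectors (Euclidean distance)."""
    ranked = sorted(database, key=lambda item: math.dist(new_vec, item[0]))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical feature vectors: (brightness, contrast, logo prominence)
db = [
    ((0.90, 0.80, 0.70), "top performer"),
    ((0.85, 0.75, 0.65), "top performer"),
    ((0.30, 0.20, 0.10), "poor performer"),
    ((0.35, 0.25, 0.20), "poor performer"),
]

print(nearest_label((0.88, 0.70, 0.60), db))  # prints "top performer"
```

Because the lookup is just arithmetic over stored vectors, it runs in milliseconds per image, which is what makes mass sorting of thousands of images practical.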
It’s a game-changer for winning at the digital shelf and giving CPGs the edge with shoppers in a hurry.
Find out more by contacting us at email@example.com.
Adrian Sanger helps insight start-ups, scale-ups, and established agencies bring winning products to market. He is a Director at eFluence™, the technology division of Behaviorally. He has helped build Flash.PDP™ from the very beginning and now works with clients to shape the program and realize its value. He is a classically trained researcher, assisting C-level Insight teams in tackling their biggest growth challenges, including finding market opportunity, elevating product strategy and positioning, and developing winning go-to-market innovations. Connect with Adrian on LinkedIn.