I don’t mind that Netflix has me pegged as liking British police dramas or preferring series with strong female leads. But would I want my supermarket loyalty program to text me personalized product recommendations after I was “recognized” walking into the store?
Both are examples of using Big Data – our online clicks, visits and signups – to create algorithms that claim to know us and try to predict what we want. String algorithms together and you get artificial intelligence, or the science of making machines do things “intelligently” – that is, in ways that humans would.
AI, as it’s known, is top of mind for many retailers. Consulting firm Boston Retail Partners predicted last year that 45 percent of retailers will be using AI by 2020, most of them to optimize their online interactions with customers.
Of course, not everyone is sold on AI.
SAS, a software and analytics firm, reported last month that a survey of 500 Americans found almost half were comfortable with companies using AI in business interactions, but that AI in retail – one of three industry scenarios posed – “made consumers most uncomfortable.”
Only 44 percent said they were willing to share location information to personalize their shopping experience, according to SAS. They were more evenly split, though – 49 percent to 51 percent – on having their past purchases used to recommend new items to buy.
So I was heartened to read last week that a new Innovation Center for Artificial Intelligence is being launched in the Netherlands, where research will be done into “socially responsible algorithms.”
The first “lab” within the center, which intends to focus on joint development of AI technology with business, government and education, will be in partnership with Ahold Delhaize, the European grocery giant that is parent to a number of U.S. supermarket chains, including Hannaford in our area.
Ahold will offer up its Dutch grocery stores, Albert Heijn, and its Amazon-like marketplace, bol.com, for the research.
Ahold’s incoming CEO, Frans Muller, said in a news release that the idea was to see how AI could be used to better serve customers and how it might optimize the company’s supply chain.
I asked the director of the center, Maarten de Rijke, who also is a professor at the University of Amsterdam, what was meant by “socially responsible algorithms,” and he indicated by email that the goal was to develop “a set of values around algorithmic decision-making.”
He said the values are known by the acronym FACT – fairness, accuracy, confidentiality and transparency.
“Socially responsible algorithms are fair, they are accurate, they use as little personal data as possible, and we understand how they come to a certain prediction or decision,” he wrote.
Growth in the use of algorithms brings benefits but also inherent risk from entrenched biases, he said. “We believe that the FACT values are important values for everyone, including consumers.”
Marlene Kennedy is a freelance columnist. Opinions expressed in her column are her own and not necessarily the newspaper’s. Reach her at email@example.com.