SMX East 2011: A Keynote Conversation with Eli Pariser
Eli is the author of “The Filter Bubble,” and this morning, Danny Sullivan and Chris Sherman will be talking with Eli about personalization. Personalization in search has long been the holy grail; however, there’s a sometimes-overlooked dark side to the technology. This is the filter bubble: a hidden Web that arises when algorithms, rather than people, curate the Web.
Eli wants to talk to us about the moral consequences of living in our world with relevance defined in line with Mark Zuckerberg’s thought:
A squirrel dying in front of your house may be more relevant to you right now than a child dying in Africa.
On Facebook he noticed that his conservative friends’ activity wasn’t showing up in his news feed. Even though he’d said he wanted to hear from people who thought differently, he was clicking on content from friends with similar interests.
Facebook was using that to try to give him more like that, and just like that, his conservative friends were gone.
Google does this too. He asked his friends to Google “Egypt” and send him the results. Two white male friends living in NY got totally different results. One got a lot of political results, while the other got info on travel and vacations.
INCREASINGLY, THE WEB IS SHOWING US WHAT WE WANT TO SEE.
Not necessarily what we need to see.
Eric Schmidt: It will be very hard for people to watch or consume something that was not tailored for them.
Your filter bubble is your personal, unique circle of information. You don’t choose what’s included, or what’s edited out.
There’s research that shows there’s a tug of war within us: entertain me now vs. future altruist. The best media serves both selves, offering a balanced information diet.
But personalization algorithms look at what you click first, so instead of a mix of informational veggies and informational dessert, you end up surrounded by junk food.
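To make the “junk food” effect concrete, here’s a minimal, hypothetical sketch of click-weighted re-ranking. Nothing here reflects Google’s or Facebook’s actual algorithms; the topics, weights, and scoring formula are all illustrative assumptions:

```python
from collections import Counter

# Hypothetical click history: topics this user has engaged with before.
clicks = Counter({"celebrity gossip": 9, "sports": 5, "world news": 1})

# Candidate items, each tagged with a topic and a base relevance score.
items = [
    ("Protests continue in Cairo", "world news", 0.9),
    ("Star spotted at beach", "celebrity gossip", 0.4),
    ("Local team wins again", "sports", 0.5),
]

def personalized_score(item):
    """Boost items by past click counts -- feeding the user more of the same."""
    title, topic, base = item
    return base * (1 + clicks[topic])

ranked = sorted(items, key=personalized_score, reverse=True)
for title, topic, base in ranked:
    print(title)
```

Even though the world-news item has the highest base relevance, past clicks on gossip push it to the bottom of the ranking: the bubble in miniature.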
Before the Internet, the media acted as an information gatekeeper. Then the Internet supposedly opened things up so everyone could find and publish anything. But that’s not quite true: there are new gatekeepers, and they don’t have any sense of civic duty.
If machines are going to curate our info circle, we need to make sure they build in other signals of relevance – things that challenge us, other points of view. We need the Internet to be that thing that introduces us to other ways of thinking. That can’t happen if we’re stuck in a bubble.
Danny asks: with search results, does Eli find commonality between different people’s results, along with the differences? Eli says new research has come out since his book was published. Some phrases show a lot of variety, while others show little difference at all. Wikipedia tends to be very dominant, but as personalized search increases, Wikipedia drops down the rankings.
Danny says that years ago search marketers saw different results, especially in different regions. Did people in your tests expect the same results? Eli says that most people don’t know that Google does this at all.
Talking to a Google engineer, Eli learned that there are two modalities of using search: fill-in-the-blank and open-ended research. Google is more interested in the second modality, which is the one most affected by personalization and therefore, perhaps, the most troubling.
Chris says that in talking to Google, they said personalization would be subtle and connected to long-tail queries. Is that your experience? Eli says it’s hard to say in any given case because the algorithm is so complex; he doesn’t think even Google’s own people know. He doesn’t think Google is doing this maliciously; they genuinely think it’s an improvement.
Chris asks if Google should be the one controlling this, or is Google becoming something like a utility that should be regulated? Eli says that the algorithm is making decisions for a billion+ people, and yet it’s totally opaque. There’s no sense of accountability for Google. It would be good if people at least knew what transaction they were making with Google.
The engineers he’s talked to have said they don’t want to make it too complicated, since most people don’t know what’s going on. However, he thinks people are now becoming more algorithm-literate. And the rules around personalization need to be rewritten, because so much has changed since the Internet was first introduced. Resetting expectations will probably need to happen at the regulatory level.
Chris says Google is fairly well known for providing tools. The Google Dashboard shows users all their info, and they’re doing work with the Data Liberation Front. But Google doesn’t tell us how it uses the info. They say they can’t because that’s their secret sauce.
Eli says the Google Dashboard is a start. He thinks Google thinks of this stuff from an ethical standpoint more than other companies in a similar position. What we need to also know is what Google is inferring about us based on what they know.
The more data you have, the more inferences you can make, and you can monetize that. Hunch, a recommendation engine, created a sexual orientation algorithm where they were able to pretty reliably predict a person’s sexual orientation based on other data.
They could then sell this info to advertisers to target, yet it’s not necessarily something a user wants made available and sold to advertisers.
Chris says that this audience is marketers, who are looking at personalization as a great way to target their audience. Eli says that for marketers, personalization is a double-edged sword. It’s another hoop to jump through, and you may miss segments of your audience without knowing it.
His message is that personalization should be done in a way that’s transparent. Google can make it more apparent what their philosophy is. There should be more research into what the consequences are. The balance comes when people can use personalization the way they want to use it.