Why am I seeing this?
Algorithms making choices for us.

There’s a lot about our digital lives that makes us feel we’re not in control. An hour can pass by in oblivion before we realise that we didn’t mean to be on our phones for quite so long. And as with all habits, the more we engage with our devices in this way, the harder it becomes to stop. In order to regain more personal autonomy, the authors of Mindfulness: Developing Agency In Urgent Times stress the importance of actively making conscious choices about what we pay attention to. A resounding yes! But I also worry about the number of decisions that are already made for us, long before we get to them.

Behind the scenes and screens, nearly all the information we see online is filtered by algorithms that carefully pre-arrange our choices for us. Almost everything we see is a “recommendation”: friends, news, adverts, movies, clothes, jobs and more. No two people see the same content, because the internet is increasingly personalised to a profile of demographics and past activity made up of thousands of data points. Even search engines customise results based on who’s asking. Yes, we’re still in charge, but the algorithms influence us at every step.

This, in itself, is no bad thing. Given the streams of information we wade through online, it would be difficult to navigate without the help of intermediaries paring down our options. It has a healthy precedent too. Historically, we have relied upon trusted curators, advisers, and gatekeepers to make sense of the world. Thanks to machine learning, the robots assisting us today know us better than our closest confidants. Perhaps, even better than we know ourselves. It can feel good to be so well understood, to have someone by our side, finishing off our sentences.

The trouble with this cosy arrangement is that our relationship is not really with the machines, but with the people who create them. Filtering algorithms are imbued with a purpose, based on a business model: they are built not just to read our minds, but also to change them. The goal of social media platforms is to keep us engaging with adverts. The goal of online shops is to have us buy more. The goal of some news portals is to change our political allegiance. These agendas – rarely stated upfront – are not objectively good or bad, but they are likely to be in conflict with our own.

I once tried to find out the actual criteria Facebook uses to curate my news feed, but they are a closely guarded secret. Elsewhere, complex algorithms are simply treated as black boxes that can’t be understood entirely. There is, however, one industry term that keeps coming up in descriptions of personalisation algorithms: “relevance”. As Mark Zuckerberg of Facebook explained: “A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.” Relevance in this context is not the same as, say, usefulness, or importance. It is simply what will engage you in the behaviour that the platform wants for you, or from you.
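
To make that distinction concrete, here is a minimal, hypothetical sketch of what “relevance” ranking can amount to: a score that predicts engagement from past behaviour, with no notion of usefulness or importance. Every name, topic and number below is an illustrative assumption of mine, not a description of Facebook’s or any other platform’s actual system.

```python
# A toy "relevance" ranker: the score is only a prediction of engagement,
# derived from past behaviour. All names and weights are illustrative
# assumptions, not any real platform's algorithm.

from dataclasses import dataclass

@dataclass
class Item:
    title: str
    topic: str
    is_negative: bool  # e.g. alarming or outrage-driven framing

# A toy profile distilled from past behaviour: how often this user has
# engaged with each topic, and how strongly they respond to negative framing.
profile = {
    "topic_affinity": {"squirrels": 0.9, "local_news": 0.7, "world_news": 0.1},
    "negativity_pull": 0.3,
}

def relevance(item: Item, profile: dict) -> float:
    """Predicted engagement, not importance: past behaviour dominates."""
    score = profile["topic_affinity"].get(item.topic, 0.05)
    if item.is_negative:
        score += profile["negativity_pull"]
    return score

candidates = [
    Item("A squirrel is stuck in a tree on your street", "squirrels", False),
    Item("Famine deepens across an entire region", "world_news", True),
    Item("Road closures in your neighbourhood this week", "local_news", False),
]

# The feed simply shows whatever is predicted to hold attention longest.
feed = sorted(candidates, key=lambda it: relevance(it, profile), reverse=True)
for item in feed:
    print(f"{relevance(item, profile):.2f}  {item.title}")
```

In a sketch like this, the stuck squirrel outranks the famine simply because the profile says squirrels hold this user’s attention – which is all the score is built to measure.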

So what’s at stake here? Well, for one thing, we stand to lose our common reference points, and empathy for each other’s reality. In prioritising so-called relevant content, everything else slips away into the background. And the more limited our worldview, the more vulnerable we become to polarisation. Technology activist Eli Pariser coined the term “filter bubbles” to describe this phenomenon. Others call it an “echo chamber” in which you can only hear your own voice. These metaphors point to an underlying crisis: a disconnection from other people and their perspectives.

On these private islands, all our biases and preferences are strengthened and solidified. The algorithm observes, for example, any negativity bias that inclines us towards reading and sharing negative news. It observes the confirmation bias that draws us towards content that concurs with our existing beliefs. It notes our taste for certain kinds of movies or music. And because historical data is diligently baked into our profiles, past choices influence our present options, making it harder to make different choices now. Harder to change and grow as people. In this way, data shapes our destiny.
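
The narrowing is mechanical rather than mysterious. Below is a deliberately tiny, hypothetical sketch of the loop just described: a profile built from past clicks decides the next recommendation, the next click updates the profile, and round it goes. The topics, starting weights and update rule are invented purely for illustration.

```python
# A hypothetical sketch of the feedback loop described above: each click
# nudges the profile towards what was clicked, so the next recommendation
# drifts further in the same direction. Purely illustrative.

profile = {"politics": 0.34, "sport": 0.33, "arts": 0.33}

def recommend(profile: dict) -> str:
    # Show the topic the profile currently favours most.
    return max(profile, key=profile.get)

def record_click(profile: dict, topic: str, rate: float = 0.2) -> None:
    # Nudge the profile towards the clicked topic and away from everything else.
    for t in profile:
        target = 1.0 if t == topic else 0.0
        profile[t] += rate * (target - profile[t])

# If the user clicks whatever is recommended, the loop feeds on itself.
for step in range(5):
    topic = recommend(profile)
    record_click(profile, topic)
    print(step, topic, {t: round(v, 2) for t, v in profile.items()})
```

Run for a handful of steps, the profile collapses onto whichever topic had even a slight initial head start, and every other option quietly disappears from view – the “data shapes our destiny” dynamic in miniature.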

Mindfulness practice can help disentangle us from these forces, because it enables us to step back from automatic behaviours and act with intention. Capacities such as awareness, decentering and cognitive flexibility are instrumental in resisting the automations described above. But I’m wondering: how might we need to expand and adapt our mindfulness practices to meet this new challenge more directly? Here are some ideas.

  • Noticing our virtual selves.
    We need mindfulness not just of our minds and bodies, but also of the structure of our lives – our informational landscape and our relationship with technology. We could go so far as to say that, in addition to being physical, psychological and social beings, we are also technological beings, and the online informational spaces we inhabit can be mindfully observed and inquired into as an extension of ourselves. Perhaps we need a digital equivalent of a body scan.

  • Studying our own biases.
    It has become the norm for technology companies to hire psychologists and neuroscientists into their teams, and user experience designers are trained in cognitive biases. If we want to understand how this new world of algorithms impacts us, we too need to get to know our biases better. A simple starting point is to examine the personalised content on our screens as a kind of mirror. At times the mirror won’t be totally accurate, but at other times the algorithms may have picked up on some unconscious biases that we ourselves have yet to acknowledge.

  • Clicking ‘out loud’.
    When every click, swipe and play is tracked and has a consequence, clicking becomes a lot like speaking out loud. Treating our clicks as an external expression could make us more cautious, and if we did regret engaging with specific content – a conspiracy theory video, for instance – we might make the effort to set the record straight. Some actions can be deleted or undone, such as entries in our Google search history or an Instagram like. We could also deliberately click on the opposite kind of content to recalibrate. Because if we don’t clear up a misunderstanding, the algorithms won’t know that we didn’t really mean it.

  • Radical, mindful curiosity.
    When was the last time you saw something truly surprising online? When everything starts to look familiar, there is nothing more to learn. If we don’t want to be trapped in a stale information landscape we must diversify what we search for, who we follow, and what we click on. This requires epistemic humility and mindful curiosity. Every time we actively get curious about a different culture, ideology or perspective, we retrain the algorithms, creating a positive feedback loop.

  • Adding friction by design.
    A habit is an action that has become so easy, so frictionless, that conscious intention and willpower are no longer required. If we want to change our digital habits we need to make some actions much harder for ourselves, inviting our brain’s executive control function to get involved. For example, by simply switching off the autoplay function on YouTube we create a mindful pause after one video has ended, giving us an opportunity to reconnect with our intentions.

  • Being more private.
    While we may not be able to switch off tracking and filtering entirely, there are a number of steps we can each take to increase the anonymity of our behaviours online, such that our web experience does not become as tightly personalised. For example, switching off the customisation feature in Google Search or going “incognito” on YouTube will expand what we see, and diversify our possibilities.

All these responses rely on individual action, and there’s a risk here of misplacing the burden. We also need to lobby collectively for structural changes, including regulation of companies and government bodies that employ algorithms, asking them to provide greater transparency, accountability and controls in how their algorithms are trained, and how they make decisions about, and for, us.

If you’re still on the fence about how important this is, let me leave you with a near-future scenario to consider. Imagine wearing glasses that visually prioritise or recommend certain people as you walk down the road – reinforcing your own biases, or worse, marketing a third-party political or commercial agenda. Certain people, more “relevant” ones perhaps, might appear brighter or more vivid in this augmented reality, offering you a tailored experience of the physical world.

Anyone who has tried to meditate knows how fundamental, but difficult, it is to remain detached from thoughts, and to treat thoughts as mental events rather than facts. But when our thoughts are driven by our visual sense, it is extremely hard not to “believe our eyes”. Mindfulness may not be able to defend us from these kinds of distortions of our reality, but it will certainly give us more of a fighting chance to protect some parts of our inner lives from the influence of machines.


This essay was published in June 2021 in a collection by The Mindfulness Initiative. The whole collection can be downloaded here.