On the 4th page of Netflix’s most recent letter to shareholders (4th quarter, 2020 earnings report), there is a mention of a new feature that they plan to roll out globally in the first half of 2021.
The name of the feature is not mentioned in the letter, but given that it has been in testing for quite a while, we already know what it is. It is called Shuffle Play (for now; it may be called something else at the full-fledged launch), and when clicked (from the sidebar), it starts playing something on its own, selected either from what you have saved in your ‘My List’ or from what Netflix’s algorithm assumes you may want to watch, based on your viewing history.
Think of it as Netflix’s equivalent of Google’s ‘I’m Feeling Lucky’ button (which, I confess, I had never clicked in my life until I started writing this post and tested it once).
This is yet another ‘system’ making a decision for us. There are already so many ways the ‘systems’ (or machines, or algorithms) make decisions for us.
When you open YouTube, the videos it showcases and suggests are chosen by YouTube’s algorithm.
Almost every digital device has an autocorrect feature to ‘check’ and ‘autocorrect’ what you type. Some correct as you type, while others highlight the error so that you notice it and take action (check the options and pick one). This is a machine doing the thinking for you.
Most e-commerce sites show you other products that you may want to buy based on your purchase history.
Most dating websites decide on a match based on our own inputs and other signals they pick up from us.
All social media platforms decide what to show you based on your history of viewing content on the same platform. TikTok has raised this to an art, and its algorithm is lauded for this reason (from a technology and business point of view).
The entire Google Discover feed on Android phones is built on machines deciding what is good or appropriate for you, using two kinds of signals: your own inputs (selecting a few topics) and your actions across the larger Google ecosystem.
There is a difference between a machine/system/algorithm offering us options to choose from and directly showing us what it thinks is appropriate for us. Though you may argue that even in the former, the limited set of options is simply a slightly bigger version of the latter (a singular choice: the machine’s default selection).
For instance, Google’s I’m Feeling Lucky button requires an affirmative action – you need to consciously click it to get that single result. The default is a list of results (running into multiple pages, as usual).
Netflix’s Shuffle Play, too, is a ‘Surprise Me’ button that requires a conscious click – you could still choose to browse and select, applying thought yourself. Even the other choices listed are based on your viewing habits anyway – so Netflix simply saves you some time by reducing multiple choices into one 🙂
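To make that “reducing multiple choices into one” idea concrete, here is a minimal, purely illustrative sketch in Python. The titles, the data structures and the selection rule are all my own assumptions for illustration – Netflix has not published how Shuffle Play actually picks a title.

```python
import random

def shuffle_play(my_list, recommendations):
    """Toy 'Surprise Me' selection: prefer the user's saved list,
    otherwise fall back to algorithmic recommendations.
    (Hypothetical rule, not Netflix's actual logic.)"""
    pool = my_list if my_list else recommendations
    return random.choice(pool) if pool else None

# Hypothetical example data
my_list = ["Dark", "The Crown"]
recommendations = ["Narcos", "Mindhunter", "Lupin"]

print(shuffle_play(my_list, recommendations))  # e.g. "Dark"
```

The point of the sketch is simply that one click collapses a whole browsing session into a single machine-made choice.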
Before technology became smart enough to learn about us, we gathered choices and made decisions based on our human impulses.
We picked which movies to watch in theaters by reading about them and asking around for views.
We chose books based on reviews and peer recommendations.
We corrected spellings by learning the language.
We still (probably) do all this, though the spelling part doesn’t seem that necessary anymore 🙂 Learning the language, though, helps us realize exactly what we need to communicate – but even here, AI has progressed to the point where it can write entire paragraphs based on a brief.
There is already a long-running debate about human choice (or free will). One school of thought believes that we humans do not have any choice anyway – this is determinism (the theory that all events, including moral choices, are completely determined by previously existing causes). Another school believes that we humans do have choices, however limited (that we are free to choose among alternatives and to put such choices into action).
But increasingly, it seems more likely that we are handing over our limited choices to the algorithms to do as they please, based on their understanding of who we are and what we like.
I intend to continue resisting the algorithms as much as possible. What we consume in the name of content matters significantly in what we end up sharing or airing, online or offline. A good part of my personal branding workshops revolves around building content pipelines – mainly for news and opinions, though I have extended it, for myself, to music and fiction as well. All of these reach us through algorithmic firehoses. And because of that firehose effect, they are intimidating and overwhelming – a LOT of news, views, new music and new fiction. So we end up submitting to the algorithms in the hope that they will choose what we should, or could, consume.
