Are Large Language Models furthering our loss of agency?

Welcome back everyone! If you're joining for the first time, you may want to read Part 1, which discusses the implications of LLMs becoming more pervasive in society.

Truth in the age of social media and LLMs

"Truth", "accuracy", and "objectivity" are cornerstones of journalism and as such have always played a balancing role between the information needs and the advertising-driven business model of distribution of getting more eyeballs on your content. Unfortunately this balance has been tested with the rise of social media over the past 15 years. Facebook has notoriously been criticized for its role as an editor of information (which they dispute). They have always claimed to distribute and not curate information**.** Sure, there may not be a human in the loop (they have an editorial team...) but the Facebook feed you interact with is effectively a curation algorithm powered by ranking.

Social media opened Pandora's box by splitting the curation of factual information from its distribution. The ethical guardrails journalists could maintain under slower news cycles are being tested, fueling the rise of "fake news" and disinformation at an unprecedented scale.

It's not just your b00mer relatives who are suffering from "fake news" on their Facebook feeds anymore. The Fondation Jean Jaurès shared a shocking study on misinformation, social media, and their impact on the youth. It found that the more youngsters used TikTok as a search engine, the higher the probability of them rejecting established science (evolution, for instance) and believing in sorcery or conspiracy theories.

πŸ“±
41% of people who use TikTok as a search engine believe that having a large following is a strong indicator of trustworthiness.

Distribution is encroaching on curation, and people are becoming too lazy to do the work. Ain't nobody got time for intellectual curiosity and fact-checking: a "breaking news" article is calling for your attention and you want to be up to speed before your water-cooler discussion with Janet from Accounting on Monday. The speed of news cycles enabled by the information age is only accelerating, pressuring our ability to exercise agency over the information we are bombarded with. 24/7 news channels are the embodiment of this: how can you claim to provide reliable information when your sole raison d'être is distributing information with next to no curation?

We have become accustomed to being spoon-fed information with little care for curation, and I have reason to believe that conversational agents leveraging LLMs could reinforce this trend. With LLMs, information retrieval is condensed into a single output when queried. This single output format has constraints: whether with humans or algorithms, you have no say in which sources informed the answer being relayed to you.

The above has broad implications for our perception of sources and for how the future of search will be monetised. I think it warrants a separate article, so I'll save this topic for later and continue exploring the "loss of agency".

Many observers and futurists have written about how AI and the digital age have made us dependent on the tools we use. Whether in our personal or professional lives, we can't deny that automation and digital tools have us hooked. More than a century ago, the British novelist E. M. Forster wrote about a society that builds an over-reliance upon, and eventual subordination to, modern technology. In "The Machine Stops", he depicts a world where humans have been forced to live underground and are completely reliant on an all-powerful Machine for their physical and spiritual needs. In contrast to Orwell's 1984 dystopia of tyranny and misery, often referenced in the context of Artificial Intelligence and surveillance societies, the characters in Forster's world are perfectly content with their lives despite living in what we readers would view as a nightmare. To quote Forster:

πŸ“–
"No one confessed the Machine was out of hand. Year by year it was served with increased efficiency and decreased intelligence. The better a man knew his own duties upon it, the less he understood the duties of his neighbor, and in all the world there was not one who understood the monster as a whole. Those master brains had perished."

Forster's world sometimes bears an uncanny resemblance to our own. I tend to see myself as a tech-optimist rather than a tech-solutionist who believes tech will solve everything. In this context I'm concerned that history hasn't given us much reason for optimism; time will tell.

A natural evolution?

These topics of information retrieval and automation have always been controversial. From Plato arguing against the invention of writing to the Luddites destroying textile machinery, it seems that LLMs are taking us further along the abstraction path described in the Forster quote above.

Plato's discussion on writing is an interesting read:

🧠
"If men learn this, it will implant forgetfulness in their souls; they will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks.What you have discovered is a recipe not for memory, but for reminder.And it is no true wisdom that you offer your disciples, but only its semblance, for by telling them of many things without teaching them you will make them seem to know much, while for the most part they know nothing, and as men filled, not with wisdom but with the conceit of wisdom, they will be a burden to their fellows."

Fast forward to today: a 2011 study on the impact of information availability through search engines seems to prove Plato right. The study shows that when participants anticipated having access to information in the future, their recollection of the information itself was diminished, while their memory of how to obtain it was strengthened.

πŸ€–
"We are becoming symbiotic with our computer tools, growing into interconnected systems that remember less by knowing information than by knowing where the information can be found."Β 

We gain access to near limitless information at the expense of memory.

Is that necessarily bad, though? Memory is defined in psychology as the faculty of encoding, storing, and retrieving information (Squire, 2009). Sounds like a fairly good deal to me, assuming we have ways of recreating memory programmatically. That being said, there is a blurry line between "at the expense of memory" and "at the expense of independent thinking". At the risk of stating the obvious, what matters at the end of the day is exercising good judgment, or else "tHe MaChiNe KnOoOoOws" from The Office could soon become a reality. 🙂

In our next article we'll dig into how LLMs are reshaping the foundations of the interwebs (indexing, monetizing, navigating).

If you're curious about LLMs and their impact on society, consider subscribing, as I've got a bunch of articles on the topic in the pipeline. If you aren't yet curious about LLMs, well, that's only a matter of time. I cover other topics around tech & entrepreneurship in the meantime.

Don't hesitate to share your thoughts with me, I'm very open to discussing the topic, even more so if you disagree with points I've made!