April 10, 2023 ☼ The Intersection ☼ Information Age ☼ Cognitive Autonomy
To protect cognitive autonomy, governments should mandate the labelling of AI-generated information goods
This is an unedited draft of The Intersection column that appears every other Monday in Mint.
One of the many mysteries of recent weeks is why hundreds of extremely intelligent and rich people think that a moratorium on further development of artificial intelligence is feasible, and further, that a six-month hiatus is sufficient for us to figure out what to do about it. Technology development is a prisoner’s dilemma with millions of competing participants, making it impossible to get everyone to cooperate. Top-tier competitors are more likely to cheat on the moratorium in the expectation that others will do so, rendering the moratorium useless and, worse, driving the industry underground.
Yet the call shows that even Silicon Valley’s techno-determinists are worried about the social consequences of artificial intelligence. They just don’t know what to do about it.
One of the biggest challenges of public policy in recent years has been to decide how to govern an area broadly called “tech”. This includes trans-national technology platforms, social media, algorithm-driven information delivery and artificial intelligence. The advantages are clear, immediate and popular. The risks are not readily apparent and manifest themselves before we have had time to assess them properly. Countries committed to liberal democracy and the rule of law face a stiffer challenge: how do you mitigate risks without damaging basic freedoms and civil liberties? The answer, in many countries, is to abridge freedoms in the pursuit of national security and public order. The inability of public policy to catch up with technology is the big story of our times.
The answer is not to slow technological development down, even if that were possible. It is to speed up public policy: by throwing the world’s best minds at the problem. Instead of calling for moratoria, tech billionaires would do better to direct massive amounts of financial resources into technology policy research. It would appear to be, well, a no-brainer to suggest that accelerating human intelligence is a good way to manage the artificial variety.
What do we do in the meantime? The place to start is to identify what it is that we should protect. That, in my view, is cognitive autonomy, or the human freedom to think. The reason social media platforms have so much political power is that they can influence what individuals and entire societies believe. Rather than general artificial intelligence enslaving us, we should be more concerned about some humans using artificial intelligence to accumulate power over others: power that is undeserved, unaccountable and unchallengeable. So protecting the mind from being influenced without its consent, and without societal safeguards, should be our first step.
In recent years, I am sure you have been irritated by websites asking permission to drop cookies into your browser. After the European Union mandated this under the GDPR, the once-free information highway has been riddled with turnstiles and speed breakers, slowing down the flow of information. I am still irritated by these cookie warnings, but I no longer resent them. That is because I realised that, whatever their value in protecting privacy, they warn consumers of something that ought to concern them. And they have this effect precisely because they get in the way and irritate us.
This suggests a way forward for technology that uses influence algorithms and artificial intelligence: present users with a clear warning and obtain consent before delivering those messages, videos or chat sessions. Give users the ability to opt out and settle for a non-algorithmic, non-AI-enhanced digital life. After all, caveat emptor is one of the oldest concomitants of market capitalism. It needs to be made real for the Information Age. And if sellers don’t voluntarily disclose information, market regulators must require them to do so.
Such an obligation to declare and obtain positive consent should be accompanied by penalties for non-consensual, covert or coercive information delivery. Indeed, the entire information supply chain can be secured in this manner. Upstream information providers must inform their downstream counterparts of algorithmic or AI-generated content, triggering the requirement to obtain ultimate end-user consent. Yes, this adds to the compliance burden of all information providers on the web, from individual blogs and websites to massive global platforms. But the stakes are so consequential to human civilisation that the additional costs are worth it.
It’s a bit like labelling on food products. Displaying health warnings on tobacco and alcohol products strikes a balance between public health and individual choice. Nutritional information allows people to choose how much and what kind of food they wish to consume. We could do the same for information products. Indeed we should do the same for any technology that has the potential to impact cognitive autonomy.
As I wrote in my previous column, we should not allow dire predictions of future apocalypses to get in the way of doing what we can to manage immediate risks. Despite the wonderful achievements and potential of ChatGPT and generative artificial intelligence, many of the dramatic threats attributed to AI are speculative. Fixating on them causes us to ignore the immediate and extant threat: the one to our epistemology. While we do not know the full answer yet, we know where to start.