TikTok Has Started to Let People Think For Themselves
TikTok recently announced that its users in the European Union will soon be able to switch off its infamously engaging content-selection algorithm. The EU’s Digital Services Act (DSA) is driving this change as part of the region’s broader effort to regulate AI and digital services in accordance with human rights and values.
TikTok’s algorithm learns from users’ interactions—how long they watch, what they like, when they share a video—to create a highly tailored and immersive experience that can shape their mental states, preferences, and behaviors without their full awareness or consent. An opt-out feature is a great step toward protecting cognitive liberty, the fundamental right to self-determination over our brains and mental experiences. Rather than being confined to algorithmically curated For You pages and live feeds, users will be able to see trending videos in their region and language, or a “Following and Friends” feed that lists the creators they follow in chronological order. Both alternatives surface content by regional popularity or recency rather than selecting it for its stickiness. The law also bans targeted advertising to users between 13 and 17 years old, and provides more information and reporting options to flag illegal or harmful content.
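To make that distinction concrete, here is a minimal sketch in Python of the difference between an engagement-ranked feed and a chronological following feed. Everything in it is hypothetical: the field names, scoring weights, and functions are illustrative stand-ins, since TikTok’s actual ranking system is proprietary and far more complex.

```python
from dataclasses import dataclass
from datetime import datetime

# Illustrative sketch only; all names and weights here are hypothetical.

@dataclass
class Video:
    creator: str
    posted_at: datetime
    watch_time_sec: float  # aggregate engagement signals a platform might track
    likes: int
    shares: int

def engagement_score(v: Video) -> float:
    # Stand-in for a learned "stickiness" signal built from user interactions.
    return 0.6 * v.watch_time_sec + 0.3 * v.likes + 0.1 * v.shares

def for_you_feed(videos: list[Video]) -> list[Video]:
    # Personalized feed: rank by predicted engagement, stickiest first.
    return sorted(videos, key=engagement_score, reverse=True)

def following_feed(videos: list[Video], followed: set[str]) -> list[Video]:
    # Opt-out feed: only creators the user follows, newest first,
    # with no engagement model in the loop.
    followed_videos = [v for v in videos if v.creator in followed]
    return sorted(followed_videos, key=lambda v: v.posted_at, reverse=True)
```

The point of the contrast is that the second feed is fully predictable from the user’s own choices, which is what makes it a meaningful opt-out.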
In a world increasingly shaped by artificial intelligence, Big Data, and digital media, the urgent need to protect cognitive liberty is gaining attention. The proposed EU AI Act offers some safeguards against mental manipulation. UNESCO’s approach to AI centers on human rights, the Biden Administration’s voluntary commitments from AI companies address deception and fraud, and the Organization for Economic Cooperation and Development has incorporated cognitive liberty into its principles for the responsible governance of emerging technologies. But while laws and proposals like these are making strides, they often focus on subsets of the problem, such as privacy by design or data minimization, rather than articulating an explicit, comprehensive approach to protecting our ability to think freely. Without robust legal frameworks in place worldwide, the developers and providers of these technologies may escape accountability. This is why mere incremental changes won’t suffice: lawmakers and companies urgently need to reform the business models on which the tech ecosystem is predicated.
A well-structured plan requires a combination of regulations, incentives, and commercial redesigns focusing on cognitive liberty. Regulatory standards must govern user-engagement models, information sharing, and data privacy. Strong legal safeguards must be in place against mental manipulation and interference with mental privacy. Companies must be transparent about how the algorithms they deploy work, and have a duty to assess, disclose, and adopt safeguards against undue influence.
Much like corporate social responsibility guidelines, companies should also be legally required to assess their technology for its impact on cognitive liberty, providing transparency on algorithms, data use, content moderation practices, and cognitive shaping. Impact assessments are already integral to legislative proposals worldwide, including the EU’s Digital Services Act, the proposed US Algorithmic Accountability Act and American Data Privacy and Protection Act, and voluntary mechanisms like the US National Institute of Standards and Technology’s 2023 AI Risk Management Framework. An impact assessment tool for cognitive liberty would specifically measure AI’s influence on self-determination, mental privacy, and freedom of thought and decision-making, focusing on transparency, data practices, and mental manipulation. The necessary data would encompass detailed descriptions of the algorithms, data sources and collection methods, and evidence of the technology’s effects on user cognition.
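As a rough illustration of what such an assessment might capture, the hypothetical schema below mirrors the data categories named above; none of its field names come from any statute or framework.

```python
from dataclasses import dataclass

@dataclass
class CognitiveLibertyAssessment:
    # Hypothetical schema; field names are illustrative, not drawn from
    # the DSA, the Algorithmic Accountability Act, or the NIST AI RMF.
    system_name: str
    algorithm_description: str    # how content is selected, ranked, and targeted
    data_sources: str             # what user signals are collected, and how
    manipulation_safeguards: str  # measures adopted against undue influence
    cognition_evidence: str       # studies of the system's effects on user cognition

    def undisclosed_items(self) -> list[str]:
        # Flag any empty field so reviewers can see what has not been disclosed.
        return [name for name, value in vars(self).items() if not value]
```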
Tax incentives and funding could also fuel innovation in business practices and products that bolster cognitive liberty. Leading AI ethics researchers emphasize that an organizational culture prioritizing safety is essential to counter the many risks posed by large language models. Governments can encourage this by offering tax breaks and funding opportunities, such as those included in the proposed Platform Accountability and Transparency Act, to companies that actively collaborate with educational institutions to create AI safety programs that foster self-determination and critical-thinking skills. Tax incentives could also support research and innovation on tools and techniques that surface deception by AI models.
Technology companies should also adopt design principles that embody cognitive liberty. Options like adjustable settings on TikTok or greater control over notifications on Apple devices are steps in the right direction. Other features that enable self-determination—including “badges” that label content as human- or machine-generated, or prompts asking users to engage critically with an article before resharing it—should become the norm across digital platforms.
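A provenance badge of the kind described above could be as simple as a label attached at render time. The sketch below is hypothetical and assumes the platform already knows a piece of content’s origin; the reshare prompt is similar in spirit to existing “read before you reshare” nudges.

```python
from enum import Enum

class ContentOrigin(Enum):
    # Hypothetical provenance categories a platform might attach to content.
    HUMAN = "human-generated"
    MACHINE = "machine-generated"
    MIXED = "human- and machine-assisted"

def badge(origin: ContentOrigin) -> str:
    # Render a visible label alongside the content itself.
    return f"[{origin.value}]"

def confirm_reshare(seconds_spent_reading: float, threshold: float = 30.0) -> bool:
    # A simple friction check: ask users who barely opened an article
    # to look at it before sharing.
    if seconds_spent_reading < threshold:
        print("You haven't read this article yet. Open it before resharing?")
        return False
    return True
```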
The TikTok policy change in Europe is a win, but it’s not the endgame. We urgently need to update our digital rulebook, implementing new laws, regulations, and incentives that safeguard users’ rights and hold platforms accountable. Let’s not leave control over our minds to technology companies alone; it’s time for global action to prioritize cognitive liberty in the digital age.