Generative AI is repeating all of Web 2.0’s mistakes
If 2022 was the year the generative AI boom started, 2023 was the year of the generative AI panic. Just over 12 months since OpenAI released ChatGPT and set a record for the fastest-growing consumer product, it appears to have also helped set a record for the fastest government intervention in a new technology. The US Federal Election Commission is looking into deceptive campaign ads, Congress is calling for oversight into how AI companies develop and label training data for their algorithms, and the European Union passed its new AI Act with last-minute tweaks to respond to generative AI.
But for all the novelty and speed, generative AI’s problems are also painfully familiar. OpenAI and its rivals racing to launch new AI models are facing problems that have dogged social platforms, the defining technology of the previous era, for nearly two decades. Companies like Meta never did get the upper hand over mis- and disinformation, sketchy labor practices, and nonconsensual pornography, to name just a few of their unintended consequences. Now those issues are gaining a challenging new life, with an AI twist.
“These are completely predictable problems,” says Hany Farid, a professor at the UC Berkeley School of Information, of the headaches faced by OpenAI and others. “I think they were preventable.”
Well-Trodden Path
In some cases, generative AI companies are directly built on problematic infrastructure put in place by social media companies. Facebook and others came to rely on low-paid, outsourced content moderation workers—often in the Global South—to keep content like hate speech or imagery with nudity or violence at bay.
That same workforce is now being tapped to help train generative AI models, often with similarly low pay and difficult working conditions. Because outsourcing puts crucial functions of a social platform or AI company administratively at arm’s length from its headquarters, and often on another continent, researchers and regulators can struggle to get the full picture of how an AI system or social network is being built and governed.
Outsourcing can also obscure where the true intelligence inside a product really lies. When a piece of content disappears, was it taken down by an algorithm or one of the many thousands of human moderators? When a customer service chatbot helps out a customer, how much credit is due to AI and how much to the worker in an overheated outsourcing hub?
There are also similarities in how AI companies and social platforms respond to criticism of their harmful or unintended effects. AI companies talk about putting “safeguards” and “acceptable use” policies in place on certain generative AI models, just as platforms have their terms of service around what content is and is not allowed. As with the rules of social networks, AI policies and protections have proven relatively easy to circumvent.
Source: Wired