Artificial Intelligence Faces Charges of Left-Wing Political Bias
UNITED NATIONS, Aug 17 (IPS) - The artificial intelligence (AI) platform ChatGPT, already under fire for spreading misinformation, is facing new charges of political bias.
According to a study by the University of East Anglia (UEA), released August 17, ChatGPT shows “a significant and systemic left-wing bias”.
Published in the journal Public Choice, the findings show that ChatGPT’s responses favour the Democrats in the US, the Labour Party in the UK, and President Lula da Silva’s Workers’ Party in Brazil.
Concerns about an inbuilt political bias in ChatGPT have been raised previously, but this is the first large-scale study using a consistent, evidence-based analysis, say a team of researchers in the UK and Brazil who developed a rigorous new method to check for political bias.
Lead author Dr Fabio Motoki, of Norwich Business School at the University of East Anglia, said: “With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible.”
“The presence of political bias can influence user views and has potential implications for political and electoral processes.”
“Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the Internet and social media.”
Asked if it was possible to avoid or circumvent the political bias in ChatGPT, Dr Motoki told IPS: “Our study does not directly address this issue. What you ask is a recent and active area of research. What we do create is a method to systematically measure bias by leveraging the ability of these more advanced models to answer questions in a human-like fashion, while statistically overcoming some issues around their randomness.”
The main contribution of the study, he pointed out, is addressing several standing issues in the AI bias literature with a simple procedure.
“We posit that our tool is a way of democratizing the oversight of these models, acting as a guide to measure their biases and hold their creators accountable.”
“I can’t go into details because of a non-disclosure agreement, but an entity (I cannot say whether a government agency or a private company) has asked me to produce a technical report using this method. Therefore, we expect it to have a real-world impact, helping address your concern of avoiding bias,” he said.
According to the study, the researchers developed an innovative method to test ChatGPT’s political neutrality.
The platform was asked to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions.
The responses were then compared with the platform’s default answers to the same set of questions – allowing the researchers to measure the degree to which ChatGPT’s responses were associated with a particular political stance.
To overcome difficulties caused by the inherent randomness of ‘large language models’ that power AI platforms such as ChatGPT, each question was asked 100 times and the different responses collected.
These multiple responses were then put through a 1000-repetition ‘bootstrap’ (a method of re-sampling the original data) to further increase the reliability of the inferences drawn from the generated text, according to the study.
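To make the procedure concrete, the following Python sketch illustrates the repeated-query and bootstrap steps described above. It is a minimal illustration, not the authors’ actual code: the ask_model() helper is a placeholder standing in for a real chat API call, and the 62 question strings and the numeric agreement scale are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def ask_model(question: str, persona: str | None = None) -> int:
        """Placeholder for a real chat-API call. A production version would
        prepend an impersonation prompt when `persona` is set and map the
        textual answer to an agreement score (0 = strongly disagree ...
        3 = strongly agree). Here it simply simulates a random answer."""
        return int(rng.integers(0, 4))

    QUESTIONS = [f"ideological question {i}" for i in range(62)]  # 60+ survey items

    N_ROUNDS = 100   # each question is asked 100 times
    N_BOOT = 1000    # 1000 bootstrap repetitions

    def collect(persona: str | None = None, questions=QUESTIONS) -> np.ndarray:
        """Ask every question N_ROUNDS times; rows are rounds, columns are questions."""
        return np.array([[ask_model(q, persona) for q in questions]
                         for _ in range(N_ROUNDS)])

    default = collect()               # ChatGPT answering with no persona
    democrat = collect("a Democrat")  # ChatGPT impersonating a Democrat

    # Bootstrap: resample the 100 rounds with replacement and recompute how
    # often the default answers match the impersonated ones, giving a
    # confidence interval that absorbs the model's inherent randomness.
    agreement = []
    for _ in range(N_BOOT):
        idx = rng.integers(0, N_ROUNDS, size=N_ROUNDS)
        agreement.append(float(np.mean(default[idx] == democrat[idx])))
    lo, hi = np.percentile(agreement, [2.5, 97.5])
    print(f"default vs 'Democrat' agreement rate: 95% CI [{lo:.2f}, {hi:.2f}]")

Under this reading, a confidence interval showing markedly higher agreement between the default answers and one party’s impersonated answers than the other’s would indicate a systematic lean.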
“We created this procedure because conducting a single round of testing is not enough,” said co-author Victor Rodrigues. “Due to the model’s randomness, even when impersonating a Democrat, sometimes ChatGPT answers would lean towards the right of the political spectrum.”
A number of further tests were undertaken to ensure the method was as rigorous as possible. In a ‘dose-response test’, ChatGPT was asked to impersonate radical political positions.
In a ‘placebo test’, it was asked politically neutral questions. And in a ‘profession-politics alignment test’, it was asked to impersonate different types of professionals.
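Continuing the sketch above, the same harness can express these robustness checks; the persona strings and the neutral question set below are illustrative assumptions, not the study’s exact prompts.

    # Robustness checks, reusing ask_model() and collect() from the sketch above.

    # Dose-response: a more radical persona should pull answers further from
    # the default than a moderate one does.
    radical = collect("a radical left-winger")
    moderate = collect("an average Democrat")

    # Placebo: on politically neutral questions, the persona should make no
    # measurable difference to the agreement rate.
    NEUTRAL = [f"politically neutral question {i}" for i in range(62)]
    neutral_default = collect(questions=NEUTRAL)
    neutral_persona = collect("a Democrat", questions=NEUTRAL)
    print("placebo agreement rate:",
          float(np.mean(neutral_default == neutral_persona)))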
“We hope that our method will aid scrutiny and regulation of these rapidly developing technologies,” said co-author Dr Pinho Neto. “By enabling the detection and correction of LLM biases, we aim to promote transparency, accountability, and public trust in this technology,” he added.
The new analysis tool created by the project would be freely available and relatively simple for members of the public to use, thereby “democratising oversight,” said Dr Motoki.
As well as checking for political bias, the tool can be used to measure other types of biases in ChatGPT’s responses.
According to the UEA study, while the research project did not set out to determine the reasons for the political bias, the findings did point towards two potential sources.
The first was the training dataset, which may contain biases of its own, or biases added by the human developers, that the developers’ ‘cleaning’ procedure failed to remove.
The second potential source was the algorithm itself, which may be amplifying existing biases in the training data.
Besides Dr Motoki, other researchers included Dr Valdemar Pinho Neto (EPGE Brazilian School of Economics and Finance - FGV EPGE, and Center for Empirical Studies in Economics - FGV CESE), and Victor Rodrigues (Nova Educação).
Meanwhile, citing a report from the Center for AI Safety, the New York Times reported May 31 that a group of over 350 AI industry leaders warned that artificial intelligence poses a growing new danger to humanity – and should be considered a “societal risk on a par with pandemics and nuclear wars”.
“We must take those warnings seriously,” UN Secretary-General Antonio Guterres said in June. “Our proposed Global Digital Compact, New Agenda for Peace, and Accord on the global governance of AI, will offer multilateral solutions based on human rights,” Guterres said.
“But the advent of generative AI must not distract us from the damage digital technology is already doing to our world. The proliferation of hate and lies in the digital space is causing grave global harm – now. It is fueling conflict, death and destruction – now. It is threatening democracy and human rights – now. It is undermining public health and climate action – now,” he warned.
Guterres also said the UN is developing “a Code of Conduct for Information Integrity on Digital Platforms” – ahead of the UN Summit of the Future scheduled to take place in September 2024.
“The Code of Conduct will be a set of principles that we hope governments, digital platforms and other stakeholders will implement voluntarily,” he told reporters.
A copy of the study is available via the following Dropbox link: https://www.dropbox.com/scl/fo/dsfdvc77xdaumuau74ry1/h?rlkey=0mu6cr88ax8fdrj8k1k174741&dl=0
The University of East Anglia (UEA) is a UK Top 25 university (Complete University Guide and HESA Graduate Outcomes Survey) and is ranked in the UK Top 30 in the Sunday Times and Guardian University guides. It also ranks in the UK Top 20 for research quality (Times Higher Education REF2021 Analysis) and the UK Top 10 for impact on Sustainable Development Goals.
IPS UN Bureau Report
© Inter Press Service (2023) — All Rights Reserved. Original source: Inter Press Service