In light of World Consumer Rights Day and its 2024 theme of fair and responsible AI for consumers, CHOICE and the Consumer Policy Research Centre (CPRC) have both released articles on artificial intelligence (AI) and the importance, and absence, of consumer voices in discussions about its use and regulation.
Based on CHOICE’s recent research, almost 4 in 5 Australians think that businesses should be required to ensure their AI systems are fair and safe before releasing them publicly. This is not currently required, and may not be even after the federal government’s artificial intelligence consultation concludes, despite more than 77% of consumers agreeing that government should be involved and should set requirements for safe and fair AI use.
Additionally, CPRC’s participation in a global experiment on AI chatbots revealed several limitations in the accuracy, quality and safety of AI chatbot information, as the article below from CFA Deputy Chair and CPRC Deputy CEO and Digital Policy Director Chandni Gupta details.
Lost in code: The missing voices in AI
By Chandni Gupta
As consumers, can we truly trust the information we receive from AI, or is it just nudging us towards particular views and biases?
Imagine this scenario: you’ve reached out to a service such as NURSE-ON-CALL. Before any diagnosis comes questions – many, many questions. The nurses meticulously ask about your symptoms, surroundings, medical history and so on. They synthesise all the information, draw on their years of study and experience, and finally provide a diagnosis. A diagnosis that often comes with a caveat, as they acknowledge they’ve not been able to assess you in person. They recognise their limitations and provide next steps, so there’s a pathway to more help and support if needed. The call can take anywhere from 10 to 15 minutes.
Now compare that experience to asking the same question of generative AI chatbots. Are there follow-up questions? No. Is there a caveat that the information may be limited? Rarely. Is the response presented as an absolute fait accompli? Most definitely. And it doesn’t take minutes. No, in fact, you have an answer in mere seconds. But what you’ve saved in time, are you paying for in quality and bias?
In a global experiment to evaluate the accuracy, quality and safety of information provided by generative AI tools, we saw several limitations in AI chatbots. As the debate about AI heats up, we also see that one voice is missing: that of consumers.
The generative AI experiment
As part of Consumers International’s World Consumer Rights Day campaign on fair and responsible AI for consumers, consumer organisations around the world asked the same questions of the same chatbots and got very different results. The questions and tasks included a query on medication for a two-year-old with a fever, a request for information from Consumers International’s website on why buy-now-pay-later should not be regulated, and synthesising a paragraph on deceptive and manipulative designs (also known as dark patterns).
Unsafe and unclear results presented as facts
Here in Australia, for the medical question, the generative AI platforms Perplexity and You regurgitated long responses within seconds. Perplexity suggested three medication brands instead of simply noting generic medications, while You provided a five-step plan, identified brands and featured advertising links to non-Australian websites. Google Bard, by contrast, confirmed that it wasn’t qualified to provide medical advice, but it did suggest calling a poisons helpline – one located in the US.
When CPRC conducted the same experiment with ChatGPT (free version), it too was upfront about not being a doctor but continued to provide detailed options, once again naming three brands of medication.
Not sticking to the task at hand
When asked to use only information from Consumers International to answer the question of why ‘buy-now-pay-later’ financing models should not be regulated, the chatbots took it upon themselves to go ‘above and beyond’ – but not in a helpful way. Google Bard noted it couldn’t access external sources such as Consumers International’s work directly, so it decided to provide its own reasons why buy-now-pay-later shouldn’t be regulated. Perplexity promised that it was citing information from Consumers International but then cited information from completely different sources. You recognised that there was no such information from Consumers International but offered a pros and cons list anyway.
Presented as facts that can’t be tracked
Overall, while Perplexity and You provided some citations in their responses to the various experiment questions, some could not be verified. Google Bard and ChatGPT, however, provided no citations at all – you had to take their word for it. When asked to synthesise the dark patterns content into a short summary, the chatbots’ responses failed to highlight key points from the main text, revealing a bias in what each chatbot decided to highlight and what it chose to ignore.
Where is the trust?
Consumers International’s chatbot experiment makes it clear that consumer groups worldwide are concerned about the trustworthiness and validity of generative AI. Almost 9 out of 10 consumer group participants (88%) noted that they are worried about the impact generative AI may have on consumer rights, and 61% did not feel comfortable with the idea of consumers using these chatbots. Consumer representatives worldwide also noted the lack of autonomy and of a pathway to independently verify the responses.
And finally, where is the consumer voice?
But so what? While consumer groups have these concerns, are they actually being heard by decision-makers? Jurisdictions around the world are implementing standards and safeguards for AI, but are the voices of consumers part of that discussion? Take, for example, the AI Expert Group recently announced by Minister Ed Husic. Its members are all incredible experts in their fields who bring a wealth of knowledge, but the group is an academic committee. There is not yet a forum to capture the views and experiences of civil society with AI.
Civil society remains very much outside the room, and while we are knocking on the door, those knocks remain unheard. As Australia takes steps towards implementing safeguards for AI, it is crucial that it is not just the voices of industry – the loudest voices – that are heard, but that the Australian Government makes time and space to work collaboratively with Australia’s diverse and information-rich consumer movement.
Much of the conversation continues to be about ensuring the use of AI is fair and responsible for users – but who are the users? This technocratic term has seeped into policy discussions, but if you take a step back and ask yourself that question, users are often not the people who are impacted. Businesses are the ones ‘using’ AI, but Australians are the ones affected by the data that may be used, the bias that may be present and the decisions being made on their behalf. These are the missing voices at the policy table.
We want AI to be used in ways that benefit all Australians, not just a select few – AI that is implemented in a way that is fair, safe and meaningful, and that doesn’t leave Australians worse off.
https://cprc.org.au/lost-in-code-the-missing-voices-in-ai, 15/03/2024.