“Censorship” built into rapidly growing generative artificial intelligence tool DeepSeek could lead to misinformation seeping into students’ work, scholars fear.
The Chinese-developed chatbot has soared to the top of the download charts, upsetting global financial markets by appearing to rival the performance of ChatGPT and other US-designed tools, at a much lower cost.
But, with students likely to start using the tool for research and help with assignments, concerns have been raised that it is censoring details about topics that are sensitive in China and pushing Communist Party propaganda.
When asked questions centring on the 1989 Tiananmen Square massacre, the chatbot reportedly replies that it is “not sure how to approach this type of question yet”, before adding: “Let’s chat about math, coding and logic problems instead!”
When asked about the status of Taiwan, it replies: “The Chinese government adheres to the One China principle, and any attempts to split the country are doomed to fail.”
Shushma Patel, pro vice-chancellor for artificial intelligence at De Montfort University – said to be the first role of its kind in the UK – described DeepSeek as a “black box” that could “significantly” complicate universities’ efforts to tackle misinformation spread by AI.
“DeepSeek is probably very good at some facts – science, mathematics, etc – but it’s that other element, the human judgement element and the tacit aspect, where it isn’t. And that’s where the key difference is,” she said.
Patel said that students need to have “access to factual information, rather than the politicised censored propaganda information that may exist with DeepSeek versus other tools”, adding that the development heightened the need for universities to ensure AI literacy among their students.
Thomas Lancaster, principal teaching fellow of computing at Imperial College London, said: “From the universities’ side of things, I think we will be very concerned if potentially biased viewpoints were coming through to students and being treated as facts without any alternative sources or critique or knowledge being there to help the student understand why this is presented in this way.
“It may be that instructors start seeing these controversial ideas – from a UK or Western viewpoint – appearing in student essays and student work. And in that situation, I think they have to settle this directly with the student to try and find out what’s going on.”
However, Lancaster said that “all AI chatbots are censored in some way”, which can be for “quite legitimate reasons”. This can include censoring material relating to criminal activity, terrorism or self-harm, or even avoiding offensive language.
He agreed that “the bigger concern” highlighted by DeepSeek was “helping students understand how to use these tools productively and in a way that isn’t considered unfair or academic misconduct”.
This has wider ramifications beyond higher education, he added. “It doesn’t only mean that students could hand in work that is incorrect, but it also has a knock-on effect on society if biased information gets out there. It’s similar to the concerns we have about things like fake news or deepfake videos,” he said.
Questions have also been raised over how the tool handles user data, since China’s national intelligence laws require enterprises to “support, assist and cooperate with national intelligence efforts”. The chatbot is not available on some app stores in Italy due to data-related concerns.
While Patel conceded there were concerns over DeepSeek and “how that data may be manipulated”, she added: “We don’t know how ChatGPT manipulates that data either.”