Can Microsoft’s New AI Model Be Trusted?
Microsoft’s model can now, in principle, be accessed by 10,000 members of the US intelligence community. The development gives agencies like the CIA a significant edge over international counterparts that don’t currently have access to this technology, according to Sheetal Patel, the CIA’s assistant director for the Transnational and Technology Mission Center.
“There is a race to get generative AI onto intelligence data”, Patel said at a recent security conference at Vanderbilt University. “The first country to use generative AI for their intelligence would win that race. And I want it to be us.”
However, despite the exciting opportunities the tool affords intelligence agencies, doubts about its reliability remain. Because large multimodal models like GPT-4 generate output by predicting statistically likely word sequences rather than by verifying facts, they are known to ‘hallucinate’ from time to time, producing fluent but false statements.
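To see why this happens, here is a minimal sketch in Python of the core mechanism: a model that produces each next word by sampling from a probability distribution. The vocabulary and probabilities below are invented purely for illustration (this is not Microsoft’s or OpenAI’s actual system), but they show how a false continuation that carries real probability mass will sometimes be generated just as fluently as a true one.

```python
import random

# Toy next-word distributions, keyed by the previous word.
# All words and probabilities here are made up for illustration;
# real models learn distributions over tens of thousands of tokens.
NEXT_WORD_PROBS = {
    "the":       {"capital": 0.5, "report": 0.5},
    "capital":   {"of": 1.0},
    "of":        {"australia": 1.0},
    "australia": {"is": 1.0},
    "is":        {"canberra": 0.7, "sydney": 0.3},  # the wrong answer still has 30% mass
}

def generate(prompt, max_words=10):
    """Extend the prompt one word at a time by sampling from the toy model."""
    words = prompt.split()
    for _ in range(max_words):
        dist = NEXT_WORD_PROBS.get(words[-1])
        if dist is None:
            break
        # Sample the next word in proportion to its probability --
        # this is the step where a plausible-but-false continuation
        # can be chosen, with no signal that anything went wrong.
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

random.seed(7)
for _ in range(5):
    print(generate("the capital"))
```

Run repeatedly, the toy model completes the sentence with the false answer roughly three times in ten, and it states it exactly as confidently as the true one. Scaled up to billions of parameters, that same sampling step is what lets a production model assert fabrications as if they were facts.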
Such hallucinations cause only minor inconvenience when the tool is used for basic tasks like maths homework. Using the technology to analyze classified, sensitive information, however, carries far greater risks. Microsoft maintains that its new AI model is safer than private chatbots, but neither the software manufacturer nor the CIA has commented on how the model will be audited for accuracy.