A watchdog group is sounding an alarm about the potential abuse of artificial intelligence (AI) by government agencies in Georgia.
The Tbilisi-headquartered Institute for the Development of Freedom of Information (IDFI) conducted research into AI use by Georgian government agencies. Teona Turashvili, an IDFI representative, summarized the organization’s findings in a collection recently published by the National Endowment for Democracy’s International Forum for Democratic Studies. The summary noted that while AI applications can enhance the reform process, “these same applications can endanger democratic principles – especially if state accountability is already tenuous due to shortcomings in judicial independence, government transparency, or law-enforcement oversight mechanisms.”
Rights groups and Western experts have criticized the Georgian Dream government, which enjoys a parliamentary supermajority in Tbilisi, for backsliding on its democratization commitments, thus dimming prospects for the country’s long-held goal of European Union accession. In her state-of-the-nation speech in late March, Georgian President Salome Zourabichvili accused the governing party of adopting authoritarian and illiberal tactics. “Georgian Dream has been slowly drained of dissenting opinion and alternative political views,” Zourabichvili said.
Mass protests in March frustrated a government attempt to adopt so-called foreign agent legislation, which critics say could be used to muzzle non-governmental activists and independent media. AI offers a different, more subtle instrument that the government could potentially use to tilt the political playing field in its favor as Georgia prepares to hold parliamentary elections in 2024.
IDFI researchers found that at least five government agencies were using AI-driven digital systems as part of their regular operations. They determined four of the five known instances of AI use to be comparatively innocuous. The fifth case, however, involved the Ministry of Internal Affairs and raised concerns about misuse. “The Ministry of Internal Affairs’ systems … stood out for their relative complexity,” stated the report, titled Assessing the Accountability of AI Systems in Georgia. “Most important, this ministry used facial recognition systems for investigative purposes and to carry out criminal and administrative proceedings.”
Adding to the watchdog group’s apprehension about the potential for abuse, authorities withheld details about the full extent of AI use at the Interior Ministry. Investigative journalists subsequently determined the ministry was employing AI that officials had not disclosed to IDFI researchers, “including ballistics and fingerprint recognition programs of Russian and Belarusian origin.”
“Guarding against the abuse or misuse of AI tools is particularly critical in the public security context, especially since Georgia’s law enforcement agencies are criticized frequently for their opaque practices,” says the report.
More broadly, the report called attention to a general lack of transparency concerning official use of AI in Georgia. Government agencies were reluctant to share information with IDFI researchers about AI functionality, including ethical standards, personal data protection protocols and legislative guidelines governing applications. “Our ability to assess whether AI systems were being used responsibly in Georgia’s public sector was limited,” the report states.
The report is circulating at a time when scientists, technologists and developers are expressing growing concern about AI’s potential to cause harm to humanity. Geoffrey Hinton, often described as the “godfather of AI,” recently resigned from a top position at Google, expressing regret over his role in developing AI and voicing concern that AI could ultimately upend civilization. “It’s hard to see how you can prevent the bad actors from using it for bad things,” Hinton told the New York Times.
To safeguard against the misuse of AI technology by Georgian government agencies, IDFI representatives recommend developing a framework to ensure transparency, as well as implementing comprehensive regulations covering AI use by officials. The report also cites a need to improve the technological literacy of government employees overseeing AI applications.
“To uphold democratic principles in the use of technologies that are transforming governance, stakeholders need a clear understanding of which AI systems are being used by government institutions, and for what purposes,” the report asserts.
Eurasianet receives NED funding.