AI Report on Southern Africa


http://misa.org

Section C: AI and disinformation – A Zimbabwean election perspective
AI and Disinformation
The Zimbabwe Electoral Commission implemented a biometric voting system to improve the
nation’s electoral process. To vote in the country, one must register as a voter with one’s
biometric data. Because of this requirement, many voters were under the impression that the
government could use their data to track them. During the 2023 elections, according to media
reports, the ruling party boasted that it had used drones to record images and count the
number of people who attended rallies held by the opposition.11
One potential risk of AI is the manipulation of individual voting decisions before an
election via targeted disinformation campaigns. Search engines and social networks spread
information in ways that differ from the press: they mostly present third-party material
while producing little of their own. This means that even unprofessionally generated
information can spread rapidly.
Furthermore, algorithms screen and weigh information to suit the interests of users and
the expectations of advertising customers. Users help to evaluate and distribute material
through likes, retweets, shares, and similar features. These platforms’ economic
functionalities are particularly vulnerable to automated manipulation. During times
of crisis and in the run-up to elections, the risk of interested parties disseminating false
information via the Internet grows. Social bots based on AI processes are frequently used to
amplify disinformation operations concurrently with campaigns in the press and on television.
For example, automated social bots spread postings across multiple accounts simultaneously
or disguise themselves by communicating in a human-like manner.
The rapid advancement of AI has brought new challenges. Recent advances in generative
adversarial networks (GANs) and large language models (LLMs) have led to the emergence
of increasingly convincing deepfakes, voice clones, and algorithmic influence operations.
These AI applications have the potential to subvert democratic processes by promoting
third-party agendas.
The Cambridge Analytica scandal vividly illustrated the risks of unregulated algorithmic
processes. In the 2023 Nigerian elections, AI was used to supposedly “prove” that Peter Obi,
the presidential candidate for the Labour Party, and David Oyedepo were conspiring to rig
the election. The voice-cloned messages were shared on many online platforms. Fact-checkers
and AI programmers ultimately determined that the recording was a complete hoax. The case of
Obi and Oyedepo demonstrates how AI is already influencing the way elections are conducted.
11 https://www.herald.co.zw/the-writing-is-on-the-wall-zanu-pf-is-indomitable/

