Automated components can also be incorporated. A chatbot that clarifies individual statements with examples or answers user questions could guide users through the programme. Embedding such features in the way political information is gathered is one way AI systems could support political education. It is important to ensure that these assistance systems are transparent, safe, trustworthy, and fair for all users. Such AI systems can greatly enhance the accessibility and accuracy of political information.
The ruling Zanu PF party in Zimbabwe has been accused of manipulating electoral results in previous elections. In response, Team Pachedu, a group of data analysts, created an app called Mandla, which aimed to increase transparency and accountability in the electoral process. Unfortunately, the app did not work as anticipated.

AI Tools for Electoral Content Moderation
AI systems are increasingly utilised in elections for risk-management purposes, including “electoral content moderation”. Content moderation is the process by which social media platforms curate user-generated content, with the goal of removing deepfakes, disinformation, and hate speech. Furthermore, social media companies are gradually incorporating AI systems to detect suspicious patterns in content ahead of elections and to identify election-related information. Such content is currently subject to platform-specific self-regulatory rules on all major platforms. AI systems can contribute considerably to detecting false news and help citizens make informed decisions.
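As a simplified illustration of the kind of automated flagging described above, the following sketch scores election-related posts against a small set of phrases. All phrases, weights, and the threshold are invented for demonstration; real platform systems rely on trained models and far richer signals, not keyword lists.

```python
# Toy rule-based content flagging: one simplified stand-in for how a
# moderation pipeline might score election-related posts for review.
# Phrases and weights below are hypothetical examples, not real policy.
FLAGGED_TERMS = {
    "polls are rigged": 0.6,  # hypothetical disinformation phrase
    "do not vote": 0.5,
    "fake ballots": 0.4,
}

def moderation_score(post: str) -> float:
    """Return a score in [0, 1]; higher means more likely to need review."""
    text = post.lower()
    score = sum(w for phrase, w in FLAGGED_TERMS.items() if phrase in text)
    return min(score, 1.0)

def needs_review(post: str, threshold: float = 0.5) -> bool:
    """Flag the post for human review when the score reaches the threshold."""
    return moderation_score(post) >= threshold

print(needs_review("Reminder: polling stations open at 7 am"))  # False
print(needs_review("The polls are rigged, stay home!"))         # True
```

Even this toy version shows why such systems assist rather than replace human judgment: the score only routes content to review, it does not decide removal.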
These systems are expected to develop further, particularly in the detection of inaccurate or even tendentious reporting. It is therefore reasonable to expect that more technologies for detecting false news and deepfakes will become available in the future. Such technologies will play a crucial role in enabling individuals to make more informed and critical judgments about the information they consume.
Existing algorithmic content-screening approaches have been criticised as opaque, unaccountable, and difficult to understand. For example, the decision to remove some items but keep others is often neither transparent nor comprehensible. AI-based upload filters are nonetheless an important tool for managing election-related material. Yet upload filters, such as those originally intended to detect pornography by default, have drawn widespread criticism since their inception. The reasons include the danger of collateral damage from incorrect filtering and of misaligned incentives that can lead to censorship.
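The "collateral damage" risk can be made concrete with a toy sketch: a naive keyword rule intended to block a disinformation phrase also blocks legitimate reporting that quotes the same phrase. The blocklist entry is invented for illustration and does not reflect any real platform's policy.

```python
# Toy upload filter showing a false positive ("collateral damage"):
# the hypothetical blocked phrase also appears in legitimate fact-checking.
BLOCKLIST = ["stolen election"]  # invented example phrase

def upload_allowed(post: str) -> bool:
    """Return True if the upload passes the naive keyword filter."""
    text = post.lower()
    return not any(term in text for term in BLOCKLIST)

disinfo = "They ran a stolen election, ignore the results!"
news = "Fact check: claims of a 'stolen election' are unfounded."

print(upload_allowed(disinfo))  # False - blocked, as intended
print(upload_allowed(news))     # False - also blocked: a false positive
```

The fact-checking article is filtered alongside the disinformation it debunks, which is precisely the kind of erroneous incentive and over-blocking the criticism targets.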
If platform operators do not yet meet their transparency obligations, academia or civil society organisations can act independently and run their own experiments on the algorithms’ modes of operation to reconstruct the criteria to which content management is subject (reverse engineering). However, this strategy, which is reserved for experts, is typically too laborious to provide transparency during an ongoing election campaign, offering only an a posteriori explanation of automated content selection. In this regard, it is critical that science- and research-based non-governmental organisations be granted access to social media platforms (as requested by the current draft of the European Commission’s Digital Services Act) in order to study the effects and functionality of AI on digital platforms.
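The reverse-engineering approach described above can be sketched as a black-box audit: the auditor submits pairs of test posts that differ in a single attribute and observes which variants the opaque filter treats differently. The "hidden" filter below is an invented stand-in for a real platform's logic, which the auditor would not be able to inspect directly.

```python
# Minimal sketch of black-box auditing ("reverse engineering") of an
# opaque moderation filter. The filter's rule is hidden from the auditor,
# who only observes kept/removed decisions on controlled probe posts.

def opaque_filter(post: str) -> str:
    """Stand-in for hidden platform logic (unknown to the auditor)."""
    return "removed" if "election fraud" in post.lower() else "kept"

def audit(probe_pairs):
    """Submit paired probes; report variants whose decision diverges."""
    findings = []
    for neutral, variant in probe_pairs:
        if opaque_filter(neutral) != opaque_filter(variant):
            findings.append(variant)
    return findings

pairs = [
    ("Turnout was high today.", "Election fraud was high today."),
    ("Results arrive tonight.", "Results arrive tonight, folks."),
]
print(audit(pairs))  # only the 'election fraud' variant diverges
```

Only the variant containing the trigger phrase diverges from its neutral twin, letting the auditor infer, a posteriori, one criterion the filter applies, which illustrates both the value and the slowness of this expert-only method.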
