States shall uphold the right to anonymity for online users, as anonymity gives users the confidence to report incidents without fear of double victimisation.
States shall, therefore, ensure that encryption technologies are legally permitted, and shall neither take measures to undermine them nor adopt real-name registration policies.

This part outlines steps that states and the private sector may adopt when implementing digital tools,
particularly AI and machine learning tools, to protect
users’ fundamental rights and safety.

In the development, deployment, and implementation
of AI, Southern African governments must conduct a
thorough and comprehensive risk assessment, systematically
evaluating the potential risks associated with AI
systems by doing the following:
Identify and analyse the various risks that AI may pose, such as biases, security vulnerabilities, discriminatory outcomes,
privacy breaches, and adverse societal impacts.
Evaluate the likelihood and potential impact of
identified risks, considering the context of the AI application and its intended use.
Prioritise risks based on severity, potential
harm, and likelihood of occurrence to effectively focus
regulatory efforts and resources.
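The prioritisation step described above, weighing each identified risk by severity and likelihood, can be sketched as a simple scoring exercise. The risk names and scores below are hypothetical illustrations, not a prescribed methodology:

```python
# Illustrative sketch of prioritising AI risks by severity x likelihood.
# Risk names and scores are hypothetical examples only.

def priority(severity: int, likelihood: int) -> int:
    """Score a risk as severity multiplied by likelihood (each on a 1-5 scale)."""
    return severity * likelihood

risks = [
    {"name": "biased training data", "severity": 4, "likelihood": 4},
    {"name": "privacy breach", "severity": 5, "likelihood": 2},
    {"name": "security vulnerability", "severity": 3, "likelihood": 3},
]

# Rank risks so that regulatory effort and resources focus on the
# highest-priority risks first.
ranked = sorted(
    risks,
    key=lambda r: priority(r["severity"], r["likelihood"]),
    reverse=True,
)

for r in ranked:
    print(r["name"], priority(r["severity"], r["likelihood"]))
```

A real assessment would ground the scores in evidence about the AI application and its context of use, rather than fixed numbers.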
Insofar as the impact of AI is concerned, states must
evaluate the broader effects and consequences of AI on
individuals, communities, economies, and societies at
large by doing the following:
Identify and categorise the impact areas, such
as economic, social, ethical, legal, environmental, and
political, that AI technologies may influence.

Evaluate both positive and negative impacts of
AI, considering advancements, job displacement, privacy enhancement, fairness, and equity, among other factors.

Analyse AI’s immediate and potential long-term
effects on societal structures, employment patterns, education systems, and public services.

Independent Audits
African governments need to support a collaborative
ecosystem of technical oversight and governance that
includes independent parties, not just companies and
governments, to bolster the trustworthiness of AI innovation.
The technical oversight should include a data protection impact assessment (DPIA) and a fundamental
rights impact assessment (FRIA). States must take the
following steps to mitigate the harm of human rights violations arising from machine learning in public
sector systems:
Identify risks
Any state deploying machine learning technologies
must thoroughly investigate systems for their potential
to pose a risk to human rights before development or
acquisition, where possible before use, and on an ongoing basis throughout the lifecycle and contexts of the
system. This investigation may include:
a) Conducting regular impact assessments before public procurement, during development, at regular milestones, and throughout the deployment and use of machine learning systems to identify potential sources of
discriminatory or other rights-harming outcomes — for
example, in algorithmic model design, in oversight processes, or data processing.
b) Taking appropriate measures to mitigate risks identified through impact assessments — for example,
mitigating the risk of misuse in amplifying tensions,
undermining privacy, and controlling information; conducting dynamic testing methods and pre-release trials; and ensuring that potentially affected groups and field
experts are included as actors with decision-making power.
