power in the design, testing, and review phases; submitting systems for independent expert review where
appropriate.
c) Subjecting systems to live, regular tests and audits; interrogating markers of success for bias and self-fulfilling
feedback loops; and ensuring holistic independent reviews of systems in the context of human rights harms
in a live environment.
d) Disclosing known limitations of the system in question — for example, noting measures of confidence,
known failure scenarios, and appropriate use limitations.
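
Items c) and d) above lend themselves to a concrete illustration. The Python sketch below shows one way a live bias audit might interrogate a deployed classifier's outcomes: it computes a simple demographic parity gap over recorded decisions and flags the system for independent review when the gap exceeds a threshold. The group labels, sample data, and review threshold are hypothetical assumptions, not part of the source text.

```python
# Illustrative sketch only: a minimal live bias audit of a classifier's
# outcomes, of the kind item c) calls for. Groups, data, and the 0.2
# threshold are hypothetical.
from collections import defaultdict

def positive_rate_by_group(records):
    """records: iterable of (group, predicted_label) pairs from live decisions."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, label in records:
        totals[group] += 1
        if label == 1:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(records):
    """Largest difference in positive-decision rates between any two groups."""
    rates = positive_rate_by_group(records)
    return max(rates.values()) - min(rates.values())

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap = demographic_parity_gap(decisions)
print(f"demographic parity gap: {gap:.2f}")
if gap > 0.2:  # hypothetical review threshold
    print("flag for independent human rights review")
```

Run regularly against live decision records, a check like this makes "markers of success" inspectable rather than assumed, and gives reviewers a concrete trigger for the holistic independent reviews item c) describes.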

(III). ENSURING TRANSPARENCY AND ACCOUNTABILITY
States must conduct a realistic assessment of the capabilities and limitations of AI, and must ensure and require accountability and maximum transparency around public sector use of machine learning systems.
States should:
(a) Publicly disclose where machine learning systems are used in the public sphere; provide information that explains, in clear and accessible terms, how automated and machine learning decisions are reached; and document the actions taken to identify and mitigate impacts that harm human rights.
(b) Enable independent analysis and oversight by using auditable systems.
(c) Avoid using 'black box' systems: use only systems that meet meaningful standards of accountability and transparency, and refrain from deploying black box systems in high-risk contexts.
(d) Ensure that machine learning-supported decisions meet internationally accepted standards of due process.
Any state authority procuring machine learning technologies from the private sector should maintain oversight and control over the use of the system, and should require the third party to carry out human rights due diligence to identify, prevent, and mitigate discrimination and other biases.
Transparency requires greater engagement with digital rights organisations and other relevant civil society sectors. Given the impact of internet platforms such as Facebook on the public sphere, these platforms, social media in particular, should adhere to open communication, follow open and transparent decision-making processes, and publicise their findings in contested cases.
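
As a minimal sketch of what an "auditable system" under items (a) to (c) might record, the following Python keeps an append-only, hash-chained log of individual decisions, including the model version and a disclosed confidence score. The record fields and the chaining scheme are illustrative assumptions, not requirements drawn from the text.

```python
# Illustrative sketch only: one way a public body might keep machine
# learning decisions auditable. Field names and the hash-chaining scheme
# are assumptions for illustration.
import hashlib
import json
import time

class DecisionLog:
    """Append-only log; each entry commits to the hash of the previous
    entry, so later tampering is detectable by an independent auditor."""
    def __init__(self):
        self.entries = []
        self.prev_hash = "0" * 64

    def record(self, model_version, inputs, decision, confidence):
        entry = {
            "time": time.time(),
            "model_version": model_version,
            "inputs": inputs,          # or a hash, if inputs are sensitive
            "decision": decision,
            "confidence": confidence,  # disclosed measure of confidence
            "prev": self.prev_hash,
        }
        self.prev_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append(entry)

log = DecisionLog()
log.record("risk-model-v2", {"age_band": "30-39"}, "approve", 0.87)
```

A log of this shape supports both halves of the obligation: disclosure (each decision carries an explanation of what model produced it and with what confidence) and independent oversight (the chain lets an external auditor verify the record has not been rewritten).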

(IV). ENFORCING OVERSIGHT
States must adopt oversight mechanisms that identify, examine, resolve, and test for biases in datasets and in the machine learning model throughout the design and development phases. Implementing such oversight mechanisms helps ensure that the datasets used are not deficient, outdated, or insufficient.
States should:
(a) Proactively adopt diverse hiring practices and engage in consultations that bring in diverse perspectives, so that those involved in designing, implementing, and reviewing machine learning systems represent a range of backgrounds and identities.
(b) Ensure that public bodies carry out training in human rights and data analysis for officials involved in the procurement, development, use, and review of machine learning tools.
(c) Create mechanisms for independent oversight, including by judicial authorities.
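
The oversight duties above, particularly the concern that datasets may be deficient, outdated, or insufficient, could be supported by automated checks of the kind sketched below. The field names, reference population shares, and tolerances are hypothetical assumptions.

```python
# Illustrative sketch only: pre-training checks on a dataset's
# representativeness and recency. Expected shares and limits are
# hypothetical, not drawn from the source text.
from collections import Counter
from datetime import datetime, timedelta

EXPECTED_SHARES = {"A": 0.5, "B": 0.5}   # assumed reference population
MAX_AGE = timedelta(days=5 * 365)        # assumed staleness limit

def audit_dataset(rows, now):
    """Return a list of human-readable issues found in the dataset."""
    counts = Counter(r["group"] for r in rows)
    total = sum(counts.values())
    issues = []
    for group, expected in EXPECTED_SHARES.items():
        share = counts.get(group, 0) / total
        if abs(share - expected) > 0.1:  # hypothetical tolerance
            issues.append(f"group {group}: share {share:.2f}, expected {expected:.2f}")
    stale = sum(1 for r in rows if now - r["collected"] > MAX_AGE)
    if stale / total > 0.25:             # hypothetical staleness tolerance
        issues.append(f"{stale} of {total} records exceed the staleness limit")
    return issues

rows = [
    {"group": "A", "collected": datetime(2015, 1, 1)},
    {"group": "A", "collected": datetime(2023, 6, 1)},
    {"group": "A", "collected": datetime(2023, 6, 1)},
    {"group": "B", "collected": datetime(2023, 6, 1)},
]
for issue in audit_dataset(rows, datetime(2024, 1, 1)):
    print("dataset issue:", issue)
```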
