Post by account_disabled on Mar 10, 2024 5:31:27 GMT 1
Approaches and roadmaps to identify the key risks posed by AI technologies and to set out how they propose to mitigate them. Although these legislative processes are very complex, this should not delay efforts to protect people from the negative consequences, present and future, of AI, and there are crucial elements that we at Amnesty know any proposed regulation must contain. Such regulation must be legally binding and must account for the already documented risks that AI can pose to the people affected by these systems. The commitments and principles of "responsible" development and use of AI – the core of the innovation-friendly regulatory framework currently pursued by the UK – do not provide adequate protection against the risks of emerging technology and must be given legal force. Likewise, any regulation must include broader accountability mechanisms that go beyond sector-backed technical assessments. Although such mechanisms can be a useful link in a regulatory toolkit, especially for testing algorithmic bias, the prohibition of systems fundamentally incompatible with human rights cannot be ignored, regardless of how accurate or technically efficient those systems claim to be.
Need for consensus on the regulation of artificial intelligence

[Image: surveillance cameras on a city street lamp. © picture alliance/dpa/dpa-Zentralbild | Paul Zinken]

The EU process must serve as a learning tool for others, ensuring there are no loopholes that allow public and private sector actors to avoid regulatory obligations; removing all exemptions for the use of AI in national security or law enforcement is essential to achieve this. It is also important that, where future regulation limits or prohibits the use of certain AI systems in one jurisdiction, there are no legal or regulatory gaps that would allow those same systems to be exported to other countries, where they could be used to undermine the human rights of marginalized groups. This remains a glaring gap in the UK, US and EU approaches, as they fail to take into account the global power imbalance these technologies entail, especially their impact on global majority communities whose voices are not represented in these debates.
There have already been documented cases of outsourced workers being exploited in Kenya and Pakistan by companies developing AI tools. As we enter 2024, it is time to ensure not only that AI systems are rights-respecting by design, but also that the people affected by these technologies are meaningfully involved in decisions about how AI should be regulated, and that their experiences are continually brought front and center in these debates. Rather than more talk from legislators, what we need is binding regulation that holds companies and other key players in the sector to account, and that ensures benefits are not achieved at the expense of human rights safeguards. International, regional and national governance efforts must complement and catalyze each other, and global discussions must not come at the expense of strong national regulation or binding regulatory standards: they are not mutually exclusive. This is the level at which accountability is achieved: we must learn from past attempts to regulate technology, which means ensuring that robust mechanisms are introduced to allow victims of rights violations caused by AI to obtain justice.