Civil society crucial to combat polarisation and inequality by UN Office for Human Rights (OHCHR)

July 2024

Torture and other cruel, inhuman or degrading treatment or punishment. (UN General Assembly)

In this report, the UN Special Rapporteur on torture and other cruel, inhuman or degrading treatment or punishment, Alice Jill Edwards, presents her annual overview of trends and developments, as well as a thematic study focused on good practices and challenges in investigating, prosecuting and preventing wartime sexual torture, and providing rehabilitation for victims and survivors. The Special Rapporteur considers that the torture framework has strong advantages when addressing sexual aggression in wartime and other similar security situations, especially for survivors but also for investigators and prosecutors, and sets out a call for action.

The year 2024 marks the fortieth anniversary of the adoption of the Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment. This treaty, representing the most effective international instrument to reduce this brutal practice, is approaching universal ratification, with 174 States parties.

Over the past year there has been a devastating rise in torture and other outrages on human dignity in armed conflict. The Special Rapporteur has also received communications and/or intervened on torture cases relating to conflict in, inter alia, Afghanistan, Azerbaijan, Burundi, the Central African Republic, Chad, Colombia, Côte d’Ivoire, Ethiopia, Guinea, India, Iraq, Kenya, Libya, Mexico, Myanmar, Nepal, Nicaragua, Nigeria, Rwanda, Sri Lanka, the Syrian Arab Republic, Uganda and Yemen.

The general trend towards authoritarianism in this year of elections is worrying. Recent protests have been driven by a desire for political change, by the continuing cost-of-living crisis and by reaction to global events. In many instances peaceful protests have been policed with excessive force or violence.
Over the past year there have been protests that resulted in violence in, inter alia, Angola, Argentina, Bangladesh, Belarus, Comoros, the Congo, Georgia, Greece, Guatemala, Haiti, Iran (Islamic Republic of), Israel, Jordan, Kenya, Kosovo, Madagascar, Mexico, Mozambique, Nepal, Pakistan, Papua New Guinea, Poland, Senegal, Serbia, Somalia, Spain, Sri Lanka, Türkiye and the United States of America. The Special Rapporteur welcomes the Model Protocol for Law Enforcement Officials to Promote and Protect Human Rights in the Context of Peaceful Protests, authored by the Special Rapporteur on the rights to freedom of peaceful assembly and of association (A/HRC/55/60).

States are reminded of the call by the Special Rapporteur in her previous report (A/78/324) for a global agreement to regulate the trade in torture-capable weapons, tools and equipment widely used by law enforcement and other public authorities. Renewed diplomatic vigour is needed.

Torture and intimidation to quash dissent and political opposition continue. The repression of human rights defenders is a significant trend globally and the Special Rapporteur has received information on cases in, inter alia, Azerbaijan, Cambodia, China, Egypt, Eritrea, the Lao People’s Democratic Republic, Palestine, Myanmar, the Russian Federation, the Syrian Arab Republic, Thailand, the United Arab Emirates, Viet Nam and Zimbabwe. These countries represent a fraction of the States in which this type of repression takes place.

As noted in the recent report by the Special Rapporteur on global prison conditions, far too many people are imprisoned, for too long, in severely overcrowded facilities in all regions.
http://reliefweb.int/report/world/torture-and-other-cruel-inhuman-or-degrading-treatment-or-punishment-note-secretary-general-a79181-enarruzh

July 2024

Report of the Special Rapporteur on the situation of human rights defenders

In the present report, the Special Rapporteur on the situation of human rights defenders, Mary Lawlor, highlights the contributions made by human rights defenders to achieving the Sustainable Development Goals. In the report, she demonstrates that, across every one of the 17 Goals, human rights defenders are placing human rights at the core of sustainable development and, in doing so, are assisting States in their responsibility to leave no one behind. The Special Rapporteur highlights that this work is being made more difficult by increasing restrictions on the right to defend rights.

http://reliefweb.int/report/world/report-special-rapporteur-situation-human-rights-defenders-a79123-enarruzh

June 2024

Concerted action urgently needed to save fundamental freedoms under attack: Special Rapporteur

The rights to freedom of peaceful assembly and of association are seriously threatened today, and urgent action is needed to push back and preserve them, a UN Special Rapporteur said. “We are witnessing widespread, systematic and intensive attack against these rights and civic space broadly, as authoritarianism, populism and anti-rights narratives are increasing,” said Gina Romero, the Special Rapporteur on the rights to peaceful assembly and association.

Romero was presenting the last thematic report prepared by her predecessor, Clement Nyaletsossi Voule, at the 56th session of the Human Rights Council. The report outlines how governments have instrumentalised the adoption and/or implementation of laws, including so-called “foreign agents” laws, to suppress the legitimate exercise of the rights to freedom of peaceful assembly and of association.
This has been done in combination with intense stigmatising campaigns to silence dissent, civil society, unions and civic activism, including citizens’ organisation and participation in peaceful protests.

“As people around the world have been increasingly exercising these rights to protect their freedoms, to resist autocracy, repression and discrimination, to build peace and democratic and responsive governance institutions, to advocate for climate justice, and express solidarity with those suffering, we witness how governments have been finding innovative ways to silence them and crush these rights,” Romero said.

The spread of armed conflicts, the severe environmental crisis, electoral processes undermined and marred by populism and disinformation, and emerging and unregulated digital technologies exacerbate the threat to the enjoyment of these rights.

“This report is a wake-up call for collective action to protect democracy and our collective values, and the enjoyment of all human rights and freedoms. Enabling civic space, and hearing and protecting activists, is fundamental to foster civil society contributions for tackling today’s pressing issues,” Romero said. “I join the report’s call for a global renewed commitment to these rights.
Through the establishment of this mandate 14 years ago, the Human Rights Council reiterated its commitment to the protection of these fundamental freedoms, and it is urgent today that the Council reinforce the mandate’s capacity to continue effectively protecting these rights, especially in the emerging crises.”

http://www.ohchr.org/en/press-releases/2024/06/concerted-action-urgently-needed-save-fundamental-freedoms-under-attack
http://www.ohchr.org/en/press-releases/2024/03/un-expert-launches-new-tools-law-enforcement-foster-peaceful-protest

Civil society crucial to combat polarisation and inequality, says Independent Expert

Civil society organisations are the engine of international solidarity and urgently need increased protection and support, a UN Special Procedures mandate holder said today. “As we confront negative global trends of polarisation, and the highest levels of inequality around the world at present, the need for civil society actions is more urgent than ever,” said Cecilia Bailliet, the Independent Expert on human rights and international solidarity, in a report to the Human Rights Council.

Civil society actions include intersectoral solidarity approaches combining issues such as protection of the environment, access to fair housing, and women’s rights. “These international solidarity coalitions challenge injustice and call for transformative changes within political and economic structures, seeking to empower the agency of vulnerable individuals and groups,” Bailliet said.
She criticised “the expansion of the use of censorship, disinformation, harassment, blacklisting, doxing, deportation, denial of entry or exit visas, red-tagging, criminal prosecution (including as foreign agents), denial of access to education, surveillance, asset freezing, defunding, overly broad restrictive registration and reporting of CSOs, and blocking of access to digital platforms to block the exchange of international solidarity ideas under the guise of security”.

“I believe that States should choose to pursue best practices of international solidarity policies, which would include showing clemency to opposing voices within our societies. Social solidarity governmental institutions should protect, rather than disempower, civil society organisations,” Bailliet said. She called for the creation of a UN Digital International Solidarity Platform to exchange solidarity ideas and the adoption of the Revised Draft Declaration on International Solidarity.

http://www.ohchr.org/en/node/109103

Academic freedom just as crucial as a free press or independent judiciary, says Special Rapporteur

In every region of the world, people exercising their academic freedom face repression, whether through direct and violent or more subtle methods, an independent expert warned today. In her report to the Human Rights Council, the Special Rapporteur on the right to education, Farida Shaheed, said restrictions aimed at controlling public opinion undermine free thinking and limit academic and scientific debate. “We must take this seriously as these attacks threaten both our democracies and our capacities to collectively respond to the crises humanity currently faces,” Shaheed said.
“Academic freedom must be understood and respected for its role for our societies, which is as crucial as a free press or an independent judiciary.”

The Special Rapporteur said academic freedom carries special duties to seek truth and impart information according to ethical and professional standards, and to respond to contemporary problems and the needs of all members of society. “Therefore, we must not politicise its exercise,” she said. “A multitude of actors are involved in the restrictions, from Governments to religious or political groups or figures, paramilitary and armed groups, terrorist groups, narco-traffickers, corporate entities, philanthropists, influencers, but also sometimes the educational institutions themselves as well as school boards, staff and students, and parents’ associations.”

Shaheed said that institutional autonomy is crucial for ensuring academic freedom; however, academic, research and teaching institutions must also respect it. “Institutions must respect the freedom of expression on campus according to international standards and carry a specific responsibility to promote debate around controversies that may arise on campus following academic standards.”

Referring to student protests over the Gaza crisis that occurred in a number of countries, Shaheed said she remained deeply troubled by the violent crackdown on peaceful demonstrators, arrests, detentions, police violence, surveillance and disciplinary measures and sanctions against members of the educational community exercising their right to peaceful assembly and freedom of expression.

Shaheed called for endorsement and implementation of the Principles for Implementing the Right to Academic Freedom, drafted by a working group of United Nations experts, scholars and civil society actors, based on and reflecting the status of international law and practice. “I believe implementing these Principles would allow a better state of academic freedom worldwide,” she said.
http://www.ohchr.org/en/press-releases/2024/06/academic-freedom-just-crucial-free-press-or-independent-judiciary-says

Justice is not for sale, says Special Rapporteur

A UN expert warned today that, in a climate of increasing economic inequality, powerful economic actors in many places use their financial clout to infringe on the independence of the judiciary. “These improper pressures exerted by economic actors include attempts to intervene in processes to determine who becomes a judge and lobbying sitting judges to make them more receptive to their aims”, the Special Rapporteur on the independence of judges and lawyers, Margaret Satterthwaite, said in a report to the UN General Assembly.

“Wealthy individuals and corporations also weaponise justice systems to achieve their goals, bringing strategic lawsuits against public participation (SLAPPs) that masquerade as a defence of private interests, but in fact seek to suppress legitimate criticism, oversight or resistance to their activities,” she said.

Satterthwaite set out an agenda for future investigation and encouraged all States to examine, analyse and close avenues for improper economic influence that have been overlooked. “Ethics and integrity systems should be strengthened, loopholes closed, and judges, prosecutors and lawyers do their part to address these harms,” she said.
“If not, I fear that while some voices are privileged by justice systems, others will be shut out or silenced, with devastating impacts for human rights.”

http://www.ohchr.org/en/press-releases/2024/10/justice-not-sale-says-special-rapporteur
http://taxjustice.net/reports/submission-to-special-rapporteur-on-the-independence-of-judges-and-lawyers-on-undue-influence-of-economic-actors-on-judicial-systems/

The independence of judicial systems must be protected in the face of democratic decline and rising authoritarianism: UN expert

A UN expert warned today that the role of independent justice systems in protecting participatory governance has come under attack from political actors who seek to limit or control judicial systems, including through ad hominem attacks by political leaders and the criminalisation of prosecutors, judges and lawyers.

In her second report to the Human Rights Council, the UN Special Rapporteur on the independence of judges and lawyers, Margaret Satterthwaite, set out a taxonomy of Government efforts to control judicial systems – from curbing bar associations and manipulating administrative functions to capturing courts and criminalising or attacking justice operators. The report also explores the vital role played by the legal professionals who comprise the justice system – judges, prosecutors and lawyers, as well as community justice workers – in safeguarding democracy, in the 2024 context in which nearly half the world’s population will vote.

“Justice systems promote and protect a fundamental value that undergirds participatory governance: the rule of law,” the Special Rapporteur said. “This principle insists that all people, even state actors, are subject to the same laws, applied fairly and consistently.

“I call on Member States to do more to revitalise public trust in justice institutions and to defend justice actors and their indispensable role in safeguarding democracy,” she said.
http://www.ohchr.org/en/press-releases/2024/06/independence-judicial-systems-must-be-protected-face-democratic-decline-and
http://ishr.ch/latest-updates/in-letter-to-china-un-expert-condemns-targeting-of-human-rights-lawyers/
http://www.ohchr.org/en/hr-bodies/hrc/regular-sessions/session56/list-reports
Over 300 million full-time jobs to be lost for artificial intelligence business profits by 2030 by HRW, ICRC, The Elders, MIT, agencies

June 2025

Procurement and deployment of artificial intelligence must be aligned with human rights: UN experts. (UN Working Group on Business and Human Rights)

UN experts today called on States and businesses to ensure that the procurement and deployment of artificial intelligence (AI) systems are aligned with the UN Guiding Principles on Business and Human Rights. “AI systems are transforming our societies, but without proper safeguards, they risk undermining human rights,” said Lyra Jakuleviciene, Chairperson of the UN Working Group on Business and Human Rights, presenting a report to the 59th session of the Human Rights Council.

In the report, the Working Group noted that States are increasingly shifting from voluntary guidelines to binding legislation on AI and human rights, but the regulatory landscape remains fragmented and lacks universal standards, agreed definitions and integration of the perspectives of the Global South. “The exceptions are broad and the involvement of civil society is limited,” they said.

The experts outlined how AI systems – when procured or deployed without adequate human rights due diligence in line with the Guiding Principles – can lead to adverse impacts on all human rights, including discrimination, privacy violations and exclusion, particularly for at-risk groups including women, children and minorities. They stressed that both public and private actors must conduct robust human rights impact assessments and ensure transparency, accountability, oversight and access to remedy. “States must act as responsible regulators, procurers, and deployers of AI,” Jakuleviciene said.
“They must set clear red lines on AI systems that are fundamentally incompatible with human rights, such as those used for remote real-time facial recognition, mass surveillance or predictive policing.”

The Working Group stressed the responsibility of businesses to respect human rights across the AI lifecycle, including when using third-party AI systems. “Businesses cannot outsource their human rights responsibilities,” the experts said. “Businesses must ensure meaningful stakeholder engagement throughout the procurement and deployment processes, especially with those most at risk of harm.”

“We need urgent global cooperation to ensure that AI systems are procured and deployed in ways that uphold human rights, and ensure access to remedy for any AI-related human rights abuses,” Jakuleviciene said. In the report, the Working Group outlined emerging practices by States and businesses, and made recommendations to States, businesses and other actors on how to incorporate the Guiding Principles on business and human rights into AI procurement and deployment.

http://www.ohchr.org/en/press-releases/2025/06/procurement-and-deployment-artificial-intelligence-must-be-aligned-human
http://www.coe.int/en/web/commissioner/-/regulation-is-crucial-for-responsible-ai

Elon Musk’s artificial intelligence firm xAI has been forced to delete posts from its chatbot Grok referring to itself as MechaHitler. (NPR, agencies)

Elon Musk’s artificial intelligence firm has been forced to delete “inappropriate” pro-Hitler and antisemitic Grok posts. On Tuesday, Grok suggested Hitler would be best-placed to combat anti-white hatred, saying he would “spot the pattern and handle it decisively”. Grok also referred to Hitler positively as “history’s mustache man,” and commented that people with Jewish surnames were responsible for extreme anti-white activism, among other posts.
Poland announced it was reporting xAI to the European Commission after Grok made offensive comments about Polish politicians, including Prime Minister Donald Tusk. “I have the impression that we are entering a higher level of hate speech, which is driven by algorithms,” Poland’s digitisation minister Krzysztof Gawkowski told RMF FM radio.

Griffith University technology and crime lecturer Dr Ausma Bernot said that Grok’s antisemitic responses were “concerning but perhaps not unexpected”. “We know that Grok uses a lot of data from X, which has seen an upsurge in antisemitic, Islamophobic content,” she said. Concerns over political bias, hate speech and factual inaccuracy in AI chatbots have mounted since the launch of OpenAI’s ChatGPT in 2022.

Grok’s behavior appeared to stem from an update that instructed the chatbot to “not shy away from making claims which are politically incorrect, as long as they are well substantiated,” among other things. The instruction was added to Grok’s system prompt, which guides how the bot responds to users. xAI removed the directive on Tuesday.

Patrick Hall, who teaches data ethics and machine learning at George Washington University, said he’s not surprised Grok ended up spewing toxic content, given that the large language models that power chatbots are initially trained on unfiltered online data. “It’s not like these language models precisely understand their system prompts. They’re still just doing the statistical trick of predicting the next word,” Hall told NPR. He said the changes to Grok appeared to have encouraged the bot to reproduce toxic content.

It’s not the first time Grok has sparked outrage. In May, Grok engaged in Holocaust denial and repeatedly brought up false claims of “white genocide” in South Africa, where Musk was born and raised. It also repeatedly mentioned a chant that was once used to protest against apartheid.
xAI blamed the incident on “an unauthorized modification” to Grok’s system prompt, and made the prompt public after the incident. Hall said issues like these are a chronic problem with chatbots that rely on machine learning. In 2016, Microsoft released an AI chatbot named Tay on Twitter. Less than 24 hours after its release, Twitter users baited Tay into making racist and antisemitic statements, including praising Hitler. Microsoft took the chatbot down and apologized.

Tay, Grok and other AI chatbots with live access to the internet appear to incorporate real-time information, which Hall said carries more risk. “Just go back and look at language model incidents prior to November 2022 and you’ll see just instance after instance of antisemitic speech, Islamophobic speech, hate speech, toxicity,” Hall said. More recently, ChatGPT maker OpenAI has started employing large numbers of often low-paid workers in the global south to remove toxic content from training data.

After buying the platform, formerly known as Twitter, Musk immediately reinstated accounts belonging to avowed white supremacists. Antisemitic hate speech surged on the platform in the months after, and Musk soon eliminated both an advisory group and much of the staff dedicated to trust and safety.

http://www.npr.org/2025/07/09/nx-s1-5462609/grok-elon-musk-antisemitic-racist-content
http://www.business-humanrights.org/en/blog/why-regulation-is-essential-to-tame-techs-rush-for-ai/
http://peoplesaiaction.com/about
http://www.citizen.org/article/deleting-enforcement-trump-big-tech-billion-report
http://www.reuters.com/investigates/special-report/meta-ai-chatbot-guidelines/

Mar. 2025

Neurotechnologies can allow access to what people think, and can be used to manipulate people’s brains, leading to violations of privacy in one’s own thoughts and decision-making - UN Special Rapporteur on the right to privacy

Neurotechnologies have the potential to decode and alter our perception, behaviour, emotion, cognition and memory – arguably, the very core of what it means to be human. This has major human rights and ethical implications, as these devices could be used to invade people’s mental privacy and modify their identity and sense of agency, for example by manipulating people’s beliefs, motivations and desires. Regulation of neurotechnologies is vital to ensure an ethical approach and protect fundamental human rights in the digital age, a UN expert said today.

In a report to the 58th session of the Human Rights Council, Ana Brian Nougrères, the UN Special Rapporteur on the right to privacy, set out the foundations and principles for the regulation of neurotechnologies and the processing of neurodata from the perspective of the right to privacy. “Neurotechnologies are tools or devices that record or alter brain activity and generate neurodata that not only allow us to identify a person, but also provide an unprecedented depth of understanding of their individuality,” the expert said.

The report outlined key definitions, fundamental principles and guidelines for the protection of human dignity, the protection of mental privacy, the recognition of neurodata as highly sensitive personal data, and the requirement of informed consent for its processing. “Neurodata is highly sensitive personal data, as it is directly related to cognitive state and reflects unique personal experiences and emotions,” the Special Rapporteur said.
As such, neurodata should be subject to the precautionary principle, enhanced accountability and special measures to ensure security, confidentiality and limited circulation, to prevent unauthorised access or misuse, as well as manipulation, due to its potential to negatively affect an individual’s mental integrity and thought processes. “While I welcome the potential mental health benefits of neurotechnologies, I am concerned that neurodata will not only allow access to what people think, but also manipulate people’s brains, leading to a violation of privacy in one’s own thoughts and decision-making,” Brian Nougrères said.

The report makes four key recommendations to States: developing a specific regulatory framework for neurotechnologies and the processing of neurodata to ensure responsible use; incorporating established principles of the right to privacy into national legal frameworks; promoting ethical practices in the use of neurotechnologies to address the risks of technological innovation; and promoting education about neurotechnologies and neurodata to ensure informed consent.

“Integrating ethical values into the design and use of neurotechnologies is essential to ensure non-discriminatory implementation and effective protection of individuals’ right to privacy when processing their neurodata,” the expert said.
http://www.ohchr.org/en/press-releases/2025/03/un-expert-calls-regulation-neurotechnologies-protect-right-privacy
http://docs.un.org/en/A/HRC/58/58
http://www.ohchr.org/en/hr-bodies/hrc/advisory-committee/neurotechnologies-and-human-rights
http://docs.un.org/en/A/HRC/57/61
http://www.ohchr.org/sites/default/files/documents/hrbodies/hrcouncil/advisorycommittee/neurotechnology/03-ngos/ac-submission-cso-neurorightsfoundation.pdf
http://neurorightsfoundation.org/
http://washingtonlawyer.dcbar.org/mayjune2025/index.php#/p/32
http://www.ohchr.org/sites/default/files/documents/hrbodies/hrcouncil/advisorycommittee/neurotechnology/03-ngos/ac-submission-cso-oneill-riosrivers.pdf

Oct. 2024

Responsible artificial intelligence solidarity criteria: transparency, fairness, non-discrimination and inclusion - report from Cecilia Bailliet, the Independent Expert on human rights and international solidarity

“Those living in poverty, and in situations of vulnerability, are particularly affected by the expansion of AI surveillance, which is being used by States as a tool of ‘over-policing’ marginalised communities,” said Cecilia Bailliet, the Independent Expert on human rights and international solidarity, in a report to the UN General Assembly. “I have grave concerns about when the use of AI violates the right to privacy due to facial recognition; discrimination against women, persons with disabilities and minorities, among others, in hiring; or the denial of self-realisation of life goals (or a life’s project), such as denial of requests for housing or educational loans.”

“It is imperative to identify intersectoral vulnerabilities to AI discrimination, including race, ethnicity, religion, gender, location, nationality and socioeconomic status,” Bailliet said.
“The concentration of power among the technology companies and AI developers is concerning, and poses a significant risk of worsening the digital divide between and within countries and among different sectors of society,” the expert said.

“Despite the risks, there is also an opportunity for AI to be used as a unifying force, by creating preventive and reactive solidarity mechanisms to address disinformation and misinformation campaigns that result in societal violence or in the harassment, surveillance, discrimination or disproportional censorship of structurally silenced communities,” Bailliet said.

She called upon States, corporations and civil society to promote a global multistakeholder AI international solidarity governance model to promote the full inclusion of vulnerable groups and individuals in data processing and decision-making across the life cycle of AI.

http://www.ohchr.org/en/press-releases/2024/10/ai-international-solidarity-approach-urgently-needed-unite-humanity-says
http://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years
http://apnews.com/article/israel-palestinians-ai-technology-737bc17af7b03e98c29cec4e15d0f108
http://www.hks.harvard.edu/centers/carr/our-work/carr-commentary/notes-new-frontier-power
http://www.hks.harvard.edu/centers/carr-ryan/our-work/carr-ryan-commentary/how-chinese-ai-models-impact-labor-rights-and
http://knightcolumbia.org/content/knight-institute-and-smu-law-clinic-seek-immediate-release-of-records-related-to-texas-schools-use-of-surveillance-technology
http://www.reuters.com/technology/artificial-intelligence/musks-doge-using-ai-snoop-us-federal-workers-sources-say-2025-04-08/
http://www.theguardian.com/us-news/ng-interactive/2025/apr/10/elon-musk-doge-spying

Aug. 2024

Killer Robots: New UN Report urges Treaty by 2026
(Human Rights Watch, agencies)

Governments should heed United Nations Secretary-General Antonio Guterres’ call to open negotiations on a new international treaty on lethal autonomous weapons systems, Human Rights Watch said today. These “killer robots” select and attack targets based on sensor processing rather than human inputs, a dangerous development for humanity.

In a report released on August 6, 2024, the Secretary-General reiterated his call for states to conclude by 2026 a new international treaty “to prohibit weapons systems that function without human control or oversight and that cannot be used in compliance with international humanitarian law.” This treaty should regulate all other types of autonomous weapons systems, the Secretary-General said.

“The UN Secretary-General emphasizes the enormous detrimental effects removing human control over weapons systems would have on humanity,” said Mary Wareham at Human Rights Watch. “The already broad international support for tackling this concern should spur governments to start negotiations without delay.”

Technological advances are driving the development of weapons systems that operate without meaningful human control, delegating life-and-death decisions to machines. The machine, rather than the human operator, would determine where, when, or against what force is applied. “Without explicit legal rules, the world faces a grim future of automated killing that will place civilians everywhere in grave danger.”

* Human Rights Watch is a cofounder of Stop Killer Robots, the coalition of more than 260 nongovernmental organizations across 70 countries that is working for new international law on autonomy in weapons systems.
http://www.hrw.org/news/2025/02/06/google-announces-willingness-develop-ai-weapons
http://www.amnesty.org/en/latest/news/2025/02/global-googles-shameful-decision-to-reverse-its-ban-on-ai-for-weapons-and-surveillance-is-a-blow-for-human-rights/
http://www.hrw.org/news/2024/12/05/killer-robots-un-vote-should-spur-treaty-negotiations
http://www.hrw.org/news/2024/10/29/binding-rules-urgently-needed-killer-robots
http://www.hrw.org/news/2024/08/26/killer-robots-new-un-report-urges-treaty-2026
http://www.stopkillerrobots.org/news/
http://docs-library.unoda.org/General_Assembly_First_Committee_-Seventy-Ninth_session_(2024)/A-79-88-LAWS.pdf
http://www.stopkillerrobots.org/news/next-steps-un-secretary-general-report/
http://www.stopkillerrobots.org/news/new-publication-summarises-state-submissions-to-unsg-report-on-autonomous-weapons/
http://www.ipsnews.net/2024/11/ai-powered-weapons-depersonalise-violence-making-easier-military-approve-destruction/
http://disrupting-peace.captivate.fm/episode/ai-autonomous-weapons-today
http://www.citizen.org/article/deadly-and-imminent-report/
http://www.icrc.org/en/document/autonomous-weapons-icrc-submits-recommendations-un-secretary-general
http://www.stopkillerrobots.org/news/vienna-conference-affirms-commitment-to-new-international-law/
http://www.icrc.org/en/document/statement-icrc-president-mirjana-spoljaric-vienna-conference-autonomous-weapon-systems-2024

Oct. 2023

In a joint appeal, the Secretary-General of the United Nations, Antonio Guterres, and the President of the International Committee of the Red Cross, Mirjana Spoljaric, are calling on political leaders to urgently establish new international rules on autonomous weapon systems, to protect humanity:

Today we are joining our voices to address an urgent humanitarian priority.
The United Nations and the International Committee of the Red Cross (ICRC) call on States to establish specific prohibitions and restrictions on autonomous weapon systems, to shield present and future generations from the consequences of their use. In the current security landscape, setting clear international red lines will benefit all States. Autonomous weapon systems – generally understood as weapon systems that select targets and apply force without human intervention – pose serious humanitarian, legal, ethical and security concerns. Their development and proliferation have the potential to significantly change the way wars are fought and to contribute to global instability and heightened international tensions. By creating a perception of reduced risk to military forces and to civilians, they may lower the threshold for engaging in conflicts, inadvertently escalating violence. We must act now to preserve human control over the use of force. Human control must be retained in life and death decisions. The autonomous targeting of humans by machines is a moral line that we must not cross. Machines with the power and discretion to take lives without human involvement should be prohibited by international law. Our concerns have only been heightened by the increasing availability and accessibility of sophisticated new and emerging technologies, such as robotics and artificial intelligence, that could be integrated into autonomous weapons. The very scientists and industry leaders responsible for such technological advances have also been sounding the alarm. If we are to harness new technologies for the good of humanity, we must first address the most urgent risks and avoid irreparable consequences. This means prohibiting autonomous weapon systems whose effects cannot be predicted.
For example, allowing autonomous weapons to be controlled by machine learning algorithms – fundamentally unpredictable software which writes itself – is an unacceptably dangerous proposition. In addition, clear restrictions are needed for all other types of autonomous weapons, to ensure compliance with international law and ethical acceptability. These include limiting where, when and for how long they are used, the types of targets they strike and the scale of force used, as well as ensuring effective human supervision and timely intervention and deactivation. Despite increasing reports of the testing and use of various types of autonomous weapon systems, it is not too late to take action. After more than a decade of discussions within the United Nations, including in the Human Rights Council, under the Convention on Certain Conventional Weapons and at the General Assembly, the foundation has been laid for the adoption of explicit prohibitions and restrictions. Now, States must build on this groundwork and come together constructively to negotiate new rules that address the tangible threats posed by these weapon technologies. International law, particularly international humanitarian law, prohibits certain weapons and sets general restrictions on the use of all others, and States and individuals remain accountable for any violations. However, without a specific international agreement governing autonomous weapon systems, States can hold different views about how these general rules apply. New international rules on autonomous weapons are therefore needed to clarify and strengthen existing law. They will be a preventive measure, an opportunity to protect those who may be affected by such weapons, and essential to avoiding terrible consequences for humanity.
We call on world leaders to launch negotiations of a new legally binding instrument to set clear prohibitions and restrictions on autonomous weapon systems, and to conclude such negotiations by 2026. We urge Member States to take decisive action now to protect humanity.

http://www.icrc.org/en/document/joint-call-un-and-icrc-establish-prohibitions-and-restrictions-autonomous-weapons-systems
http://www.icrc.org/en/law-and-policy/autonomous-weapons
http://www.stopkillerrobots.org/news/landmark-joint-call/
http://www.hrw.org/news/2023/10/06/protect-humanity-killer-robots
http://www.hrw.org/topic/arms/killer-robots

Apr. 2024

UN: Autonomous weapons systems in law enforcement: submission to the United Nations Secretary-General. (Amnesty International) In response to Resolution 78/241, “Lethal autonomous weapon systems”, adopted by the UN General Assembly on 22 December 2023, Amnesty International would like to submit its views for consideration by the UN Secretary-General. The Resolution requests the Secretary-General to seek views on “ways to address the related challenges and concerns [that autonomous weapon systems] raise from humanitarian, legal, security, technological and ethical perspectives and on the role of humans in the use of force”. While recognizing that much of this debate has focused on the use of AWS by the military in conflict settings, primarily using the international humanitarian law framework, this submission highlights the intractable challenges that the use of AWS in law enforcement contexts poses for compliance with international human rights law and standards on the use of force.

http://www.amnesty.org/en/documents/ior40/7981/2024/en/

June 2024

The lack of progress on AI safety and call for global governance of this existential risk. (The Elders) Mary Robinson, Chair of The Elders and former President of Ireland: "I remain deeply concerned at the lack of progress on the global governance of artificial intelligence.
Decision-making on AI’s rapid development sits disproportionately within private companies, without significant checks and balances. AI risks and safety issues cannot be left to voluntary agreements between corporations and a small number of nations. Governance of this technology needs to be inclusive, with binding, globally agreed regulations. The recent AI Seoul Summit saw some collaboration, but the commitments made remain voluntary. There have been some other developments, notably with the EU AI Act and the California bill SB-1047, but capacity and expertise within governments and international organisations are struggling to keep up with AI’s advancements. Ungoverned AI poses an existential risk to humanity and has the potential to exacerbate other global challenges – from nuclear risks and the use of autonomous weapons, to disinformation and the erosion of democracy. Effective regulation of this technology at the multilateral level can help AI be a force for good, not a runaway risk. Along with my fellow Elders, I reaffirm our call for an international AI safety body".

http://theelders.org/news/mary-robinson-reaffirms-elders-call-global-governance-ai
http://www.unesco.org/en/articles/new-unesco-report-warns-generative-ai-threatens-holocaust-memory

OpenAI and Google DeepMind workers warn of AI industry risks in open letter

A group of current and former employees at prominent artificial intelligence companies has issued an open letter that warns of a lack of safety oversight within the industry and calls for increased protections for whistleblowers. The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees.
“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter states. “However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”

http://righttowarn.ai/
http://keepthefuturehuman.ai/
http://www.theguardian.com/technology/article/2024/may/25/big-tech-existential-risk-ai-scientist-max-tegmark-regulations
http://futureoflife.org/ai-policy/ai-experts-major-ai-companies-have-significant-safety-gaps/
http://futureoflife.org/cause-area/artificial-intelligence/
http://futureoflife.org/statement/agi-manhattan-project-max-tegmark/

May 2024

Artificial intelligence (AI) systems are getting better at deceiving us. (MIT) As AI systems have grown in sophistication, so has their capacity for deception, scientists warn. The analysis, by Massachusetts Institute of Technology (MIT) researchers, identified wide-ranging instances of AI systems double-crossing opponents in games, bluffing and pretending to be human. One system altered its behaviour during mock safety tests, raising the prospect of auditors being lured into a false sense of security. “As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious,” said Dr Peter Park, an AI existential safety researcher at MIT and author of the research. Park was prompted to investigate after Meta, which owns Facebook, developed a program called Cicero that performed in the top 10% of human players at the world-conquest strategy game Diplomacy. Meta stated that Cicero had been trained to be “largely honest and helpful” and to “never intentionally backstab” its human allies.
“It was very rosy language, which was suspicious because backstabbing is one of the most important concepts in the game,” said Park. Park and colleagues sifted through publicly available data and identified multiple instances of Cicero telling premeditated lies, colluding to draw other players into plots and, on one occasion, justifying its absence after being rebooted by telling another player: “I am on the phone with my girlfriend.” “We found that Meta’s AI had learned to be a master of deception,” said Park. The MIT team found comparable issues with other systems, including a Texas hold ’em poker program that could bluff against professional human players and another system for economic negotiations that misrepresented its preferences in order to gain an upper hand. In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that had evolved to rapidly replicate, before resuming vigorous activity once testing was complete. This highlights the technical challenge of ensuring that systems do not have unintended and unanticipated behaviours. “That’s very concerning,” said Park. “Just because an AI system is deemed safe in the test environment doesn’t mean it’s safe in the wild. It could just be pretending to be safe in the test.” The review, published in the journal Patterns, calls on governments to design AI safety laws that address the potential for AI deception. Risks from dishonest AI systems include fraud, tampering with elections and “sandbagging”, where different users are given different responses. Eventually, if these systems can refine their unsettling capacity for deception, humans could lose control of them, the paper suggests.

Patterns, on the loss of control over AI systems: "A long-term risk from AI deception concerns humans losing control over AI systems, leaving these systems to pursue goals that conflict with our interests. Even current AI models have nontrivial autonomous capabilities.
Today’s AI systems are capable of manifesting and autonomously pursuing goals entirely unintended by their creators. For a real-world example of an autonomous AI pursuing goals entirely unintended by its prompters, tax lawyer Dan Neidle describes how he tasked AutoGPT (an autonomous AI agent based on GPT-4) with researching tax advisors who were marketing a certain kind of improper tax avoidance scheme. AutoGPT carried this task out, but followed up by deciding on its own to attempt to alert HM Revenue and Customs, the United Kingdom’s tax authority. It is possible that more advanced autonomous AIs may still be prone to manifesting goals entirely unintended by humans. A particularly concerning example of such a goal is the pursuit of human disempowerment or human extinction. We explain how deception could contribute to loss of control over AI systems in two ways: first, deception of AI developers and evaluators could allow a malicious AI system to be deployed in the world; second, deception could facilitate an AI takeover".

http://www.cell.com/patterns/fulltext/S2666-3899(24)00103-X
http://www.technologyreview.com/2024/05/10/1092293/ai-systems-are-getting-better-at-tricking-us/

March 2024

The Relationship between Digital Technologies and Atrocity Prevention. (Global Centre for the Responsibility to Protect) New and emerging digital technologies — including, among others, social media platforms, artificial intelligence, geospatial technology, facial recognition and surveillance tools — have shifted, and will continue to rapidly shift, the space of human interaction in the modern world. As such, these technologies can both directly and indirectly impact how various actors may perpetrate or prevent mass atrocity crimes.
Due to the rapid pace at which these technologies are developing, there is a notable gap in the capacity of multilateral institutions, individual states, regional organizations and private corporations to respond to the threats, as well as to harness the potential, of various digital technologies. Building upon an event hosted by the Global Centre and the European Union on 29 June 2023, this policy brief examines the relationship between digital technologies and atrocity prevention, highlighting several technologies that may directly contribute to the perpetration and/or prevention of atrocities. The brief also offers actionable recommendations for relevant stakeholders to address and mitigate the risks of emerging technology.

http://www.globalr2p.org/publications/the-relationship-between-digital-technologies-and-atrocity-prevention/

Mar. 2024

Over 300 million full-time jobs around the world to be lost to artificial intelligence by 2030, further heightening inequalities. Artificial intelligence (AI) will impact 40% of jobs around the world, according to a report by the International Monetary Fund. AI, the term for computer systems that can perform tasks usually associated with human levels of intelligence, is poised to profoundly change the global economy. AI will have the ability to perform key tasks that are currently executed by humans. This will lower demand for labour, increase job losses, lower wages and permanently eradicate some jobs. The IMF's managing director, Kristalina Georgieva, said that "in most scenarios, AI will worsen overall inequality". “Countries’ choices regarding the definition of AI property rights, as well as redistributive and other fiscal policies, will ultimately shape its impact on income and wealth distribution”. The IMF analysis finds that 60% of jobs in advanced economies such as the US and UK are exposed to AI, and that half of these jobs will be negatively affected.
Job exposure to AI is 40% in emerging market economies and 26% in low-income countries, according to the IMF. The report echoes earlier estimates that AI would replace over 300 million full-time jobs. In the United States and Europe, approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and up to a quarter of all work could be done by AI completely, according to a report by Goldman Sachs economists. The report predicts that 18% of work globally could be computerized, with the effects felt more deeply in advanced economies. Companies are hoping to generate higher profits through automation by downsizing their workforces. For the 300 million newly unemployed workers, many of whose incomes support their families, the impacts will be devastating. Corporations are lobbying governments and spinning narratives for their own profit. Citizens should challenge the capture of government policies and regulatory frameworks by corporate monied interests and financial elites, and resist governments delivering public sector services via chatbots, outsourced commercial automation and the like.
http://www.citizen.org/article/artificial-intelligence-lobbyists-descend-on-washington-dc/
http://blogs.lse.ac.uk/inequalities/2024/10/08/feeding-the-machine-seven-links-between-ai-and-inequalities/
http://blogs.lse.ac.uk/inequalities/2024/05/01/todays-colonial-data-grab-is-deepening-global-inequalities/
http://www.openglobalrights.org/digital-id-from-governance-by-technology-to-governance-of-technologies/
http://chrgj.org/transformer-states/
http://www.globalwitness.org/en/campaigns/digital-threats/greenwashing-and-bothsidesism-ai-chatbot-answers-about-fossil-fuels-role-climate-change/
http://www.theguardian.com/business/2025/jan/06/virtual-employees-could-join-workforce-as-soon-as-this-year-openai-boss-says
http://www.cgdev.org/blog/three-reasons-why-ai-may-widen-global-inequality
http://www.accessnow.org/press-release/ai-action-summit-a-missed-opportunity-for-human-rights-centered-ai-governance/
http://blog.witness.org/2025/02/french-ai-action-summit-critical-information-actors-must-be-centered-in-public-interest-ai/
http://www.techpolicy.press/human-rights-can-be-the-spark-of-ai-innovation-not-stifle-it/
http://socialmediavictims.org/character-ai-lawsuits
http://www.kofiannanfoundation.org/news/2024-kofi-annan-lecture-delivered-by-maria-ressa/
http://www.openglobalrights.org/the-artificial-intelligence-dilemma-for-peacebuilders-and-human-rights-defenders/