People's Stories: Freedom

Trump’s absurd ‘Board of Peace’ is a travesty of international law
by Jeffrey D. Sachs
The Conversation, agencies
 
Jan. 2026
 
The UN-based international order, however flawed, should be repaired through law and cooperation, not replaced by a gilded caricature.
 
The so-called “Board of Peace” being created by President Donald Trump is profoundly degrading to the pursuit of peace and to any nation that would lend it legitimacy. It is a Trojan horse designed to dismantle the United Nations. It should be refused outright by every nation invited to join.
 
In its Charter, the Board of Peace (BoP) claims to be an “international organization that seeks to promote stability, restore dependable and lawful governance, and secure enduring peace in areas affected or threatened by conflict.” If this sounds familiar, it should, because this is the mandate of the United Nations. Created in the aftermath of World War II, the UN has as its central mission the maintenance of international peace and security.
 
It is no secret that Trump holds open contempt for international law and the United Nations. He said so himself in his September 2025 speech to the General Assembly, and he has recently withdrawn the United States from 31 UN entities.
 
Following a long tradition of US foreign policy, he has consistently violated international law, including by bombing seven countries in the past year (Iran, Iraq, Nigeria, Somalia, Syria, Yemen, and Venezuela), none of which was authorized by the Security Council or undertaken in lawful self-defense under the Charter. He is now laying claim to Greenland, with brazen and open hostility towards US allies in Europe.
 
So, what about this Board of Peace? It is, to put it simply, a pledge of allegiance to Trump, who seeks the role of world chairman and the world’s ultimate arbiter. The BoP will have as its Executive Board none other than Trump’s political donors, family members, and courtiers.
 
The leaders of nations that sign up will get to rub shoulders with, and take orders from, Marco Rubio, Steve Witkoff, Jared Kushner and Tony Blair. Hedge fund owner and Republican Party mega-donor Marc Rowan also gets to play. More to the point, any decisions taken by the BoP will be subject to Trump’s approval.
 
If this charade of representation isn’t enough, nations will have to pay $1 billion for a “permanent seat” on the Board. Any nation that participates should know what it is “buying.”
 
It is certainly not buying peace or a solution for the Palestinian people (as the money supposedly goes to Gaza’s reconstruction). It is buying ostensible access to Trump for as long as it serves his interests. It is buying an illusion of momentary influence in a system where Trump’s rules are enforced by personal whim.
 
The proposal is absurd not least because it purports to “solve” a problem that already has an 80-year-old global solution. The United Nations exists precisely to prevent the personalization of war and peace. It was designed after the wreckage of two world wars to base peace on collective rules and international law. The UN’s authority, rightly, derives from the UN Charter ratified by 193 member states (including the US, as ratified by the US Senate in July 1945) and grounded in international law.
 
If the US doesn’t want to abide by the Charter, the UN General Assembly should suspend US credentials, as it once did with apartheid South Africa.
 
Trump’s “Board of Peace” is a blatant repudiation of the United Nations. Trump has made that explicit, recently declaring that the Board of Peace “might” indeed replace the United Nations. This statement alone should end the conversation for any serious national leader.
 
Participation after such a declaration is a conscious decision to subordinate one’s country to Trump’s personalized global authority. It is to accept, in advance, that peace is no longer governed by the UN Charter, but by Trump.
 
Still, some nations, desperate to get on the right side of the US, may take the bait. They should remember the wise words of President John F. Kennedy in his inaugural address: “those who foolishly sought power by riding the back of the tiger ended up inside.”
 
The record shows that loyalty to Trump is never enough to salve his ego. Just look at the long parade of Trump’s former allies, advisers, and appointees who were humiliated, discarded, and attacked by him the moment they ceased to be useful to him.
 
For any nation, participation on the Board of Peace would be strategically foolish. Joining this body will create long-lasting reputational damage. Long after Trump himself is no longer President, a past association with this travesty will be a mark of poor judgment. It will remain as sad evidence that, at a critical moment, a national political system mistook a vanity project for statesmanship, squandering $1 billion in the process.
 
Ultimately, refusal to join the “Board of Peace” will be an act of national self-respect. Peace is a global public good. The UN-based international order, however flawed, should be repaired through law and cooperation, not replaced by a gilded caricature. Any nation that values international law and respect for the United Nations should immediately decline to be associated with this travesty of international law.
 
* France, Germany, Denmark, Norway, Sweden and Slovenia have confirmed they will not join, and Canada will not be joining either.
 
* Jeffrey D. Sachs is a University Professor and Director of the Center for Sustainable Development at Columbia University.
 
http://www.commondreams.org/opinion/trump-board-of-peace http://www.dw.com/en/donald-trump-united-nations-un-board-of-peace-bop-israel-gaza-palestine-tony-blair/a-75580180 http://www.hrw.org/news/2026/01/27/trumps-board-of-peace-puts-rights-abusers-in-charge-of-global-order http://www.hrw.org/news/2026/02/01/will-human-rights-survive-the-donald-trump-era http://www.nytimes.com/2026/01/21/us/politics/trump-board-peace-united-nations.html http://www.bbc.com/news/articles/c4g0zx0llpzo http://www.aljazeera.com/news/2026/1/21/trumps-board-of-peace-who-has-joined-who-hasnt-and-why http://www.theguardian.com/us-news/2026/jan/20/trumps-board-of-peace-is-an-imperial-court-completely-unlike-what-was-proposed http://theconversation.com/donald-trumps-board-of-peace-looks-like-a-privatised-un-with-one-shareholder-the-us-president-273856 http://tinyurl.com/2n8btfj3 http://www.theguardian.com/us-news/2026/jan/22/australia-trump-board-of-peace-risk-analysis
 
Jan. 30, 2026
 
The United Nations said on Friday that it was facing imminent financial collapse and would run out of money by July if member states, above all the United States, did not pay annual dues amounting to billions of dollars.
 
The United States is responsible for about 95 percent of the money owed to the United Nations, about $2.2 billion, according to a senior U.N. official who briefed reporters on the organization’s budget crisis. That amount combines the U.S. annual dues for 2025, which have not been paid, and for 2026, the U.N. official said. In addition to its annual dues, the United States also owes the United Nations about $1.9 billion for active peacekeeping missions.
 
The U.N. secretary general, António Guterres, sent a letter to the ambassadors of all 193 member states on Thursday warning them of “imminent financial collapse,” saying the organization’s financial straits this time were different from those of any previous period, according to a copy of the letter seen by The New York Times. “The crisis is deepening, threatening program delivery and risking financial collapse,” Mr. Guterres wrote. “And the situation will further deteriorate in the near future. I cannot overstate the urgency of the situation we now face.”
 
http://www.nytimes.com/2026/01/30/world/americas/un-finances-collapse-debts.html


 


Accountability for harms arising from algorithmic systems
by Amnesty International, agencies
 
13 Jan. 2026
 
Malaysia and Indonesia block Elon Musk’s Grok over sexualized AI images. (agencies)
 
Malaysia and Indonesia have become the first countries to block Grok, the artificial intelligence chatbot developed by Elon Musk’s company xAI, as concerns grow among authorities that it is being misused to generate sexually explicit and nonconsensual images.
 
There is growing scrutiny of generative AI tools that can produce realistic images, sound and text, and concern that existing safeguards are failing to prevent their abuse.
 
The Grok chatbot, accessed through Musk’s social media platform X, has been criticized for generating manipulated images, including depictions of women in bikinis or sexually explicit poses, as well as images involving children.
 
“The government sees nonconsensual sexual deepfakes as a serious violation of human rights, dignity and the safety of citizens in the digital space,” Indonesian Communication and Digital Affairs Minister Meutya Hafid said in a statement.
 
Scrutiny of Grok is growing, including in the European Union, India, France and the United Kingdom, which said Monday it was moving to criminalize “nudification apps.”
 
Britain’s media regulator also launched an investigation into whether Grok broke the law by allowing users to share sexualized images of children.
 
Regulators in the two Southeast Asian nations said existing controls weren’t preventing the creation and spread of fake pornographic content, particularly involving women and children. Indonesia’s government blocked access to Grok on Saturday, followed by Malaysia on Sunday.
 
Initial findings showed Grok lacks effective safeguards to stop users from creating and distributing pornographic content based on real photos of Indonesian residents, Alexander Sabar, director-general of digital space supervision, said in a statement. He said such practices risk violating privacy and image rights when photos are manipulated or shared without consent.
 
The Malaysian Communications and Multimedia Commission noted “repeated misuse” of the tool to generate obscene, sexually explicit and nonconsensual manipulated images, including content involving women and children.
 
The regulator said notices were issued this month to X Corp. and xAI demanding stronger safeguards. “The restriction is imposed as a preventive and proportionate measure while legal and regulatory processes are ongoing,” it said, adding that access will remain blocked until effective safeguards are put in place.
 
The U.K.'s media regulator said it launched an investigation into whether Grok violated its duty to protect people from illegal content. The regulator, Ofcom, said Grok-generated images of children being sexualized or people being undressed may amount to pornography or child sexual abuse material.
 
http://www.politico.com/news/magazine/2026/01/21/elon-musk-donald-trump-social-media-laws-column-00738440 http://www.dw.com/en/eu-opens-probe-into-musks-grok-chatbot/a-75663255
 
* UN Bodies issue Joint Statement on Artificial Intelligence and the Rights of the Child: http://tinyurl.com/yp9nndha
 
States should strengthen AI governance frameworks to uphold and protect children’s rights. Global organisations are urged to integrate children’s rights across all AI-related policies and strategies. Governments and companies must ensure AI systems are transparent, accountable and designed to protect children. States must prevent and address violence and exploitation of children enabled or amplified by AI.
 
Stronger, child-centred data protection measures are needed to safeguard privacy within AI systems. AI-driven decisions should prioritise the best interests and holistic development of every child. Inclusive, bias-free AI is essential to ensure all children benefit. Children’s views and experiences should meaningfully inform AI policymaking and system design. AI development should support environmental sustainability while minimising long-term ecological harm to future generations.
 
* UN Agencies include United Nations Committee on the Rights of the Child (CRC); United Nations Children's Fund (UNICEF); United Nations Educational, Scientific and Cultural Organization (UNESCO); Office of the United Nations High Commissioner for Human Rights; Special Representative of the United Nations Secretary-General for Children and Armed Conflict; Special Representative of the United Nations Secretary-General on Violence against Children; United Nations Special Rapporteur on the sale, sexual exploitation and sexual abuse of children; United Nations Interregional Crime and Justice Research Institute (UNICRI).
 
Dec. 2025
 
Accountability for harms arising from algorithmic systems. (Amnesty International)
 
With the widespread use of Artificial Intelligence (AI) and automated decision-making systems (ADMs) that impact our everyday lives, it is crucial that rights defenders, activists and communities are equipped to shed light on the serious implications these systems have for our human rights, Amnesty International said ahead of the launch of its Algorithmic Accountability toolkit.
 
The toolkit draws on Amnesty International’s investigations, campaigns, media and advocacy in Denmark, Sweden, Serbia, France, India, the United Kingdom, the Occupied Palestinian Territory (OPT), the United States and the Netherlands. It provides a ‘how to’ guide for investigating, uncovering and seeking accountability for harms arising from algorithmic systems that are becoming increasingly embedded in our everyday lives, specifically in the public-sector realms of welfare, policing, healthcare, and education.
 
Regardless of the jurisdiction in which these technologies are deployed, a common outcome from their rollout is not “efficiency” or “improving” societies—as many government officials and corporations claim—but rather bias, exclusion and human rights abuses.
 
“The toolkit is designed for anyone looking to investigate or challenge the use of algorithmic and AI systems in the public sector, including civil society organizations (CSOs), journalists, impacted people or community organizations. It is designed to be adaptable and versatile to multiple settings and contexts.
 
“Building our collective power to investigate and seek accountability for harmful AI systems is crucial to challenging abusive practices by states and companies and meeting this current moment of supercharged investments in AI, given how these systems can enable mass surveillance, undermine our right to social protection, restrict our freedom to peaceful protest and perpetuate exclusion, discrimination and bias across society,” said Damini Satija, Programme Director at Amnesty Tech.
 
The toolkit introduces a multi-pronged approach based on the learnings of Amnesty International’s investigations in this area over the last three years, as well as learnings from collaborations with key partners. This approach not only provides tools and practical templates to research these opaque systems and their resulting human rights violations, but it also lays out comprehensive tactics for those working to end these abusive systems by seeking change and accountability via campaigning, strategic communications, advocacy or strategic litigation.
 
One of the many case studies the toolkit draws on is Amnesty International’s investigation into Denmark’s welfare system, which exposed how the Danish welfare authority Udbetaling Danmark (UDK) uses AI tools to flag individuals for social benefits fraud investigations, fuelling mass surveillance and risking discrimination against people with disabilities, low-income individuals, migrants, refugees, and marginalized racial groups.
 
The investigation would not have been possible without collaboration with impacted communities, journalists and local civil society organisations, and in that spirit the toolkit is premised on deep collaboration between different disciplinary groups.
 
The toolkit situates human rights law as a critically valuable component of algorithmic accountability work, especially given that this remains a gap in the ethical and responsible AI fields and their audit methods.
 
Amnesty International’s method ultimately emphasises collaborative work, while harnessing the collective influence of a multi-method approach. Communities and their agency to drive accountability remain at the heart of the process.
 
“This issue is even more urgent today, given rampant unchecked claims and experimentation around the supposed benefits of using AI in public service delivery. State actors are backing enormous investments in AI development and infrastructure and giving corporations a free hand to pursue their lucrative interests, regardless of the human rights impacts now and further down the line,” said Damini Satija.
 
“Through this toolkit, we aim to democratize knowledge and enable civil society organizations, investigators, journalists, and impacted individuals to uncover these systems and the industries that produce them, demand accountability, and bring an end to the abuses enabled by these technologies.”
 
http://www.amnesty.org/en/latest/research/2025/12/algorithmic-accountability-toolkit/ http://www.amnesty.org/en/latest/news/2025/12/global-amnesty-international-launches-an-algorithmic-accountability-toolkit-to-enable-investigators-rights-defenders-and-activists-to-hold-powerfu/ http://www.coe.int/en/web/commissioner/-/regulation-is-crucial-for-responsible-ai http://thebulletin.org/the-ai-power-trip/ http://www.openglobalrights.org/will-human-rights-guide-technological-development/ http://www.business-humanrights.org/en/blog/why-regulation-is-essential-to-tame-techs-rush-for-ai/ http://www.nytimes.com/2025/10/21/technology/inside-amazons-plans-to-replace-workers-with-robots.html http://safe.ai/ai-risk http://ai-frontiers.org/articles/the-evidence-for-ai-consciousness-today http://www.theguardian.com/environment/2026/jan/29/gas-power-ai-climate http://globalwitness.org/en/campaigns/digital-threats/enabled-emissions-how-ai-helps-to-supercharge-oil-and-gas-production/ http://globalwitness.org/en/campaigns/digital-threats/ai-chatbots-share-climate-disinformation-to-susceptible-users/ http://globalwitness.org/en/campaigns/digital-threats
 
http://www.ohchr.org/en/press-releases/2025/06/procurement-and-deployment-artificial-intelligence-must-be-aligned-human http://www.ohchr.org/sites/default/files/documents/issues/civicspace/resources/brief-data-privacy-ai-report-rev.pdf http://carnegieendowment.org/research/2025/10/how-china-views-ai-risks-and-what-to-do-about-them http://futureoflife.org/ai-safety-index-winter-2025/ http://pwd.org.au/disability-representative-organisations-call-for-transparency-on-computer-generated-ndis-plans/ http://www.acoss.org.au/media_release/acoss-statement-on-the-robodebt-settlement/ http://theconversation.com/people-are-getting-their-news-from-ai-and-its-altering-their-views-269354 http://www.mdpi.com/2076-0760/14/6/391 http://icct.nl/publication/reading-between-lines-importance-human-moderators-online-implicit-extremist-content http://www.ipsnews.net/2025/09/unga80-lies-spread-faster-than-facts/
 
http://www.citizen.org/news/bipartisan-group-of-state-lawmakers-condemn-federal-ai-preemption-efforts/ http://www.hrw.org/news/2025/12/16/trump-administration-takes-aim-at-ai-accountability-laws http://www.citizen.org/news/trump-grants-his-greedy-big-tech-buddies-christmas-wish-with-dangerous-ai-preemption-eo/ http://www.hks.harvard.edu/centers/carr-ryan/our-work/carr-ryan-commentary/has-technology-outpaced-human-rights-frameworks http://www.democracynow.org/2026/1/1/empire_of_ai_karen_hao_on http://www.nytimes.com/2025/12/08/technology/ai-slop-sora-social-media.html http://politicsofpoverty.oxfamamerica.org/the-rise-of-the-tech-oligarchy/ http://politicsofpoverty.oxfamamerica.org/rise-of-the-tech-oligarchy-part-ii/ http://www.amnesty.org/en/latest/news/2025/08/amnesty-launches-breaking-up-with-big-tech-briefing/ http://www.amnesty.org/en/documents/POL30/0226/2025/en/ http://link.springer.com/article/10.1007/s00146-025-02371-1 http://link.springer.com/article/10.1007/s00146-025-02623-0

