

Civil society crucial to combat polarisation and inequality
by UN Office for Human Rights (OHCHR)
 
June 2024
 
Concerted action urgently needed to save fundamental freedoms under attack: Special Rapporteur.
 
The rights to freedom of peaceful assembly and of association are seriously threatened today, and urgent action is needed to push back and preserve them, a UN Special Rapporteur said.
 
“We are witnessing widespread, systematic and intensive attack against these rights and civic space broadly, as authoritarianism, populism and anti-rights narratives are increasing,” said Gina Romero, the Special Rapporteur on the rights to peaceful assembly and association.
 
Romero was presenting the last thematic report prepared by her predecessor, Clement Nyaletsossi Voule, at the 56th session of the Human Rights Council.
 
The report outlines how governments have instrumentalised the adoption and/or implementation of laws, including so-called “foreign agents” laws, to suppress the legitimate exercise of the rights to freedom of peaceful assembly and of association. This has been done in combination with intense stigmatising campaigns to silence dissent, civil society, unions, and civic activism, including citizens’ organisation and participation in peaceful protests.
 
“As people around the world have been increasingly exercising these rights to protect their freedoms, to resist autocracy, repression and discrimination, to build peace and democratic and responsive governance institutions, to advocate for climate justice, and express solidarity with those suffering, we witness how governments have been finding innovative ways to silence them and crush these rights,” Romero said.
 
The spread of armed conflicts, the severe environmental crisis, electoral processes undermined by populism and disinformation, and emerging, unregulated digital technologies all exacerbate the threat to the enjoyment of these rights.
 
“This report is a wake-up call for collective action to protect democracy and our collective values, and the enjoyment of all human rights and freedoms. Enabling civic space, hearing and protecting activists is fundamental to foster civil society contributions for tackling today’s pressing issues,” Romero said.
 
“I join the report’s call for a global renewed commitment to these rights. Through the establishment of this mandate 14 years ago, the Human Rights Council reiterated its commitment for the protection of these fundamental freedoms, and it is urgent today that the Council reinforce the mandate’s capacity to continue effectively protecting these rights, especially in the emerging crises.”
 
http://www.ohchr.org/en/press-releases/2024/06/concerted-action-urgently-needed-save-fundamental-freedoms-under-attack
 
Civil society crucial to combat polarisation and inequality, says Independent Expert
 
Civil society organisations are the engine of international solidarity and urgently need increased protection and support, a UN Special Procedures mandate holder said today.
 
“As we confront negative global trends of polarisation, and the highest levels of inequality around the world at present, the need for civil society actions are more urgent than ever,” said Cecilia Bailliet, the Independent Expert on human rights and international solidarity, in a report to the Human Rights Council.
 
Civil society actions include intersectoral solidarity approaches combining issues such as protection of the environment, access to fair housing, and women’s rights.
 
“These International Solidarity coalitions challenge injustice and call for transformative changes within political and economic structures, seeking to empower the agency of vulnerable individuals and groups,” Bailliet said.
 
She criticised “the expansion of the use of censorship, disinformation, harassment, blacklisting, doxing, deportation, denial of entry or exit visas, defunding, red-tagging, criminal prosecution (including as foreign agents), denial of access to education, surveillance, asset freezing, defunding, overly broad restrictive registration and reporting of CSOs, and blocking of access to digital platforms to block the exchange of international solidarity ideas under the guise of security”.
 
“I believe that States should choose to pursue best practices of international solidarity policies, which would include showing clemency to opposing voices within our societies. Social solidarity governmental institutions should protect, rather than disempower, civil society organisations,” Bailliet said.
 
She called for the creation of a UN Digital International Solidarity Platform to exchange solidarity ideas and the adoption of the Revised Draft Declaration on International Solidarity.
 
http://www.ohchr.org/en/node/109103
 
Academic freedom just as crucial as a free press or independent judiciary, says Special Rapporteur
 
In every region of the world, people exercising their academic freedom face repression, whether through direct and violent methods or more subtle ones, an independent expert warned today.
 
In her report to the Human Rights Council, the Special Rapporteur on the right to education, Farida Shaheed, said restrictions aimed at controlling public opinion undermine free thinking and limit academic and scientific debate.
 
“We must take this seriously as these attacks threaten both our democracies and our capacities to collectively respond to crises humanity currently faces,” Shaheed said.
 
“Academic freedom must be understood and respected for its role for our societies, which is as crucial as a free press or an independent judiciary.”
 
The Special Rapporteur said academic freedom carries special duties to seek truth and impart information according to ethical and professional standards, and to respond to contemporary problems and needs of all members of society. “Therefore, we must not politicise its exercise,” she said.
 
“A multitude of actors are involved in the restrictions, from Governments to religious or political groups or figures, paramilitary and armed groups, terrorist groups, narco-traffickers, corporate entities, philanthropists, influencers, but also sometimes the educational institutions themselves as well as school boards, staff and students, and parents’ associations.”
 
Shaheed said that institutional autonomy is crucial for ensuring academic freedom; however, academic, research and teaching institutions must also respect it.
 
“Institutions must respect the freedom of expression on campus according to international standards and carry a specific responsibility to promote debate around controversies that may arise on campus following academic standards.”
 
Referring to student protests on the Gaza crisis that occurred in a number of countries, Shaheed said she remained deeply troubled by the violent crackdown on peaceful demonstrators, arrests, detentions, police violence, surveillance and disciplinary measures and sanctions against members of the educational community exercising their right to peaceful assembly and freedom of expression.
 
Shaheed called for endorsement and implementation of Principles for Implementing the Right to Academic Freedom, drafted by a working group of United Nations experts, scholars, and civil society actors, based on and reflecting the status of international law and practice.
 
“I believe implementing these Principles would allow a better state of academic freedom worldwide,” she said.
 
http://www.ohchr.org/en/press-releases/2024/06/academic-freedom-just-crucial-free-press-or-independent-judiciary-says
 
The independence of judicial systems must be protected in the face of democratic decline and rising authoritarianism: UN expert
 
A UN expert warned today that the role of independent justice systems in protecting participatory governance has come under attack from political actors who seek to limit or control judicial systems, including through ad hominem attacks by political leaders and the criminalisation of prosecutors, judges, and lawyers.
 
In her second report to the Human Rights Council, the UN Special Rapporteur on the Independence of Judges and Lawyers, Margaret Satterthwaite, set out a taxonomy of Government efforts to control judicial systems – from curbing bar associations and manipulating administrative functions to capturing courts and criminalising or attacking justice operators.
 
The report also explores the vital role played by the legal professionals who comprise the justice system – judges, prosecutors, and lawyers, as well as community justice workers – in safeguarding democracy, in the 2024 context in which nearly half the world’s population will vote.
 
“Justice systems promote and protect a fundamental value that undergirds participatory governance: the rule of law,” the Special Rapporteur said. “This principle insists that all people, even state actors, are subject to the same laws, applied fairly and consistently.”
 
“I call on Member States to do more to revitalise public trust in justice institutions and to defend justice actors and their indispensable role in safeguarding democracy,” she said.
 
http://www.ohchr.org/en/press-releases/2024/06/independence-judicial-systems-must-be-protected-face-democratic-decline-and http://taxjustice.net/reports/submission-to-special-rapporteur-on-the-independence-of-judges-and-lawyers-on-undue-influence-of-economic-actors-on-judicial-systems/ http://www.ohchr.org/en/hr-bodies/hrc/regular-sessions/session56/list-reports


 


Over 300 million full-time jobs to be lost to artificial intelligence by 2030
by The Elders, MIT, HRW, Future of Life, agencies
 
3 July 2024
 
Brazil: First Data Privacy safeguard of its kind in the Country Protects Children. (Human Rights Watch)
 
Yesterday, Brazil’s National Data Protection Authority issued a preliminary ban on Meta's (Facebook) use of personal data of users based in Brazil to train its artificial intelligence (AI) systems.
 
The decision stems from “the imminent risk of serious and irreparable damage or difficult-to-repair damage to the fundamental rights of the affected data subjects,” the agency said in announcing the ban.
 
The news follows Human Rights Watch reporting in June that personal photos of Brazilian children are used to build powerful AI tools without their knowledge or consent. In turn, others use these tools to create malicious deepfakes, putting even more children at risk of harm.
 
The National Data Protection Authority’s decision included two arguments that reflected Human Rights Watch’s recommendations. The first is the importance of protecting children’s data privacy, given the risk of harm and exploitation that results from their data being scraped and used by AI systems. The second centers on purpose limitation: people’s expectations of privacy when they share their personal data online, in some cases years or decades before these AI systems were built, should be respected.
 
Meta has been using its US-based users’ publicly posted personal data to train its AI models since last year. Last month, Meta paused its plans to do the same in Europe and the United Kingdom after objections from 11 data protection authorities. Yesterday’s decision effectively bans this practice in Brazil and imposes a daily fine of 50,000 reais, or about US$9,000, for failure to comply within five working days from notification of the decision. Following the regulator’s decision, Meta said that it “complies with privacy laws and regulations in Brazil”.
 
The Brazilian government’s decision is a powerful, proactive move to protect people’s data privacy in the face of swiftly evolving uses and misuses of AI. Yesterday’s action especially helps to protect children from worrying that their personal data, shared with friends and family on Meta’s platforms, might be used to harm them in ways that are impossible to anticipate or guard against.
 
http://www.hrw.org/news/2024/07/03/brazil-prevents-meta-using-people-power-its-ai http://www.hrw.org/news/2024/06/10/brazil-childrens-personal-photos-misused-power-ai-tools http://www.hrw.org/news/2024/07/02/australia-childrens-personal-photos-misused-power-ai-tools
 
June 2024
 
The lack of progress on AI safety and call for global governance of this existential risk. (The Elders)
 
Mary Robinson, Chair of The Elders and former President of Ireland:
 
"I remain deeply concerned at the lack of progress on the global governance of artificial intelligence. Decision-making on AI’s rapid development sits disproportionately within private companies without significant checks and balances.
 
AI risks and safety issues cannot be left to voluntary agreements between corporations and a small number of nations. Governance of this technology needs to be inclusive with binding, globally agreed regulations.
 
The recent AI Seoul Summit saw some collaboration, but the commitments made remain voluntary. There have been some other developments, notably with the EU AI Act and the California bill SB-1047, but capacity and expertise within governments and international organisations is struggling to keep up with AI’s advancements.
 
Ungoverned AI poses an existential risk to humanity and has the potential to exacerbate other global challenges – from nuclear risks and the use of autonomous weapons, to disinformation and the erosion of democracy.
 
Effective regulation of this technology at the multilateral level can help AI be a force for good, not a runaway risk. Along with my fellow Elders, I reaffirm our call for an international AI safety body".
 
http://theelders.org/news/mary-robinson-reaffirms-elders-call-global-governance-ai http://www.unesco.org/en/articles/new-unesco-report-warns-generative-ai-threatens-holocaust-memory
 
OpenAI and Google DeepMind workers warn of AI industry risks in open letter
 
A group of current and former employees at prominent artificial intelligence companies have issued an open letter that warns of a lack of safety oversight within the industry and called for increased protections for whistleblowers.
 
The letter, which calls for a “right to warn about artificial intelligence”, is one of the most public statements about the dangers of AI from employees within what is generally a secretive industry. Eleven current and former OpenAI workers signed the letter, along with two current or former Google DeepMind employees.
 
“AI companies possess substantial non-public information about the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm,” the letter states. “However, they currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily.”
 
http://righttowarn.ai/ http://www.theguardian.com/technology/article/2024/may/25/big-tech-existential-risk-ai-scientist-max-tegmark-regulations http://futureoflife.org/cause-area/artificial-intelligence/
 
May 2024
 
Artificial intelligence (AI) systems are getting better at deceiving us. (MIT)
 
As AI systems have grown in sophistication, so has their capacity for deception, scientists warn. The analysis, by Massachusetts Institute of Technology (MIT) researchers, identified wide-ranging instances of AI systems double-crossing opponents in games, bluffing and pretending to be human. One system altered its behaviour during mock safety tests, raising the prospect of auditors being lured into a false sense of security.
 
“As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious,” said Dr Peter Park, an AI existential safety researcher at MIT and author of the research.
 
Park was prompted to investigate after Meta, which owns Facebook, developed a program called Cicero that performed in the top 10% of human players at the world conquest strategy game Diplomacy. Meta stated that Cicero had been trained to be “largely honest and helpful” and to “never intentionally backstab” its human allies.
 
“It was very rosy language, which was suspicious because backstabbing is one of the most important concepts in the game,” said Park.
 
Park and colleagues sifted through publicly available data and identified multiple instances of Cicero telling premeditated lies, colluding to draw other players into plots and, on one occasion, justifying its absence after being rebooted by telling another player: “I am on the phone with my girlfriend.” “We found that Meta’s AI had learned to be a master of deception,” said Park.
 
The MIT team found comparable issues with other systems, including a Texas hold ’em poker program that could bluff against professional human players and another system for economic negotiations that misrepresented its preferences in order to gain an upper hand.
 
In one study, AI organisms in a digital simulator “played dead” in order to trick a test built to eliminate AI systems that had evolved to rapidly replicate, before resuming vigorous activity once testing was complete. This highlights the technical challenge of ensuring that systems do not have unintended and unanticipated behaviours.
 
“That’s very concerning,” said Park. “Just because an AI system is deemed safe in the test environment doesn’t mean it’s safe in the wild. It could just be pretending to be safe in the test.”
 
The review, published in the journal Patterns, calls on governments to design AI safety laws that address the potential for AI deception. Risks from dishonest AI systems include fraud, tampering with elections and “sandbagging” where different users are given different responses. Eventually, if these systems can refine their unsettling capacity for deception, humans could lose control of them, the paper suggests.
 
Patterns: Loss of control over AI systems
 
"A long-term risk from AI deception concerns humans losing control over AI systems, leaving these systems to pursue goals that conflict with our interests. Even current AI models have nontrivial autonomous capabilities… Today’s AI systems are capable of manifesting and autonomously pursuing goals entirely unintended by their creators.
 
For a real-world example of an autonomous AI pursuing goals entirely unintended by their prompters, tax lawyer Dan Neidle describes how he tasked AutoGPT (an autonomous AI agent based on GPT-4) with researching tax advisors who were marketing a certain kind of improper tax avoidance scheme. AutoGPT carried this task out, but followed up by deciding on its own to attempt to alert HM Revenue and Customs, the United Kingdom’s tax authority. It is possible that the more advanced autonomous AIs may still be prone to manifesting goals entirely unintended by humans.
 
A particularly concerning example of such a goal is the pursuit of human disempowerment or human extinction. We explain how deception could contribute to loss of control over AI systems in two ways: first, deception of AI developers and evaluators could allow a malicious AI system to be deployed in the world; second, deception could facilitate an AI takeover".
 
http://www.cell.com/patterns/fulltext/S2666-3899(24)00103-X http://www.technologyreview.com/2024/05/10/1092293/ai-systems-are-getting-better-at-tricking-us/
 
Mar. 2024
 
Over 300 million full-time jobs around the world to be lost to artificial intelligence by 2030, further heightening inequalities.
 
Artificial intelligence (AI) will impact 40% of jobs around the world, according to a report by the International Monetary Fund. AI, the term for computer systems that can perform tasks usually associated with human levels of intelligence, is poised to profoundly change the global economy. AI will be able to perform key tasks that are currently executed by humans. This will lower demand for labour, depress wages and permanently eliminate jobs.
 
IMF's managing director Kristalina Georgieva said "in most scenarios, AI will worsen overall inequality". “Countries’ choices regarding the definition of AI property rights, as well as redistributive and other fiscal policies, will ultimately shape its impact on income and wealth distribution”.
 
The IMF analysis reports 60% of jobs in advanced economies such as the US and UK are exposed to AI and half of these jobs will be negatively affected. AI jobs exposure is 40% in emerging market economies and 26% for low-income countries, according to the IMF. The report echoes earlier reports estimating AI would replace over 300 million full-time jobs.
 
In the United States and Europe, approximately two-thirds of current jobs “are exposed to some degree of AI automation,” and up to a quarter of all work could be done by AI completely, according to a report by Goldman Sachs economists. The report predicts that 18% of work globally could be computerized, with the effects felt more deeply in advanced economies.
 
Companies are hoping to generate higher profits through automation by downsizing their workforce. For the 300 million newly unemployed workers, many of whose incomes support their families, the impacts will be devastating. Corporations are lobbying governments to spin their narratives for their own profit. Citizens should challenge the capture of government policies and regulatory frameworks by corporate monied interests and financial elites, and resist governments delivering public sector services via chatbots, outsourced commercial automation and the like.
 
http://www.citizen.org/article/artificial-intelligence-lobbyists-descend-on-washington-dc/


 
