F.T.C. Chair Warns Apple Against Bias in Apple News

Pressure is mounting on tech companies to shield users from unlawful government requests that advocates say are making it harder to reliably share information about Immigration and Customs Enforcement (ICE) online.
Alleging that ICE officers are being doxed or otherwise endangered, Trump officials have spent the last year targeting an unknown number of users and platforms with demands to censor content. Early lawsuits show that platforms have caved, even though experts say they could refuse these demands without a court order.
In a lawsuit filed on Wednesday, the Foundation for Individual Rights and Expression (FIRE) accused Attorney General Pam Bondi and Department of Homeland Security Secretary Kristi Noem of coercing tech companies into removing a wide range of content "to control what the public can see, hear, or say about ICE operations."

TikTok wants users to believe that blocked uploads of anti-ICE videos and blocked direct messages mentioning Jeffrey Epstein are the result of technical errors, not of the platform shifting to censor content critical of Donald Trump after he hand-picked the US owners who took over the app last week.
However, experts say that TikTok users' censorship fears are justified, whether or not the bugs are to blame.
Ioana Literat, an associate professor of technology, media, and learning at Teachers College, Columbia University, has studied TikTok's politics since the app first shot to popularity in the US in 2018. She told Ars that "users' fears are absolutely justified" and explained why the "bugs" explanation is "insufficient."

New report: “The Party’s AI: How China’s New AI Systems are Reshaping Human Rights.” From a summary article:
China is already the world’s largest exporter of AI-powered surveillance technology; new surveillance technologies and platforms developed in China are also not likely to simply stay there. By exposing the full scope of China’s AI-driven control apparatus, this report presents clear, evidence-based insights for policymakers, civil society, the media and technology companies seeking to counter the rise of AI-enabled repression and human rights violations, and China’s growing efforts to project that repression beyond its borders.
The report focuses on four areas where the CCP has expanded its use of advanced AI systems most rapidly between 2023 and 2025: multimodal censorship of politically sensitive images; AI’s integration into the criminal justice pipeline; the industrialisation of online information control; and the use of AI-enabled platforms by Chinese companies operating abroad. Examined together, those cases show how new AI capabilities are being embedded across domains that strengthen the CCP’s ability to shape information, behaviour and economic outcomes at home and overseas.
Because China’s AI ecosystem is evolving rapidly and unevenly across sectors, we have focused on domains where significant changes took place between 2023 and 2025, where new evidence became available, or where human rights risks accelerated. Those areas do not represent the full range of AI applications in China but are the most revealing of how the CCP is integrating AI technologies into its political control apparatus.
News article.