
The KOSA Internet Censorship Bill Just Passed The Senate—It's Our Last Chance To Stop It

The Senate just passed a bill that will let the federal and state governments investigate and sue websites that they claim cause kids mental distress. It’s a terrible idea to let politicians and bureaucrats decide what people should read and view online, but the Senate passed KOSA on a 91-3 vote.   

TAKE ACTION

Don't let Congress censor the internet

Bill proponents have focused on some truly tragic stories of loss, and then tied these tragedies to the internet. But anxiety, eating disorders, drug abuse, gambling, tobacco and alcohol use by minors, and the host of other ills that KOSA purports to address all existed well before the internet.

The Senate vote means that the House could take up and vote on this bill at any time. The House could also choose to debate its own, similarly flawed, version of KOSA. Several members of the House have expressed concerns about the bill. 

The members of Congress who vote for this bill should remember—they do not, and will not, control who will be in charge of punishing bad internet speech. The Federal Trade Commission,  majority-controlled by the President’s party, will be able to decide what kind of content “harms” minors, then investigate or file lawsuits against websites that host that content. 

Politicians in both parties have sought to control various types of internet content. One bill sponsor has said that widely used educational materials that teach about the history of racism in the U.S. cause depression in kids. Kids speaking out about mental health challenges or trying to help friends with addiction are likely to be treated the same as those promoting addictive or self-harming behaviors, and will be kicked offline. Minors engaging in activism or even discussing the news could be shut down, since the grounds for suing websites expand to conditions like “anxiety.” 

KOSA will lead to people who make online content about sex education, and LGBTQ+ identity and health, being persecuted and shut down as well. Views on how, or if, these subjects should be broached vary widely across U.S. communities. All it will take is one member of the Federal Trade Commission seeking to score political points, or a state attorney general seeking to ensure re-election, to start going after the online speech his or her constituents don’t like. 

All of these speech burdens will affect adults, too. Adults simply won’t find the content that was mass-deleted in the name of avoiding KOSA-inspired lawsuits; and we’ll all be burdened by websites and apps that install ID checks, age gates, and invasive (and poorly functioning) software content filters. 

The vast majority of speech that KOSA affects is constitutionally protected in the U.S., which is why there is a long list of reasons that KOSA is unconstitutional. Unfortunately, the lawmakers voting for this bill have hand-waved away those concerns. They’ve also blown off the voices of millions of young people who will have their free expression constricted by this bill, including the thousands who spoke to EFF directly about their concerns and fears around KOSA. 

We can’t rely solely on lawsuits and courts to protect us from the growing wave of anti-speech internet legislation, with KOSA at its forefront. We need to let the people making the laws know that the public is becoming aware of their censorship plans—and won’t stand for them.

TAKE ACTION

Our Freedom Of Speech Doesn't End Online

Victory! EFF Supporters Beat USPTO Proposal To Wreck Patent Reviews

The U.S. patent system is broken, particularly when it comes to software patents. At EFF, we’ve been fighting hard for changes that make the system more sensible. Last month, we got a big victory when we defeated a set of rules that would have mangled one of the U.S. Patent and Trademark Office (USPTO)’s most effective systems for kicking out bad patents. 

In 2012, recognizing the entrenched problem of a patent office that spewed out tens of thousands of ridiculous patents every year, Congress created a new system to review patents called “inter partes reviews,” or IPRs. While far from perfect, IPRs have resulted in cancellation of thousands of patent claims that never should have been issued in the first place. 

At EFF, we used the IPR process to crowd-fund a challenge to the Personal Audio “podcasting patent” that tried to extract patent royalty payments from U.S. podcasters. We won that proceeding and our victory was confirmed on appeal.

It’s no surprise that big patent owners and patent trolls have been trying to wreck the IPR system for years. They’ve tried, and failed, to get federal courts to dismantle IPRs. They’ve tried, and failed, to push legislation that would break the IPR system. And last year, they found a new way to attack IPRs—by convincing the USPTO to propose a set of rules that would have sharply limited the public’s right to challenge bad patents. 

That’s when EFF and our supporters knew we had to fight back. Nearly one thousand EFF supporters filed comments with the USPTO using our suggested language, and hundreds more of you wrote your own comments. 

Today, we say thank you to everyone who took the time to speak out. Your voice does matter. In fact, the USPTO withdrew all three of the terrible proposals that we focused on. 

Our Victory to Keep Public Access To Patent Challenges 

The original rules would have greatly expanded what are called “discretionary denials,” enabling judges at the USPTO to throw out an IPR petition without adequately considering its merits. While we would like to see even fewer discretionary denials, defeating the proposed limitations on patent challenges is a significant win.

First, the original rules would have stopped “certain for-profit entities” from using the IPR system altogether. While EFF is a non-profit, for-profit companies can and should be allowed to play a role in getting wrongly granted patents out of the system. Membership-based patent defense organizations like RPX or Unified Patents can allow small companies to band together and limit their costs while defending themselves against invalid patents. And non-profits like the Linux Foundation, who joined us in fighting against these wrongheaded proposed rules, can work together with professional patent defense groups to file more IPRs. 

EFF and our supporters wrote in opposition to this rule change—and it’s out. 

Second, the original rules would have exempted “micro and small entities” from patent reviews altogether. This exemption would have applied to many of the types of companies we call “patent trolls”—that is, companies whose business is simply demanding license fees for patents, rather than offering actual products or services. Those companies, specially designed to threaten litigation, would have easily qualified as “small entities” and avoided having their patents challenged. Patent trolls, which bully real small companies and software developers into paying unwarranted settlement fees, aren’t the kind of “small business” that should be getting special exemptions from patent review. 

EFF and our supporters opposed this exemption, and it’s out of the final rulemaking. 

Third, last year’s proposal would have allowed for IPR petitions to be kicked out if they had a “parallel proceeding”—in other words, a similar patent dispute—in district court. This was a wholly improper reason to not consider IPRs, especially since district court evidence rules are different than those in place for an IPR. 

EFF and our supporters opposed these new limitations, and they’re out. 

While the new rules aren’t perfect, they’re greatly improved. We would still prefer more IPRs rather than fewer, and don’t want to see IPRs that otherwise meet the rules get kicked out of the review process. But even there, the revised rules include big improvements. For instance, they allow for separate briefing of discretionary denials, so that people and companies seeking IPR review can keep their focus on the merits of their petition. 


Now The EU Council Should Finally Understand: No One Wants “Chat Control”

The EU Council has now gone through a fourth presidency term without passing its controversial message-scanning proposal. The just-concluded Belgian Presidency failed to broker a deal to push forward this regulation, which has now been debated in the EU for more than two years. 

For all those who have reached out to sign the “Don’t Scan Me” petition, thank you—your voice is being heard. News reports indicate the sponsors of this flawed proposal withdrew it because they couldn’t get a majority of member states to support it. 

Now, it’s time to stop attempting to compromise encryption in the name of public safety. EFF has opposed this legislation from the start. Today, we’ve published a statement, along with EU civil society groups, explaining why this flawed proposal should be withdrawn.  

The scanning proposal would create “detection orders” that allow for messages, files, and photos from hundreds of millions of users around the world to be compared to government databases of child abuse images. At some points during the debate, EU officials even suggested using AI to scan text conversations and predict who would engage in child abuse. That’s one of the reasons why some opponents have labeled the proposal “chat control.” 

There’s scant public support for government file-scanning systems that break encryption. Nor is there support in EU law. People who need secure communications the most—lawyers, journalists, human rights workers, political dissidents, and oppressed minorities—will be the most affected by such invasive systems. Another group harmed would be those whom the EU’s proposal claims to be helping—abused and at-risk children, who need to securely communicate with trusted adults in order to seek help. 

The right to have a private conversation, online or offline, is a bedrock human rights principle. When surveillance is used as an investigation technique, it must be targeted and coupled with strong judicial oversight. In the coming EU council presidency, which will be led by Hungary, leaders should drop this flawed message-scanning proposal and focus on law enforcement strategies that respect peoples’ privacy and security. 


California Lawmakers Should Reject Mandatory Internet ID Checks

California lawmakers are debating an ill-advised bill that would require internet users to show their ID in order to look at sexually explicit content. EFF has sent a letter to California legislators encouraging them to oppose Assembly Bill 3080, which would have the result of censoring the internet for all users. 

If you care about a free and open internet for all, and are a California resident, now would be a good time to contact your California Assemblymember and Senator and tell them you oppose A.B. 3080. 

Adults Have The Right To Free And Anonymous Internet Browsing

If A.B. 3080 passes, it would make it illegal to show websites with one-third or more “sexually explicit content” to minors. These “explicit” websites would join a list of products or services that can’t be legally sold to minors in California, including things like firearms, ammunition, tobacco, and e-cigarettes. 

But these things are not the same, and should not be treated the same under state or federal law. Adults have a First Amendment right to look for information online, including sexual content. One of the reasons EFF has opposed mandatory age verification is because there’s no way to check ID online just for minors without drastically harming the rights of adults to read, get information, and to speak and browse online anonymously. 

As EFF explained in a recent amicus brief on the issue, collecting ID online is fundamentally different, and more dangerous, than in-person ID checks in the physical world. Online ID checks are not just a momentary display; they require adults “to upload data-rich, government-issued identifying documents to either the website or a third-party verifier” and create a “potentially lasting record” of their visit to the establishment. 

The more information a website collects about visitors, the more chances there are for such data to get into the hands of a criminal or other bad actor, a marketing company, or someone who has filed a subpoena for it. So-called “anonymized” data can be reassembled, especially when it consists of data-rich government ID together with browsing data like IP addresses. 

Data breaches are a fact of life. Once governments insist on creating these ID logs for visiting websites with sexual content, those data breaches will become more dangerous. 

This Bill Mandates ID Checks For A Wide Range Of Content 

The bar is set low in this bill. It’s far from clear what websites prosecutors will consider to have one-third content that’s not appropriate for minors, as that can vary widely by community and even family standards. The bill will surely rope in general-use websites that allow some explicit content. A sex education website for high-school seniors, for instance, could be considered “offensive” and lacking in educational value for young minors. 

Social media sites, online message forums, and even email lists may have some portion of content that isn’t appropriate for younger minors, but also a large amount of general-interest content. A bill like California’s, which requires ID checks for any site with 33% content that prosecutors deem explicit, is similar to having Netflix require ID checks at login, whether a user wants to watch a G-rated movie or an R-rated one. 

Adults’ Right To View Websites Of Their Choice Is Settled Law 

U.S. courts have already weighed in numerous times on government efforts to age-gate content, including sexual content. In Reno v. ACLU, the Supreme Court struck down the indecency provisions of the Communications Decency Act, a 1996 law that was intended to keep “obscene or indecent” material away from minors. 

The high court again considered the issue in 2004 in Ashcroft v. ACLU, when it found that a federal law of that era, which sought to impose age-verification requirements on sexual online content, was likely unconstitutional. 

Other States Will Follow 

In the past year, several other state legislatures have passed similar unwise and unconstitutional “online ID check” laws. They are being subject to legal challenges now working their way through courts, including a Texas age verification law that EFF has asked the Supreme Court to look at. 

Elected officials in many other states, however, wisely refused to enact mandatory online ID laws, including Minnesota, Illinois, and Wisconsin. In April, Arizona’s governor vetoed a mandatory ID-check bill that was passed along partisan lines in her state, stating that the bill “goes against settled case law” and insisting any future proposal must be bipartisan and also “work within the bounds of the First Amendment.” 

California is not only the largest state, it is the home of many of the nation’s largest creative industries. It has also been a leader in online privacy law. If California passes A.B. 3080, it will be a green light to other states to pass online ID-checking laws that are even worse. 

Tennessee, for instance, recently passed a mandatory ID bill that includes felony penalties for anyone who “publishes or distributes” a website with one-third adult content. Tennessee’s fiscal review committee estimated that the state will incarcerate one person per year under this law, and has budgeted accordingly. 

California lawmakers have a chance to restore some sanity to our national conversation about how to protect minors online. Mandatory ID checks, and fines or incarceration for those who fail to use them, are not the answer. 


EU Council Presidency’s Last-Ditch Effort For Mass Scanning Must Be Rejected 

As the current leadership of the EU Council enters its final weeks, it is debating a dangerous proposal that could lead to scanning the private files of billions of people. 

EFF strongly opposes this proposal, put forward by the Belgian Presidency at the EU Council, which represents the governments of EU member states. Together with European Digital Rights (EDRi) and other groups that defend encryption, we have sent an open letter to the EU Council explaining the dangers of the proposal. The letter asks Ministers in the Council of the EU to reject all proposals that are inconsistent with end-to-end encryption, including surveillance technologies like client-side scanning. 

The Belgian proposal was debated behind closed doors, and civil society groups have only recently been able to even evaluate and discuss the proposal after it was leaked to the press.


If the proposal is adopted, it would represent a significant step backwards. Since 2022, the EU has been debating a file-scanning regulation that would eviscerate end-to-end encryption. Realizing that this system of client-side scanning, which some have called “chat control,” would violate the human rights of EU residents, a key European Parliament committee agreed in November to amendments that would protect end-to-end encryption. 

How We Got Here

EFF’s advocacy has always defended the right to have a private conversation online, and the technology that can enable that: end-to-end encryption. That’s why, since 2022, we have opposed the efforts by some EU officials to put a backdoor into encrypted communications, in the name of protecting children online. 

TAKE ACTION

SIGN THE PETITION: STOP SCANNING ME!

Without major changes, the child protection proposal would have been a disaster for privacy and security online. In November, we won a victory when the EU Parliament’s civil liberties committee agreed to make big changes to the proposal that would make it clear that states could not engage in mass scanning of files, photos, and messages in the name of fighting crime. 

The Belgian proposal, which EFF has reviewed, specifies that online services would be forced to install software so that child abuse material “should remain detectable in all interpersonal communications services.” To do this, the online services must apply “vetted technology”—in other words, government-approved software—that would allow law enforcement to scan the photos, messages and files of any user. 

The proposal actually goes on to suggest that users should be asked to “give explicit consent” for this invasion of privacy. Users who don’t agree to the scanning will be forbidden from sharing images or links. The idea of whitewashing mass surveillance with a government-approved “click-through” agreement, and banning users from basic internet functionality if they don’t agree, sounds like a dystopian novel—but it’s being seriously debated. 

We reject mass-scanning as a means of public safety. Phones and laptops must work for the users who own them, not act as “bugs in our pockets” in the service of governments, foreign or domestic. Government eavesdropping in the name of crime-fighting must always be targeted, narrowly limited, and subject to judicial oversight. 

The Belgian Presidency’s proposal is the latest in a long line of attempts by governments to evade this basic human rights concept. As its details become more widely known, this colossally unpopular spying idea will be rejected not just by EFF and other NGOs, but by voting publics in the EU and beyond. 

Sunsetting Section 230 Will Hurt Internet Users, Not Big Tech 

As Congress appears ready to gut one of the internet’s most important laws for protecting free speech, it is ignoring how that law protects and benefits millions of Americans’ ability to speak online every day.  

The House Energy and Commerce Committee is holding a hearing on Wednesday on a bill that would end Section 230 (47 U.S.C. § 230) in 18 months. The authors of the bill argue that setting a deadline to either change or eliminate Section 230 will force the Big Tech online platforms to the bargaining table to create a new regime of intermediary liability. 

Take Action

Ending Section 230 Will Make Big Tech Monopolies Worse

As EFF has said for years, Section 230 is essential to protecting individuals’ ability to speak, organize, and create online. 

Congress knew exactly what Section 230 would do – that it would lay the groundwork for speech of all kinds across the internet, on websites both small and large. And that’s exactly what has happened.  

Section 230 isn’t in conflict with American values. It upholds them in the digital world. People are able to find and create their own communities, and moderate them as they see fit. People and companies are responsible for their own speech, but (with narrow exceptions) not the speech of others. 

The law is not a shield for Big Tech. Critically, the law benefits the millions of users who don’t have the resources to build and host their own blogs, email services, or social media sites, and instead rely on services to host that speech. Section 230 also benefits thousands of small online services that host speech. Those people are being shut out as the bill sponsors pursue a dangerously misguided policy.  

If Big Tech is at the table in any future discussion of what rules should govern internet speech, EFF has no confidence that the result will protect and benefit internet users, as Section 230 does currently. If Congress is serious about rewriting the internet’s speech rules, it needs to abandon this bill and spend time listening to the small services and everyday users who would be harmed should Section 230 be repealed.  

Section 230 Protects Everyday Internet Users 

The bill introduced by House Energy & Commerce Chair Cathy McMorris Rodgers (R-WA) and Ranking Member Frank Pallone (D-NJ) is based on a series of mistaken assumptions and fundamental misunderstandings about Section 230. Mike Masnick at TechDirt has already explained many of the flawed premises and factual errors that the co-sponsors have made. 

We won’t repeat the many errors that Masnick identifies. Instead, we want to focus on what we see as a glaring omission in the co-sponsor’s argument: how central Section 230 is to ensuring that every person can speak online.   

Let’s start with the text of Section 230. Importantly, the law protects both online services and users. It says that “no provider or user shall be treated as the publisher” of content created by another. That's in clear agreement with most Americans’ belief that people should be held responsible for their own speech—not that of other people.   

Section 230 protects individual bloggers, anyone who forwards an email, and social media users who have ever reshared or retweeted another person’s content online. Section 230 also protects individual moderators who might delete or otherwise curate others’ online content, along with anyone who provides web hosting services. 

As EFF has explained, online speech is frequently targeted with meritless lawsuits. Big Tech can afford to fight these lawsuits without Section 230. Everyday internet users, community forums, and small businesses cannot. Engine has estimated that without Section 230, many startups and small services would be inundated with costly litigation that could drive them offline. 

Deleting Section 230 Will Create A Field Day For The Internet’s Worst Users  

The co-sponsors say that too many websites and apps have “refused” to go after “predators, drug dealers, sex traffickers, extortioners and cyberbullies,” and imagine that removing Section 230 will somehow force these services to better moderate user-generated content on their sites.  

Nothing could be further from the truth. If lawmakers are legitimately motivated to help online services root out unlawful activity and terrible content appearing online, the last thing they should do is eliminate Section 230. The current law strongly incentivizes websites and apps, both large and small, to kick off their worst-behaving users, to remove offensive content, and in cases of illegal behavior, work with law enforcement to hold those users responsible. 

Take Action

Tell Congress: Ending Section 230 Will Hurt Users

If Congress deletes Section 230, the pre-digital legal rules around distributing content would kick in. That law strongly discourages services from moderating or even knowing about user-generated content. This is because the more a service moderates user content, the more likely it is to be held liable for that content. Under that legal regime, online services will have a huge incentive to just not moderate and not look for bad behavior. Taking the sponsors of the bill at their word, this would result in the exact opposite of their goal of protecting children and adults from harmful content online.  

Congress: Don't Let Anyone Own The Law

We should all have the freedom to read, share, and comment on the laws we must live by. But yesterday, the House Judiciary Committee voted 19-4 to move forward the PRO Codes Act (H.R. 1631), a bill that would limit those rights in a critical area. 

TAKE ACTION

Tell Congress To Reject The Pro Codes Act

A few well-resourced private organizations have made a business of charging money for access to building and safety codes, even when those codes have been incorporated into law. 

These organizations convene volunteers to develop model standards, encourage regulators to make those standards into mandatory laws, and then sell copies of those laws to the people (and city and state governments) that have to follow and enforce them.

They’ve claimed it’s their copyrighted material. But court after court has said that you can’t use copyright in this way—no one “owns” the law. The Pro Codes Act undermines that rule and the public interest, changing the law to state that the standards organizations that write these rules “shall retain” copyright in them, as long as the rules are made “publicly accessible” online. 

That’s not nearly good enough. These organizations already have so-called online reading rooms that aren’t searchable, aren’t accessible to print-disabled people, and condition your ability to read mandated codes on agreeing to onerous terms of use, among many other problems. That’s why the Association of Research Libraries sent a letter to Congress last week (supported by EFF, disability rights groups, and many others) explaining how the Pro Codes Act would trade away our right to truly understand and educate our communities about the law for cramped public access to it. Congress must not let well-positioned industry associations abuse copyright to control how you access, use, and share the law. Now that this bill has passed committee, we urgently need your help—tell Congress to reject the Pro Codes Act.

TAKE ACTION

TELL CONGRESS: No one owns the law

EFF Seeks Greater Public Access to Patent Lawsuit Filed in Texas

You’re not supposed to be able to litigate in secret in the U.S. That’s especially true in a patent case dealing with technology that most internet users rely on every day.

 Unfortunately, that’s exactly what’s happening in a case called Entropic Communications, LLC v. Charter Communications, Inc. The parties have made so much of their dispute secret that it is hard to tell how the patents owned by Entropic might affect the Data Over Cable Service Interface Specifications (DOCSIS) standard, a key technical standard that ensures cable customers can access the internet.

In Entropic, both sides are experienced litigants who should know that this type of sealing is improper. Unfortunately, overbroad secrecy is common in patent litigation, particularly in cases filed in the U.S. District Court for the Eastern District of Texas.

EFF has sought to ensure public access to lawsuits in this district for years. In 2016, EFF intervened in another patent case in this very district, arguing that the heavy sealing by a patent owner called Blue Spike violated the public’s First Amendment and common law rights. A judge ordered the case unsealed.

As Entropic shows, however, parties still believe they can shut down the public’s access to presumptively public legal disputes. This secrecy has to stop. That’s why EFF, represented by the Science, Health & Information Clinic at Columbia Law School, filed a motion today seeking to intervene in the case and unseal a variety of legal briefs and evidence submitted in the case. EFF’s motion argues that the legal issues in the case and their potential implications for the DOCSIS standard are a matter of public concern and asks the district court judge hearing the case to provide greater public access.

Protective Orders Cannot Override The Public’s First Amendment Rights

As EFF’s motion describes, the parties appear to have agreed to keep much of their filings secret via what is known as a protective order. These court orders are common in litigation and prevent the parties from disclosing information that they obtain from one another during the fact-gathering phase of a case. Importantly, protective orders set the rules for information exchanged between the parties, not what is filed on a public court docket.

The parties in Entropic, however, are claiming that the protective order permits them to keep secret both legal arguments made in briefs filed with the court as well as evidence submitted with those filings. EFF’s motion argues that this contention is incorrect as a matter of law because the parties cannot use their agreement to abrogate the public’s First Amendment and common law rights to access court records. More generally, relying on protective orders to limit public access is problematic because parties in litigation often have little interest or incentive to make their filings public.

Unfortunately, parties in patent litigation too often seek to seal a variety of information that should be public. EFF continues to push back on these claims. In addition to our work in Texas, we have also intervened in a California patent case, where we also won an important transparency ruling. The court in that case prevented Uniloc, a company that had filed hundreds of patent lawsuits, from keeping the public in the dark as to its licensing activities.

That is why part of EFF’s motion asks the court to clarify that parties litigating in the Texas district court cannot rely on a protective order for secrecy and that they must instead seek permission from the court and justify any claim that material should be filed under seal.

On top of clarifying that the parties’ protective orders cannot frustrate the public’s right to access federal court records, we hope the motion in Entropic helps shed light on the claims and defenses at issue in this case, which are themselves a matter of public concern. The DOCSIS standard is used in virtually all cable internet modems around the world, so the claims made by Entropic may have broader consequences for anyone who connects to the internet via a cable modem.

It’s also impossible to tell if Entropic might want to sue more cable modem makers. So far, Entropic has sued five big cable modem vendors—Charter, Cox, Comcast, DISH TV, and DirecTV—in more than a dozen separate cases. EFF is hopeful that the records will shed light on how broadly Entropic believes its patents can reach cable modem technology.

EFF is extremely grateful that Columbia Law School’s Science, Health & Information Clinic could represent us in this case. We especially thank the student attorneys who worked on the filing, including Sean Hong, Gloria Yi, Hiba Ismail, and Stephanie Lim, and the clinic’s director, Christopher Morten.
