
Security, Surveillance, and Government Overreach – the United States Set the Path but Canada Shouldn’t Follow It

The Canadian House of Commons is currently considering Bill C-26, which would make sweeping amendments to the country’s Telecommunications Act that would expand its Minister of Industry’s power over telecommunication service providers. It’s designed to accomplish a laudable and challenging goal: ensure that government and industry partners efficiently and effectively work together to strengthen Canada’s network security in the face of repeated hacking attacks.


As researchers and civil society organizations have noted, however, the legislation contains vague and overbroad language that may invite abuse and pressure on ISPs to do the government’s bidding at the expense of Canadian privacy rights. It would vest substantial authority in Canadian executive branch officials to (in the words of C-26’s summary) “direct telecommunications service providers to do anything, or refrain from doing anything, that is necessary to secure the Canadian telecommunications system.” That could include ordering telecommunications companies to install backdoors inside encrypted elements in Canada’s networks. Safeguards to protect privacy and civil rights are few; C-26’s only express limit is that Canadian officials cannot order service providers to intercept private or radio-based telephone communications.

Unfortunately, we in the United States know all too well what can happen when government officials assert broad discretionary power over telecommunications networks. For over 20 years, the U.S. government has deputized internet service providers and systems to surveil Americans and their correspondents, without meaningful judicial oversight. These legal authorities and details of the surveillance have varied, but, in essence, national security law has allowed the U.S. government to vacuum up digital communications so long as the surveillance is directed at foreigners currently located outside the United States and doesn’t intentionally target Americans. Once collected, the FBI can search through this massive database of information by “querying” the communications of specific individuals. In 2021 alone, the FBI conducted up to 3.4 million warrantless searches to find Americans’ communications.

Congress has attempted to add in additional safeguards over the years, to little avail. In 2023, for example, the Federal Bureau of Investigation (FBI) released internal documents used to guide agency personnel on how to search the massive databases of information they collect. Despite reassurances from the intelligence community about its “culture of compliance,” these documents reflect little interest in protecting privacy or civil liberties. At the same time, the NSA and domestic law enforcement authorities have been seeking to undermine the encryption tools and processes on which we all rely to protect our privacy and security.

C-26 is not identical to U.S. national security laws. But without adequate safeguards, it could open the door to similar practices and orders. What is worse, some of those orders could be secret, at the government’s discretion. In the U.S., that kind of secrecy has made it impossible for Americans to challenge mass surveillance in court. We’ve also seen companies presented with gag orders in connection with “national security letters” compelling them to hand over information. C-26 does allow for judicial review of non-secret orders, e.g. an order requiring an ISP to cut off an account-holder or website, if the subject of those orders believes they are unreasonable or ungrounded. But that review may include secret evidence that is kept from applicants and their counsel.

Canadian courts will decide whether a law authorizing secret orders and evidence is consistent with Canada’s legal tradition. But either way, the U.S. experience offers a cautionary tale of what can happen when a government grants itself broad powers to monitor and direct telecommunications networks, absent corresponding protections for human rights. In effect, the U.S. government has created, in the name of national security, a broad exception to the Constitution that allows the government to spy on all Americans and denies them any viable means of challenging that spying. We hope Canadians will refuse to allow their government to do the same in the name of “cybersecurity.”

No Country Should be Making Speech Rules for the World

9 May 2024 at 15:38

It’s a simple proposition: no single country should be able to restrict speech across the entire internet. Any other approach invites a swift race to the bottom for online expression, giving governments and courts in countries with the weakest speech protections carte blanche to edit the internet.

Unfortunately, governments, including democracies that care about the rule of law, too often lose sight of this simple proposition. That’s why EFF, represented by Johnson Winter Slattery, has moved to intervene in support of the legal challenge by X (formerly known as Twitter) to a global takedown order from Australia’s eSafety Commissioner. The Commissioner ordered X and Meta to take down a post with a video of a stabbing in a church. X complied by geo-blocking the post so Australian users couldn’t access it, but it declined to block it elsewhere. The Commissioner asked an Australian court to order a global takedown.

Our intervention calls the court’s attention to the important public interests at stake in this litigation, particularly for internet users who are not parties to the case but will nonetheless be affected by the precedent it sets. A ruling against X is effectively a declaration that an Australian court (or its eSafety Commissioner) can prevent internet users around the world from accessing something online, even if the law in their own country is quite different. In the United States, for example, the First Amendment guarantees that platforms generally have the right to decide what content they will host, and their users have a corollary right to receive it. 

We’ve seen this movie before. In Google v Equustek, a company used a trade secret claim to persuade a Canadian court to order Google to delete, from Google.ca and all other Google domains, including Google.com and Google.co.uk, search results linking to sites that allegedly sold infringing goods. Google appealed, but both the British Columbia Court of Appeal and the Supreme Court of Canada upheld the order. The following year, a U.S. court held the ruling couldn’t be enforced against Google US.

The Australian takedown order also ignores international human rights standards, restricting global access to information without considering less speech-intrusive alternatives. In other words: the Commissioner used a sledgehammer to crack a nut. 

If one court can impose speech-restrictive rules on the entire Internet—despite direct conflicts with the laws of a foreign jurisdiction as well as international human rights principles—the norms and expectations of all internet users are at risk. We’re glad X is fighting back, and we hope the judge will recognize the eSafety regulator’s demand for what it is—a big step toward unchecked global censorship—and refuse to let Australia set another dangerous precedent.


Congress Should Just Say No to NO FAKES

29 April 2024 at 16:21

There is a lot of anxiety around the use of generative artificial intelligence, some of it justified. But it seems like Congress thinks the highest priority is to protect celebrities – living or dead. Never fear, ghosts of the famous and infamous, the U.S. Senate is on it.

We’ve already explained the problems with the House’s approach, No AI FRAUD. The Senate’s version, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act, or NO FAKES Act, isn’t much better.

Under NO FAKES, any person has the right to sue anyone who has either made, or made available, their “digital replica.” A replica is broadly defined as “a newly-created, computer generated, electronic representation of the image, voice or visual likeness” of a person. The right applies to the person themselves; anyone who has a license to use their image, voice, or likeness; and their heirs for 70 years after the person dies. It’s retroactive, meaning the post-mortem right would apply immediately to the heirs of, say, Prince, Tom Petty, or Michael Jackson, not to mention your grandmother.

Boosters talk a good game about protecting performers and fans from AI scams, but NO FAKES seems more concerned about protecting their bottom line. It expressly describes the new right as a “property right,” which matters because federal intellectual property rights are excluded from Section 230 protections. If courts decide the replica right is a form of intellectual property, NO FAKES will give people the ability to threaten platforms and companies that host allegedly unlawful content, which tend to have deeper pockets than the actual users who create that content. This will incentivize platforms that host our expression to be proactive in removing anything that might be a “digital replica,” whether its use is legal expression or not. While the bill proposes a variety of exclusions for news, satire, biopics, criticism, etc. to limit the impact on free expression, interpreting and applying those exceptions is even more likely to make a lot of lawyers rich.

This “digital replica” right effectively federalizes—but does not preempt—state laws recognizing the right of publicity. Publicity rights are an offshoot of state privacy law that give a person the right to limit the public use of her name, likeness, or identity for commercial purposes, and a limited version of it makes sense. For example, if Frito-Lay uses AI to deliberately generate a voiceover for an advertisement that sounds like Taylor Swift, she should be able to challenge that use. The same should be true for you or me.

Trouble is, in several states the right of publicity has already expanded well beyond its original boundaries. It was once understood to be limited to a person’s name and likeness, but now it can mean just about anything that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny”) or even a cartoonish robot dressed like a celebrity. In some states, your heirs can invoke the right long after you are dead and, presumably, in no position to be embarrassed by any sordid commercial associations—or to care whether anyone believes you have actually endorsed a product from beyond the grave.

In other words, it’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games. As a result, the right of publicity reaches far beyond the realm of misleading advertisements and courts have struggled to develop appropriate limits.

NO FAKES leaves all of that in place and adds a new national layer on top, one that lasts for decades after the person replicated has died. It is entirely divorced from the incentive structure behind intellectual property rights like copyright and patents—presumably no one needs a replica right, much less a post-mortem one, to invest in their own image, voice, or likeness. Instead, it effectively creates a windfall for people with a commercially valuable recent ancestor, even if that value emerges long after they died.

What is worse, NO FAKES doesn’t offer much protection for those who need it most. People who don’t have much bargaining power may agree to broad licenses, not realizing the long-term risks. For example, as Jennifer Rothman has noted, NO FAKES could actually allow a music publisher who had licensed a performer’s “replica right” to sue that performer for using her own image. Savvy commercial players will build licenses into standard contracts, taking advantage of workers who lack bargaining power and leaving the right to linger as a trap only for unwary or small-time creators.

Although NO FAKES leaves the question of Section 230 protection open, it’s been expressly eliminated in the House version, and platforms for user-generated content are likely to over-censor any content that is, or might be, flagged as containing an unauthorized digital replica. At the very least, we expect to see the expansion of fundamentally flawed systems like Content ID that regularly flag lawful content as potentially illegal and chill new creativity that depends on major platforms to reach audiences. The various exceptions in the bill won’t mean much if you have to pay a lawyer to figure out if they apply to you, and then try to persuade a rightsholder to agree.

Performers and others are raising serious concerns. As policymakers look to address them, they must take care to be precise, careful, and practical. NO FAKES doesn’t reflect that care, and its sponsors should go back to the drawing board. 
