

Supreme Court Considers Liability for Twitter Not Removing Terrorist Content

Many of the arguments in Twitter v. Taamneh hinged on specific interpretations of the Anti-Terrorism Act.

Photo of Justice Neil Gorsuch in September 2019 by the LBJ Library, used with permission

WASHINGTON, February 22, 2023 — In the second of two back-to-back cases considering online intermediary liability, Supreme Court justices on Wednesday sought the precise definitions of two words — “substantial” and “knowingly” — in order to draw lines that could have major implications for the internet as a whole.

The oral arguments in Twitter v. Taamneh closely examined the text of the Anti-Terrorism Act, considering whether the social media platform contributed to a 2017 terrorist attack by hosting terrorist content and failing to remove ISIS-affiliated accounts — despite the absence of a direct link to the attack. The hearing followed Tuesday’s arguments in Gonzalez v. Google, a case stemming from similar facts but primarily focused on Section 230.

Many of Wednesday’s arguments hinged on specific interpretations of the ATA, which states that liability for injuries caused by international terrorism “may be asserted as to any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.”

Seth Waxman, the attorney representing Twitter, argued that Twitter should not be held liable unless it knew that it was substantially assisting the act of terrorism that injured the plaintiff.

“But [it’s] not enough to know that you’re providing substantial assistance to a group that does this kind of thing?” Justice Ketanji Brown Jackson asked.

“Of course not,” Waxman said.

Jackson was unconvinced, saying that she did not see a clear distinction.

Justice Amy Coney Barrett questioned whether the means of communication to individuals planning a terrorist attack would be considered “substantial assistance.” Waxman replied that it would depend on how significant and explicit the communications were.

Clashing interpretations of Anti-Terrorism Act left unresolved

At one point, Justice Neil Gorsuch suggested that Waxman was misreading the law by taking the act of terrorism as the object of the “aiding and abetting” clause, rather than the person who committed the act.

The latter reading would help Twitter, the justice said, because the plaintiff would then have to prove that the company aided a specific person, rather than an abstract occurrence.

However, Waxman doubled down on his original reading.

“Are you sure you want to do that?” Gorsuch asked, drawing laughs from the gallery.

Waxman also pushed back against assertions that he claimed were “combining silence or inaction with affirmative assistance.” If Twitter said that its platform should not be used to support terrorist groups or acts, Waxman argued, the company should not be held liable for any potential terrorist content, even if it did nothing at all to enforce that rule.

Justice Elena Kagan disagreed. “You’re helping by providing your service to those people with the explicit knowledge that those people are using it to advance terrorism,” she said.

Justices expressed concern over broad scope of potential liability

Unlike in the Gonzalez arguments, where the government largely supported increased platform liability, Deputy Solicitor General Edwin Kneedler defended Twitter, saying that holding the company liable could hinder “legitimate and important activities by businesses, charities and others.”

Several justices raised similar concerns about the decision’s potentially far-reaching impacts.

“If we’re not pinpointing cause and effect or proximate cause for specific things, and you’re focused on infrastructure or just the availability of these platforms, then it would seem that every terrorist act that uses this platform would also mean that Twitter is an aider and abettor in those instances,” Justice Clarence Thomas told Eric Schnapper, the attorney representing the plaintiffs.

Schnapper agreed that this would be the case, but proposed setting reasonable boundaries around liability by using a standard of “remoteness in time, weighed together with the volume of activity.”

Justice Samuel Alito proposed a scenario in which a police officer tells phone companies, gas stations, restaurants and other businesses to stop serving individuals who are broadly suspected of committing a crime. Would the businesses have to comply, Alito questioned, to avoid liability for aiding and abetting?

Schnapper did not answer directly. “That’s a difficult question,” he said. “But clearly, at one end of the spectrum… If you provide a gun to someone who you know is a murderer, I think you could be held liable for aiding and abetting.”

Reporter Em McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for the student newspaper. In addition to agency and freelance marketing experience, she has reported extensively on Section 230, big tech, and rural broadband access. She is a founding board member of Code Open Sesame, an organization that teaches computer programming skills to underprivileged children.



12 Days of Broadband: State Regulations and Children’s Safety Online

Age verification for 12-year-olds (and older) on social media may become more common going forward.


Illustration by DALL-E

January 3, 2024 – A nationwide push to restrict teenagers’ online actions gained ground in 2023 as several states implemented stringent laws targeting social media use among youth.

In March, Utah ventured into uncharted territory when Republican Gov. Spencer Cox signed two measures, H.B. 311 and S.B. 152, mandating parental consent for all minors – 17 and under – before they can register for platforms like TikTok and Meta’s Instagram. For decades, the default standard of the 1998 Children’s Online Privacy Protection Act has been no restrictions on social media use by kids 13 and over.

The pair of bills, which do not go into effect until March 2024, require individuals under 18 to gain parental consent to open a social media account, bar minors from accessing social media platforms between the hours of 10:30 p.m. and 6:30 a.m., and grant parents full access to their child’s social media accounts.

In October, Utah announced a lawsuit against TikTok, alleging that the app deploys addictive features to hook young users. The lawsuit raises additional concerns regarding user data and privacy, alleging that TikTok’s China-based parent company, ByteDance, is legally bound to the Chinese Communist Party.

Arkansas, Montana may be following Utah

Arkansas took a similar step when Republican Gov. Sarah Huckabee Sanders signed Act 689, named the Social Media Safety Act, in April 2023. The act, which mandates age verification and parental consent for social media users under 18, was set to take effect on September 1.

However, on that very day, U.S. District Judge Timothy Brooks granted a preliminary injunction at the request of the tech industry trade group NetChoice Litigation Center, which contended that the new law infringed upon the First Amendment’s guarantee of free expression.

In May, Montana Gov. Greg Gianforte signed legislation banning TikTok on all devices statewide, threatening fines of up to $10,000 per violation for app providers like Google and Apple. But in late November, before the law’s January 1 effective date, U.S. District Judge Donald Molloy blocked the ban, ruling that the law exceeded state authority and violated the constitutional rights of users.

TikTok had filed a lawsuit against Montana shortly after the law was signed. Judge Molloy found merit in numerous arguments raised by TikTok, including that the platform has a number of safeguards in place surrounding user data.

Is age verification a First Amendment issue?

Consumer groups, including the American Civil Liberties Union, have objected that many of these bills extend beyond mandating age verification for minors: they require anyone seeking to use social media within those states to prove their age with legal documents.

The issue was much discussed at a Broadband Breakfast Live Online session in November 2023, where child safety advocate Donna Rice Hughes and Tony Allen, executive director of Age Check Certification Scheme, agreed that age verification systems were much more robust than a generation ago, when the Supreme Court struck down one such scheme. They disagreed with civil liberties groups including the Electronic Frontier Foundation.

Thirteen more states also enacted bans on installing the Chinese-owned TikTok app on government-issued devices, bringing to 34 the total number of states that have banned TikTok on government devices due to national security concerns. Additionally, more than 40 public universities have barred TikTok from their on-campus Wi-Fi and university-owned computers in response to these state-level bans.

See “The Twelve Days of Broadband” on Broadband Breakfast



Diverse Groups File Amicus Briefs Against Florida and Texas Social Media Laws

The Supreme Court will decide whether the social media laws violate the First Amendment.


Photo of Solicitor General Elizabeth Prelogar in June 2023 by Ryland West of ALM

WASHINGTON, December 8, 2023 – Industry, public interest, and conservative groups filed briefs with the Supreme Court this week arguing against Texas and Florida social media laws.

Drafted to combat what state legislators saw as the unfair treatment of right-wing content online, the 2021 laws would allow residents of those states to sue social media companies for suspending their accounts. Both have been blocked from going into effect after legal challenges from tech industry trade groups. The cases were initially separate, but the Supreme Court agreed in October to hear them together because they raise similar issues.

Industry groups argue the laws violate the First Amendment by forcing platforms to host speech they normally would not. The White House agrees – Solicitor General Elizabeth Prelogar asked the Court in August to take up the issue and strike down Texas’s law.

Consumer protection group Public Knowledge filed an amicus brief on Thursday in support of the tech trade groups, arguing the laws are unconstitutional and “driven by political animus.”

Center-right think tank TechFreedom filed a similar brief on Wednesday. 

“Only the state can ‘censor’ speech,” Corbin Barthold, the group’s director of appellate litigation, said in a statement. “And these states are doing so by trying to co-opt websites’ right to editorial control over the speech they disseminate.”

Both groups also pushed against the states’ move to treat social media platforms as ‘common carrier’ services, a part of both laws. The legal designation, typically applied to services like railroads or voice telephone calls, requires a carrier to serve the public at just rates without unreasonable discrimination.

The states’ move to designate social media platforms as common carriers would make it more difficult for them to refuse their service to users. But the designation, the groups argued, does not map cleanly onto the service social media provides, as the platforms make editorial decisions about content they transmit – through moderation and recommendation – in a way companies like voice providers do not.

In all, at least 40 similar briefs have been filed arguing against the laws, according to the Computer and Communications Industry Association, one of the parties in the case.

A set of 15 states with Republican-led legislatures and former President Donald Trump, who had multiple social media accounts suspended after the January 2021 attack on the Capitol, have filed amicus briefs in support of Texas and Florida. The Court is expected to hear oral arguments in the cases sometime in 2024.



Improved Age Verification Allows States to Consider Restricting Social Media

Constitutional issues that have led courts to strike down age verification laws are still present, said EFF.


WASHINGTON, November 20, 2023 — A Utah law requiring age verification for social media accounts is likely to face First Amendment lawsuits, experts warned during an online panel Wednesday hosted by Broadband Breakfast.

The law, set to take effect in March 2024, mandates that all social media users in Utah verify their age and imposes additional restrictions on minors’ accounts.

The Utah law raises the same constitutional issues that have led courts to strike down similar laws requiring age verification, said Aaron Mackey, free speech and transparency litigation director at the non-profit Electronic Frontier Foundation.

“What you have done is you have substantially burdened everyone’s First Amendment right to access information online that includes both adults and minors,” Mackey said. “You make no difference between the autonomy and First Amendment rights of older teens and young adults” versus young children, he said.

But Donna Rice Hughes, CEO of Enough is Enough, contended that age verification technology has successfully restricted minors’ access to pornography and could be applied to social media as well.

“Utah was one of the first states [to] have age verification technology in place to keep minor children under the age of 18 off of porn sites and it’s working,” she said.

Tony Allen, executive director of Age Check Certification Scheme, agreed that age verification systems had progressed considerably from a generation ago, when the Supreme Court, in 2002’s Ashcroft v. American Civil Liberties Union, struck down the 1998 Child Online Protection Act. The law had been designed to shield minors from indecent material, but the court ruled that age-verification methods often failed at that task.

Andrew Zack, policy manager at the Family Online Safety Institute, said that his organization welcomed interest in youth safety policies from Utah.

But Zack said, “We still have some concerns about the potential unintended consequences that come with this law,” particularly for teen privacy and expression rights.

Taylor Barkley, director of technology and innovation at the Center for Growth and Opportunity, highlighted the importance of understanding the specific problems the law aims to address. “Policy solutions have trade-offs,” he said, urging that solutions be tailored to the problems identified.

Panelists generally agreed that comprehensive data privacy legislation could help address social media concerns without facing the same First Amendment hurdles.

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, November 15, 2023 – Social Media for Kids in Utah

In March 2023, Utah became the first state to adopt laws regulating kids’ access to social media. This legislative stride was rapidly followed by several states, including Arkansas, Illinois, Louisiana, and Mississippi, with numerous others contemplating similar measures. For nearly two decades, social media platforms enjoyed unbridled growth and influence. The landscape is now changing as lawmakers become more active in shaping the future of digital communication. This transformation calls for a nuanced evaluation of the current state of social media in the United States, particularly in light of Utah’s pioneering role. Is age verification the right way to go? What are the broader implications of this regulatory trend for the future of digital communication and online privacy across the country?

Panelists

  • Andrew Zack, Policy Manager, Family Online Safety Institute
  • Donna Rice Hughes, President and CEO of Enough Is Enough
  • Taylor Barkley, Director of Technology and Innovation, Center for Growth and Opportunity
  • Tony Allen, Executive Director, Age Check Certification Scheme
  • Aaron Mackey, Free Speech and Transparency Litigation Director, Electronic Frontier Foundation
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources

Andrew Zack is the Policy Manager for the Family Online Safety Institute, leading policy and research work relating to online safety issues, laws, and regulations. He works with federal and state legislatures, relevant federal agencies, and industry leaders to develop and advance policies that promote safe and positive online experience for families. Andrew joined FOSI after five years in Senator Ed Markey’s office, where he worked primarily on education, child welfare, and disability policies. Andrew studied Government and Psychology at the College of William and Mary.

Donna Rice Hughes, President and CEO of Enough Is Enough is an internationally known Internet safety expert, author, speaker and producer. Her vision, expertise and advocacy helped to birth the Internet safety movement in America at the advent of the digital age. Since 1994, she has been a pioneering leader on the frontlines of U.S. efforts to make the internet safer for children and families by implementing a three-pronged strategy of the public, the technology industry and legal community sharing the responsibility to protect children online.

Taylor Barkley is the Director of Technology and Innovation at the Center for Growth and Opportunity where he manages the research agenda, strategy, and represents the technology and innovation portfolio. His primary research and expertise are at the intersection of culture, technology, and innovation. Prior roles in tech policy have been at Stand Together, the Competitive Enterprise Institute, and the Mercatus Center at George Mason University.

Tony Allen is a Chartered Trading Standards Practitioner and an acknowledged specialist in age-restricted sales law and practice. He is the Chair of the UK Government’s Expert Panel on Age Restrictions and Executive Director of a UKAS-accredited conformity assessment body specialising in age and identity assurance testing and certification. He is the Technical Editor of the current international standard for Age Assurance Systems.

Aaron Mackey is EFF’s Free Speech and Transparency Litigation Director. He helps lead cases advancing free speech, anonymity, and privacy online while also working to increase public access to government records. Before joining EFF in 2015, Aaron was in Washington, D.C., where he worked on speech, privacy, and freedom of information issues at the Reporters Committee for Freedom of the Press and the Institute for Public Representation at Georgetown Law.

Breakfast Media LLC CEO Drew Clark has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.
