Supreme Court Sides With Google and Twitter, Leaving Section 230 Untouched

WASHINGTON, May 18, 2023 — The Supreme Court on Thursday sided with Google and Twitter in a pair of high-profile cases involving intermediary liability for user-generated content, marking a significant victory for online platforms and other proponents of Section 230.

In Twitter v. Taamneh, the court ruled that Twitter could not be held liable for abetting terrorism by hosting terrorist content. The unanimous decision was written by Justice Clarence Thomas, who had previously signaled interest in curtailing liability protections for online platforms.

“Notably, the two justices who have been most critical of Section 230 and internet platforms said nothing of the sort here,” said Ari Cohn, free speech counsel at TechFreedom.

In a brief unsigned opinion remanding Gonzalez v. Google to the Ninth Circuit, the court declined to address Section 230, saying that the case “appears to state little, if any, plausible claim for relief.”

A wide range of tech industry associations and civil liberties advocates applauded the decision to leave Section 230 untouched.

“Free speech online lives to fight another day,” said Patrick Toomey, deputy director of the ACLU’s National Security Project. “Twitter and other apps are home to an immense amount of protected speech, and it would be devastating if those platforms resorted to censorship to avoid a deluge of lawsuits over their users’ posts.”

John Bergmayer, legal director at Public Knowledge, said that lawmakers should take note of the rulings as they continue to debate potential changes to Section 230.

“Over the past several years, we have seen repeated legislative proposals that would remove Section 230 protections for various platform activities, such as content moderation decisions,” Bergmayer said. “But those activities are fully protected by the First Amendment, and removing Section 230 would at most allow plaintiffs to waste time and money in court, before their inevitable loss.”

Instead of weakening liability protections, Bergmayer argued that Congress should focus on curtailing the power of large platforms by strengthening antitrust law and promoting competition.

“Many complaints about Section 230 and content moderation policies amount to concerns about competition and the outsize influence of major platforms,” he said.

The decision was also celebrated by Sen. Ron Wyden, D-Ore., one of the statute’s original co-authors.

“Despite being unfairly maligned by political and corporate interests that have turned it into a punching bag for everything wrong with the internet, the law Representative [Chris] Cox and I wrote remains vitally important to allowing users to speak online,” Wyden said in a statement. “While tech companies still need to do far better at policing heinous content on their sites, gutting Section 230 is not the solution.”

However, other lawmakers expressed disappointment with the court’s decision, with some — including Rep. Cathy McMorris Rodgers, R-Wash., chair of the House Energy and Commerce Committee — saying that it “underscores the urgency for Congress to enact needed reforms to Section 230.”

White House Meets AI Leaders, FTC Claims Meta Violated Privacy Order, Graham Targets Section 230

May 5, 2023 — Vice President Kamala Harris and other senior officials on Thursday met with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI to discuss the risks associated with artificial intelligence technologies, following the administration’s announcement of $140 million in funding for national AI research.

President Joe Biden briefly stopped by the meeting, telling the tech leaders that “what you’re doing has enormous potential and enormous danger.”

Government officials emphasized the importance of responsible leadership and called on the CEOs to be more transparent about their AI systems with both policymakers and the general public.

“The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products,” Harris said in a statement after the meeting.

In addition to the new investment in AI research, the White House announced that the Office of Management and Budget would be releasing proposed policy guidance on government usage of AI systems for public comment.

The initiatives announced Thursday are “an important first step,” wrote Adam Conner, vice president of technology policy at the Center for American Progress. “But the White House can and should do more. It’s time for President Joe Biden to issue an executive order that requires federal agencies to implement the Blueprint for an AI Bill of Rights and take other key actions to address the challenges and opportunities of AI.”

FTC claims Facebook violated privacy order

The Federal Trade Commission on Wednesday proposed significant modifications to its 2020 privacy settlement with Facebook, accusing the company of violating children’s privacy protections and improperly sharing user data with third parties.

The suggested changes would include a blanket prohibition against monetizing the data of underage users and limits on the uses of facial recognition technology, among several other constraints.

“Facebook has repeatedly violated its privacy promises,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.”

Although the agency voted unanimously to issue the order, Commissioner Alvaro Bedoya expressed concerns about whether the changes exceeded the FTC’s limited order modification authority. “I look forward to hearing additional information and arguments and will consider these issues with an open mind,” he said.

Meta responded to the FTC’s action with a lengthy statement calling it a “political stunt” and outlining the changes that have been implemented since the original order.

“Let’s be clear about what the FTC is trying to do: usurp the authority of Congress to set industry-wide standards and instead single out one American company while allowing Chinese companies, like TikTok, to operate without constraint on American soil,” wrote Andy Stone, Meta’s director of policy communications, in a statement posted to Twitter.

Meta now has thirty days to respond to the proposed changes. “We will vigorously fight this action and expect to prevail,” Stone said.

Sen. Graham threatens to repeal Section 230 if tech lobby kills EARN IT Act

The Senate Judiciary Committee on Thursday unanimously approved the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act, known as the EARN IT Act, a controversial bill that would create new carveouts to Section 230 in an attempt to combat online child sexual abuse material.

But Sen. Lindsey Graham, R-S.C., the bill’s cosponsor and ranking member of the committee, expressed doubt about the legislation’s future, claiming that “the political and economic power of social media companies is overwhelming.”

“I have little hope that common-sense proposals like this will ever become law because of the lobbying power these companies have at their disposal,” he said in a statement on Thursday. “My next approach is going to be to sunset Section 230 liability protection for social media companies.”

If Congress fails to pass legislation regulating social media companies, Graham continued, “it’s time to open up the American courtrooms as a way to protect consumers.”

However, large tech companies are not the only critics of the EARN IT Act. The American Civil Liberties Union on Thursday urged Congress to reject the proposed legislation, alongside two other bills related to digital privacy.

“These bills purport to hold powerful companies accountable for their failure to protect children and other vulnerable communities from dangers on their services when, in reality, increasing censorship and weakening encryption would not only be ineffective at solving these concerns, it would in fact exacerbate them,” said Cody Venzke, ACLU senior policy counsel.

Narrowing Section 230 Could Destroy Smaller Platforms, Warns Nextdoor

WASHINGTON, April 4, 2023 — Narrowing Section 230 protections for online services could have significant economic repercussions, particularly for smaller platforms that rely on content curation as a business model, according to experts at a panel hosted by the Computer & Communications Industry Association Research Center on Tuesday.

“There’s really unintended consequences for the smaller players if you take a ‘one size fits all’ approach here,” said Laura Bisesto, global head of policy, privacy and regulatory compliance for Nextdoor.

Many small to mid-sized platforms operate on a business model that relies on content moderation, Bisesto explained. For example, Reddit hosts thousands of active forums that are each dedicated to a stated topic, and consumers join specific forums for the purpose of seeing content related to those topics.

Similarly, Bisesto claimed that Nextdoor’s proximity-based content curation is what makes the platform competitive.

“We want to make sure you’re seeing relevant, very hyper-local content that’s very timely as well,” she said. “It’s really important to us to be able to continue to use algorithms to provide useful content that’s relevant, and any narrowing of Section 230 could really impede that ability.”

Algorithmic organization is also crucial for large platforms that host a broad range of content, said Ginger Zhe Jin, a professor of economics at the University of Maryland. The sheer volume of content on platforms such as YouTube — which sees 500 hours of new video uploaded each minute — would make it “impossible for consumers to choose and consume without an algorithm to sort and list.”

Without Section 230, some platforms might choose to forgo the use of algorithms altogether, which Jin argued would “undermine the viability of the internet businesses themselves.”

The alternative would be for companies to broadly remove any content that could potentially generate controversy or be misinterpreted.

“Either way, we’re going to see maybe less content creation and less content consumption,” Jin said. “This would be a dire situation, in my opinion, and would reduce the economic benefits the internet has brought to many players.”

Who should be updating Section 230?

In February, the Section 230 debate finally reached the Supreme Court in a long-awaited case centered around intermediary liability. But some industry experts — and even multiple Supreme Court justices — have cast doubt on whether the court is the right venue for altering the foundational internet law.

Bisesto argued that the question should be left to Congress. “They drafted the law, and I think if it needs to be changed, they should be the ones to look at it,” she said.

However, she expressed skepticism about whether lawmakers would be able to reach a consensus, highlighting the “fundamental disagreement” between the general Republican aim of leaving more content up and Democratic aim of taking more content down.

If the Supreme Court refrains from major changes, “pressure will increase for Congress to do something as the 50 different states are passing different statutes on content moderation,” said Sarah Oh Lam, a senior fellow at the Technology Policy Institute.

Section 230 Shuts Down Conversation on First Amendment, Panel Hears

WASHINGTON, March 9, 2023 – Section 230 as it is written shuts down the conversation about the First Amendment, claimed experts in a debate at Broadband Breakfast’s Big Tech & Speech Summit on Thursday.

Matthew Bergman, founder of the Social Media Victims Law Center, suggested that Section 230 forecloses discussion of how to appropriately weigh the costs and benefits of granting big tech companies litigation immunity for the moderation decisions on their platforms.

We need to talk about what level of First Amendment protection is necessary in a new world of technology, said Bergman. This discussion happens primarily through an open litigation process, he said, which is not currently available to those who are harmed by these products.

Photo of Ron Yokubaitis of Texas.net, Ashley Johnson of Information Technology and Innovation Foundation, Emma Llanso of Center for Democracy and Technology, Matthew Bergman of Social Media Victims Law Center, and Chris Marchese of Netchoice (left to right)

All companies must exercise reasonable care, Bergman argued. Opening up litigation doesn’t mean that all claims are necessarily viable, only that the process should work itself out in the courts, he said.

Eliminating Section 230 could lead online services to overcorrect in moderating speech, which could suffocate social reform movements organized on those platforms, argued Ashley Johnson of the Information Technology and Innovation Foundation, a research institution.

Furthermore, the burden of litigation would fall disproportionately on the companies that have fewer resources to defend themselves, she continued.

Bergman responded, “if a social media platform is facing a lot of lawsuits because there are a lot of kids who have been hurt through the negligent design of that platform, why is that a bad thing?” People who are injured have the right by law to seek redress against the entity that caused that injury, Bergman said. 

Emma Llanso of the Center for Democracy and Technology suggested that platforms would fundamentally change the way they operate to avoid the threat of litigation if Section 230 were reformed or abolished, which could threaten their users’ freedom of speech.

It is necessary for the protection of the First Amendment that the internet consist of many platforms with different content moderation policies, ensuring that all people have a voice, she said.

In response, Bergman argued that there is a distinction between algorithms that suggest content users do not want to see – even content whose existence is unknown to the person seeking information – and ensuring that speech is not censored.

The question involves balancing the faulty design of a product against the protection of speech, and courts are where this balancing act should take place, said Bergman.

This comes days after legal experts urged Congress to amend the statute to specify that it applies only to free speech, rather than to the negligent design of product features that promote harmful speech. The discussion followed the Supreme Court’s February oral arguments over whether Section 230 grants Google immunity for recommending terrorist videos on its video platform YouTube.


Congress Should Amend Section 230, Senate Subcommittee Hears

WASHINGTON, March 8, 2023 – Legal experts at a Senate Judiciary Subcommittee on Privacy, Technology and the Law hearing on Wednesday urged Congress to amend Section 230 to specify that it applies only to free speech, rather than the promotion of misinformation.

Section 230 protects platforms from being treated as the publisher or speaker of information originating from a third party, thus shielding them from liability for users’ posts. Mary Anne Franks, professor of law at the University of Miami School of Law, argued that there is a difference between protecting free speech and protecting the harmful dissemination of information.

Hany Farid, professor at the University of California, Berkeley, argued that there should be a distinction between a negligently designed product feature and a core component of the platform’s business. For example, YouTube’s video recommendations are a product feature rather than an essential function, as they are designed solely to maximize advertising revenue by keeping users on the platform, he said.

YouTube claims that its video recommendation algorithm is unable to distinguish between two different videos. This, argued Farid, should be considered a negligently designed feature, as YouTube knew or reasonably should have known that it could lead to harm.

Section 230, said Farid, was written to immunize tech companies from defamation litigation, not to immunize them from any wrongdoing, including the negligent design of their features.

“At a minimum,” said Franks, returning the statute to its original intention “would require amending the statute to make clear that the law’s protections only apply to speech and to make clear that platforms that knowingly promote harmful content are ineligible for immunity.”

At the State of the Net conference earlier this month, Franks emphasized the “Good Samaritan” aspect of the law, claiming that it is supposed to “provide incentives for platforms to actually do the right thing.” Instead, the law does not incentivize platforms to moderate their content, she argued.

Jennifer Bennett of national litigation boutique Gupta Wessler suggested that Congress uphold what is known as the Henderson framework, which would hold a company liable if it materially contributes to what makes content unlawful, including the recommendation and dissemination of the content.

Unfortunately, lamented Eric Schnapper, professor of law at University of Washington School of Law, Section 230 has barred the right of Americans to get redress if they’ve been harmed by big tech. “Absolute immunity breeds absolute irresponsibility,” he said.

Sen. Richard Blumenthal, D-Conn., warned tech companies that “reform is coming” at the onset of the hearing.

This comes weeks after the Supreme Court heard oral arguments over whether Section 230 grants Google immunity for recommending terrorist videos on its video platform YouTube. The case saw industry dissension over whether Section 230 protects algorithmic recommendations. During arguments, Justice Brett Kavanaugh expressed skepticism of the claim that YouTube forfeited its protection by using recommendation algorithms.

Content Moderation, Section 230 and the Future of Online Speech

In the 27 years since the so-called “26 words that created the internet” became law, rapid technological developments and sharp partisan divides have fueled increasingly complex content moderation dilemmas.

Earlier this year, the Supreme Court tackled Section 230 for the first time through a pair of cases regarding platform liability for hosting and promoting terrorist content. In addition to the court’s ongoing deliberations, Section 230—which protects online intermediaries from liability for third-party content—has recently come under attack from Congress, the White House and multiple state legislatures.


Industry Experts Caution Against Extreme Politicization in Section 230 Debate

WASHINGTON, March 7, 2023 — Congress should reject the heavily politicized rhetoric surrounding Section 230 and instead consider incremental reforms that are narrowly targeted at specific problems, according to industry experts at State of the Net on Monday.

“What I really wish Congress would do, since 230 has become this political football, is put the football down for a second,” said Billy Easley, senior public policy lead at Reddit.


Instead of starting from Section 230, Easley suggested that Congress methodically identify specific problems and consider how each could best be addressed. For many issues, he claimed, there are “a slew of policy options” more effective than changing Section 230.

Much of the discussion about Section 230 is “intentionally being pitted into binaries,” said Yaël Eisenstat, head of the Anti-Defamation League’s Center for Technology and Society. In reality, she continued, many proposals exist somewhere between keeping Section 230 exactly as it is and throwing it out altogether.

Eisenstat expressed skepticism about the often-repeated claim that changing Section 230 will “break the internet.”

“Let’s be frank — the tobacco industry, the automobile industry, the oil and gas industry, the food industry also did not want to be regulated and claimed it would completely destroy them,” she said. “And guess what? They all still exist.”

Joel Thayer, president of the Digital Progress Institute, claimed that many arguments against Section 230 reform are “harkening back to a more libertarian view, which is ‘let’s not touch it because bad things can happen.’”

“I think that’s absurd,” he said. “I think even from a political standpoint, that’s just not the reality.”

Potential reforms should be targeted and consider unintended consequences

While Section 230 has performed “unbelievably well” for a law dating back to 1996, it should at least be “tweaked” to better reflect the present day, said Matt Perault, director of the Center on Technology Policy at the University of North Carolina.

But Perault acknowledged that certain proposed changes would create a significant compliance burden for smaller platforms, unlike large companies with “huge legal teams, huge policy teams, huge communications teams.”

Concerns about the impact of Section 230 reform on small businesses can be addressed by drawing distinct guidelines about which types of companies are included in any given measure, Thayer said.

Easley warned that certain proposals could lead to major unintended consequences. While acknowledging Republican concerns about “censorship” of conservative content on social media platforms, he argued that removing Section 230 protections was not the best way to address the issue — and might completely backfire.

“There’s going to be less speech in other areas,” Easley said. “We saw this with SESTA/FOSTA, we’ve seen this in other sorts of proposals as well, and I just really wish that Congress would keep that in mind.”

Thayer suggested that future legislative efforts start with increasing tech companies’ transparency, building off of the bipartisan momentum from the previous session of Congress.

Easley agreed, adding that increased access to data will allow lawmakers to more effectively target other areas of concern.

State of the Net Panelists Clash Over Section 230 Interpretations

WASHINGTON, March 6, 2023 — Experts at the State of the Net conference on Monday expressed a wide range of viewpoints about how Section 230 should be interpreted in the context of Gonzalez v. Google, an intermediary liability case recently argued before the Supreme Court.

If the justices want to understand Section 230’s original intent, NetChoice CEO Steve DelBianco said, they should turn to the law’s original co-authors — Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, now on the NetChoice board of directors. In January, Wyden and Cox filed an amicus brief urging the Supreme Court to uphold Section 230 of the Communications Decency Act.

Don’t miss the Big Tech & Speech Summit on Thursday, March 9 from 8:30 a.m. to 3:30 p.m. Broadband Breakfast is making a webinar of the summit available. Registrants and webinar participants receive two months’ complimentary membership in the Broadband Breakfast Club.

But Mary Anne Franks, professor at the University of Miami School of Law, argued that a modern-day interpretation of the law should be based on several factors other than the authors’ explanations, such as the statute’s actual wording and its legislative history. “The law does not have to be subject to revisionist or self-serving interests of interpretations after the fact,” she said.

Franks emphasized the “Good Samaritan” aspect of Section 230, claiming that the law is supposed to “provide incentives for platforms to actually do the right thing.”

Alex Abdo, litigation director at Columbia University’s Knight First Amendment Institute, said he was sympathetic to Franks’ concerns and agreed that tech companies are generally governed by financial motivations, rather than a dedication to free speech or the public interest. Not only can online platforms be exploited to cause harm, he said, they often amplify sensationalized and provocative speech by design.

However, Abdo maintained that Section 230 played a key role in protecting unpopular online speech — including content posted by human rights activists, government whistleblowers and dissidents — by making it less likely that social media platforms would feel the need to remove it.

DelBianco expressed measured optimism about the justices’ approach to Section 230, noting that Justice Clarence Thomas seemed to reject some of the algorithmic harm claims despite his previously expressed interest in altering Section 230. DelBianco also highlighted Justice Amy Coney Barrett’s line of questioning about whether an individual can be held liable for simply liking or retweeting content, calling it “one of the most surprising questions” of the oral arguments.

But despite their appreciation for certain aspects of the justices’ approach, multiple panelists agreed that changing Section 230 should be a careful and deliberate process, better suited to Congress than the courts. “I would much prefer a scalpel to a sledgehammer,” said Matt Wood, vice president of policy and general counsel at Free Press.

The Senate Judiciary Subcommittee on Privacy, Technology and the Law will hold a hearing on Wednesday to examine platform liability, focusing on Gonzalez.

Supreme Court Justices Express Caution About Entering Section 230 Debate

WASHINGTON, February 22, 2023 — Supreme Court justices expressed broad skepticism about removing liability protections for websites that automatically recommend user-generated content, marking a cautious start to a pair of long-awaited cases involving platform liability for terrorist content.

Gonzalez v. Google, argued on Tuesday, hinges on whether YouTube’s use of recommendation algorithms puts it outside the scope of Section 230, which generally provides platforms with immunity for third-party content.

A separate case involving terrorism and social media, Twitter v. Taamneh, was argued on Wednesday. Although the basic circumstances of the cases are similar — both brought against tech companies by the families of terrorist attack victims — the latter focuses on what constitutes “aiding and abetting” under the Anti-Terrorism Act.

Section 230 arguments central to Gonzalez

Section 230 protections are at the heart of Gonzalez. The provision, one of the few surviving components of the 1996 Communications Decency Act, is credited by many experts with facilitating the internet’s development and enabling its daily workings.

But the plaintiffs in Gonzalez argued that online platforms such as YouTube should be held accountable for actively promoting harmful content.

As oral arguments commenced, Justice Elena Kagan repeatedly raised concerns that weakening Section 230 protections could have a wider impact than intended. “Every time anybody looks at anything on the internet, there is an algorithm involved… everything involves ways of organizing and prioritizing material,” she said.

These organization methods are essential for making platforms user-friendly, argued Lisa Blatt, the attorney representing Google. “There are a billion hours of videos watched each day on YouTube, and 500 hours uploaded every minute,” she said.

Justice Brett Kavanaugh pointed to the inclusion of platforms that “pick, choose, analyze or digest content” in the statutory definition of covered entities. Claiming that YouTube forfeited Section 230 protections by using recommendation algorithms, Kavanaugh said, “would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of 230.”

Eric Schnapper, the attorney representing the plaintiffs, argued that the provision in question was only applicable to software providers and YouTube did not qualify.

Justices concerned about unintended impacts of weakening Section 230

Despite Schnapper’s interpretation of the statute’s intent, Kavanaugh maintained his concerns about altering it. “It seems that you continually want to focus on the precise issue that was going on in 1996, but… to pull back now from the interpretation that’s been in place would create a lot of economic dislocation, would really crash the digital economy,” he said.

Weakening Section 230 could also open the door to “a world of lawsuits,” Kagan predicted. “Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit,” she said, pointing to search engines and social media platforms as other services that could be impacted.

Deputy Solicitor General Malcolm Stewart, who primarily sided with the plaintiff, argued that even if such lawsuits were attempted, “they would not be suits that have much likelihood of prevailing.”

Justice Amy Coney Barrett noted that the text of Section 230 explicitly includes users of online platforms in addition to the platforms themselves. If the statute were interpreted as the plaintiffs urged, Barrett asked, could individual users be held liable for any content that they liked, reposted or otherwise engaged with?

“That’s content you’ve created,” Schnapper replied.

‘Confusion’ about the case and the court’s proper role

Throughout the hearing, several justices expressed confusion at the complexities of the case.

During Schnapper’s extended explanation of YouTube “thumbnails” — which he described as a “joint creation” because of the platform-provided URLs accompanying user-generated media — Justice Samuel Alito said he was “completely confused by whatever argument you’re making at the present time.”

At another point, Justice Ketanji Brown Jackson said she was “thoroughly confused” by the way that two different questions — whether Google could claim immunity under Section 230 and whether the company aided terrorism — were seemingly being conflated.

Just minutes later, after Stewart presented his argument on behalf of the Justice Department, Justice Clarence Thomas began his line of questioning with, “Well, I’m still confused.”

In addition to frequent references to confusion, multiple justices suggested that some aspects of the case might be better left to Congress.

“I don’t have to accept all of [Google’s] ‘the sky is falling’ stuff to accept… there is a lot of uncertainty about going the way you would have us go, in part just because of the difficulty of drawing lines in this area,” Kagan said. “Isn’t that something for Congress to do, not the court?”

Kavanaugh echoed those concerns, saying that the case would require “a very precise predictive judgment” and expressing uncertainty about whether the court could adequately consider the implications.

But Chief Justice John Roberts seemed equally hesitant to hand off the decision. “The amici suggest that if we wait for Congress to make that choice, the internet will be sunk,” he said.

Section 230 Interpretation Debate Heats Up Ahead of Landmark Supreme Court Case

WASHINGTON, January 25, 2023 — With less than a month to go before the Supreme Court hears a case that could dramatically alter internet platform liability protections, speakers at a Federalist Society webinar on Tuesday were sharply divided over the merits and proper interpretation of Section 230 of the Communications Decency Act.

Gonzalez v. Google, which will go before the Supreme Court on Feb. 21, asks if Section 230 protects Google from liability for hosting terrorist content — and promoting that content via algorithmic recommendations.

If the Supreme Court agrees that “Section 230 does not protect targeted algorithmic recommendations, I don’t see a lot of the current social media platforms and the way they operate surviving,” said Ashkhen Kazaryan, a senior fellow at Stand Together.

Joel Thayer, president of the Digital Progress Institute, argued that the bare text of Section 230(c)(1) does not include any mention of the “immunities” often attributed to the statute, echoing an argument made by several Republican members of Congress.

“All the statute says is that we cannot treat interactive computer service providers or users — in this case, Google’s YouTube — as the publisher or speaker of a third-party post, such as a YouTube video,” Thayer said. “That is all. Warped interpretations from courts… have drastically moved away from the text of the statute to find Section 230(c)(1) as providing broad immunity to civil actions.”

Kazaryan disagreed with this claim, noting that the original co-authors of Section 230 — Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, R-Calif. — have repeatedly said that Section 230 does provide immunity from civil liability under specific circumstances.

Wyden and Cox reiterated this point in a brief filed Thursday in support of Google, explaining that whether a platform is entitled to immunity under Section 230 relies on two prerequisite conditions. First, the platform must not be “responsible, in whole or in part, for the creation or development of” the content in question, as laid out in Section 230(f)(3). Second, the case must be seeking to treat the platform “as the publisher or speaker” of that content, per Section 230(c)(1).

The statute co-authors argued that Google satisfied these conditions and was therefore entitled to immunity, even if their recommendation algorithms made it easier for users to find and consume terrorist content. “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” they wrote.

Despite the support of Wyden and Cox, Randolph May, president of the Free State Foundation, predicted that the case was “not going to be a clean victory for Google.” And in addition to the upcoming Supreme Court cases, both Congress and President Joe Biden could potentially attempt to reform or repeal Section 230 in the near future, May added.

May advocated for substantial reforms to Section 230 that would narrow online platforms’ immunity. He also proposed that a new rule should rely on a “reasonable duty of care” that would both preserve the interests of online platforms and also recognize the harms that fall under their control.

To establish a good replacement for Section 230, policymakers must determine whether there is “a difference between exercising editorial control over content on the one hand, and engaging in conduct relating to the distribution of content on the other hand… and if so, how you would treat those differently in terms of establishing liability,” May said.

No matter the Supreme Court’s decision in Gonzalez v. Google, the discussion is already “shifting the Overton window on how we think about social media platforms,” Kazaryan said. “And we already see proposed regulation legislation on state and federal levels that addresses algorithms in many different ways and forms.”

Texas and Florida have already passed laws that would significantly limit social media platforms’ ability to moderate content, although both have been temporarily blocked pending litigation. Tech companies have asked the Supreme Court to take up the cases, arguing that the laws violate their First Amendment rights by forcing them to host certain speech.

Supreme Court Seeks Biden Administration’s Input on Texas and Florida Social Media Laws

WASHINGTON, January 24, 2023 — The Supreme Court on Monday asked for the Joe Biden administration’s input on a pair of state laws that would prevent social media platforms from moderating content based on viewpoint.

The Republican-backed laws in Texas and Florida both stem from allegations that tech companies are censoring conservative speech. The Texas law would restrict platforms with at least 50 million users from removing or demonetizing content based on “viewpoint.” The Florida law places significant restrictions on platforms’ ability to remove any content posted by members of certain groups, including politicians.

Two trade groups — NetChoice and the Computer & Communications Industry Association — jointly challenged both laws, meeting with mixed results in appeals courts. They, alongside many tech companies, argue that the laws violate platforms’ First Amendment right to decide what speech to host.

Tech companies also warn that the laws would force them to disseminate objectionable and even dangerous content. In an emergency application to block the Texas law from going into effect in May, the trade groups wrote that such content could include “Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”

The Supreme Court has not yet agreed to hear the cases, but multiple justices have commented on the importance of the issue.

In response to the emergency application in May, Justice Samuel Alito wrote that the case involved “issues of great importance that will plainly merit this Court’s review.” However, he disagreed with the court’s decision to block the law pending review, writing that “whether applicants are likely to succeed under existing law is quite unclear.”

Monday’s request asking Solicitor General Elizabeth Prelogar to weigh in on the cases allows the court to put off the decision for another few months.

“It is crucial that the Supreme Court ultimately resolve this matter: it would be a dangerous precedent to let government insert itself into the decisions private companies make on what material to publish or disseminate online,” CCIA President Matt Schruers said in a statement. “The First Amendment protects both the right to speak and the right not to be compelled to speak, and we should not underestimate the consequences of giving government control over online speech in a democracy.”

The Supreme Court is still scheduled to hear two other major content moderation cases next month, which will decide whether Google and Twitter can be held liable for terrorist content hosted on their respective platforms.

Google Defends Section 230 in Supreme Court Terror Case

WASHINGTON, January 13, 2023 – The Supreme Court could trigger a cascade of internet-altering effects that would encourage both the proliferation of offensive speech and the suppression of other speech, and create a “litigation minefield,” if it decides Google is liable for the results of terrorist attacks by entities publishing on its YouTube platform, the search engine company argued Thursday.

The high court will hear the case of an American family whose daughter, Nohemi Gonzalez, was killed in an ISIS terrorist attack in Paris in 2015. The family sued Google under the Anti-Terrorism Act, alleging that YouTube participated as a publisher of ISIS recruitment videos when it hosted them and its algorithm shared them on the video platform.

But in a brief to the court on Thursday, Google said it is not liable for content published by third parties on its website under Section 230 of the Communications Decency Act, and that deciding otherwise would effectively gut the platform protection provision and “upend the internet.”

Denying the provision’s protections for platforms “could have devastating spillover effects,” Google argued in the brief. “Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user. If plaintiffs could evade Section 230(c)(1) by targeting how websites sort content or trying to hold users liable for liking or sharing articles, the internet would devolve into a disorganized mess and a litigation minefield.”

It would also “perversely encourage both wide-ranging suppression of speech and the proliferation of more offensive speech,” it added in the brief. “Sites with the resources to take down objectionable content could become beholden to heckler’s vetoes, removing anything anyone found objectionable.

“Other sites, by contrast, could take the see-no-evil approach, disabling all filtering to avoid any inference of constructive knowledge of third-party content,” Google added. “Still other sites could vanish altogether.”

Google rejected the argument that recommendations by its algorithms convey an “implicit message,” arguing that in such a world, “any organized display [as algorithms do] of content ‘implicitly’ recommends that content and could be actionable.”

The Supreme Court is also hearing a similar case simultaneously in Twitter v. Taamneh.

Scrutiny of Section 230 has loomed large since former President Donald Trump was banned from social media platforms for allegedly inciting the Capitol Hill riots in January 2021. Trump and conservatives called for rules limiting that protection in light of the suspensions and bans, while Democrats have not shied away from introducing legislation limiting the provision when certain content continued to flourish on those platforms.

Supreme Court Justice Clarence Thomas early last year issued a statement calling for a reexamination of tech platform immunity protections following a Texas Supreme Court decision that said Facebook was shielded from liability in a trafficking case.

Meanwhile, startups and internet associations have argued for the preservation of the provision.

“These cases underscore how important it is that digital services have the resources and the legal certainty to deal with dangerous content online,” Matt Schruers, president of the Computer and Communications Industry Association, said in a statement when the Supreme Court decided in October to hear the Gonzalez case.

“Section 230 is critical to enabling the digital sector’s efforts to respond to extremist and violent rhetoric online,” he added, “and these cases illustrate why it is essential that those efforts continue.”

CES 2023: Changing Section 230 Would Jeopardize Startups

LAS VEGAS, January 6, 2023 – Removing Section 230’s protections for online platforms would expose small startups to crippling legal costs, said Kate Tummarello, executive director of Engine, a non-profit that advocates for startups, speaking on a Friday panel at the Consumer Electronics Show.

Section 230 of the Communications Decency Act, which became law in 1996, shields online platforms from civil liability for content posted by third-parties. While proponents say the provision is critical to the existence of platforms, public figures and policymakers on both right and left have, of late, advocated its repeal.

Tummarello argued that Section 230 allows young, resource-poor companies to combat lawsuits more efficiently, noting that the costs of full litigation could put a startup out of business. “Defending against a lawsuit over user content, even with 230 in place, still costs tens of thousands of dollars,” Tummarello said. She stated that even platforms whose actions are legally justified benefit from Section 230, since they could otherwise be subjected to and ruined by a frivolous lawsuit.

Section 230 will likely soon be subjected to judicial interpretation at the Supreme Court in a pair of cases, Gonzalez v. Google and Twitter v. Taamneh. Both cases question whether tech platforms are liable for hosting pro-terrorist third-party content.

Charlotte Slaiman, competition policy director at Public Knowledge, voiced concern over platforms’ content-moderation decisions that, she said, enable online misinformation and harassment. However, she argued that directly regulating content moderation is “fraught,” instead calling for “competition-based” reform that would provide alternative services for users.

Amid Big Tech Controversies, Section 230’s Future is Uncertain

From the 12 Days of Broadband:

The past year has seen many controversial decisions from big tech platforms, but 2022 might end up being the last year that such decisions are shielded by the liability protections of Section 230 of the Communications Decency Act.

Many actors are now calling for the statute’s repeal or reformulation. Conservative populists on the right argue that it enables social media giants to silence conservative speech. Progressives on the left believe it allows platforms to shirk responsibility for moderating hate speech and misinformation.


Of course, Section 230 still has defenders from across the political spectrum. Indeed, none of the many proposed bills for legislative change have garnered much traction. Furthermore, new Twitter CEO Elon Musk’s takeover has demonstrated the pitfalls of a pure “free speech” approach to content moderation: It took just days for his “comedy is legal again” declaration to turn into “tricking people is not OK” — during which time parody tweets reportedly cost the companies they impersonated billions in market value.

And despite Musk’s initially stated intention to allow all legally permissible content, he decided to suspend Ye (formerly Kanye West) from Twitter in December for tweeting a swastika graphic. Later that month, he took still bolder steps, blocking links to competitor platforms as well as suspending the accounts of several tech journalists and an account that tracked his private jet based on public flight data.

On a larger scale, Florida’s attorney general asked the Supreme Court to review a law that would limit online platforms’ ability to moderate content after an appeals court ruled that the law violated the First Amendment. A similar Texas law that forbids content moderation based on “viewpoint” is on hold pending an appeal to the Supreme Court. 

While the Court has not yet taken up those cases, it has agreed to hear two others related to Section 230: Gonzalez v. Google and Twitter v. Taamneh, both of which ask if tech companies can be held liable for terrorist content on their platforms.

Given the Court’s conservative majority, and the fact that at least one justice (Clarence Thomas) has openly argued that social media companies should be regulated as common carriers, Section 230’s 26-year reign might be coming to an end.

Tech Groups, Free Expression Advocates Support Twitter in Landmark Content Moderation Case

WASHINGTON, December 8, 2022 — Holding tech companies liable for the presence of terrorist content on their platforms risks substantially limiting their ability to effectively moderate content without overly restricting speech, according to several industry associations and civil rights organizations.

The Computer & Communications Industry Association, along with seven other tech associations, filed an amicus brief Tuesday emphasizing the vast amount of online content generated on a daily basis and the existing efforts of tech companies to remove harmful content.

A separate coalition of organizations, including the Electronic Frontier Foundation and the Center for Democracy & Technology, also filed an amicus brief.

Supreme Court to hear two social media cases next year

The briefs were filed in support of Twitter as the Supreme Court prepares to hear Twitter v. Taamneh in 2023, alongside the similar case Gonzalez v. Google. The cases, brought by relatives of ISIS attack victims, argue that social media platforms allow groups like ISIS to publish terrorist content, recruit new operatives and coordinate attacks.

Both cases were initially dismissed, but an appeals court in June 2021 overturned the Taamneh dismissal, holding that the case adequately asserted its claim that tech platforms could be held liable for aiding acts of terrorism. The Supreme Court will now decide whether an online service can be held liable for “knowingly” aiding terrorism if it could have taken more aggressive steps to prevent such use of its platform.

The Taamneh case hinges on the Anti-Terrorism Act, which says that liability for terrorist attacks can be placed on “any person who aids and abets, by knowingly providing substantial assistance.” The case alleges that Twitter did this by allowing terrorists to utilize its communications infrastructure while knowing that such use was occurring.

Gonzalez is more directly focused on Section 230, a provision under the Communications Decency Act that shields platforms from liability for the content their users publish. The case looks at YouTube’s targeted algorithmic recommendations and the amplification of terrorist content, arguing that online platforms should not be protected by Section 230 immunity when they engage in such actions.

Justice Clarence Thomas tips his hand against Section 230

Supreme Court Justice Clarence Thomas wrote in 2020 that the “sweeping immunity” granted by current interpretations of Section 230 could have serious negative consequences, and suggested that the court consider narrowing the statute in a future case.

Experts have long warned that removing Section 230 could have the unintended impact of dramatically increasing the amount of content removed from online platforms, as liability concerns would incentivize companies to err on the side of over-moderation.

Without some form of liability protection, platforms “would be likely to use necessarily blunt content moderation tools to over-restrict speech or to impose blanket bans on certain topics, speakers, or specific types of content,” the EFF and other civil rights organizations argued.

Platforms are already self-motivated to remove harmful content because failing to do so risks driving away their user base, CCIA and the other tech organizations said.

There is an immense amount of harmful content to be found online, and moderating it is a careful, costly and iterative process, the CCIA brief said, adding that “mistakes and difficult judgement calls will be made given the vast amounts of expression online.”

Narrow Majority of Supreme Court Blocks Texas Law Regulating Social Media Platforms
https://broadbandbreakfast.com/2022/05/narrow-majority-of-supreme-court-blocks-texas-law-regulating-social-media-platforms/

WASHINGTON, May 31, 2022 – On a narrow 5-4 vote, the Supreme Court of the United States on Tuesday blocked a Texas law that Republicans had argued would address the “censorship” of conservative voices on social media platforms.

Texas H.B. 20 was written by Texas Republicans to combat perceived bias against conservative viewpoints voiced on Facebook, Twitter, and other social media platforms with at least 50 million active monthly users.


The bill was drafted at least in part as a reaction to President Donald Trump’s ban from social media. Immediately following the January 6 riots at the United States Capitol, Trump was simultaneously banned from several platforms and online retailers, including Amazon, Facebook, Twitter, Reddit, and myriad other websites.

See also Explainer: With Florida Social Media Law, Section 230 Now Positioned In Legal Spotlight, Broadband Breakfast, May 25, 2021

Close decision on First Amendment principles

A brief six-page dissent was released on Tuesday. Conservative Justices Samuel Alito, Neil Gorsuch, and Clarence Thomas dissented, arguing that the law should have been allowed to stand. Justice Elena Kagan also voted to let the law stand, though she did not join the dissent penned by Alito and did not elaborate further.

The decision came on an emergency application to vacate a one-sentence order of the Fifth Circuit Court of Appeals, which had stayed a federal district court’s injunction blocking the law. In other words, the law passed by the Texas legislature and signed by Gov. Greg Abbott remains precluded from going into effect.

Tech lobbying group NetChoice – in addition to many entities in Silicon Valley – argued that the law would prevent social media platforms from moderating and addressing hateful and potentially inflammatory content.

In a statement, Computer & Communications Industry Association President Matt Schruers said, “We are encouraged that this attack on First Amendment rights has been halted until a court can fully evaluate the repercussions of Texas’s ill-conceived statute.”

“This ruling means that private American companies will have an opportunity to be heard in court before they are forced to disseminate vile, abusive or extremist content under this Texas law. We appreciate the Supreme Court ensuring First Amendment protections, including the right not to be compelled to speak, will be upheld during the legal challenge to Texas’s social media law.”

In a statement, Public Knowledge Legal Director John Bergmayer said, “It is good that the Supreme Court blocked HB 20, the Texas online speech regulation law. But it should have been unanimous. It is alarming that so many policymakers, and even Supreme Court justices, are willing to throw out basic principles of free speech to try to control the power of Big Tech for their own purposes, instead of trying to limit that power through antitrust and other competition policies. Reining in the power of tech giants does not require abandoning the First Amendment.”

In his dissent, Alito pointed out that the plaintiffs argued “HB 20 interferes with their exercise of ‘editorial discretion,’ and they maintain that this interference violates their right ‘not to disseminate speech generated by others.’”

“Under some circumstances, we have recognized the right of organizations to refuse to host the speech of others,” he said, referencing Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc.

“But we have rejected such claims in other circumstances,” he continued, pointing to PruneYard Shopping Center v. Robins.

Will Section 230 be revamped on a full hearing by the Supreme Court?

“It is not at all obvious how our existing precedents, which predate the age of the internet, should apply to large social media companies, but Texas argues that its law is permissible under our case law,” Alito said.

Alito argued that there is a distinction between compelling a platform to host a message and refraining from discriminating against a user’s speech “on the basis of viewpoint.” He said that H.B. 20 adopted the latter approach.

Alito went on, arguing that the bill only applied to “platforms that hold themselves out as ‘open to the public,’” and “neutral forums for the speech of others,” and thus, the targeted platforms are not spreading messages they endorse.

Alito added that because the bill only targets platforms with more than 50 million users, it only targets entities with “some measure of common carrier-like market power,” and that this power gives them an “opportunity to shut out [disfavored] speakers.”

Chief Justice John Roberts and Justices Stephen Breyer, Sonia Sotomayor, Brett Kavanaugh, and Amy Coney Barrett all voted affirmatively – siding with NetChoice LLC’s emergency application – to block H.B. 20 from being enforced.

Parler Policy Exec Hopes ‘Sustainable’ Free Speech Change on Twitter if Musk Buys Platform
https://broadbandbreakfast.com/2022/05/parler-policy-exec-hopes-sustainable-free-speech-change-on-twitter-if-musk-buys-platform/

WASHINGTON, May 16, 2022 – A representative from a growing conservative social media platform said last week that she hopes Twitter, under new leadership, will emerge as a “sustainable” platform for free speech.

Amy Peikoff, chief policy officer of social media platform Parler, said as much during a Broadband Breakfast Live Online event Wednesday, in which she wondered about the implications of platforms banning accounts for views deemed controversial.

The social media world has been captivated by the lingering possibility that SpaceX and Tesla CEO Elon Musk could buy Twitter, which the billionaire has criticized for making decisions he said infringe on free speech.

Before Musk’s bid for the company, Parler saw a surge in member sign-ups after former President Donald Trump was banned from Twitter for comments he made that the platform saw as encouraging the Capitol riots on January 6, 2021, a move Peikoff criticized. (Trump also criticized the move.)

Peikoff said she believes Twitter should be a free speech platform just like Parler and hopes for “sustainable” change with Musk’s promise.

“At Parler, we expect you to think for yourself and curate your own feed,” Peikoff told Broadband Breakfast Editor and Publisher Drew Clark. “The difference between Twitter and Parler is that on Parler the content is controlled by individuals; Twitter takes it upon itself to moderate by itself.”

She recommended “tools in the hands of the individual users to reward productive discourse and exercise freedom of association.”

Peikoff criticized Twitter for permanently banning Donald Trump following the insurrection at the U.S. Capitol on January 6, and recounted the struggle Parler had in obtaining access to hosting services on AWS, Amazon’s web services platform.

Screenshot of Amy Peikoff

While she defended the role of Section 230 of the Telecom Act for Parler and others, Peikoff criticized what she described as Twitter’s collusion with the government. Section 230 provides immunity from civil suits for comments posted by others on a social media network.

For example, Peikoff cited a July 2021 statement by former White House Press Secretary Jen Psaki raising concerns about “misinformation” on social media. When Twitter takes action to stifle anti-vaccination speech at the behest of the White House, she argued, that crosses the line into a form of censorship by social media giants that is, in effect, a form of “state action.”

Conservatives censored by Twitter or other social media networks that are undertaking such “state action” are wrongfully being deprived of their First Amendment rights, she said.

“I would not like to see more of this entanglement of government and platforms going forward,” Peikoff said, adding that she would instead prefer to “leave human beings free to information and speech.”

Screenshot of Drew Clark and Amy Peikoff during Wednesday’s Broadband Breakfast’s Online Event

Wednesday, May 11, 2022, 12 Noon ET – Mr. Musk Goes to Washington: Will Twitter’s New Owner Change the Debate About Social Media?

The acquisition of social media powerhouse Twitter by Elon Musk, the world’s richest man, raises a host of issues about social media, free speech, and the power of persuasion in our digital age. Twitter already serves as the world’s de facto public square. But it hasn’t been without controversy, including the platform’s decision to ban former President Donald Trump in the wake of his tweets during the January 6 attack on the U.S. Capitol. Under new management, will Twitter become more hospitable to Trump and his allies? Does Twitter have a free speech problem? How will Mr. Musk’s acquisition change the debate about social media and Section 230 of the Telecommunications Act?

Guests for this Broadband Breakfast for Lunch session:

  • Amy Peikoff, Chief Policy Officer, Parler
  • Drew Clark (host), Editor and Publisher, Broadband Breakfast

Amy Peikoff is the Chief Policy Officer of Parler. After completing her Ph.D., she taught at universities (University of Texas, Austin, University of North Carolina, Chapel Hill, United States Air Force Academy) and law schools (Chapman, Southwestern), publishing frequently cited academic articles on privacy law, as well as op-eds in leading newspapers across the country on a range of issues. Just prior to joining Parler, she founded and was President of the Center for the Legalization of Privacy, which submitted an amicus brief in United States v. Facebook in 2019.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.

Illustration by Mohamed Hassan used with permission

Leave Section 230 Alone, Panelists Urge Government
https://broadbandbreakfast.com/2022/05/leave-section-230-alone-panelists-urge-government/

WASHINGTON, May 10, 2022 – A panelist at a Heritage Foundation event on Thursday said that the government should not make changes to Section 230, which protects online platforms from being liable for the content their users post.

However, the other panelist, Newsweek Opinion Editor Josh Hammer, said technology companies have been colluding with the government to stifle speech. Hammer said that Section 230 should be interpreted and applied more vigorously against tech platforms.

Countering this view was Niam Yaraghi, senior fellow at the Brookings Institution’s Center for Technology Innovation.

“While I do agree with the notion that what these platforms are doing is not right, I am much more optimistic” than Hammer, Yaraghi said. “I do not really like the government to come in and do anything about it, because I believe that a capitalist market, an open market, would solve the issue in the long run.”

Addressing a question from the moderator about whether antitrust legislation or stricter interpretation of Section 230 should be the tool to require more free speech on big tech platforms, Hammer said that “Section 230 is the better way to go here.”

Yaraghi, by contrast, said that it was incumbent on big technology platforms to address content moderation, not the government.

In March, Vint Cerf, a vice president and chief internet evangelist at Google, and the president of tech lobbyist TechFreedom warned against government moderation of content on the internet as Washington focuses on addressing the power of big tech platforms.

While some say Section 230 only protects “neutral platforms,” others claim it allows powerful companies to ignore user harm. Legislation from the likes of Sen. Amy Klobuchar, D-Minn., would strip Section 230 protections from platforms that fail to address Covid mis- and disinformation.

Correction: A previous version of this story said Sen. Ron Wyden, D-Ore., agreed that Section 230 only protected “neutral platforms,” or that it allowed tech companies to ignore user harm. Wyden, one of the authors of the provision in the 1996 Telecom Act, instead believes that the law is a “sword and shield” to protect small companies, organizations and movements against legal liability for what users post on their websites.

Additional correction: A previous version of this story misattributed a statement by Niam Yaraghi to Josh Hammer. The story has been corrected, and additional context added.

Reforming Section 230 Won’t Help With Content Moderation, Event Hears
https://broadbandbreakfast.com/2022/04/reforming-section-230-wont-help-with-content-moderation-panel-hears/

WASHINGTON, April 11, 2022 — Reforming Section 230 won’t help with content moderation on online platforms, observers said Monday.

“If we’re going to have some content moderation standards, the government is going to be, usually, the worst person to do it,” said Chris Cox, a former congressman and co-author of Section 230 who serves on the board of directors of tech lobbyist NetChoice.

These comments came during a panel discussion at an online event hosted by the American Enterprise Institute that focused on speech regulation and Section 230, a provision in the Communications Decency Act that protects technology platforms from being liable for posts by their users.

“Content moderation needs to be handled platform by platform and rules need to be established by online communities according to their community standards,” Cox said. “The government is not very competent at figuring out the answers to political questions.”

There was also discussion about the role of the First Amendment in content moderation on platforms. Jeffrey Rosen, a nonresident fellow at AEI, questioned whether the First Amendment protects content moderation by a platform.

“The concept is that the platform is not a publisher,” he said. “If it’s not [a publisher], then there’s a whole set of questions as to what First Amendment interests are at stake…I don’t think that it’s a given that the platform is the decider of those content decisions. I think that it’s a much harder question that needs to be addressed.”

Late last year, during a Broadband Breakfast Live Online event, experts said that it is not possible for platforms to remove from their sites all content that people may believe to be dangerous. However, some, like Alex Feerst, the co-founder of the Digital Trust and Safety Partnership, believe that platforms should bear some degree of liability for the content on their sites, arguing that harm mitigation for dangerous speech is necessary where possible.

Greene, Paul Social Media Developments Resurface Section 230 Debate
https://broadbandbreakfast.com/2022/01/greene-rand-social-media-developments-resurface-section-230-debate/

WASHINGTON, January 5, 2022 – The departure of Republican Kentucky Senator Rand Paul from YouTube and the banning of Georgia Republican Representative Marjorie Taylor Greene from Twitter at the start of the new year have rekindled the debate over what lawmakers will do about Section 230 protections for Big Tech.

Paul removed himself Monday from the video-sharing platform after getting two strikes on his channel for violating the platform’s rules on Covid-19 misinformation, saying he is “[denying] my content to Big Tech…About half of the public leans right. If we all took our messaging to outlets of free exchange, we could cripple Big Tech in a heartbeat.”

Meanwhile, Greene has been permanently suspended from Twitter following repeated violations of Twitter’s terms of service. She has previously been rebuked by both her political opponents and allies for spreading fake news and mis/disinformation since she was elected in 2020. Her rap sheet includes being accused of spreading conspiracy theories promoting white supremacy and antisemitism.

It was ultimately the spreading of Covid-19 misinformation that got Greene permanently banned from Twitter on Sunday. She had received multiple previous “strikes” related to Covid-19 misinformation, according to The New York Times. Greene received her fifth strike on Sunday, which resulted in her account’s permanent suspension.

Just five days into the new year, Greene’s situation – and the quickly following move by Paul – has reignited debate over Section 230 of the Communications Decency Act, which shields big technology platforms from liability for posts by their users.

As it stands now, Twitter is well within its rights to delete or suspend the accounts of any person who violates its terms of service. The right to free speech that is protected by the First Amendment does not prevent a private corporation, such as Twitter, from enforcing its rules.

In response to her Tweets, Texas Republican Congressman Dan Crenshaw called Greene a “liar and an idiot.” His comments notwithstanding, Crenshaw, like many conservative legislators, has argued that social media companies have become an integral part of the public forum and thus should not have the authority to unilaterally ban or censor voices on their platforms.

Some states, such as Texas and Florida, have gone as far as making it illegal for companies to ban political figures. Though Florida’s bill was quickly halted in the courts, that did not stop Texas from trying to enact similar laws (though they were met with similar results).

Crenshaw himself has proposed federal amendments to Section 230 for any “interactive computer service” that generates $3 billion or more in annual revenue or has 300 million or more monthly users.

The bill – which is still being drafted and does not have an official designation – would allow users to sue social media platforms for the removal of legal content based on political views, gender, ethnicity, and race. It would also make it illegal for these companies to remove any legal, user-generated content from their websites.

Under Crenshaw’s bill, a company such as Facebook or Twitter could be compelled to host any legal speech – objectionable or otherwise – at the risk of being sued. This includes overtly racist, sexist, or xenophobic slurs and rhetoric. While a hosting website might be morally opposed to being party to such speech, if that speech is not explicitly illegal, it would be protected from removal.

While Crenshaw would amend Section 230, other conservatives have advocated for its wholesale repeal. Sen. Lindsey Graham, R-South Carolina, put forward Senate Bill 2972, which would do just that. If passed, the law would go into effect on the first day of 2024, with no replacement protections in place.

Consequences of such legislation

This is a nightmare scenario for every company with an online presence that can host user-generated content. If a repeal bill were to pass with no replacement legislation in place, every online company would suddenly become directly responsible for all user content hosted on its platforms.

With the repeal of Section 230, websites would default to being treated as publishers. If users upload illegal content to a website, it would be as if the company had published the illegal content itself.

This would likely exacerbate the issue of alleged censorship that Republicans are concerned about. The sheer volume of content generated on platforms like Reddit and YouTube would be far too massive for human moderation teams to handle.

Companies would likely be forced to rely on heavier-handed algorithms and bots to censor anything that could open them to legal liability.

Democratic views

Republicans are not alone in their criticism of Section 230, however. Democrats have also flirted with amending or abolishing Section 230, albeit for very different reasons.

Many Democrats believe that Big Tech uses Section 230 to deflect responsibility, and that so long as companies are afforded its protections, they will not adjust their content moderation policies to mitigate allegedly dangerous or hateful user speech that has real-world consequences.

Some Democrats have written bills that would carve out numerous exemptions to Section 230. Some seek to address the sale of firearms online; others focus on the spread of Covid-19 misinformation.

Some Democrats have also introduced the Safe Tech Act, which would hold companies accountable for failing to “remove, restrict access to or availability of, or prevent dissemination of material that is likely to cause irreparable harm.”

The reality right now is that the two parties are diametrically opposed on the issue of Section 230.

While Republicans believe there is unfair content moderation that disproportionately censors conservative voices, Democrats believe that Big Tech is not doing enough to moderate their content and keep users safe.

Experts Warn Against Total Repeal of Section 230
https://broadbandbreakfast.com/2021/11/experts-warn-against-total-repeal-of-section-230/

WASHINGTON, November 22, 2021 – Communications experts say action by Congress to essentially gut Section 230 would not truly solve any problems with social media.

Experts emphasized that it is not possible for platforms to remove from their site all content that people may believe to be dangerous. They argue that Section 230 of the Communications Decency Act, which shields platforms from legal liability with respect to what their users post, is necessary in at least some capacity.

During discussion between these experts at Broadband Breakfast’s Live Online Event on Wednesday, Alex Feerst, the co-founder of the Digital Trust and Safety Partnership, who used to work as a content moderator, said that to a certain extent it is impossible for platforms to moderate speech that is “dangerous” because every person has differing opinions about what speech they consider to be dangerous. It is this ambiguity, he said, that Section 230 protects companies from.

Still, Feerst believes that platforms should bear some degree of liability for the content on their sites, as harm mitigation for dangerous speech is necessary where possible. He believes that platforms’ use of artificial intelligence makes some degree of liability even more essential.

Particularly given the amount of online speech to be reviewed by moderators in the internet age, Feerst said, clear-cut moderation standards are too messy and expensive to be viable options.

Matt Gerst, vice president for legal and policy affairs at the Internet Association, and Shane Tews, nonresident senior fellow at the American Enterprise Institute, also said that while content moderation is complex, it is necessary. Scott McCollough, attorney at McCollough Law Firm, said large social media companies like Facebook are not the cause of all the problems with social media now in the national spotlight; rather, social features of today’s society, such as the extreme prevalence of conflict, are to blame for this focus on social media.

Proposals for change

Rick Lane, CEO of Iggy Ventures, proposed that reform of Section 230 include a requirement for social media platforms to make very clear what content is and is not allowed on their sites. McCollough echoed this concern, saying that many moderation actions platforms currently take do not seem to be consistent with those platforms’ stated terms and conditions, and that individual states across the nation should be able to examine these instances on a case-by-case basis to determine whether platforms fairly apply their terms and conditions.

Feerst highlighted the nuance of this issue, saying that people’s definitions of “consistent” are naturally subjective, but agreed with McCollough that users whose content is removed should be notified, along with the reasoning for the moderators’ action.

Lane also believes that Section 230 reform should rightfully include a requirement for platforms to demonstrate a reasonable standard of care and to moderate illegal and other extremely dangerous content on their sites. Tews generally agreed with Lane that such content moderation is complex, as she sees a separation between freedom of speech and illegal activity.

Gerst highlighted concerns from companies the Internet Association represents that government regulation stemming from Section 230 reform would require widely varied platforms to standardize their operational approaches, diminishing innovation on the internet.

Wednesday, November 17, 2021, 12 Noon ET — The Changing Nature of the Debate About Social Media and Section 230

Facebook is under fire as never before. In response, the social-networking giant has gone so far as to change its official name, to Meta (as in the “metaverse”). What are the broader concerns about social media beyond Facebook? How will concerns about Facebook’s practices spill over into other social media networks, and to debate about Section 230 of the Communications Act?

Panelists for this Broadband Breakfast Live Online session:

  • Scott McCollough, Attorney, McCollough Law Firm
  • Shane Tews, Nonresident Senior Fellow, American Enterprise Institute
  • Alex Feerst, Co-founder, Digital Trust & Safety Partnership
  • Rick Lane, CEO, Iggy Ventures
  • Matt Gerst, VP for Legal & Policy Affairs, Internet Association
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

W. Scott McCollough has practiced communications and Internet law for 38 years, with a specialization in regulatory issues confronting the industry.  Clients include competitive communications companies, Internet service and application providers, public interest organizations and consumers.

Shane Tews is a nonresident senior fellow at the American Enterprise Institute (AEI), where she works on international communications, technology and cybersecurity issues, including privacy, internet governance, data protection, 5G networks, the Internet of Things, machine learning, and artificial intelligence. She is also president of Logan Circle Strategies.

Alex Feerst is a lawyer and technologist focused on building systems that foster trust, community, and privacy. He leads Murmuration Labs, which helps tech companies address the risks and human impact of innovative products, and co-founded the Digital Trust & Safety Partnership, the first industry-led initiative to establish best practices for online trust and safety. He was previously Head of Legal and Head of Trust and Safety at Medium, General Counsel at Neuralink, and currently serves on the editorial board of the Journal of Online Trust & Safety, and as a fellow at Stanford University’s Center for Internet and Society.

Rick Lane is a tech policy expert, child safety advocate, and the founder and CEO of Iggy Ventures. Iggy advises and invests in companies and projects that can have a positive social impact. Prior to starting Iggy, Rick served for 15 years as the Senior Vice President of Government Affairs of 21st Century Fox.

Matt Gerst is the Vice President for Legal & Policy Affairs and Associate General Counsel at Internet Association, where he builds consensus on policy positions among IA’s diverse membership of companies that lead the internet industry. Most recently, Matt served as Vice President of Regulatory Affairs at CTIA, where he managed a diverse range of issues including consumer protection, public safety, network resiliency, and universal service. Matt received his J.D. from New York Law School, and he served as an adjunct professor of law in the scholarly writing program at the George Washington University School of Law.

Drew Clark is the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Drew brings experts and practitioners together to advance the benefits provided by broadband. Under the American Recovery and Reinvestment Act of 2009, he served as head of a State Broadband Initiative, the Partnership for a Connected Illinois. He is also the President of the Rural Telecommunications Congress.

Democrats Use Whistleblower Testimony to Launch New Effort at Changing Section 230
https://broadbandbreakfast.com/2021/10/democrats-use-whistleblower-testimony-to-launch-new-effort-at-changing-section-230/

WASHINGTON, October 14, 2021 – House Democrats are preparing to introduce legislation Friday that would remove legal immunities for companies that knowingly allow content that is physically or emotionally damaging to their users, following testimony last week from a Facebook whistleblower who claimed the company is able to push harmful content because of such legal protections.

The Justice Against Malicious Algorithms Act would amend Section 230 of the Communications Decency Act – which provides legal liability protections to companies for the content their users post on their platform – to remove that shield when the platform “knowingly or recklessly uses an algorithm or other technology to recommend content that materially contributes to physical or severe emotional injury,” according to a Thursday press release, which noted that the legislation will not apply to small online platforms with fewer than five million unique monthly visitors or users.

The legislation is relatively narrow in its target: algorithms that rely on a user’s personal history to recommend content. It won’t apply to search features or algorithms that do not rely on that personalization, and it won’t apply to web hosting or data storage and transfer.

Reps. Anna Eshoo, D-California, Frank Pallone Jr., D-New Jersey, Mike Doyle, D-Pennsylvania, and Jan Schakowsky, D-Illinois, plan to introduce the legislation a little over a week after Facebook whistleblower Frances Haugen alleged that the company misrepresents how much offending content it terminates.

Citing Haugen’s testimony before the Senate on October 5, Eshoo said in the release that “Facebook is knowingly amplifying harmful content and abusing the immunity of Section 230 well beyond congressional intent.

“The Justice Against Malicious Algorithms Act ensures courts can hold platforms accountable when they knowingly or recklessly recommend content that materially contributes to harm. This approach builds on my bill, the Protecting Americans from Dangerous Algorithms Act, and I’m proud to partner with my colleagues on this important legislation.”

The Protecting Americans from Dangerous Algorithms Act was introduced with Rep. Tom Malinowski, D-New Jersey, last October to hold companies responsible for “algorithmic amplification of harmful, radicalizing content that leads to offline violence.”

From Haugen testimony to legislation

Haugen claimed in her Senate testimony that according to internal research estimates, Facebook acts against just three to five percent of hate speech and 0.6 percent of violence incitement.

“The reality is that we’ve seen from repeated documents in my disclosures is that Facebook’s AI systems only catch a very tiny minority of offending content and best content scenario in the case of something like hate speech at most they will ever get 10 to 20 percent,” Haugen testified.

Haugen was catapulted into the national spotlight after she revealed herself on the television program 60 Minutes to be the person who leaked documents to the Wall Street Journal and the Securities and Exchange Commission that reportedly showed Facebook knew about the mental health harm its photo-sharing app Instagram has on teens but allegedly ignored the findings because they conflicted with its profit motive.

Earlier this year, Facebook CEO Mark Zuckerberg said the company was developing an Instagram version for kids under 13. But following the Journal story and calls by lawmakers to back down from pursuing the app, Facebook suspended the app’s development and said it was making changes to its apps to “nudge” users away from content that may be harmful to them.

Haugen’s testimony versus Zuckerberg’s Section 230 vision

In his testimony before the House Energy and Commerce committee in March, Zuckerberg claimed that the company’s hate speech removal policy “has long been the broadest and most aggressive in the industry.”

This claim has been the basis for the CEO’s suggestion that Section 230 be amended to punish companies for not creating systems proportional in size and effectiveness to the company’s or platform’s size for removal of violent and hateful content. In other words, larger sites would have more regulation and smaller sites would face fewer regulations.

Or in Zuckerberg’s words to Congress, “platforms’ intermediary liability protection for certain types of unlawful content [should be made] conditional on companies’ ability to meet best practices to combat the spread of harmful content.”

Facebook has previously pushed for FOSTA-SESTA, a controversial 2018 law which created an exception to Section 230 in the case of advertisements related to prostitution. Lawmakers have proposed other modifications to the liability provision, including removing protections for content that the platform is paid for and for content allowing the spread of vaccine misinformation.

Zuckerberg said companies shouldn’t be held responsible for individual pieces of content that could or would evade the systems in place, so long as the company has demonstrated that it maintains “adequate systems to address unlawful content.” That, he said, is predicated on transparency.

But according to Haugen, “Facebook’s closed design means it has no oversight — even from its own Oversight Board, which is as blind as the public. Only Facebook knows how it personalizes your feed for you. It hides behind walls that keep the eyes of researchers and regulators from understanding the true dynamics of the system.” She also alleges that Facebook’s leadership hides “vital information” from the public and global governments.

An Electronic Frontier Foundation study found that Facebook lags behind competitors on issues of transparency.

Where the parties agree

Zuckerberg and Haugen do agree that Section 230 should be amended. Haugen would amend Section 230 “to make Facebook responsible for the consequences of their intentional ranking decisions,” meaning that practices such as engagement-based ranking would be evaluated for the incendiary or violent content they promote above more mundane content. If Facebook is choosing to promote content which damages mental health or incites violence, Haugen’s vision of Section 230 would hold them accountable. This change would not hold Facebook responsible for user-generated content, only the promotion of harmful content.

Both have also called for a third-party body, created by Congress, to provide oversight of platforms like Facebook.

Haugen asks that this body be able to conduct independent audits of Facebook’s data, algorithms, and research, and that the information be made available to the public, scholars and researchers to interpret, with adequate privacy protection and anonymization in place. Besides taking into account the size and scope of the platforms it regulates, Zuckerberg asks that the practices of the body be “fair and clear” and that unrelated issues “like encryption or privacy changes” be dealt with separately.

With reporting from Riley Steward

Repealing Section 230 Would be Harmful to the Internet As We Know It, Experts Agree
https://broadbandbreakfast.com/2021/09/repealing-section-230-would-be-harmful-to-the-internet-as-we-know-it-experts-agree/

WASHINGTON, September 17, 2021 — Ken Buck, a Republican representative from Colorado, advocated for legislators to “tighten up” the language of Section 230 while preserving the “spirit of the internet” and enhancing competition.

There is common ground in supporting efforts to minimize speech advocating for imminent harm, said Buck, even though he noted that Republican and Democratic critics tend to approach the issue of changing Section 230 from vastly different directions.

“Nobody wants a terrorist organization recruiting on the internet or an organization that is calling for violent actions to have access to Facebook,” Buck said. He followed up that statement, however, by stating that the most effective way to combat “bad speech is with good speech” and not by censoring “what one person considers bad speech.”

Antitrust not necessarily the best means to improve competition policy

For companies that are not technically in violation of antitrust policies, improving competition through other means would have to be the answer, said Buck. He pointed to Parler as a social media platform that is an appropriate alternative to Twitter.

Though some Twitter users did flock to Parler, particularly during and around the 2020 election, the newer social media company has a reputation for allowing objectionable content that would otherwise be unable to thrive on social media.

Buck also set himself apart from some of his fellow Republicans—including Donald Trump—by clarifying that he does not want to repeal Section 230.

“I think that repealing Section 230 is a mistake,” he said. “If you repeal Section 230, there will be a slew of lawsuits.” Buck explained that without the protections afforded by Section 230, big companies will likely find a way to sufficiently address these lawsuits, and the only entities harmed will be the alternative platforms that were meant to serve as competition.

More content moderation needed

Daphne Keller of the Stanford Cyber Policy Center argued that it is in the best interest of social media platforms to enact various forms of content moderation, and address speech that may be legal but objectionable.

“If platforms just hosted everything that users wanted to say online, or even everything that’s legal to say—everything that the First Amendment permits—you would get this sort of cesspool or mosh pit of online speech that most people don’t actually want to see,” she said. “Users would run away and advertisers would run away and we wouldn’t have functioning platforms for civic discourse.”

Even companies like Parler and Gab—which pride themselves on being unyielding bastions of free speech—have begun to engage in content moderation.

“There’s not really a left-right divide on whether that’s a good idea, because nobody actually wants nothing but porn and bullying and pro-anorexia content and other dangerous or garbage content all the time on the internet,” she said.

She explained that this is a double-edged sword, because while consumers seem to value some level of moderation, companies moderating their platforms have a huge amount of influence over what their consumers see and say.

What problems do critics of Section 230 want addressed?

Internet Association President and CEO Dane Snowden stated that most of the problems surrounding the Section 230 discussion boil down to a fundamental disagreement over the problems that legislators are trying to solve.

Changing the language of Section 230 would impact not just the tech industry: “[Section 230] impacts ISPs, libraries, and universities,” he said. “Things like self-publishing, crowdsourcing, Wikipedia, how-to videos—all those things are impacted by any kind of significant neutering of Section 230.”

Section 230 was created to give users the ability and security to create content online without fear of legal reprisals, he said.

Another significant supporter of the status quo was Chamber of Progress CEO Adam Kovacevich.

“I don’t think Section 230 needs to be fixed. I think it needs [a better] publicist,” Kovacevich said, arguing that policymakers need to gain a better appreciation for Section 230. “If you took away 230, you’d give companies two bad options: either turn into Disneyland or turn into a wasteland.”

“Either turn into a very highly curated experience where only certain people have the ability to post content, or turn into a wasteland where essentially anything goes because a company fears legal liability,” Kovacevich said.

Judge Rules Exemption Exists in Section 230 for Twitter FOSTA Case
https://broadbandbreakfast.com/2021/08/california-judge-rules-exemption-exists-in-section-230-for-twitter-fosta-case/

August 24, 2021 — A California court has allowed a lawsuit to commence against Twitter from two victims of sexual trafficking, who allege the social media company initially refused to remove content that exploited the underage plaintiffs – content that then went viral.

The anonymous plaintiffs allege that they were manipulated into making pornographic videos of themselves through another social media app, Snapchat, after which the videos were posted on Twitter. When the plaintiffs asked Twitter to take down the posts, it refused, and it was only after the Department of Homeland Security got involved that the social media company complied.

At issue in the case is whether Twitter had any obligation under Section 230 of the Communications Decency Act – which provides legal liability protections for the content that platforms’ users post – to remove the content immediately.

Court’s finding

The court ruled Thursday that the case should proceed after finding that Twitter knew such content was on the site, had to have known it was related to sex trafficking, and refused to do something about it immediately.

“The Court finds that these allegations are sufficient to allege an ongoing pattern of conduct amounting to a tacit agreement with the perpetrators in this case to allow them to post videos and photographs it knew or should have known were related to sex trafficking without blocking their accounts or the Videos,” the decision read.

“In sum, the Court finds that Plaintiffs have stated a claim for civil liability under the [Trafficking Victims Protection Reauthorization Act] on the basis of beneficiary liability and that the claim falls within the exemption to Section 230 immunity created by FOSTA.”

The Stop Enabling Sex Traffickers Act and the Allow States and Victims to Fight Online Sex Trafficking Act, which together became the package law SESTA-FOSTA, were passed in 2018 and amended Section 230 immunity to exclude enforcement of federal or state sex trafficking laws from intermediary protections.

The court dismissed other claims the plaintiffs made against the company, but found that the trafficking claim met the relatively low bar needed to move the case forward.

The arguments

The plaintiffs allege that Twitter violated the TVPRA because it allegedly knew about the videos, benefitted from them and did nothing to address the problem before it went viral.

Twitter argued that FOSTA, as applied to the CDA, only narrowly applies to websites that are “knowingly assisting and profiting from reprehensible crimes;” the plaintiffs allegedly fail to show that the company “affirmatively participated” in such crimes; and the company cannot be held liable “simply because it did not take the videos down immediately.”

Experts asserted companies may hesitate to bring Section 230 defense in court

The case is yet another instance of U.S. courts increasingly poking holes in arguments brought by technology companies suggesting they cannot be held liable for content on their platforms under Section 230, which is currently the subject of hot debate in Washington over whether to reform it or abolish it completely.

A number of state judges have ruled against Amazon, for example, and its Section 230 defense in a number of case-specific instances in Texas and California. Experts on a panel in May said if courts keep ruling against the defense, there may be a deluge of lawsuits to come against companies.

And last month, citing some of these cases, lawyers argued that big tech companies may begin to shy away from bringing the 230 defense to court in fear of awakening lawmakers to changing legal views on the provision that could ignite its reform.

Facebook, Google, Twitter Register to Lobby Congress on Section 230
https://broadbandbreakfast.com/2021/08/facebook-google-twitter-register-to-lobby-congress-on-section-230/

August 3, 2021 — The largest social media companies have registered to lobby Congress on Section 230, according to lobby records.

Facebook, Google, and Twitter filed new paperwork late last month to discuss the internet liability provision under the Communications Decency Act, which protects these companies from legal trouble for content their users post.

Facebook’s registration specifically mentions the Safe Tech Act, an amendment to the provision proposed earlier this year by Sens. Amy Klobuchar, D-Minnesota, Mark Warner, D-Virginia, and Mazie Hirono, D-Hawaii, which would largely keep the provision’s protections except for content the platforms are paid for.

A separate Facebook registration included discussion on the “repeal” of the provision.

Other issues included in the Menlo Park-based company’s registration are privacy, data security, online advertising, and general regulations on the social media industry.

Google also wants to discuss taxes and cybersecurity, as security issues take center stage following high-profile attacks and as international proposals for a new tax regime on tech companies emerge.

Notable additional subject matters Twitter includes in its registration are content moderation practices, data security, misinformation, and net neutrality, as the Federal Communications Commission is being urged to bring back Obama-era policies friendly to the principle that ensures content cannot be given preferential treatment on networks.

Section 230 has gripped Congress

Social media critics have been foaming at the mouth over possible retaliatory measures against the technology companies, which have taken increasingly strong measures against those who violate their policies.

Those discussions picked up steam when, at the beginning of the year, former President Donald Trump was banned from Twitter, and then from Facebook and other platforms, for allegedly stoking the Capitol Hill riot on January 6. (Trump has since filed a lawsuit as a private citizen against the social media giants for his removal.)

Since the Capitol riot, a number of proposals have been put forward to amend — in some cases completely repeal — the provision to address what some Republicans are calling outright censorship by social media companies. Even Florida tried to take matters into its own hands when it enacted rules penalizing social media companies that banned politicians. That law has since been put on hold by the courts.

The social media giants, and their allies in the industry, have pressed the importance of the provision, which they say has allowed once-fledgling companies like Facebook to become what they are today. And some representatives think reform of the law could lean more toward amendment than outright repeal. But lawyers have warned about a shift in attitude toward those liability protections, as more judges in courts across the country hold big technology companies accountable for harm caused by their platforms.

Companies May Hesitate Bringing Section 230 Arguments in Court Fearing Political Ramifications: Lawyers
https://broadbandbreakfast.com/2021/07/companies-may-second-guess-bringing-section-230-arguments-in-court-fearing-political-ramifications-on-protections-lawyers/

July 14, 2021 — Legal experts are speculating that companies may shy away from testing Section 230 arguments in future court cases because recent legal decisions against the defense could influence political action on amending the intermediary liability provision.

Section 230 of the Communications Decency Act offers online platforms immunity from civil liability based on content their users post on their websites. But recent decisions by various courts that have ruled against the companies’ Section 230 defenses and held them liable for incidents could have a lasting effect on how companies approach these cases.

“People are being a lot more thoughtful when they use a 230 defense, and sometimes not using one at all, because they realize that that just won’t bode well for their future cases,” Michele Lee, assistant general counsel and the head of litigation at social media company Pinterest, said at a conference hosted by the Federal Communications Bar Association on Tuesday.

“The number of companies that operate within this space, frankly, aren’t that many. And I think people are thinking much more long term than just the cases that are in front of them.”

Legal experts at the conference argued that firms will be increasingly selective about the cases in which they employ a Section 230 defense. The more courtroom attention the provision receives, they argued, the more likely it is to draw political attention, which could reignite discussion about its reform.

Debate about what to do with Section 230 has consumed Capitol Hill for many months, with discussions reaching a climax after former President Donald Trump was banned from several platforms at the start of the year for comments he made on the services that allegedly stoked the Capitol riot on January 6.

Since then, several amendments have been proposed, including one from Sen. Amy Klobuchar, D-Minnesota, that would keep Section 230 protections largely the same except for paid content.

And last month, Sen. Marco Rubio, R-Florida, introduced his own legislation, which would “halt Big Tech’s censorship of Americans, defend free speech on the internet, and level the playing field to remove unfair protections that shield massive Silicon Valley firms from accountability.”

Legal precedent and policy: two vehicles for change

The concern for companies that provide platforms for the flow of information is that they could lose certain liability protections through legislation or a change in precedent. Historically, those protections occupied little attention from Congress or the White House, and they were routinely upheld in court.

But that tide may now be shifting.

In May, a federal appeals court ruled against the popular messaging company Snapchat’s Section 230 defense, holding that the company could face civil liability for creating a dangerous product. The case followed the death of a 20-year-old Snapchat user who crashed his car in 2020 while using a filter on the app that rewarded fast driving.

The car reached 120 miles per hour at one point, and the crash also killed two teenage passengers. Two of the victims’ parents sued Snapchat for wrongful death, claiming that the filter’s reward system encouraged reckless driving.

The case was initially thrown out on Section 230 grounds, but the Ninth Circuit Court of Appeals revived it, reversing the lower court and siding with the victims by allowing the claim that Snapchat created an inherently dangerous product to proceed.

Carrie Goldberg, founder of C.A. Goldberg, a victims’ rights law firm, said Tuesday that this ruling offers a “small window of online platform accountability,” in which platforms might be held liable for published content when that content demonstrates a harm to the public.

Goldberg referenced another case out of Texas last month, in which the state’s supreme court ruled that Facebook could be held liable after three plaintiffs filed separate suits against the company, alleging that they became victims of sex trafficking after being lured in by people they met on Facebook and Instagram.

Facebook claimed immunity through Section 230, but the court sided with the plaintiffs, saying the provision does not “create a lawless no-man’s-land on the Internet.” The court made a further clarification that Section 230 protects online platforms from the words or actions of others, but “[h]olding internet platforms accountable for their own misdeeds is quite another thing.”

This particular case may apply only within Texas, however, and have little impact on the rest of the country, as part of it was fought using a Texas-specific statute that allows civil lawsuits “against those who intentionally or knowingly benefit from participation in a sex-trafficking venture.”

In May, observers noted that these legal decisions reversing course on Section 230 matters could open the floodgates to other lawsuits across the country.

Head of Big Tech Lobby Group Says Repealing Section 230 Unconstitutional https://broadbandbreakfast.com/2021/06/head-of-big-tech-lobby-group-says-repealing-section-230-unconstitutional/?utm_source=rss&utm_medium=rss&utm_campaign=head-of-big-tech-lobby-group-says-repealing-section-230-unconstitutional https://broadbandbreakfast.com/2021/06/head-of-big-tech-lobby-group-says-repealing-section-230-unconstitutional/#respond Wed, 23 Jun 2021 19:36:04 +0000 https://broadbandbreakfast.com/?p=34235 June 23, 2021— Gary Shapiro, the CEO of the Consumer Technology Association, said at a conference Tuesday that while social media platforms have responsibilities to the public, repealing Section 230 would violate the Constitution’s free-speech protections.

Section 230 of the Communications Decency Act protects social media platforms from legal responsibility for third-party content posted to their websites.

Countering the argument that Section 230 should be repealed, Shapiro, whose trade organization represents Facebook, Amazon, Apple, and Microsoft, said at the conference hosted by the Media Institute that doing so would not only destroy the value of these platforms but would blatantly contradict the U.S. value of free speech.

Shapiro reasoned that the Constitution’s free-speech protections apply specifically to government, not private industry. Congress, he said, cannot interfere with free speech, and repealing Section 230 would do just that to private companies.

“If Yelp is responsible for user reviews, if Nextdoor is responsible for a neighbor’s critical comments, or if Facebook is responsible for political comments, not only are we making these services essentially unusable, we are trampling on the free expression values we treasure, that are embodied in our Constitution.”

Despite speculation about the future of Section 230, including what Congress will do about proposed amendments to it, Republicans, including Rep. Bob Latta, R-Ohio, have said that they believe Section 230 is here to stay.

Shapiro said that while the government should not get involved in how platforms moderate their content, social media companies still have a responsibility to foster free speech and healthy discourse in the public sphere. He said while moderating content is their constitutionally protected right, banning former President Donald Trump was “over the line” and irresponsible.

Shapiro accused both political parties of wanting to make platforms legally responsible for content posted to their website.

“This is like making hotel owners responsible for guests’ behavior,” he said. “Creating this liability, given the huge amount of posting, would severely crimp the value of these services and lead to an onslaught of opportunistic trial-lawyer lawsuits.”

Big tech and antitrust

During the conference, Shapiro also spoke out against the antitrust bills currently proposed in the House, calling them “weird and dangerous.” He said the bills were rushed through Congress without proper hearings because they are politically motivated and have less to do with consumer welfare than with retaliating against social media companies.

“As currently drafted, the package of antitrust bills introduced in the House Judiciary Committee would be a disaster for American innovators and consumers,” Shapiro said in a separate press release. “If signed into law, the bills would cause irreparable harm to small businesses and startups and put the U.S. at a competitive disadvantage against China.”

Editor’s Note: The Consumer Technology Association followed up with us to note that Gary Shapiro did not use the word “unconstitutional” in his speech. They did not object to Broadband Breakfast’s reporting that Shapiro said that repealing Section 230 would violate the Constitution’s free-speech protections. Therefore, Broadband Breakfast stands by its original story.

Broadband Breakfast Hosts Section 230 Debate https://broadbandbreakfast.com/2021/06/broadband-breakfast-hosts-section-230-debate/?utm_source=rss&utm_medium=rss&utm_campaign=broadband-breakfast-hosts-section-230-debate https://broadbandbreakfast.com/2021/06/broadband-breakfast-hosts-section-230-debate/#respond Tue, 01 Jun 2021 17:17:36 +0000 https://broadbandbreakfast.com/?p=33584 June 1, 2021–Broadband Breakfast’s Live Online event hosted a debate about Section 230, with some arguing for a revision or a repeal and others suggesting it is integral to the healthy flow of information.

The debate, held on May 26 and moderated by Communications Daily’s Karl Herchenroeder, pitted DigitalFrontiers Advocacy founder Neil Fried and consulting company Precursor president Scott Cleland, proponents of Section 230 reform, against attorney Cathy Gellis and TechFreedom president Berin Szóka, who argued for maintaining the safeguards that protect intermediary platforms from liability for what their users post online.

Fried said Section 230 allowed platforms to moderate harmful remarks without the courts getting involved. To blunt unlawful behavior, he proposed adjusting Section 230 to create more accountability. Reform could include distinguishing between small and large platforms, he said, as they should not be treated the same.

Proponents of Section 230 have said that the likes of Facebook could never have grown as they did without legal protections against liability for what their users post.

Cleland shared similar views with Fried, favoring removal or adjustment of the provision. He argued that “repeal is comprehensive and constitutional”; he even went so far as to say “repeal is inevitable.”

For maintaining Section 230

On the other side, Gellis stated that the provision “needs help, not destruction.” She explained that Section 230’s immunity creates a healthy ecosystem for the sharing of ideas. In her rebuttal, she noted that the value the country puts on free speech should prevent rules from being put into place to moderate information.

“We need to keep our eye on the ball of the ecosystem, to make sure the ecosystem is equipped without artificial barriers… It is not about big tech…it is about every platform of every size.”

Szóka was quick on his feet, both reinforcing Gellis’ points and countering Cleland’s claims. He said he agrees there is too much hate speech, but that does not mean the internet is lawless.

“There is very little the government can do about such speech because of the first amendment…we cannot directly ban hate speech,” Szóka said. “Section 230 aims to do the next best thing.”

Our Broadband Breakfast Live Online events take place every Wednesday at 12 Noon ET. You can watch the May 26, 2021, event on this page. You can also PARTICIPATE in the current Broadband Breakfast Live Online event. REGISTER HERE.

Wednesday, May 26, 2021, 12 Noon ET — “Unpacking the Controversies Around Section 230”

When Congress approved the Communications Decency Act as part of the Telecommunications Act  in 1996, few saw Section 230 as the central issue surrounding online speech and debate. Long considered a foundational law for the internet in the United States, Section 230 has — slowly at first, but now in a torrent — come under reexamination. Join us for a debate between proponents and critics of Section 230.

Featuring panelists:

  • Neil Fried, Founder, DigitalFrontiers Advocacy
  • Cathy Gellis, Attorney
  • Berin Szoka, President, TechFreedom
  • Scott Cleland, President, Precursor
  • Moderated by Karl Herchenroeder, Assistant Editor, Communications Daily

In an Oxford-style debate, the audience will be polled at both the beginning and end of the event about the following resolution: “Section 230 is harmful and should be abolished or significantly changed.” Each panelist will give an opening statement and a rebuttal, following which the moderator and members of the live audience will be able to ask questions.

  • First affirmative opening statement (6 minutes): Neil Fried
  • First negative opening statement (6 minutes): Cathy Gellis
  • Second affirmative opening statement (6 minutes): Scott Cleland
  • Second negative opening statement (6 minutes): Berin Szoka
  • First affirmative rebuttal (4 minutes): Scott Cleland
  • First negative rebuttal (4 minutes): Berin Szoka
  • Second affirmative rebuttal (4 minutes): Neil Fried
  • Second negative rebuttal (4 minutes): Cathy Gellis

Explainer: With Florida Social Media Law, Section 230 Now Positioned In Legal Spotlight

Neil Fried was formerly chief communications and technology counsel to the House Energy and Commerce Committee and SVP for congressional and regulatory affairs at the Motion Picture Association. He also helped implement the 1996 Telecommunications Act while at the FCC and advised journalists while at the Reporters Committee for Freedom of the Press. In 2020 he launched DigitalFrontiers Advocacy, which advises clients on Communications Act and Copyright Act issues.

Frustrated that people were making the law without asking for her opinion, Cathy Gellis gave up a career as a web developer to become a lawyer so that she could help them not make it badly, especially when it came to technology. A former aspiring journalist and longtime fan of civil liberties, she focuses her legal work on defending the rights of Internet users and advocating for policy that protects online speech and innovation. When not advising clients on the current state of the law with respect to such topics as platform liability, copyright, trademark, privacy, or cybersecurity, she frequently writes about these subjects and more for outlets such as the Daily Beast, Law.com, and Techdirt.com, where she is a regular contributor.

Berin Szoka serves as President of TechFreedom. Previously, he was a Senior Fellow and the Director of the Center for Internet Freedom at The Progress & Freedom Foundation. Before joining PFF, he was an Associate in the Communications Practice Group at Latham & Watkins LLP, where he advised clients on regulations affecting the Internet and telecommunications industries. Before joining Latham’s Communications Practice Group, Szoka practiced at Lawler Metzger Milkman & Keeney, LLC, a boutique telecommunications law firm in Washington, and clerked for the Hon. H. Dale Cook, Senior U.S. District Judge for the Northern District of Oklahoma.

Scott Cleland is a Christian, conservative, Republican and President of Precursor®, a responsible Internet consultancy. He is not a lawyer. He served as Deputy U.S. Coordinator for International Communications & Information Policy in the George H. W. Bush Administration, and Institutional Investor twice ranked him the #1 independent analyst in communications when he was an investment analyst. He has testified before eight congressional subcommittees a total of sixteen times.

Karl Herchenroeder is a technology policy journalist for publications including Communications Daily. Born in Rockville, Maryland, he joined the Warren Communications News staff in 2018. He began his journalism career in 2012 at the Aspen Times in Aspen, Colorado, where he covered city government. After that, he covered the nuclear industry for ExchangeMonitor in Washington.

Watch our 2:27 minute preview video on Section 230

WATCH HERE, or on YouTube, Twitter and Facebook

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook

See a complete list of upcoming and past Broadband Breakfast Live Online events.

Despite Speculation, Section 230 Is Here to Stay: Rep. Bob Latta https://broadbandbreakfast.com/2021/05/despite-speculation-section-230-is-here-to-stay-rep-bob-latta/?utm_source=rss&utm_medium=rss&utm_campaign=despite-speculation-section-230-is-here-to-stay-rep-bob-latta https://broadbandbreakfast.com/2021/05/despite-speculation-section-230-is-here-to-stay-rep-bob-latta/#respond Tue, 25 May 2021 18:25:28 +0000 https://broadbandbreakfast.com/?p=33474 May 25, 2021 — Section 230 of the Communications Decency Act may face amendment in the near future, but it’s here to stay—and for good reason, according to Rep. Bob Latta, R-Ohio.

Speaking at the 13th annual policy conference hosted by the Free State Foundation on Monday, Latta said a complete repeal of Section 230 would be misguided, arguing that such a measure would stifle free enterprise and competition throughout the technology industry. He also spoke to Republican fears of censorship on social media.

Section 230 provides online platforms that publish third-party content with legal immunity from civil liability. This permits consumers to post what they like without the platforms themselves taking responsibility for the content; it also allows platforms to remove content they deem undesirable. (Broadband Breakfast has published an explainer on the issue.)

The legislation came under scrutiny last year when critics, led in large part by former President Donald Trump, accused tech firms of effectively using Section 230 to censor conservative voices.

Latta said he sympathized with Republican fears, pointing to a tweet by Sen. Marsha Blackburn, R-Tennessee, which was recently removed “for no reason.”

Free speech cannot be “deterred”

Republican lawmakers threatened to repeal Section 230 last year, arguing that tech companies were stifling free speech in their alleged exploitation of the measure. Commenting on the proposal, Latta agreed with the intent, saying, “one of the things that we don’t want to see happen is that private speech is being deterred.”

But Latta was quick to add that lawmakers should refrain from making regulations “too tight.”

Impact on smaller platforms if Section 230 revoked

If Section 230 were removed altogether, larger platforms would inevitably “lawyer up” to protect themselves against the new legal exposure. Smaller competing platforms would be unable to afford those costs, however. Critics have said their revenues would drop, eating into their profits until they likely would have no choice but to sell to a larger company.

Additionally, larger firms could push the added legal costs onto consumers, compounding the negative externalities that platform users already encounter.

“We don’t want to hurt those new startups,” Latta said. “We don’t want to just throw it out—I think there’s going to have to be some revisions that we’re going to have to do to it. And that’s where we’re going to have to iron out the difference between the Republicans and the Democrats, where we find a common ground to get there.”

Explainer: With Florida Social Media Law, Section 230 Now Positioned In Legal Spotlight https://broadbandbreakfast.com/2021/05/explainer-with-florida-social-media-law-section-230-now-positioned-in-legal-spotlight/?utm_source=rss&utm_medium=rss&utm_campaign=explainer-with-florida-social-media-law-section-230-now-positioned-in-legal-spotlight https://broadbandbreakfast.com/2021/05/explainer-with-florida-social-media-law-section-230-now-positioned-in-legal-spotlight/#respond Tue, 25 May 2021 13:08:55 +0000 https://broadbandbreakfast.com/?p=33513 May 25, 2021 – Republican Florida Governor Ron DeSantis on Monday signed a bill that would allow for fines against social media companies for suspending or banning political candidates in the state.

The move, which is expected to face constitutional challenges, marks the latest flashpoint in an issue that can be traced back to what some Republicans have called social media’s targeting of party members, particularly their leader, Donald Trump.

In the waning days of the Trump presidency, as unfounded allegations of election theft gripped the Republican Party, a bombshell dropped: Twitter had banned the president for tweets the company said could be read as an incitement of the riot at the Capitol just days prior, on January 6.

By then, Twitter had already made a regular practice of labelling Trump’s tweets with links to information about the subjects he commented on. For more egregious violations, the social media company placed words like “disputed” next to some of his claims.

Join the Broadband Breakfast Live Online Oxford-style debate on “Unpacking Section 230 Controversies” on Wednesday, May 26, 2021. The moderated debate will feature two proponents of Section 230, and two critics. You can also PARTICIPATE in the current Broadband Breakfast Live Online event. REGISTER HERE.

The president, who by then had lost the election, and his allies were already clamoring for reforms to a decades-old law known as Section 230 of the Communications Decency Act, which shields social media and other platforms from legal liability for posts by their users. For example, if a user posts something libelous about a person or business, the platform on which it is posted enjoys full immunity.

But critics of the law have argued that immunity vanishes when the platform gets involved in moderating those posts. The reasoning is that the platform moves from mere facilitator to editor, or publisher, of what can and cannot be seen by its users.

Officials in Australia, for example, have been calling for social media companies to be categorized as “publishers” for the sake of holding them accountable for things their users post.

With Section 230 reform legislation on the table, Broadband Breakfast has selected this topic for the third in its series of explainers.

A short history of the legislation and the First Amendment

Section 230 of the Communications Decency Act of 1996 protects websites from lawsuits over illegal material their users post. However, there are exceptions for intellectual property violations, sex trafficking-related material, and violations of federal criminal law.

Then-Congressman Ron Wyden, D-Oregon, crafted Section 230 to ensure website owners could manage their sites without fear of legal repercussions. The law is crucial for social media sites, allowing companies like Facebook to become the giants they are, but it also covers newspapers’ websites. Some lawmakers wrongly claim it only protects “neutral platforms,” while critics claim it lets powerful companies ignore real harm to users.

The First Amendment protects most forms of speech in the United States, which means many proposals to force technology companies to moderate their content would face constitutional hurdles.

The First Amendment also protects private companies when it comes to regulating speech. For example, Facebook and Twitter both prohibit hate speech, even though hate speech is legal in the United States. The First Amendment protects these moderation rules.

Although it is often folded into Section 230 discussions, this First Amendment issue stands on its own.

The politicization of Section 230

The move by Twitter to ban Trump cascaded into bans and restrictions on his accounts on other platforms, including Facebook. Most recently, the Oversight Board, an arms-length independent group that determines whether actions by the social media giant are justified, said Trump’s ban from the platform would remain, but ordered Facebook to define the penalty, which is currently indefinite, and provide justification for it.

Before all that came to pass, then-President Donald Trump issued an executive order in May 2020 targeting Section 230 and social media, although only Congress and the courts have the authority to override or modify the statute. The order also pushed agencies to collect complaints that could justify revoking sites’ legal protections.

Trump generally endorsed Republican legislation to change the law in Congress. After Joe Biden’s election, Trump went further and advocated for the abolition of Section 230, packaging that proposal with the ongoing push for $2,000 direct stimulus payments.

Biden has been less vocal about Section 230 than Trump, but he is not a fan of it either. Before taking office, Biden recommended repealing Section 230 entirely.

Watch our 2:27 minute preview video on Section 230

“For [Facebook CEO Mark] Zuckerberg and other platforms, it should be revoked because it is not a purely Internet company. They are spreading falsehoods they know to be false,” Biden said. Facebook has historically defended itself by suggesting its human and algorithmic monitors actively take down groups and accounts that cause harm.

Since the election, Biden has not put forward a specific plan to revise Section 230. In December 2020, however, an advisor to Vice President Kamala Harris suggested that Section 230 be “thrown out” and a new program crafted to protect children from disturbing material online.

Section 230 modifications

The Stop Enabling Sex Traffickers Act (SESTA) and Fight Online Sex Trafficking Act (FOSTA), which were signed into law by President Trump in April 2018, reduced the protections afforded to online platforms for the purpose of counteracting sex trafficking. FOSTA carved out civil and criminal sex trafficking claims, along with conduct that promotes or facilitates sex services, from Section 230’s protections, and it applies retroactively.

Because of the new law, some websites preemptively censored their forums to guard against the possibility that a third party could run prostitution ads in the future. Now, sex workers say they have mostly been forced offline, which makes their work far less safe.

Democrats have called for an investigation of the law’s adverse effects on sex workers. There is little to no evidence of a decrease in online sex trafficking since the law took effect.

During a workshop conducted by the U.S. Department of Justice in February, the department examined cases in which platform providers allowed users to distribute non-consensual pornography and child sexual abuse images.

Proposals generally fall into two categories: removing specific kinds of content from protection, as FOSTA-SESTA did for works of a sexual nature, and conditioning protection on platform conduct. The proposed EARN IT Act, for example, would require sites to demonstrate that they are fighting child sex abuse, though some say it would likely also weaken encryption for private messaging.

A separate guide offers more information on this approach, which tends to be bundled with privacy and technology proposals. The EARN IT Act has been the only one to pass out of committee so far, and it was amended before advancing.

Democratic proposals

The Democratic Party’s primary goal is to get online platforms to remove hate speech, terrorist content, and harassment. To that end, Democrats have introduced several proposals, some of them bipartisan, meant to curtail full Section 230 protections.

Sen. Richard Blumenthal, D-Connecticut, sponsored the EARN IT Act and has criticized Section 230’s protections. A different solution, the PACT Act from Sens. Brian Schatz, D-Hawaii, and John Thune, R-South Dakota, would require website operators to be transparent about how they moderate content.

Sen. Amy Klobuchar, D-Minnesota, along with Sens. Mark Warner, D-Virginia, and Mazie Hirono, D-Hawaii, has pushed the SAFE TECH Act, which would keep many of Section 230’s protections but exempt from liability protection content that platforms are paid to carry.

Republican proposals

Based on a series of workshops in early 2020, then-Attorney General William Barr released recommendations for Section 230 reform in June 2020. In addition to new measures penalizing arbitrary or discriminatory moderation, the proposals would strip protections for content involving cyberstalking and terrorism.

Barr’s proposal applies immunity to moderation decisions that follow “plain and particular terms of service and are accompanied by detailed justifications” — a narrower scope than the current law.

Barr’s recommendations will have legal force only if Congress approves them, but so far, they are the best blueprint conservatives have for mainline Section 230 reform. A smaller group of Republicans has been dedicated to limiting the immunity that moderators enjoy, penalizing parties that act with bias or discrimination.

Sen. Josh Hawley, R-Missouri, has also introduced a bill that would require platforms to abide by a “duty of good faith,” exposing them to significant monetary damages owed to users who can demonstrate in court that the platform violated that duty.

With the Ending Support for Internet Censorship Act, introduced in 2019, Hawley would have required platforms that moderate content to certify their moderation as politically “neutral” to stay protected from lawsuits. No formal progress has been made on either proposal thus far.

Big Tech response

Facebook’s Zuckerberg said his company is leading the charge on calls for more regulation. In February 2020, Facebook released a white paper outlining the approach regulators ought to take.

Several assumptions underpin this approach: that platforms are global and thus subject to differing laws and cultures, that they act more like forums for speech than traditional publishers, and that they must constantly change for competitive reasons.

According to Facebook, it is possible to hold tech companies accountable for specific metrics, such as keeping views of prohibited content below a set level or setting a mandatory response time for removing posts.

However, the company points out that any effort to enforce a 24-hour removal requirement would likely result in platforms ignoring older posts in favor of posts within the 24-hour window.

After a few months in office, and with the virus more under control, Biden might turn more attention to Section 230 and propose amendments of his own, changing the law’s course. Either way, the issue is likely to remain on the table, and Republicans will likely continue to push for their own changes.

Join the Broadband Breakfast Live Online Oxford-style debate on “Unpacking Section 230 Controversies” on Wednesday, May 26, 2021. The moderated debate will feature two proponents of Section 230, and two critics. You can also PARTICIPATE in the current Broadband Breakfast Live Online event. REGISTER HERE.

Section 230 Has Coddled Big Tech For Too Long, Says Co-Author of Book on Amazon https://broadbandbreakfast.com/2021/05/section-230-has-coddled-big-tech-for-too-long-says-co-author-of-book-on-amazon/?utm_source=rss&utm_medium=rss&utm_campaign=section-230-has-coddled-big-tech-for-too-long-says-co-author-of-book-on-amazon https://broadbandbreakfast.com/2021/05/section-230-has-coddled-big-tech-for-too-long-says-co-author-of-book-on-amazon/#respond Tue, 11 May 2021 15:10:30 +0000 http://broadbandbreakfast.com/?p=32018 May 11, 2021 – The internet liability provision Section 230 has allegedly given Amazon the ability to allow unvetted products on its platform, which has boosted its business at the expense of customers, an Amazon critic is claiming.

Jason Boyce is the co-author, with Rick Cesari, of the book “The Amazon Jungle: The Truth About Amazon, The Seller’s Survival Guide for Thriving on the World’s Most Perilous E-Commerce Marketplace,” which dives into the size of Amazon and its alleged privacy issues.

“Amazon is protected by Section 230” and not in a good way, Boyce said in an interview with Broadband Breakfast.

In an interview with Broadband Breakfast, Boyce discussed how and why Amazon needs to be held accountable for its alleged dominance in ecommerce and the other markets it is vying to own.

Boyce singled out the CEOs of other big tech companies, calling Mark Zuckerberg “poison” and referring to Twitter’s Jack Dorsey as the “bearded wonder.” Not even Jeff Bezos’ blazer exempts him from his responsibility, he said.

“These white entitled hoodie wearing billionaires aren’t going to do the right thing for the U.S. citizens,” he said. Boyce said these “libertarian” idols are no longer trustworthy, in part because of their lack of respect for intellectual property and their invasion of privacy with no remuneration to the consumer.

Amazon was contacted multiple times for comment before publication, and while it said a response would be given, the company did not follow up.

Amazon’s role in the Section 230 debate

Section 230 of the Communications Decency Act of 1996 says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In other words, online intermediaries are protected against a range of laws that might otherwise be used to hold them legally responsible for what others say and do—which can extend to the sale of products from third parties in Amazon’s case.

If a third party operating out of a factory in China decides to sell a retractable dog leash on Amazon, Amazon is not responsible for the safety of that leash. If the leash snaps and takes an eye out, the consumer who bought it cannot sue Amazon, since the company is protected by Section 230. Unfortunately, this has already happened.

Heather Oberdorf, a Pennsylvania woman, bought an $18.48 retractable leash from a Chinese company selling on Amazon. One day while she was walking her dog, the leash suddenly snapped and sprang back, hitting her left eye and leaving her partially blind.

MarketWatch reported on this story and said that “Amazon contended that Oberdorf could not hold it liable for posting [the retractable leash] because a section of the 1996 Communications Decency Act (CDA) shields companies from liability for publishing what third parties say on their sites.”

The Chinese factory that produced the leash is also effectively free from legal responsibility, since “no U.S. citizen is going to have any success suing a factory in China,” leaving no incentive to sell safe products to consumers on Amazon, Boyce said.

Waiving rights to sue

When customers sign up for an Amazon account, they waive all rights to sue the company, even if the use of a third party’s product results in serious harm. Boyce argued that Amazon should be required to force its third-party sellers to publish safety information about their products so consumers can be informed.

Amazon is a once-in-a-generation company that has won ecommerce, offering half a billion items online while the biggest Walmart store can stock only 500,000 items, said Boyce. He raised alarm over Amazon’s dominance in the book industry and its Amazon Web Services, calling them monopolies that, at present, prevent the “next Amazon” from being born.

Boyce cited the historical case of Microsoft, which was sued by the U.S. government for “tying” after it squeezed out rival browsers, among other conduct. The government prevailed in one of the largest antitrust cases in history, although the court-ordered breakup of Microsoft was later overturned on appeal, and Boyce said the outcome cleared the way for Google to be born under its “don’t be evil” mantra.

Sen. Mike Lee Promotes Bills Valuing Federal Spectrum, Requiring Content Moderation Disclosures https://broadbandbreakfast.com/2021/04/sen-mike-lee-promotes-bills-valuing-federal-spectrum-requiring-content-moderation-disclosures/?utm_source=rss&utm_medium=rss&utm_campaign=sen-mike-lee-promotes-bills-valuing-federal-spectrum-requiring-content-moderation-disclosures https://broadbandbreakfast.com/2021/04/sen-mike-lee-promotes-bills-valuing-federal-spectrum-requiring-content-moderation-disclosures/#comments Mon, 05 Apr 2021 20:25:50 +0000 http://broadbandbreakfast.com/?p=32388 April 5, 2021 – Sen. Mike Lee, R-Utah, said Friday that spectrum used by federal agencies is not being utilized efficiently, pointing to legislation he introduced earlier this year that would evaluate the allocation and value of federally-reserved spectrum.

The Government Spectrum Valuation Act, S.553, introduced March 3, directs the National Telecommunications and Information Administration to consult with the Federal Communications Commission and the Office of Management and Budget to estimate the value of spectrum between 3 kilohertz and 95 gigahertz that is assigned to federal agencies.

Lee spoke at an event hosted by the Utah tech association Silicon Slopes on Friday about the legislation, in addition to other topics, including Section 230.

Some bands on the spectrum are reserved for federal agencies as they need them, but that spectrum is not always managed efficiently, Lee said. Some bands are used by the Department of Defense for “national security,” for example, but when asked what that spectrum is used for, “we’re told, ‘we can’t tell you because of national security,’” he said.

“Just about everything we do on the internet is carried out through a mobile device, and all of that requires access to spectrum,” he said.

He said that lives are increasingly affected and enhanced by connection to the internet, often wirelessly, which increases the need for the government to manage spectrum efficiently. Some of the bands are highly valuable, he said, comparing them to the “beach front property” of spectrum.

Legislation changing Section 230

Lee also spoke on Section 230, a statute that protects online companies from liability for content posted by their users. It’s a hot topic for policymakers right now as they consider regulating social media platforms.

Both Republicans and Democrats want more regulation for tech companies, but for different reasons. Democrats want more moderation of alleged hate speech and other harmful content, citing the January 6 riot at the Capitol as one example of too little moderation. Republicans, on the other hand, including Lee, allege social media companies censor or remove right-leaning political content but do not hold left-leaning content to the same standard.

Lee highlighted that platforms have the right to be as politically biased as they want, but said it becomes a problem when their terms of service or their CEOs publicly state they are neutral while the companies moderate content from a non-neutral standpoint.

Lee expressed hesitation about repealing or changing Section 230. “If you just repealed it altogether, it would give, in my view, an undue advantage to big market incumbents,” he said. One solution is supplementing Section 230 with additional clarifying language or new legislation, he said.

That’s why he came up with the PROMISE Act, legislation he introduced on February 24 that would require the disclosure of content moderation rules and permit the Federal Trade Commission to take corrective action against companies that violate those disclosed rules. “I don’t mean it to be an exclusive solution, but I think it is a reasonably achievable step toward some type of sanity in this area,” he said.

Sen. Amy Klobuchar, D-Minn., and several of her colleagues have also drafted Section 230 legislation that would maintain the spirit of the liability provision but remove it for paid content.

Pressed by Congress, Big Tech Defends Itself and Offers Few Solutions After Capitol Riot https://broadbandbreakfast.com/2021/03/pressed-by-congress-big-tech-defends-itself-and-offers-few-solutions-after-capitol-riot/?utm_source=rss&utm_medium=rss&utm_campaign=pressed-by-congress-big-tech-defends-itself-and-offers-few-solutions-after-capitol-riot https://broadbandbreakfast.com/2021/03/pressed-by-congress-big-tech-defends-itself-and-offers-few-solutions-after-capitol-riot/#respond Fri, 26 Mar 2021 16:23:34 +0000 http://broadbandbreakfast.com/?p=32138 March 26, 2021 – The heads of the largest social media companies largely defended their platforms, reiterated what they’ve done, and offered few solutions to the problems that ail them during a congressional hearing Thursday.

But, under harsh questioning from the House Energy and Commerce Committee, none of the CEOs of Google, Facebook or Twitter was given the chance to respond to questions for more than 30 to 60 seconds on a given topic.

The hearing was about misinformation on social media in the fallout of the January 6 Capitol riot. The CEOs said dealing with the problem of dis- and misinformation on their platforms is more difficult than people think.

“The responsibility here lies with the people who took the actions to break the law and do the insurrection,” Facebook CEO Mark Zuckerberg said in response to a question about whether the platforms were to blame for the riot.

“Secondarily, also, the people who spread that content, including the president, but others as well, with repeated rhetoric over time, saying that the election was rigged and encouraging people to organize. I think those people bear the primary responsibility as well,” Zuckerberg said.

Zuckerberg added that “polarization was rising in America long before social networks were even invented,” blaming the “political and media environment that drives Americans apart.”

A ‘complex question’ of fault

Google CEO Sundar Pichai said it’s a “complex question” in response to the question of who’s at fault for the riot. Twitter CEO Jack Dorsey, however, was more direct: “Yes, but you also have to take into consideration a broader ecosystem; it’s not just about the technology platforms we use,” he said.

It was the first time Zuckerberg, Dorsey and Pichai had appeared before Congress since the January 6 insurrection at the U.S. Capitol. The hearing was spurred by the riot and the turbulent presidential election that concluded in Joe Biden’s win and Donald Trump’s ban from Twitter and Facebook. For several months, Congress has eyed possible Section 230 reform to address alleged problems in the tech industry.

“Our nation is drowning in misinformation driven by social media. Platforms that were once used to share kids with grandparents are all-too-often havens of hate, harassment and division,” said Rep. Mike Doyle, D-Penn., chairman of the Communications and Technology subcommittee, who led the hearing. Doyle alleged the platforms “supercharged” the riot.

Both Democratic and Republican members of the committee laid out a variety of grievances during the five-hour meeting, and while they didn’t all share the same concerns, all agreed that something needs to be done.

“I hope you can take away from this hearing how serious we are, on both sides of the aisle, to see many of these issues that trouble Americans addressed,” Doyle said.

Congressional concerns

On the left side of the political aisle, the main criticism of the tech giants was the spread of misinformation and extremism, including about COVID-19 vaccines, climate change and the 2020 presidential election that Trump alleged was rigged against him.

“It is not an exaggeration to say that your companies have fundamentally and permanently transformed our very culture, and our understanding of the world,” said Rep. Jan Schakowsky, D-Illinois. “Much of this is for good, but it is also true that our country, our democracy, even our understanding of what is ‘truth’ has been harmed by the proliferation and dissemination of misinformation and extremism,” she said.

“Unfortunately, this disinformation and extremism doesn’t just stay online, it has real-world, often dangerous and even violent consequences, and the time has come to hold online platforms accountable,” said Rep. Frank Pallone, D-N.J.

From the right, Republican members voiced concerns about excessive censorship, easy access to opioids, and the harm they said social media does to children.

“I’m deeply concerned by your decisions to operate your companies in a vague and biased manner, with little to no accountability, while using Section 230 as a shield for your actions and their real-world consequences,” said Rep. Bob Latta, R-Ohio. “Your companies had the power to silence the president of the United States, shut off legitimate journalism in Australia, shut down legitimate scientific debate on a variety of issues, dictate which articles or websites are seen by Americans when they search the internet,” he said.

“Your platforms are my biggest fear as a parent,” said Rep. Cathy McMorris Rodgers, R-Washington, expressing frustration over the impact that social media has on children. “It’s a battle for their development, a battle for their mental health, and ultimately, a battle for their safety,” she said, citing a rise of teen suicides since 2011. “I do not want you defining what is true for them, I do not want their future manipulated by your algorithms,” she said.

Platforms say it’s challenging, reiterate initiatives

In response to the many criticisms, Zuckerberg made clear that while moderating content is central to addressing misinformation, it is important to protect speech as much as possible while taking down illegal content, which he said can be a huge challenge. As an example, he said, bullying hurts the victim, but there is no clear line at which speech can simply be censored.

Pichai said that Google’s mission is about organizing and delivering information to the world and allowing free expression while also combatting misinformation. But it’s an evolving challenge, he said, because approximately 15 percent of Google searches each day are new, and 500 hours of video are uploaded to YouTube every minute. To reinforce that point, he noted that 18 months ago no one had heard of COVID-19, and in 2020 ‘coronavirus’ was the most trending search.

Dorsey expressed a similar sentiment about the evolving challenge of balancing freedom of expression with content moderation. “We observe what’s happening on our service, we work to understand the ramifications, and we use that understanding to strengthen our operations. We push ourselves to improve based on the best information we have,” he said.

The best way to face new challenges is to narrow the problem down to where action will have the greatest impact, Dorsey said. Disinformation, for example, is a broad concept, so Twitter focused on disinformation leading to offline harm, working in three specific categories: manipulated media, public health and civic integrity.

“Ultimately, we’re running a business, and a business wants to grow the number of customers it serves. Enforcing a policy is a business decision,” Dorsey said.

Dorsey noted Twitter’s new Bluesky project, a decentralized internet protocol that various social media companies would be able to use rather than being owned by a single company. He said it would improve the social media environment by spurring innovation around business models and recommendation algorithms and by putting moderation controls in the hands of individuals instead of private companies. But others already working in a similar technology space say the project is not without its problems.

On Section 230 reform

On the question of changing Section 230 of the Communications Decency Act, which grants social media companies immunity from liability for user-generated content, Zuckerberg suggested two specific changes: platforms should be required to issue transparency reports about harmful content and to better moderate content that is clearly illegal. These changes should apply only to large social media platforms, he said, but he did not specify the difference between a large and a small platform.

Dorsey said those may be good ideas, but it could be difficult to determine what is a large and small platform, and having those stipulations may incentivize the wrong things.

When asked about Instagram’s new version for children, Zuckerberg confirmed it was in the planning stage and many details were still being worked out.

Several Democrats raised concerns about minority populations, citing as one example the March 16 shooting in Atlanta that killed eight people including several Asian American women. Rep. Doris Matsui, D-Cal., asked why various hashtags such as #kungflu and #chinavirus were not removed from Twitter.

Dorsey responded that Twitter does take action against hate speech, but it can also be a challenge because it’s not always simple to distinguish between content that supports an idea and counter speech that condemns the support of that idea.

The tech leaders were asked by multiple members about the platform algorithms failing to catch specific instances of content moderation. Democrats referred to examples of posts containing misinformation or hate speech, while Republicans used examples of conservative-based content being removed.

Both Zuckerberg and Dorsey said that their systems are not perfect and it’s not realistic to expect perfection. Some content will always slip past the company’s radar and have to be addressed individually, Zuckerberg said.

In response to Rep. Steve Scalise’s reference to a 2020 New York Post story about Hunter Biden that was taken down, Dorsey said the company has made mistakes in some instances.

Editor’s Note: This story has been revised to add in a second paragraph that more accurately captured the fact that, while the tech executives offered few solutions, they were given little opportunity to do so by members of Congress. Additionally, the word “secondarily” was added back into Facebook CEO Mark Zuckerberg’s statement about who bore responsibility for the insurrection.

Sen. Mark Warner Says His Section 230 Bill Is Crafted With Help of Tech Companies https://broadbandbreakfast.com/2021/03/sen-mark-warner-says-his-section-230-bill-is-crafted-with-help-of-tech-companies/?utm_source=rss&utm_medium=rss&utm_campaign=sen-mark-warner-says-his-section-230-bill-is-crafted-with-help-of-tech-companies https://broadbandbreakfast.com/2021/03/sen-mark-warner-says-his-section-230-bill-is-crafted-with-help-of-tech-companies/#respond Tue, 23 Mar 2021 21:19:30 +0000 http://broadbandbreakfast.com/?p=31942 March 23, 2021 – Sen. Mark Warner, D-Virginia, said he and his staff are in “regular contact” with big tech representatives about Section 230 reform.

“Both my staff and I are in regular contact with a host of individuals on the tech side,” Warner said Monday at a Protocol webinar discussing internet intermediary liability provision Section 230.

“We have had a great deal of contact with Facebook at the most senior levels; on the performance team, we have had an ongoing conversation with Google, although sometimes they decided not to show up at our hearings.

“My staff is in contact with major platform entities and will continue to have a dialogue.”

The proposed legislation, which was brought forth by Warner with Sens. Mazie Hirono, D-Hawaii, and Amy Klobuchar, D-Minn., would maintain immunity from legal consequences for whatever the platforms’ users post but make an exemption for content that the companies get paid for.

Critics of the proposal, including Senator Ron Wyden, D-Ore., have said that, if enacted, the change would effectively create a new form of liability on commercial relationships that would force “web hosts, cloud storage providers and even paid email services to purge their networks of any controversial speech.”

After consulting with interest groups, consultants, and experts, Warner declared that it is time to make some changes and get it right. “Some say the bill doesn’t go far enough; some say it goes too far, but I’m sure we got at the right point.”

To make clear what the bill does and does not do, Warner said the legislation does not restrict anyone’s right to free speech, and that he still wants “customers to be able to say about the good or bad of things they got at their local restaurant.”

The changes, Warner said, will address the disparity between big and small tech companies by maintaining protections for the latter but holding the former responsible for things they get paid for.

“In the late 90s, Section 230 was built to protect tech startups,” Warner said, but it has become a “get out of jail free card” for large corporations.

Changes to Section 230 Might Lead To Removal of Legitimate Speech, Subtract from Historical Record https://broadbandbreakfast.com/2021/03/changes-to-section-230-might-lead-to-removal-of-legitimate-speech-subtract-from-historical-record/?utm_source=rss&utm_medium=rss&utm_campaign=changes-to-section-230-might-lead-to-removal-of-legitimate-speech-subtract-from-historical-record https://broadbandbreakfast.com/2021/03/changes-to-section-230-might-lead-to-removal-of-legitimate-speech-subtract-from-historical-record/#respond Wed, 17 Mar 2021 17:14:04 +0000 http://broadbandbreakfast.com/?p=31740 March 17, 2021 – Changes to Section 230 of the Communications Decency Act may not lead to the solution that America wants or needs, said panelists during a South by Southwest event Tuesday.

Section 230 grants immunity to social media platforms for user-generated content. It’s an increasingly visible topic before Congress, in the media and in the tech industry as many are concerned about the spread of misinformation and extremism. Like many others, the panel agreed that something needs to change, but the answer to that is not clear.

For Kate Ruane, senior legislative counsel at the American Civil Liberties Union, one of the main concerns is the algorithms that tech companies use for moderating content. They’ll build systems “that will identify speech that is ‘bad’ or could create a liability risk, they will build those programs and just run them automatically, and there will be no human review of it. And what’s going to happen is far, far more over-moderation than anybody intends,” she said.

Marginalized communities and speech that is considered outside the mainstream will be targeted, she explained. “Speech like that is speech that we want,” she said. “We don’t get marriage equality without speech like that, we don’t get Black Lives Matter without speech like that,” she said.

Rather than changing Section 230, Ruane targeted the core business model for big tech companies like Google, Facebook and Twitter. “Actually go after the business model of these companies, which is to collect your data, and market you as the product,” she said. If we can interrupt that, we’re moving in the right direction, she said.

Steven Rosenbaum, managing director at NYC Media Lab, said that disturbing online content is good revenue for social media platforms because users are drawn to it like people driving past a car accident. But these companies need to address the philosophical question of whether they want to support the amplification of this type of content, he said.

In recent months, social media companies have engaged in de-platforming users, including the Twitter ban of former President Donald Trump after a group of his supporters rioted at the U.S. Capitol on January 6, and Amazon Web Services’ shutdown of servers for the conservative social media site Parler. Many other instances have occurred over the years, such as AWS cutting off the news publication site Wikileaks in 2010 and various social media platforms collectively targeting ISIS in 2015.

Facebook also suspended Trump’s account, but that action is currently under review by the company’s new oversight board—a committee formed in 2020 that is akin to the Supreme Court for Facebook’s content moderation.

Protecting users’ freedom of speech is a concern for many, but Twitter and Facebook are not required to ensure the First Amendment rights of their users. Despite that, Ruane said that companies need to be viewpoint-neutral in how they moderate content. “It is very important for platforms like Twitter, FB, YouTube, that are responsible for the speech of billions of people around the world, to be avoiding censorship to the extent that they can, because they are gatekeepers to the public square. And when they moderate content, they often get it wrong,” she said.

Social media has been a medium for recruiting to and spreading the messages of violent groups, such as ISIS and, more recently, far-right extremists. Much of that content has been banned from online platforms, which the panelists agreed was a good thing.

Determining what content is removed can be a challenge though, depending on what type it is, said Amarnath Amarasingam, professor at Queen’s University in Canada. ISIS content was fairly easy to target because a lot of it was branded, he said. But with other content, such as from far-right extremist groups, it is more difficult because those groups don’t have a brand, he said.

But preserving that content in some way is also important for academic and historical reasons, said Jillian York, director for international freedom of expression at the Electronic Frontier Foundation, stressing the importance of documenting human rights abuses. She expressed concern over content being scrubbed from the internet that details atrocities and other problems in places like Syria.

“There is a case to be made even if that material should not be allowed to be publicly posted, that it should still be documented in some way, and the vast majority of it is thrown in the bin of history by content moderators,” she said.

Ruane agreed with York, referring to the Capitol riot as a recent example. “We’re seeing so much evidence, we’re seeing so much documentation of what happened on January 6 being removed from the internet entirely with no sense of whether we will be able to preserve it for research value or for historical value at all,” she said.

Ruane also expressed concern about the lack of transparency from tech companies in their decisions to remove users and content; the platforms, she said, are neither consistent nor transparent.

Whether de-platforming actually works to limit a user’s influence remains a major question. The panel said it is possible that users banned from mainstream sites will simply find other platforms that welcome their views.

Section 230 Reform Requires Citizen Participation, Says Sen. Amy Klobuchar
https://broadbandbreakfast.com/2021/03/section-230-reform-requires-citizen-participation-says-sen-amy-klobuchar/ – Fri, 05 Mar 2021

In the conversation to reform Section 230 of the Communications Decency Act, which governs liability for internet intermediaries, Sen. Amy Klobuchar, D-Minn., said Tuesday the public must get involved.

“We can’t fight Google and other million-dollar companies with duct tape and Band-Aids,” Klobuchar said, speaking during a live event hosted by the tech publication The Verge.

“People will say dumb stuff,” she said, but “internet users need to lift their voices, actively participate in contacting their senators” to better inform lawmakers about where they stand on reform.

The reasoning is that ordinary users are the ones who will be most affected by any reform.

The reform discussions, egged on by former President Donald Trump, reached a fever pitch when Twitter labeled some of Trump’s misleading tweets with accompanying factual information about the issues he tweeted about. Other platforms followed suit.

Early last month, Klobuchar joined other Senate Democrats in proposing their own changes to Section 230, called the SAFE TECH Act.

The proposal would generally keep internet companies free from liability for content their users post, except for paid content, such as advertising, from which the companies financially benefit.

Supporters of Section 230 Agreed About Concerns, But Counterarguments Dominated Panel Discussion
https://broadbandbreakfast.com/2021/02/supporters-of-section-230-agreed-about-concerns-but-counterarguments-dominated-panel-discussion/ – Sat, 13 Feb 2021

February 13, 2021 – With many proposals on the table to revise Section 230 of the Communications Decency Act, supporters of the landmark internet law discussed Tuesday how those proposals could heavily interfere with the First Amendment.

The general angst about people using the internet badly, coupled with rising politicization and online hate speech, has given advocates for change an upper hand. But Cathy Gellis, a tech law expert and Section 230 specialist, said that advocates for change are overlooking how Section 230 has worked to the benefit of users.

“In some degree, we are reacting to some very real problems, but we’re not necessarily reacting with the appropriately nuanced analysis of what is really the root cause of concerns,” said Gellis, speaking at an event hosted by the progressive American Constitution Society for Law and Policy.

During the meeting, hosted by the organization’s chapter at Santa Clara Law School, the discussion centered on how future changes to Section 230 might diminish users’ power to speak and create unnecessary conflict.

Gellis insisted that these issues need to be analyzed with caution, and with users’ perspectives on how social platforms operate kept in view.

Eric Goldman, associate dean at the law school and co-director of its High-Tech Law Institute, raised the concerns of some elected officials. He agreed that bad things happen on the internet, owing to the outsized influence of heavily capitalized big tech platforms in the digital space.

Yet Goldman said that the entire critique of Section 230 has a severe structural problem: Critics have bought into the idea that big platforms act as censors, the way governments do, when they impose speech restrictions on how people talk to each other. The truth, he said, is the reverse: “The First Amendment exists to keep it so that the government or big tech companies can’t use their power to keep citizens from being able to talk back to people in power.”

“Not having a regulatory system that prevents intermediaries from adopting policies, where a content-based discrimination is sharply limited, allows the internet to be user friendly,” agreed David Greene, civil liberties co-director of the Electronic Frontier Foundation, which has championed Section 230 for more than 25 years.

Cutting Section 230 would injure the First Amendment and the power of speech, he said, and would leave websites without user-friendly policies.

It is still unclear how changes to Section 230 would enhance connectivity, he said. Whether or not the platforms are doing a good job, there will always be the question of more or less content moderation.

Former FCC Commissioners Reflect on Changes Since 1996 Telecommunications Act
https://broadbandbreakfast.com/2021/02/former-fcc-commissioners-reflect-on-changes-since-1996-telecommunications-act/ – Tue, 09 Feb 2021

February 9, 2021 – As 2021 marks the 25-year anniversary of the Telecommunications Act, former Federal Communications Commissioners Mike O’Rielly and Harold Furchtgott-Roth reminisced Monday about their time as congressional staff members when the law was passed. The Hudson Institute hosted the conversation.

The Telecommunications Act was the first major update to telecommunications law since 1934, and the two former commissioners reflected on how much the internet has shifted the focus of technology legislation.

O’Rielly said the “heart of the legislation” was looking at local and long-distance telephone company markets and opening them to more competition, but “no one knew at the time that the internet would go in a different direction,” he said.

“No one really figured out at the time what was going to happen as broadband and online technology would take over from circuit-switch technologies,” agreed Furchtgott-Roth.

“These markets that we thought were so important back in 1996, long-distance services, they don’t exist anymore,” he said. “Technology has changed and provided a different and a superior form of competition than the [Act] could have ever imagined.”

Four of the biggest tech companies today, Amazon, Google, Facebook and Tesla, barely existed or didn’t exist at all 25 years ago, Furchtgott-Roth said, using them as examples of how much technology and the market have changed. “All of which have a very active role directly or indirectly in the communication space,” he said.

The two also expressed surprise at how prominent some of the law’s provisions have become, and how rarely the FCC uses other provisions.

O’Rielly said the preamble to the law was written as a description, “something that has no statutory weight,” yet that language “has been abused” by courts and by the FCC.

Section 230 a new focus for concern

In recent months, Section 230 of the Act, which grants immunity to online platforms for content provided by their users, has become a major topic of conversation in Congress and in public discourse, driven by controversies over the election, COVID-19 and the January 6 U.S. Capitol riot, which led to Donald Trump’s ban from Twitter and Facebook and the shutdown of Parler, a conservative-led competitor to Twitter.

“Everyone agrees that Section 230 is worthy of review or some type of reform, but they come from different perspectives,” O’Rielly said. Republicans and conservatives are worried about censorship on the platforms, while Democrats are worried about the spread of misinformation without correction or policing, he said. “Those two things make it really hard to find a middle ground, even if everyone agrees on the overall premise of some type of reform,” he said.

Furchtgott-Roth mentioned two parts of the Act that he thought would have been used more often: first, the forbearance clause in Section 10, which gives the FCC the option not to enforce parts of the Act if entities meet certain conditions; and second, regulatory review under Section 11, which allows the FCC to review its own rules.

Asked about reforming the Telecommunications Act, O’Rielly said that Congress needs to be forward-thinking rather than constantly fixing previous legislation, and that lawmakers need to be specific in their statutes about what they do and do not want federal agencies to do.

The anniversary also drew comment from members of Congress and industry groups on Monday. Rep. Anna Eshoo, D-Calif., and Sen. Ed Markey, D-Mass., a co-author of the Act, said Congress must revisit the law to bring it up to speed with demands for better broadband.

Meanwhile, FirstLight Fiber CEO Kurt Van Wagenen and INCOMPAS CEO Chip Pickering suggested the incoming White House and Federal Communications Commission make broadband deployment a top agenda item to bring connectivity to underserved areas.

Tread Carefully if Section 230 is to Be Changed, Experts Say at INCOMPAS Event
https://broadbandbreakfast.com/2021/02/tread-carefully-if-section-230-is-to-be-changed-experts-say-at-incompas-event/ – Tue, 09 Feb 2021

February 8, 2021 – Amending the internet intermediary liability provision of the Telecommunications Act too rashly or quickly will harm smaller companies and new entrants without having the intended impact on larger players, according to experts.

Former President Donald Trump has often complained about big technology companies’ alleged control over speech on the internet, especially in the wake of Twitter adding disclaimers to his misleading tweets and then banning him for allegedly inciting the Capitol riot last month.

Trump and other conservatives have sought revocation of Section 230 of the Communications Decency Act that shields these companies from liability for content posted on their platforms.

On the 25th anniversary of the Telecommunications Act, experts wrangled with the question of whether Section 230 should be reformed and, if so, to what degree.

On Monday, the Internet and Competitive Networks Association, also known as INCOMPAS, hosted a panel of experts to further dissect some of the consequences of repealing or reforming Section 230, a question that has been discussed at past events.

Julie Samuels, founder and executive director of Tech:NYC, said at the INCOMPAS event that Section 230 should not be altered lightly, and that changes to its current state could have significant consequences for small platforms.

“If we do see drastic reform or even, frankly, moderate reform to Section 230…larger companies will be able to comply,” Samuels continued. “They have the resources to comply. Who doesn’t have the resources to comply? Smaller startups, nonprofit organizations, [and] marginalized voices.”

Proceed with caution on Section 230 changes

This is not to say that Samuels believes no changes should be made to Section 230; she made it clear that her chief concern is that Congress or the National Telecommunications and Information Administration, an agency of the Commerce Department, would “come in with a sledgehammer, when what they really need is a scalpel.”

She reminded the audience that Section 230 reform is designed to address concerns about large corporations, yet it will ultimately be smaller organizations that end up paying the price. More than that, Samuels emphasized that amending Section 230 would affect companies that don’t even exist yet: “What we’re in theory doing [by amending Section 230] is creating potential barriers to entry that are incredibly difficult to surmount.”

Screenshot from the INCOMPAS Policy Summit

Samuels specifically pointed to Sen. Mark Warner, a Democrat from Virginia, who is proposing legislation that she argued could hurt small companies. Warner’s bill would effectively turn Section 230 into an affirmative defense, meaning that the companies in question would need to provide evidence that they did not violate the law, she said. While large companies could afford to litigate such issues in court, small companies would have significantly more trouble doing so.

Pinterest’s head of U.S. public policy and social impact, Braden Cox, pointed out that there is a common misconception that Section 230 could be amended so that it somehow affects only large companies. The reality, he said, is that it affects all online media.

Others say Section 230 is good as it is

Lindsay Stern, attorney and policy advisor for INCOMPAS, however, said one of the primary goals of Section 230 is to promote competition in the online landscape, and it has succeeded in that regard.

“The ability to host and moderate [third party] content in good faith is good for competition because it allows websites to differentiate themselves by what they allow.” Stern emphasized that altering Section 230 could remove this benefit.

Social Media Needs to Be Held Accountable to a Higher Standard Than No Standard
https://broadbandbreakfast.com/2021/02/social-media-needs-to-be-held-accountable-to-a-higher-standard-than-no-standard/ – Mon, 08 Feb 2021

February 8, 2021 – The spread of disinformation and misinformation can be controlled if the same transparency rules required of the broadcast industry are applied to social media, an Atlantic Council webinar heard Wednesday.

That includes changing Section 230 of the Communications Decency Act, which governs liability of internet intermediaries, to require that social media companies make clear who paid for the ads they display, said Pablo Breuer, co-author of the Adversarial Misinformation and Influence Tactics and Techniques framework.
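
Breuer’s proposed requirement, disclosing who paid for an ad, is straightforward to express as data. Below is a hypothetical sketch of what such a disclosure record might look like; the fields are invented to illustrate the broadcast-style “paid for by” rule and are not drawn from any actual bill text.

```python
# Hypothetical ad-disclosure record mirroring broadcast "paid for by"
# rules. All field names are invented for illustration.
from dataclasses import dataclass, asdict
import json

@dataclass
class AdDisclosure:
    ad_id: str
    sponsor: str            # the "paid for by" entity, shown to every viewer
    amount_usd: float
    targeting_summary: str  # plain-language description of who was targeted

disclosure = AdDisclosure(
    ad_id="ad-001",
    sponsor="Example Advocacy PAC",
    amount_usd=12000.0,
    targeting_summary="adults 35+ in competitive districts",
)
print(json.dumps(asdict(disclosure), indent=2))
```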

Breuer’s framework, which was co-authored with Sara-Jayne Terp, seeks to identify the best means to detect and discuss what Terp referred to as “disinformation behaviors.”

The webinar last Wednesday focused on the critical issue of misinformation and disinformation and the roles and responsibilities of social media, the government and citizens.

Breuer noted that just four years ago, the attitude surrounding misinformation and disinformation campaigns was very different.

“When Sara-Jayne and I started talking about this, people thought we were crazy—they thought there was no disinformation problem,” he said. “Now you see it covered on the nightly news.”

When asked why the issue has only come to the forefront of society within the last couple of years, Breuer pointed out that in the past, disseminating information required a lot of capital. With the advent of social media, that was no longer the case.

Pablo Breuer

“We’ve democratized the ability to reach a mass audience. Now we live in a world where an entertainer has twice the number of followers as the President of the United States,” said Breuer. “They don’t have to clear their message with anyone—they can say something completely false.”

For a long time, social media was a largely unregulated wild west of commentary, news and opinions.

But then the data-harvesting exploits of firms like Cambridge Analytica, which exposed how personal information was used to mold citizens’ thinking on issues that affected elections around the world, began to put things into focus.

We may be approaching the end of that non-regulation, as the banning of former President Donald Trump and other right-wing political commentators from Twitter and other social media platforms brings renewed scrutiny of the power of tech companies.

Breuer conceded that while more attention on the issue is a step in the right direction, there are still huge dangers in the spread of fraudulent information and the many channels available to malevolent actors.

Following the banning of those figures, much of their base gravitated toward more receptive platforms, including Parler and Gab.

Counter-measures to social media disinformation?

Terp and Breuer compiled a list of what they regard as effective countermeasures to mitigate misinformation. Terp noted that many people have been unknowingly co-opted as “unwitting agents,” and that these unwitting agents are not necessarily being influenced by external entities.

“Disinformation is coming from inside the house. What we are seeing is this move past ‘the Russians are coming’ to a more honest discussion about financial motivations, political motivations and reputational drivers of misinformation,” she said.

Terp also said that there is a strong relationship between privacy, democracy and disinformation. Greater consumer privacy, she explained, reduces how precisely outside entities can target the content a consumer is exposed to.

In the aftermath of Facebook’s move to wholly integrate WhatsApp into its ecosystem, for example, Signal, a privacy-by-design messaging app, saw its adoption skyrocket. End-to-end encrypted messaging has also been a problem for law enforcement, officials say, because it inhibits their ability to access criminals’ messages.

Terp described disinformation as merchandise and said that one of the primary goals of anyone trying to curb its spread should be to take the money out of it. According to Terp, countermeasures deployed by social media platforms to make disinformation less profitable have had a mitigating effect.
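
The mechanical core of Terp’s “take the money out” countermeasure is simple: once content is flagged as likely disinformation, stop sharing ad revenue on it and cut its distribution weight, leaving the speech up but unprofitable. The sketch below is a hypothetical illustration; the threshold and field names are invented.

```python
# Hypothetical demonetization countermeasure: flagged content keeps its
# speech but loses its revenue and reach. Names and threshold invented.

def apply_countermeasure(item: dict, disinfo_score: float) -> dict:
    """Demonetize and down-rank content scored as likely disinformation."""
    if disinfo_score >= 0.8:
        item["ad_revenue_share"] = 0.0      # no payout to the poster
        item["distribution_weight"] *= 0.1  # sharply reduce amplification
    return item

post = {"id": "p1", "ad_revenue_share": 0.55, "distribution_weight": 1.0}
print(apply_countermeasure(post, disinfo_score=0.92))
```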

Tackling bad behavior, not combatting people

In her conclusion, Terp made it clear that the only way to craft policies that effectively combat the spread of disinformation is to tackle the behavior, not the people. More needs to be done to spot these behaviors early, she said, so that social media companies and government can act preventively rather than simply reacting to events as they happen.

Breuer offered some advice for the average person: He encouraged the audience to engage with those they disagree with, and to avoid trapping themselves in a virtual echo chamber.

He added that the government needs to reexamine Section 230 and be more proactive in crafting policy to address the demands of modern technology.
