12 Days of Broadband: State Regulations and Children’s Safety Online

January 3, 2024 – A nationwide push to restrict teenagers’ online activity gained ground in 2023 as several states implemented stringent laws targeting social media use among youth.

In March, Utah ventured into uncharted territory when Republican Gov. Spencer Cox signed two measures, H.B. 311 and S.B. 152, mandating parental consent for all minors – 17 and under – before they can register for platforms like TikTok and Meta’s Instagram. For decades, the default standard of the 1998 Children’s Online Privacy Protection Act has been no restrictions on social media use by kids 13 and over.

The bills, which do not take effect until March 2024, require individuals under 18 to obtain parental consent to open a social media account, bar minors from accessing social media platforms between 10:30 p.m. and 6:30 a.m., and grant parents full access to their child’s social media accounts.
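As a side note on the curfew provision, the blocked hours form an overnight window that wraps past midnight, so a naive "start <= now <= end" comparison would never match. Below is a minimal, hypothetical sketch of the wrap-around check; the cutoff times come from the bills, while the function and everything else is invented for illustration rather than taken from any platform’s actual code.

```python
from datetime import time

# Curfew window from Utah's bills: 10:30 p.m. to 6:30 a.m.
# The interval crosses midnight, so membership is the union of
# [start, midnight) and [midnight, end).
CURFEW_START = time(22, 30)
CURFEW_END = time(6, 30)

def in_curfew_window(now: time) -> bool:
    """Return True if `now` falls inside the overnight curfew window."""
    return now >= CURFEW_START or now < CURFEW_END

# Quick checks of the wrap-around logic:
assert in_curfew_window(time(23, 0))      # 11:00 p.m. -> blocked
assert in_curfew_window(time(2, 15))      # 2:15 a.m.  -> blocked
assert not in_curfew_window(time(12, 0))  # noon       -> allowed
```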

In October, Utah announced a lawsuit against TikTok, alleging that the app deploys addictive features to hook young users. The lawsuit raises additional concerns about user data and privacy, citing that TikTok’s China-based parent company, ByteDance, is legally bound to the Chinese Communist Party.

Arkansas, Montana may be following Utah

Arkansas took a similar step in April 2023, when Republican Gov. Sarah Huckabee Sanders signed Act 689, the Social Media Safety Act. The act, which mandates age verification and parental consent for social media users under 18, was set to take effect on September 1.

However, on that very day, U.S. District Judge Timothy Brooks granted a preliminary injunction following a petition from the tech industry trade group NetChoice Litigation Center, which contended that the new law infringed on the First Amendment’s guarantee of freedom of expression.

In May, Montana Gov. Greg Gianforte signed legislation banning TikTok on all devices statewide, threatening fines of up to $10,000 per violation for app providers like Google and Apple. In late November, before the law was to take effect on January 1, U.S. District Judge Donald Molloy blocked the ban, ruling that the law exceeded state authority and violated the constitutional rights of users.

TikTok had sued Montana shortly after the law was signed. Judge Molloy found merit in numerous arguments raised by the company, including that TikTok already has a number of safeguards in place surrounding user data.

Is age verification a First Amendment issue?

Consumer groups, including the American Civil Liberties Union, have objected that many of these bills extend beyond mandating age verification for minors: they require anyone seeking to use social media in those states to verify their age with legal documents.

The issue was much discussed at a Broadband Breakfast Live Online session in November 2023, where child safety advocate Donna Rice Hughes and Tony Allen, executive director of the Age Check Certification Scheme, agreed that age verification systems were much more robust than a generation ago, when the Supreme Court struck down one such scheme. They disagreed with civil liberties groups including the Electronic Frontier Foundation.

In 2023, 13 states enacted bans on installing the Chinese-owned TikTok on government-issued devices, bringing to 34 the total number of states that have banned the app on government devices over national security concerns. Additionally, more than 40 public universities have barred TikTok from their on-campus Wi-Fi and university-owned computers in response to these state-level bans.

See “The Twelve Days of Broadband” on Broadband Breakfast

Diverse Groups File Amicus Briefs Against Florida and Texas Social Media Laws

WASHINGTON, December 8, 2023 – Industry, public interest, and conservative groups filed briefs with the Supreme Court this week arguing against Texas and Florida social media laws.

Drafted to combat what state legislators saw as the unfair treatment of right-wing content online, the 2021 laws would allow residents of those states to sue social media companies for suspending their accounts. Both have been blocked from going into effect after legal challenges from tech industry trade groups. The cases were initially separate, but the Supreme Court agreed in October to hear them together because they raise similar issues.

Industry groups argue the laws violate the First Amendment by forcing platforms to host speech they normally would not. The White House agrees – Solicitor General Elizabeth Prelogar asked the Court in August to take up the issue and strike down Texas’s law.

Consumer protection group Public Knowledge filed an amicus brief on Thursday in support of the tech trade groups, arguing the laws are unconstitutional and “driven by political animus.”

Center-right think tank TechFreedom filed a similar brief on Wednesday. 

“Only the state can ‘censor’ speech,” Corbin Barthold, the group’s director of appellate litigation, said in a statement. “And these states are doing so by trying to co-opt websites’ right to editorial control over the speech they disseminate.”

Both groups also pushed back against the states’ move to treat social media platforms as ‘common carrier’ services, a feature of both laws. The legal designation, typically applied to services like railroads and voice telephony, requires a carrier to serve the public at just rates without unreasonable discrimination.

Designating social media platforms as common carriers would make it more difficult for them to refuse service to users. But the designation, the groups argued, does not map cleanly onto the service social media provides, as the platforms make editorial decisions about the content they transmit – through moderation and recommendation – in a way that companies like voice providers do not.

In all, at least 40 similar briefs have been filed arguing against the laws, according to the Computer and Communications Industry Association, one of the parties in the case.

A set of 15 states with Republican-led legislatures and former president Donald Trump, who had multiple social media accounts suspended after the January 2021 attack on the Capitol, have filed amicus briefs in support of Texas and Florida. The Court is expected to hear oral arguments in the case sometime in 2024.

Improved Age Verification Allows States to Consider Restricting Social Media

WASHINGTON, November 20, 2023 — A Utah law requiring age verification for social media accounts is likely to face First Amendment lawsuits, experts warned during an online panel Wednesday hosted by Broadband Breakfast.

The law, set to take effect in March 2024, mandates that all social media users in Utah verify their age and imposes additional restrictions on minors’ accounts.

The Utah law raises the same constitutional issues that have led courts to strike down similar laws requiring age verification, said Aaron Mackey, free speech and transparency litigation director at the non-profit Electronic Frontier Foundation.

“What you have done is you have substantially burdened everyone’s First Amendment right to access information online that includes both adults and minors,” Mackey said. “You make no difference between the autonomy and First Amendment rights of older teens and young adults” versus young children, he said.

But Donna Rice Hughes, CEO of Enough is Enough, contended that age verification technology has successfully restricted minors’ access to pornography and could be applied to social media as well.

“Utah was one of the first states [to] have age verification technology in place to keep minor children under the age of 18 off of porn sites and it’s working,” she said.

Tony Allen, executive director of the Age Check Certification Scheme, agreed that age verification systems had progressed considerably since a generation ago, when the Supreme Court, in 2002’s Ashcroft v. American Civil Liberties Union, struck down the 1998 Child Online Protection Act. The law had been designed to shield minors from indecent material, but the court ruled that age-verification methods often failed at that task.

Andrew Zack, policy manager at the Family Online Safety Institute, said that his organization welcomed Utah’s interest in youth safety policies.

But Zack said, “We still have some concerns about the potential unintended consequences that come with this law,” worrying particularly about teen privacy and expression rights.

Taylor Barkley, director of technology and innovation at the Center for Growth and Opportunity, highlighted the importance of understanding the specific problems the law aims to address. “Policy solutions have trade-offs,” he said, urging that solutions be tailored to the problems identified.

Panelists generally agreed that comprehensive data privacy legislation could help address social media concerns without facing the same First Amendment hurdles.

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, November 15, 2023 – Social Media for Kids in Utah

In March 2023, Utah became the first state to adopt laws regulating kids’ access to social media. This legislative stride was rapidly followed by several states, including Arkansas, Illinois, Louisiana, and Mississippi, with numerous others contemplating similar measures. For nearly two decades, social media platforms enjoyed unbridled growth and influence. The landscape is now changing as lawmakers become more active in shaping the future of digital communication. This transformation calls for a nuanced evaluation of the current state of social media in the United States, particularly in light of Utah’s pioneering role. Is age verification the right way to go? What are the broader implications of this regulatory trend for the future of digital communication and online privacy across the country?

Panelists

  • Andrew Zack, Policy Manager, Family Online Safety Institute
  • Donna Rice Hughes, President and CEO of Enough Is Enough
  • Taylor Barkley, Director of Technology and Innovation, Center for Growth and Opportunity
  • Tony Allen, Executive Director, Age Check Certification Scheme
  • Aaron Mackey, Free Speech and Transparency Litigation Director, Electronic Frontier Foundation
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources

Andrew Zack is the Policy Manager for the Family Online Safety Institute, leading policy and research work relating to online safety issues, laws, and regulations. He works with federal and state legislatures, relevant federal agencies, and industry leaders to develop and advance policies that promote safe and positive online experiences for families. Andrew joined FOSI after five years in Senator Ed Markey’s office, where he worked primarily on education, child welfare, and disability policies. Andrew studied Government and Psychology at the College of William and Mary.

Donna Rice Hughes, President and CEO of Enough Is Enough is an internationally known Internet safety expert, author, speaker and producer. Her vision, expertise and advocacy helped to birth the Internet safety movement in America at the advent of the digital age. Since 1994, she has been a pioneering leader on the frontlines of U.S. efforts to make the internet safer for children and families by implementing a three-pronged strategy of the public, the technology industry and legal community sharing the responsibility to protect children online.

Taylor Barkley is the Director of Technology and Innovation at the Center for Growth and Opportunity, where he manages the research agenda and strategy and represents the technology and innovation portfolio. His primary research and expertise are at the intersection of culture, technology, and innovation. Prior roles in tech policy have been at Stand Together, the Competitive Enterprise Institute, and the Mercatus Center at George Mason University.

Tony Allen is a Chartered Trading Standards Practitioner and an acknowledged specialist in age-restricted sales law and practice. He is the Chair of the UK Government’s Expert Panel on Age Restrictions and Executive Director of a UKAS-accredited conformity assessment body specialising in age and identity assurance testing and certification. He is the Technical Editor of the current international standard for Age Assurance Systems.

Aaron Mackey is EFF’s Free Speech and Transparency Litigation Director. He helps lead cases advancing free speech, anonymity, and privacy online while also working to increase public access to government records. Before joining EFF in 2015, Aaron was in Washington, D.C., where he worked on speech, privacy, and freedom of information issues at the Reporters Committee for Freedom of the Press and the Institute for Public Representation at Georgetown Law.

Breakfast Media LLC CEO Drew Clark has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

Senate Commerce Committee Passes Two Bills To Protect Children Online

WASHINGTON, July 27, 2023 – The Senate Commerce Committee on Thursday swiftly passed two pieces of legislation aimed at protecting the safety and privacy of children online, exactly one year after the same bills passed the committee but failed to advance further.

The first bill to clear the committee was the Kids Online Safety Act, which requires social media sites to put in place safeguards protecting users under the age of 17 from content that promotes harmful behaviors, such as suicide and eating disorders. KOSA was first introduced in 2022 by Sen. Richard Blumenthal, D-Conn., and Sen. Marsha Blackburn, R-Tenn. It previously won bipartisan support but ultimately failed to become law.

The current version of the bill was reintroduced in May, gaining traction in several hearings, and picked up more than 30 co-sponsors. Several changes were made to the text, including a specific list of online harms and certain exemptions for support services, such as substance abuse groups, that the bill’s requirements might unintentionally harm.

The bill was also amended Thursday to include a provision proposed by Sen. John Thune, R-S.D., that would require companies to disclose their use of algorithms for content filtering and give users the choice to opt out.

Critics of the bill, however, said the revised version largely resembled the original and failed to address previously raised concerns, including sections that would require tech companies to collect more data to filter content and verify user age, as well as provisions that could infringe on children’s free speech.

Sen. Ted Cruz, R-Texas, supported the bill but agreed that more work needs to be done before it moves to the floor. Since the committee’s last markup of KOSA, several states have approved measures on children’s online safety that might conflict with the bill’s provisions, he noted, proposing a preemption provision to ensure the bill would be enforced regardless of state laws.

The Children and Teens’ Online Privacy Protection Act, or COPPA 2.0, introduced by Sens. Edward Markey, D-Mass., and Bill Cassidy, R-La., was the second bill passed out of the committee. It expands on existing legislation, in effect since 2000, that protects children from harmful marketing. The bill would make it illegal for websites to collect data on children under the age of 16, outlaw marketing specifically aimed at kids, and allow parents to erase their kids’ information from websites.

“It is time for Congress to meet this moment and to act with the urgency that these issues demand,” said Sen. Markey.

The two bills are among many that seek to protect children from online harms, none of which has made headway in Congress so far.

UK’s Online Safety Bill Likely to Impact American User Experience

WASHINGTON, July 21, 2023 – The United Kingdom’s Online Safety Bill will impact American users’ experience on various platforms, said panelists at a Broadband Breakfast Live Online event Wednesday.

The Online Safety Bill is the UK’s response to concerns about the negative impact of various internet platforms and applications. The core of the bill addresses illegal content and content that is harmful to children. It places a duty of care on internet sites, including social media platforms, search engines, and online marketplaces, to provide risk assessments for their content, prevent access to illegal content, protect privacy, and prevent children from accessing harmful content.

The legislation would apply to any business that has a substantial user base in the UK, having unforeseen impacts on the end user experience, said Amy Peikoff, Chief Policy Officer of UK-based video-streaming platform, BitChute. 

Even though the legislation is not U.S. law, it will affect the tone and content of discussion on U.S.-owned platforms that wish to continue offering their services in the jurisdictions where it is enacted, said Peikoff. Already, the European Union’s Digital Services Act is affecting Twitter, which is “throttling its speech” to turn out statistics showing that a certain percentage of its content is “healthy,” she claimed.

Large social media companies as we know them are finished, Peikoff said.  

Ofcom, the UK’s communications regulator, will be responsible for providing guidelines and best practices as well as conducting investigations and audits. It will be authorized to seize revenue if a company fails to adhere to the law, and may enact rules that require companies to provide user data to the agency and/or screen user messages for harmful content.

Peikoff claimed that the legislation could set off a chain of events, “namely, that platforms like BitChute would be required to affirmatively, proactively scan every single piece of content – comments, videos, whatever posted to the platform – and keep a record of any flags.” She added that U.S.-based communication would not be exempt.
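To make concrete what “scan everything and keep a record of any flags” could entail in practice, here is a minimal, hypothetical sketch. The bill does not prescribe any particular mechanism; the flag category, classifier, and log format below are invented for illustration and are not BitChute’s or Ofcom’s actual requirements.

```python
import json
import time
from dataclasses import dataclass, asdict

# Hypothetical sketch of proactive scanning with an audit trail.
# Categories, scores, and the log format are all illustrative.

@dataclass
class FlagRecord:
    content_id: str
    category: str      # e.g. "harmful_to_children" (invented label)
    score: float       # classifier confidence, 0.0 to 1.0
    scanned_at: float  # Unix timestamp

def classify(text: str) -> list[tuple[str, float]]:
    """Placeholder classifier returning (category, score) pairs."""
    # A production system would call a trained model or vendor API here.
    flags = []
    if "example-banned-term" in text.lower():
        flags.append(("harmful_to_children", 0.9))
    return flags

def scan_and_record(content_id: str, text: str, log_path: str) -> None:
    """Scan one piece of content and append any flags to an audit log."""
    for category, score in classify(text):
        record = FlagRecord(content_id, category, score, time.time())
        with open(log_path, "a", encoding="utf-8") as log:
            log.write(json.dumps(asdict(record)) + "\n")
```

Even in this toy form, the sketch shows why platforms object: every comment and video passes through the scanner, and the flag log itself becomes a retained record of user activity.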

Meta-owned WhatsApp, a popular messaging app, has warned that it will exit the UK market if the legislation requires it to release data about its users or screen their messages, claiming that doing so would “compromise” the privacy of all users and threaten the encryption on its platform. 

Matthew Lesh, director of public policy and communications at the UK think tank Institute of Economic Affairs, said that the bill is a “recipe for censorship on an industrial, mechanical scale.” He warned that many companies will choose to simply block UK-based users from using their services, harming UK competitiveness globally and discouraging investors.  

In addition, Lesh highlighted privacy concerns raised by the legislation. Because it levies fines on platforms that host harmful content accessible to children, companies may have to screen for children by requiring users to present government-issued IDs, a major privacy concern for users.

The primary issue with the bill and similar policies, said Lesh, is that it applies the same moderation policies to all online platforms, which can limit certain speech and stifle healthy discussion and interaction across political lines.

The bill is currently in the final phase of its committee stage in the House of Lords, the UK’s second chamber of Parliament. Following its passage, the bill will return to the House of Commons, where it will either be amended or accepted and become law. General support for the bill in the UK’s Parliament suggests it will be implemented sometime next year.

This follows considerable debate in the United States over content moderation, much of it centered on possible reform of Section 230, which protects platforms from being treated as the publisher or speaker of information originating from third parties, shielding them from liability for users’ posts.

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, July 19, 2023 – The UK’s Online Safety Bill

The UK’s Online Safety Bill, which seeks to make the country “the safest place in the world to be online,” has seen as much upheaval as the nation itself in the last four years. Four prime ministers, one Brexit and one pandemic later, it’s just a matter of time until the bill finally passes the House of Lords and eventually becomes law. Several tech companies including WhatsApp, Signal, and Wikipedia have argued against its age limitations and breach of end-to-end encryption. Will this legislation serve as a model for governments worldwide to regulate online harms? What does it mean for the future of U.S. social media platforms?

Panelists

  • Amy Peikoff, Chief Policy Officer, BitChute
  • Matthew Lesh, Director of Public Policy and Communications, Institute of Economic Affairs
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources

Amy Peikoff is Chief Policy Officer for BitChute. She holds a BS in Math/Applied Science and a JD from UCLA, as well as a PhD in Philosophy from University of Southern California, and has focused in her academic work and legal activism on issues related to the proper legal protection of privacy. In 2020, she became Chief Policy Officer for the free speech social media platform, Parler, where she served until Parler was purchased in April 2023.

Matthew Lesh is the Director of Public Policy and Communications at the Institute of Economic Affairs. Matthew often appears on television and radio, is a columnist for London’s CityAM newspaper, and a regular writer for publications such as The Times, The Telegraph and The Spectator. He is also a Fellow of the Adam Smith Institute and Institute of Public Affairs.

Drew Clark is CEO of Breakfast Media LLC. He has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

Illustration from the Spectator

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

New Tool Measures Economic Impact of Internet Shutdowns

July 10, 2023 – NetLoss, a new measuring tool launched by the Internet Society, shows the impact of internet shutdowns on economies including Iraq, Sudan and Pakistan, where government-mandated outages have cost millions of dollars in a matter of hours or days.

NetLoss, launched on June 28, calculated that a four-hour shutdown in Iraq in July, implemented by the government to prevent cheating during high school exam season, resulted in an estimated loss of $1.6 million. In May, a shutdown in Pakistan cost more than $13 million over the span of four days, while a five-day internet outage in Sudan in April cost the economy more than $4 million and resulted in the loss of 560 jobs.

NetLoss is unique among internet assessment tools in that it also includes subsequent economic impacts on the unemployment rate, foreign direct investment, and the risk of future shutdowns, according to the advocacy group Internet Society. It provides data on both ongoing and anticipated shutdowns, drawing from a historical dataset of over 90 countries dating back to 2019.

“The calculator is a major step forward for the community of journalists, policymakers, technologists and other stakeholders who are pushing back against the damaging practice of Internet shutdowns,” said Andrew Sullivan, CEO of the Internet Society. “Its groundbreaking and fully transparent methodology will help show governments around the world that shutting down the Internet is never a solution.”

The tool relies on open-access databases, including the Internet Society Pulse’s Shutdown data, the World Bank’s economic indicators, the Armed Conflict Location and Event Data Project’s civil unrest data, Yale University’s election data, and other relevant socioeconomic factors. To stay up to date with real-time changes, the data will be updated quarterly.
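For intuition about the orders of magnitude involved, here is a deliberately naive back-of-the-envelope calculation. It is not the NetLoss methodology, which layers in the unemployment, investment, and risk factors described above; all numbers below are invented for illustration.

```python
# Naive pro-rata estimate of an internet shutdown's direct cost.
# Illustrative only: NetLoss's actual model is more sophisticated.

HOURS_PER_YEAR = 365 * 24  # 8,760

def naive_shutdown_cost(annual_gdp_usd: float,
                        digital_share: float,
                        outage_hours: float) -> float:
    """Digital economy's share of GDP, scaled to the outage window."""
    digital_gdp = annual_gdp_usd * digital_share
    return digital_gdp * (outage_hours / HOURS_PER_YEAR)

# Hypothetical example: a 4-hour nationwide outage in a $250 billion
# economy where the internet contributes 2% of GDP.
cost = naive_shutdown_cost(250e9, 0.02, 4)
print(f"Estimated loss: ${cost:,.0f}")  # roughly $2.3 million
```

A figure in the low millions of dollars for a few hours of outage, as in the Iraq example above, is consistent with this kind of pro-rata reasoning.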

According to the press release, internet shutdowns worldwide peaked in 2022 with governments increasingly blocking internet services due to concerns over civil unrest or cybersecurity threats. These disruptions are extremely damaging to the economy, read the document, as they impede online commercial activities and expose companies and the economy to financial and reputational risks.

Meta’s New Platform Threads is Called a Potential ‘Twitter-Killer’

WASHINGTON, July 7, 2023 – Threads, the new social media platform Meta released on Wednesday, could be the end of Twitter, said panelists at a National Digital Roundtable Advisory Board event on Friday.

The app provides billions of users with an alternative to Twitter amidst growing dissatisfaction with the Elon Musk-owned social media platform. Outrage ensued when Musk announced on July 1 that most Twitter users would be limited to reading just 600 tweets per day on a tier system that limits tweets based on verification status and length of subscription. 

In an official release, Twitter claimed the tweet limit was “to ensure the authenticity of our user base” and to “remove spam and bots from our platform.” The company’s new CEO, Linda Yaccarino, tweeted that “when you have a mission like Twitter – you need to make big moves to keep strengthening the platform.”  

Threads took advantage of Musk’s announcement, and the timing paid off in how many users immediately joined, said Kevin Coroneos, director of digital advocacy strategy at the Investment Company Institute.

According to Meta CEO Mark Zuckerberg, 10 million people signed up for Threads within hours of its release. The numbers continue to soar, surpassing 20 million sign-ups and placing Threads as the number one app on the Google Play Store and App Store.

Threads resembles Twitter in appearance, allowing users to post messages, engage in conversations with others and express appreciation through likes or reposts. However, it differs fundamentally in that a Threads account is intertwined with the user’s Instagram account, meaning that Instagram followers are automatically carried over to Threads.

The blend of two platforms that are typically personal (Instagram) and professional (Twitter) will create a unique platform that is likely to grow larger, said Patrick Kane, head of digital at the British Embassy Washington. It also has the added benefit that new users do not start at square one, but instead come to the unfamiliar platform with connections and followers from their Instagram account.

We may see more influencers moving into a world of text-based posts, which they didn’t have a platform for before, said Kane.

Although it is uncertain whether Threads will prove to be the “Twitter-killer” that many propose it will be, its potential to do so will be confirmed if Threads is able to build an advertising-revenue model, said Coroneos. 

Twitter is reactive and fast and it will put up a good fight, added Kane. Meta has a good chance as it already has the infrastructure to do content moderation and advertising campaigns as well as an established and engaged user base. 

For some brands, Threads is the advertising platform they were looking for, added Coroneos, suggesting that the platform may take off for companies that rely on text-heavy advertising or that market to an intellectually inclined audience.

Threads does not currently have a large global footprint, as it is not yet approved for use in the European Union and is only available to users in the U.S. and United Kingdom.

“Our vision is to take the best parts of Instagram and create a new experience for text, ideas and discussing what’s on your mind,” Zuckerberg said in an Instagram post. “I think the world needs this kind of friendly community, and I’m grateful to all of you who are part of Threads from day one.” 

Experts Advocate Federal Agency to Oversee Children’s Online Safety

WASHINGTON, June 15, 2023 – Kids safety experts on Tuesday urged the government to establish a federal agency dedicated to targeting online sexual predators, in a webinar hosted by the Cato Institute.

The federal agency would be more “effective” than the disjointed, state-by-state legislative approach in addressing the problem of children’s internet safety, according to experts.

Growing concerns about social media’s harms have prompted lawmakers to propose several pieces of legislation to protect children’s safety and privacy on the internet. Most of these proposals, however, have stalled in Congress, leaving no clear path forward for the federal government to address the issue.

Several states have thus taken matters into their own hands. Montana’s TikTok ban will become effective on Jan. 1, 2024. A number of states, including Utah, Arkansas, California, and most recently Louisiana, have passed laws imposing age limits or requiring parental consent for kids to open accounts on certain platforms.

However, these state bills have come under fire for having wildly varying criteria. Experts also worry they risk infringing on children’s free speech and privacy rights as companies have to collect more data from users to comply with such laws.

“The minute we get a legislation in one state or a judge in another state to weigh in with ideas that really don’t make sense and aren’t enforceable, it’s just going to create more chaos,” said child welfare expert Maureen Flatley during the webinar.

Flatley argued that these measures are mostly “performative” and will not be helpful since they do not address the underlying criminal activity. She said she believed the problems with child safety lie not with social media companies, but with online predators who take advantage of those platforms to prey on children. To this end, she advocated for a government agency specifically tasked with investigating and prosecuting internet sexual abusers of children.

Andrew Zack, policy manager at the Family Online Safety Institute, echoed that opinion, calling for a “chief online safety officer” to deal with child sexual abuse material online.

“I think that’s where we should be focusing our efforts first and most vociferously and energetically when it comes to safety online for teens and kids,” said Zack.

Earlier in May, the Biden administration announced an interagency task force on kids’ online health and safety, led by the Department of Health and Human Services. It will examine internet threats to minors, recommend methods to address harms, and publish standards for transparency reports and audits by spring 2024.

Experts Debate TikTok Ban, Weighing National Security Against Free Speech

WASHINGTON, May 26, 2023 — With lawmakers ramping up their rhetoric against TikTok, industry and legal experts are divided over whether a ban is the best solution to balance competing concerns about national security and free speech.

Proponents of a TikTok ban argue that the app poses an “untenable threat” because of the amount of data it collects — including user location, search history and biometric data — as well as its relationship with the Chinese government, said Joel Thayer, president of the Digital Progress Institute, at a debate hosted Wednesday by Broadband Breakfast.

These fears have been cited by state and federal lawmakers in a wide range of proposals that would place various restrictions on TikTok, including a controversial bill that would extend to all technologies connected to a “foreign adversary.” More than two dozen states have already banned TikTok on government devices, and Montana recently became the first state to ban the app altogether.

TikTok on Monday sued Montana over the ban, arguing that the “unprecedented and extreme step of banning a major platform for First Amendment speech, based on unfounded speculation about potential foreign government access to user data and the content of the speech, is flatly inconsistent with the Constitution.”

Thayer contested the lawsuit’s claim, saying that “the First Amendment does not prevent Montana or the federal government from regulating non-expressive conduct, especially if it’s illicit.”

However, courts have consistently held that the act of communicating and receiving information cannot be regulated separately from speech, said David Greene, civil liberties director and senior staff attorney at the Electronic Frontier Foundation.

“This is a regulation of expression — it’s a regulation of how people communicate with each other and how they receive communications,” he said.

Stringent regulations could protect privacy without suppressing speech

A complete ban of TikTok suppresses far more speech than is necessary to preserve national security interests, making less intrusive options preferable, said Daniel Lyons, nonresident senior fellow at the American Enterprise Institute.

TikTok is currently engaged in a $1.5 billion U.S. data security initiative that will incorporate several layers of government and private sector oversight into its privacy and content moderation practices, in addition to moving all U.S. user data to servers owned by an Austin-based software company.

This effort, nicknamed Project Texas, “strikes me as a much better alternative that doesn’t have the First Amendment problems that an outright TikTok ban has,” Lyons said.

Greene noted that many online platforms — both within and outside the U.S. — collect and sell significant amounts of user data, creating the potential for foreign adversaries to purchase it.

“Merely focusing on TikTok is an underinclusive way of addressing these concerns about U.S. data privacy,” he said. “It would be really great if Congress would actually take a close look at comprehensive data privacy legislation that would address that problem.”

Greene also highlighted the practical barriers to banning an app, pointing out that TikTok is accessible through a variety of alternative online sources. These sources tend to be much less secure than the commonly used app stores, meaning that a ban focused on app stores is actually “making data more vulnerable to foreign exploitation,” he said.

TikTok risks severe enough to warrant some action, panelists agree

Although concerns about suppressing speech are valid, the immediate national security risks associated with the Chinese government accessing a massive collection of U.S. user data are severe enough to warrant consideration of a ban, said Anton Dahbura, executive director of the Johns Hopkins University Information Security Institute.

“Will it hurt people who are building businesses from it? Absolutely,” he said. “But until we have safeguards in place, we need to be cautious about business as usual.”

These safeguards should include security audits, data flow monitoring and online privacy legislation, Dahbura continued.

Thayer emphasized the difference between excessive data collection practices and foreign surveillance.

“I think we all agree that there should be a federal privacy law,” he said. “That doesn’t really speak to the fact that there are potential backdoors, that there are these potential avenues to continue to surveil… So I say, why not both?”

Lyons agreed that TikTok’s “unique threat” might warrant action beyond a general privacy law, but maintained that a nationwide ban was “far too extreme.”

Even if further action against TikTok is eventually justified, Greene advocated for federal privacy legislation to be the starting point.  “We’re spending a lot of time talking about banning TikTok, which again, is going to affect millions of Americans… and we’re doing nothing about having data broadly collected otherwise,” he said. “At a minimum, our priorities are backwards.”

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, May 24, 2023 – Debate: Should the U.S. Ban TikTok?

Since November, more than two dozen states have banned TikTok on government devices. Montana recently became the first state to pass legislation that would ban the app altogether, and several members of Congress have advocated for extending a similar ban to the entire country. Is TikTok’s billion-dollar U.S. data security initiative a meaningful step forward, or just an empty promise? How should lawmakers navigate competing concerns about national security, free speech, mental health and a competitive marketplace? This special session of Broadband Breakfast Live Online will engage advocates and critics in an Oxford-style debate over whether the U.S. should ban TikTok.

Panelists

Pro-TikTok Ban

  • Anton Dahbura, Executive Director, Johns Hopkins University Information Security Institute
  • Joel Thayer, President, Digital Progress Institute

Anti-TikTok Ban

  • David Greene, Civil Liberties Director and Senior Staff Attorney, Electronic Frontier Foundation
  • Daniel Lyons, Nonresident Senior Fellow, American Enterprise Institute

Moderator

  • Drew Clark, Editor and Publisher, Broadband Breakfast

Anton Dahbura serves as co-director of the Johns Hopkins Institute for Assured Autonomy, and is the executive director of the Johns Hopkins University Information Security Institute. Since 2012, he has been an associate research scientist in the Department of Computer Science. Dahbura is a fellow at the Institute of Electrical and Electronics Engineers, served as a researcher at AT&T Bell Laboratories, was an invited lecturer in the Department of Computer Science at Princeton University and served as research director of the Motorola Cambridge Research Center.

Joel Thayer, president of the Digital Progress Institute, was previously an associate at Phillips Lytle. Before that, he served as Policy Counsel for ACT | The App Association, where he advised on legal and policy issues related to antitrust, telecommunications, privacy, cybersecurity and intellectual property in Washington, DC. His experience also includes working as a legal clerk for FCC Chairman Ajit Pai and FTC Commissioner Maureen Ohlhausen.

David Greene, senior staff attorney and civil liberties director at the Electronic Frontier Foundation, has significant experience litigating First Amendment issues in state and federal trial and appellate courts. He currently serves on the steering committee of the Free Expression Network, the governing committee of the ABA Forum on Communications Law, and on advisory boards for several arts and free speech organizations across the country. Before joining EFF, David was for twelve years the executive director and lead staff counsel for First Amendment Project.

Daniel Lyons is a professor and the Associate Dean of Academic Affairs at Boston College Law School, where he teaches telecommunications, administrative and cyber law. He is also a nonresident senior fellow at the American Enterprise Institute, where he focuses on telecommunications and internet regulation. Lyons has testified before Congress and state legislatures, and has participated in numerous proceedings at the Federal Communications Commission.

Drew Clark (moderator) is CEO of Breakfast Media LLC. He has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

Graphic by SF Freelancer/Adobe Stock used with permission

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

Supreme Court Sides With Google and Twitter, Leaving Section 230 Untouched

WASHINGTON, May 18, 2023 — The Supreme Court on Thursday sided with Google and Twitter in a pair of high-profile cases involving intermediary liability for user-generated content, marking a significant victory for online platforms and other proponents of Section 230.

In Twitter v. Taamneh, the court ruled that Twitter could not be held liable for abetting terrorism by hosting terrorist content. The unanimous decision was written by Justice Clarence Thomas, who had previously signaled interest in curtailing liability protections for online platforms.

“Notably, the two justices who have been most critical of Section 230 and internet platforms said nothing of the sort here,” said Ari Cohn, free speech counsel at TechFreedom.

In a brief unsigned opinion remanding Gonzalez v. Google to the Ninth Circuit, the court declined to address Section 230, saying that the case “appears to state little, if any, plausible claim for relief.”

A wide range of tech industry associations and civil liberties advocates applauded the decision to leave Section 230 untouched.

“Free speech online lives to fight another day,” said Patrick Toomey, deputy director of the ACLU’s National Security Project. “Twitter and other apps are home to an immense amount of protected speech, and it would be devastating if those platforms resorted to censorship to avoid a deluge of lawsuits over their users’ posts.”

John Bergmayer, legal director at Public Knowledge, said that lawmakers should take note of the rulings as they continue to debate potential changes to Section 230.

“Over the past several years, we have seen repeated legislative proposals that would remove Section 230 protections for various platform activities, such as content moderation decisions,” Bergmayer said. “But those activities are fully protected by the First Amendment, and removing Section 230 would at most allow plaintiffs to waste time and money in court, before their inevitable loss.”

Instead of weakening liability protections, Bergmayer argued that Congress should focus on curtailing the power of large platforms by strengthening antitrust law and promoting competition.

“Many complaints about Section 230 and content moderation policies amount to concerns about competition and the outsize influence of major platforms,” he said.

The decision was also celebrated by Sen. Ron Wyden, D-Ore., one of the statute’s original co-authors.

“Despite being unfairly maligned by political and corporate interests that have turned it into a punching bag for everything wrong with the internet, the law Representative [Chris] Cox and I wrote remains vitally important to allowing users to speak online,” Wyden said in a statement. “While tech companies still need to do far better at policing heinous content on their sites, gutting Section 230 is not the solution.”

However, other lawmakers expressed disappointment with the court’s decision, with some — including Rep. Cathy McMorris Rodgers, R-Wash., chair of the House Energy and Commerce Committee — saying that it “underscores the urgency for Congress to enact needed reforms to Section 230.”

White House Meets AI Leaders, FTC Claims Meta Violated Privacy Order, Graham Targets Section 230

May 5, 2023 — Vice President Kamala Harris and other senior officials on Thursday met with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI to discuss the risks associated with artificial intelligence technologies, following the administration’s announcement of $140 million in funding for national AI research.

President Joe Biden briefly stopped by the meeting, telling the tech leaders that “what you’re doing has enormous potential and enormous danger.”

Government officials emphasized the importance of responsible leadership and called on the CEOs to be more transparent about their AI systems with both policymakers and the general public.

“The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products,” Harris said in a statement after the meeting.

In addition to the new investment in AI research, the White House announced that the Office of Management and Budget would be releasing proposed policy guidance on government usage of AI systems for public comment.

The initiatives announced Thursday are “an important first step,” wrote Adam Conner, vice president of technology policy at the Center for American Progress. “But the White House can and should do more. It’s time for President Joe Biden to issue an executive order that requires federal agencies to implement the Blueprint for an AI Bill of Rights and take other key actions to address the challenges and opportunities of AI.”

FTC claims Facebook violated privacy order

The Federal Trade Commission on Wednesday proposed significant modifications to its 2020 privacy settlement with Facebook, accusing the company of violating children’s privacy protections and improperly sharing user data with third parties.

The suggested changes would include a blanket prohibition against monetizing the data of underage users and limits on the uses of facial recognition technology, among several other constraints.

“Facebook has repeatedly violated its privacy promises,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.”

Although the agency voted unanimously to issue the order, Commissioner Alvaro Bedoya expressed concerns about whether the changes exceeded the FTC’s limited order modification authority. “I look forward to hearing additional information and arguments and will consider these issues with an open mind,” he said.

Meta responded to the FTC’s action with a lengthy statement calling it a “political stunt” and outlining the changes that have been implemented since the original order.

“Let’s be clear about what the FTC is trying to do: usurp the authority of Congress to set industry-wide standards and instead single out one American company while allowing Chinese companies, like TikTok, to operate without constraint on American soil,” wrote Andy Stone, Meta’s director of policy communications, in a statement posted to Twitter.

Meta now has thirty days to respond to the proposed changes. “We will vigorously fight this action and expect to prevail,” Stone said.

Sen. Graham threatens to repeal Section 230 if tech lobby kills EARN IT Act

The Senate Judiciary Committee on Thursday unanimously approved the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act, a controversial bill that would create new carveouts to Section 230 in an attempt to combat online child sexual abuse material.

But Sen. Lindsey Graham, R-S.C., the bill’s cosponsor and ranking member of the committee, expressed doubt about the legislation’s future, claiming that “the political and economic power of social media companies is overwhelming.”

“I have little hope that common-sense proposals like this will ever become law because of the lobbying power these companies have at their disposal,” he said in a statement on Thursday. “My next approach is going to be to sunset Section 230 liability protection for social media companies.”

If Congress fails to pass legislation regulating social media companies, Graham continued, “it’s time to open up the American courtrooms as a way to protect consumers.”

However, large tech companies are not the only critics of the EARN IT Act. The American Civil Liberties Union on Thursday urged Congress to reject the proposed legislation, alongside two other bills related to digital privacy.

“These bills purport to hold powerful companies accountable for their failure to protect children and other vulnerable communities from dangers on their services when, in reality, increasing censorship and weakening encryption would not only be ineffective at solving these concerns, it would in fact exacerbate them,” said Cody Venzke, ACLU senior policy counsel.

FCC RDOF Penalties, KOSA Reintroduced, Lawmakers Explore AI Regulation

May 2, 2023 — The Federal Communications Commission on Monday proposed more than $8 million in fines against 22 applicants for the Rural Digital Opportunity Fund Phase I auction, alleging that they violated FCC requirements by defaulting on their bids.

The defaults prevented an estimated 293,128 locations in 31 states from receiving new investments in broadband infrastructure, according to a press release from the FCC.

“When applicants fail to live up to their obligations in a broadband deployment program, it is a setback for all of us,” Commissioner Geoffrey Starks said in a statement. “Defaulting applicants pay a fine, but rural communities that have already waited too long for broadband pay a larger toll.”

The FCC has previously proposed penalties against several other RDOF applicants for defaulting, including $4.3 million in fines against 73 applicants in July.

These enforcement actions are intended to show that the agency “takes seriously its commitment to hold applicants accountable and ensure the integrity of our universal service funding,” said FCC Chairwoman Jessica Rosenworcel.

Kids Online Safety Act reintroduced

The Kids Online Safety Act, known as KOSA, was reintroduced on Tuesday by Sens. Marsha Blackburn, R-Tenn., and Richard Blumenthal, D-Conn., sparking a mix of praise and criticism from a broad range of youth health, civil liberties and technology organizations.

Although KOSA ultimately failed to pass in 2022, it won rare bipartisan support, and energetic promotion in both House and Senate hearings kept the bill gaining momentum even before its official reintroduction in the current session of Congress.

“We need to hold these platforms accountable for their role in exposing our kids to harmful content, which is leading to declining mental health, higher rates of suicide, and eating disorders… these new laws would go a long way in safeguarding the experiences our children have online,” said Johanna Kandel, CEO of the National Alliance for Eating Disorders, in a Tuesday press release applauding the legislation.

However, KOSA’s opponents expressed disappointment that the reintroduced bill appeared largely similar to the original version, failing to substantially address several previous criticisms.

“KOSA’s sponsors seem determined to ignore repeated warnings that KOSA violates the First Amendment and will in fact harm minors,” said Ari Cohn, free speech counsel at TechFreedom, in a press release. “Their unwillingness to engage with these concerns in good faith is borne out by their superficial revisions that change nothing about the ultimate effects of the bill.”

Cohn also claimed that the bill did not clearly establish what constitutes reason for a platform to know that a user is underage.

“In the face of that uncertainty, platforms will clearly have to age-verify all users to avoid liability — or worse, avoid obtaining any knowledge whatsoever and leave minors without any protections at all,” he said. “The most ‘reasonable’ and risk-averse course remains to block minors from accessing any content related to disfavored subjects, ultimately to the detriment of our nation’s youth.”

In addition, the compliance obligations imposed by KOSA could actually undermine teens’ online privacy, argued Matt Schruers, president of the Computer & Communications Industry Association.

“Governments should avoid compliance requirements that would compel digital services to collect more personal information about their users — such as geolocation information and a government-issued identification — particularly when responsible companies are instituting measures to collect and store less data on customers,” Schruers said in a statement.

Lawmakers introduce series of bills targeting AI

Amid growing calls for federal regulation of artificial intelligence, Rep. Yvette Clarke, D-N.Y., on Tuesday introduced a bill that would require disclosure of AI-generated content in political ads.

“Unfortunately, our current laws have not kept pace with the rapid development of artificial intelligence technologies,” Clarke said in a press release. “If AI-generated content can manipulate and deceive people on a large scale, it can have devastating consequences for our national security and election security.”

Other lawmakers have taken a broader approach to regulating the rapidly evolving technology. Legislation introduced Friday by Sen. Michael Bennet, D-Colo., would create a cabinet-level AI task force to recommend specific legislative and regulatory reforms for AI-related privacy protections, biometric identification standards and risk assessment frameworks.

“As the deployment of AI accelerates, the federal government should lead by example to ensure it uses the technology responsibly,” Bennet said in a press release. “Americans deserve confidence that our government’s use of AI won’t violate their rights or undermine their privacy.”

Earlier in April, Sen. Chuck Schumer, D-N.Y., proposed a high-level AI policy framework focused on ensuring transparency and accountability.

Narrowing Section 230 Could Destroy Smaller Platforms, Warns Nextdoor
https://broadbandbreakfast.com/2023/04/narrowing-section-230-could-destroy-smaller-platforms-warns-nextdoor/

WASHINGTON, April 4, 2023 — Narrowing Section 230 protections for online services could have significant economic repercussions, particularly for smaller platforms that rely on content curation as a business model, according to experts at a panel hosted by the Computer & Communications Industry Association Research Center on Tuesday.

“There’s really unintended consequences for the smaller players if you take a ‘one size fits all’ approach here,” said Laura Bisesto, global head of policy, privacy and regulatory compliance for Nextdoor.

Many small to mid-sized platforms operate on a business model that relies on content moderation, Bisesto explained. For example, Reddit hosts thousands of active forums that are each dedicated to a stated topic, and consumers join specific forums for the purpose of seeing content related to those topics.

Similarly, Bisesto claimed that Nextdoor’s proximity-based content curation is what makes the platform competitive.

“We want to make sure you’re seeing relevant, very hyper-local content that’s very timely as well,” she said. “It’s really important to us to be able to continue to use algorithms to provide useful content that’s relevant, and any narrowing of Section 230 could really impede that ability.”

Algorithmic organization is also crucial for large platforms that host a broad range of content, said Ginger Zhe Jin, a professor of economics at the University of Maryland. The sheer volume of content on platforms such as YouTube — which sees 500 hours of new video uploaded each minute — would make it “impossible for consumers to choose and consume without an algorithm to sort and list.”

Without Section 230, some platforms might choose to forgo the use of algorithms altogether, which Jin argued would “undermine the viability of the internet businesses themselves.”

The alternative would be for companies to broadly remove any content that could potentially generate controversy or be misinterpreted.

“Either way, we’re going to see maybe less content creation and less content consumption,” Jin said. “This would be a dire situation, in my opinion, and would reduce the economic benefits the internet has brought to many players.”

Who should be updating Section 230?

In February, the Section 230 debate finally reached the Supreme Court in a long-awaited case centered on intermediary liability. But some industry experts — and even multiple Supreme Court justices — have cast doubt on whether the court is the right venue for altering the foundational internet law.

Bisesto argued that the question should be left to Congress. “They drafted the law, and I think if it needs to be changed, they should be the ones to look at it,” she said.

However, she expressed skepticism about whether lawmakers would be able to reach a consensus, highlighting the “fundamental disagreement” between the general Republican aim of leaving more content up and Democratic aim of taking more content down.

If the Supreme Court refrains from major changes, “pressure will increase for Congress to do something as the 50 different states are passing different statutes on content moderation,” said Sarah Oh Lam, a senior fellow at the Technology Policy Institute.

Congress Grills TikTok CEO Over Risks to Youth Safety and China
https://broadbandbreakfast.com/2023/03/congress-grills-tiktok-ceo-over-risks-to-youth-safety-and-china/

WASHINGTON, March 24, 2023 — TikTok CEO Shou Zi Chew faced bipartisan hostility from House lawmakers during a high-profile hearing on Thursday, struggling to alleviate concerns about the platform’s safety and security risks amid growing calls for the app to be banned from the United States altogether.

For more than five hours, members of the House Energy and Commerce Committee lobbed criticisms at TikTok, often leaving Chew little or no time to address their critiques.

“TikTok has repeatedly chosen the path for more control, more surveillance and more manipulation,” Chair Cathy McMorris Rodgers, R-Wash., told Chew at the start of the hearing. “Your platform should be banned. I expect today you’ll say anything to avoid this outcome.”

“Shou came prepared to answer questions from Congress, but, unfortunately, the day was dominated by political grandstanding,” TikTok spokesperson Brooke Oberwetter said in a statement after the hearing.

In a viral TikTok video posted Tuesday, and again in his opening statement, Chew noted that the app has over 150 million active monthly users in the United States. TikTok has also become a place where “close to 5 million American businesses — mostly small businesses — go to find new customers and to fuel their growth,” he said.

But McMorris Rodgers argued that the platform’s significant reach only “emphasizes the urgency for Congress to act.”

Lawmakers condemn TikTok’s impact on youth safety and mental health

One of the top concerns highlighted by both Republicans and Democrats was the risk TikTok poses to the wellbeing of children and teens.

“Research has found that TikTok’s addictive algorithms recommend videos to teens that create and exacerbate feelings of emotional distress, including videos promoting suicide, self-harm and eating disorders,” said Ranking Member Frank Pallone, D-N.J.

Chew emphasized TikTok’s commitment to removing explicitly harmful or violative content. The company is also working with entities such as the Boston Children’s Hospital to find models for content that might harm young viewers if shown too frequently, even if the content is not inherently negative — for example, videos of extreme fitness regimens, Chew explained.

In addition, Chew listed several safeguards that TikTok has recently implemented for underage users, such as daily default time limits and the prevention of private messaging for users under 16.

However, few lawmakers seemed interested in these measures, with some noting that they appeared to lack enforceability. Others emphasized the tangible costs of weak safety policies, pointing to multiple youth deaths linked to the app.

Rep. Gus Bilirakis, R-Fla., shared the story of a 16-year-old boy who died by suicide after being served hundreds of TikTok videos glorifying suicidal ideation, self-harm and depression — even though such content was unrelated to his search history, according to a lawsuit filed by his parents against the platform.

At the hearing, Bilirakis underscored his concern by playing a series of TikTok videos with explicit descriptions of suicide, accompanied by messages such as “death is a gift” and “Player Tip: K!ll Yourself.”

“Your company destroyed their lives,” Bilirakis told Chew, gesturing toward the teen’s parents. “Your technology is literally leading to death, Mr. Chew.”

Other lawmakers noted that this death was not an isolated incident. “There are those on this committee, including myself, who believe that the Chinese Communist Party is engaged in psychological warfare through TikTok to deliberately influence U.S. children,” said Rep. Buddy Carter, R-Ga.

TikTok CEO emphasizes U.S. operations, denies CCP ties

Listing several viral “challenges” encouraging dangerous behaviors and substance abuse, Carter questioned why TikTok “consistently fails to identify and moderate these kinds of harmful videos” — and claimed that no such content was present on Douyin, the version of the app available in China.

Screenshot of Rep. Buddy Carter courtesy of CSPAN

Chew urged legislators to compare TikTok’s practices with those of other U.S. social media companies, rather than a version of the platform operating in an entirely different regulatory environment. “This is an industry challenge for all of us here,” he said.

Douyin heavily restricts political and controversial content in order to comply with China’s censorship regime, while the U.S. currently grants online platforms broad immunity from liability for third-party content.

In response to repeated accusations of CCP-driven censorship, particularly regarding the Chinese government’s human rights abuses against the Uyghur population, Chew maintained that related content “is available on our platform — you can go and search it.”

“We do not promote or remove content at the request of the Chinese government,” he repeatedly stated.

A TikTok search for “Uyghur genocide” on Thursday morning primarily displayed videos that were critical of the Chinese government, Broadband Breakfast found. The search also returned a brief description stating that China “has committed a series of ongoing human rights abuses against Uyghurs and other ethnic and religious minorities,” drawn from Wikipedia and pointing users to the U.S.-based website’s full article on the topic.

TikTok concerns bolster calls for Section 230 reform

Although much of the hearing was specifically targeted toward TikTok, some lawmakers used those concerns to bolster an ongoing Congressional push for Section 230 reform.

“Last year, a federal judge in Pennsylvania found that Section 230 protected TikTok from being held responsible for the death of a 10-year-old girl who participated in a blackout challenge,” said Rep. Bob Latta, R-Ohio. “This company is a picture-perfect example of why this committee in Congress needs to take action immediately to amend Section 230.”

In response, Chew referenced Latta’s earlier remarks about Section 230’s historical importance for online innovation and growth.

“As you pointed out, 230 has been very important for freedom of expression on the internet,” Chew said. “[Free expression] is one of the commitments we have given to this committee and our users, and I do think it’s important to preserve that. But companies should be raising the bar on safety.”

Rep. John Curtis, R-Utah, asked if TikTok’s use of algorithmic recommendations should forfeit the company’s Section 230 protections — echoing the question at the core of Gonzalez v. Google, which was argued before the Supreme Court in February.

Other inquiries were more pointed. Chew declined to answer a question from Rep. Randy Weber, R-Texas, about whether “censoring history and historical facts and current events should be protected by Section 230’s good faith requirement.”

Weber’s question seemed to incorrectly suggest that the broad immunity provided by Section 230(c)(1) is conditioned on the “good faith” referenced in part (c)(2)(A) of the statute.

Ranking member says ongoing data privacy initiative is unacceptable

Chew frequently pointed to TikTok’s “Project Texas” initiative as a solution to a wide range of data privacy concerns. “The bottom line is this: American data, stored on American soil, by an American company, overseen by American personnel,” he said.

All U.S. user data is now routed by default to Texas-based company Oracle, Chew added, and the company aims to delete legacy data currently stored in Virginia and Singapore by the end of the year.

Several lawmakers pointed to a Thursday Wall Street Journal article in which China’s Commerce Ministry reportedly said that a sale of TikTok would require exporting technology, and therefore would be subject to approval from the Chinese government.

When asked if Chinese government approval was required for Project Texas, Chew replied, “We do not believe so.”

But many legislators remained skeptical. “I still believe that the Beijing communist government will still control and have the ability to influence what you do, and so this idea — this ‘Project Texas’ — is simply not acceptable,” Pallone said.

Additional Content Moderation for Section 230 Protection Risks Reducing Speech on Platforms: Judge
https://broadbandbreakfast.com/2023/03/additional-content-moderation-for-section-230-protection-risks-reducing-speech-on-platforms-judge/

WASHINGTON, March 13, 2023 – Requiring companies to moderate more content as a condition of Section 230 legal liability protections runs the risk of alienating users from platforms and discouraging communication, a judge of the U.S. Court of Appeals for the District of Columbia Circuit argued last week.

“The criteria for deletion are vague and difficult to parse,” Douglas Ginsburg, a Ronald Reagan appointee, said at a Federalist Society event on Wednesday. “Some of the terms are inherently difficult to define and policing what qualifies as hate speech is often a subjective determination.”

“If content moderation became very rigorous, it is obvious that users would depart from platforms that wouldn’t run their stuff,” Ginsburg added. “And they will try to find more platforms out there that will give them a voice. So, we’ll have more fragmentation and even less communication.”

Ginsburg noted that the large technology platforms already moderate a massive amount of content, adding that any additional moderation would be quite challenging.

“Twitter, YouTube and Facebook remove millions of posts and videos based on those criteria alone,” Ginsburg noted. “YouTube gets 500 hours of video uploaded every minute, 30,000 minutes of video coming online every minute. So the task of moderating this is obviously very challenging.”

John Samples, a member of Meta’s Oversight Board – which provides direction for the company on content – suggested Thursday that out-of-court dispute institutions for content moderation may become the preferred method of settlement.

The United States may adopt European processes in the future as it takes the lead in moderating big tech, claimed Samples.

“It would largely be a private system,” he said, one that could unify and centralize social media moderation across platforms and around the world. Samples was referring to the European Union’s Digital Services Act, which went into effect in November 2022 and requires platforms to remove illegal content and ensure that users can contest removal of their content.

Section 230 Shuts Down Conversation on First Amendment, Panel Hears
https://broadbandbreakfast.com/2023/03/section-230-shuts-down-conversation-on-first-amendment-panel-hears/

WASHINGTON, March 9, 2023 – Section 230 as it is written shuts down the conversation about the First Amendment, experts claimed in a debate at Broadband Breakfast’s Big Tech & Speech Summit on Thursday.

Matthew Bergman, founder of the Social Media Victims Law Center, suggested that Section 230 forecloses discussion of how to appropriately weigh the costs and benefits of granting big tech companies litigation immunity for moderation decisions on their platforms.

We need to talk about what level of First Amendment protection is appropriate in a new world of technology, said Bergman. That discussion happens primarily through an open litigation process, he said, which is not currently available to those who are harmed by these products.

Photo of Ron Yokubaitis of Texas.net, Ashley Johnson of the Information Technology and Innovation Foundation, Emma Llanso of the Center for Democracy and Technology, Matthew Bergman of the Social Media Victims Law Center, and Chris Marchese of NetChoice (left to right)

All companies must exercise reasonable care, Bergman argued. Opening up litigation doesn’t mean that all claims are necessarily viable, he said, only that the process should work itself out in the courts of law.

Eliminating Section 230 could lead online services to overcorrect in moderating speech, which could suffocate social reform movements organized on those platforms, argued Ashley Johnson of the Information Technology and Innovation Foundation, a research institution.

Furthermore, the burden of litigation would fall disproportionately on companies that have fewer resources to defend themselves, she continued.

Bergman responded, “if a social media platform is facing a lot of lawsuits because there are a lot of kids who have been hurt through the negligent design of that platform, why is that a bad thing?” People who are injured have the right by law to seek redress against the entity that caused that injury, Bergman said. 

Emma Llanso of the Center for Democracy and Technology suggested that platforms would fundamentally change the way they operate to avoid the threat of litigation if Section 230 were reformed or abolished, which could threaten freedom of speech for their users.

Protecting the First Amendment requires an internet made up of many platforms with different content moderation policies, she said, to ensure that all people have a voice.

To this, Bergman argued that there is a distinction between ensuring speech is not censored and algorithms that push content users do not want to see, including content they never sought out.

The question is one of balancing the faulty design of a product against the protection of speech, and the courts are where that balancing act should take place, said Bergman.

This comes days after law professionals urged Congress to amend the statute to specify that it applies only to free speech, rather than to the negligent design of product features that promote harmful speech. That discussion followed Supreme Court oral arguments over whether Google should be immune from liability for recommending terrorist videos on its video platform YouTube.

Creating Institutions for Resolving Content Moderation Disputes Out-of-Court
https://broadbandbreakfast.com/2023/03/creating-institutions-for-resolving-content-moderation-disputes-out-of-court/

WASHINGTON, March 9, 2023 – John Samples, a member of Meta’s oversight board, suggested at Broadband Breakfast’s Big Tech & Speech Summit on Thursday that out-of-court dispute institutions may become the preferred method of settling content moderation disputes.

Meta’s oversight board was created by the company to support free speech by upholding or reversing Facebook’s content moderation decisions. It works independently of the company and has 40 members around the world.

The European Union’s Digital Services Act, which came into force in November of 2022, requires platforms to remove illegal content and ensure that users can contest removal of their content. It clarifies that platforms are only liable for users’ unlawful behavior if they are aware of it and fail to remove it. 

The Act specifies illegal speech to include speech that harms the electoral system, hate speech, and speech that infringes fundamental rights. Its appeals process allows citizens to go directly to the company, to national courts, or to out-of-court dispute resolution institutions, though no such institutions currently exist in Europe.

According to Samples, the Act opens the way for private organizations like the oversight board to play a part in moderation disputes. “Meta has a tremendous advantage here as a first mover,” said Samples, “and the model of the oversight board may well spread to Europe and perhaps other places.” 

The United States may adopt European processes in the future as it takes the lead in moderating big tech, claimed Samples. “It would largely be a private system,” he said, and could unify and centralize social media moderation across platforms and around the world.  

The private option of self-regulation has worked well, said Samples. “It may well be expanding throughout much of the world. If it goes to Europe, it could go throughout.” 

Currently, of the media that Meta reviews for moderation, only one percent is restricted, either by taking down the content or reducing the size of the audience exposed to it, said Samples. The oversight board primarily rules against Meta’s decisions and accepts comments from independent interests.  

Congress Should Amend Section 230, Senate Subcommittee Hears
https://broadbandbreakfast.com/2023/03/congress-should-amend-section-230-senate-subcommittee-hears/

WASHINGTON, March 8, 2023 – Law professionals at a Senate Subcommittee on Privacy, Technology and the Law hearing on Wednesday urged Congress to amend Section 230 to specify that it applies only to free speech, rather than the promotion of misinformation.

Section 230 protects platforms from being treated as the publisher or speaker of information originating from a third party, shielding them from liability for users’ posts. Mary Anne Franks, professor of law at the University of Miami School of Law, argued that there is a difference between protecting free speech and protecting the harmful dissemination of information.

Hany Farid, professor at the University of California, Berkeley, argued that there should be a distinction between a negligently designed product feature and a core component of the platform’s business. For example, YouTube’s video recommendation system is a product feature rather than an essential function, as it is designed solely to maximize advertising revenue by keeping users on the platform, he said.

YouTube claims that its recommendation algorithm is unable to distinguish between two different videos. This, argued Farid, should be considered a negligently designed feature, as YouTube knew or reasonably should have known that the feature could lead to harm.

Section 230, said Farid, was written to immunize tech companies from defamation litigation, not to immunize them from all wrongdoing, including the negligent design of product features.

“At a minimum,” said Franks, returning the statute to its original intention “would require amending the statute to make clear that the law’s protections only apply to speech and to make clear that platforms that knowingly promote harmful content are ineligible for immunity.”

At the State of the Net conference earlier this month, Franks emphasized the “Good Samaritan” aspect of the law, claiming that it is supposed to “provide incentives for platforms to actually do the right thing.” Instead, she argued, the law does not incentivize platforms to moderate their content.

Jennifer Bennett of national litigation boutique Gupta Wessler suggested that Congress uphold what is known as the Henderson framework, which would hold a company liable if it materially contributes to what makes content unlawful, including the recommendation and dissemination of the content.

Unfortunately, lamented Eric Schnapper, professor of law at University of Washington School of Law, Section 230 has barred the right of Americans to get redress if they’ve been harmed by big tech. “Absolute immunity breeds absolute irresponsibility,” he said.

Sen. Richard Blumenthal, D-Conn., warned tech companies at the onset of the hearing that “reform is coming.”

This comes weeks after Supreme Court oral arguments over whether Google should be immune from liability for recommending terrorist videos on its video platform YouTube. The case saw industry dissension on whether Section 230 protects algorithmic recommendations; during arguments, Justice Brett Kavanaugh expressed skepticism that YouTube forfeits the statute’s protection by using them.

Content Moderation, Section 230 and the Future of Online Speech
https://broadbandbreakfast.com/2023/03/content-moderation-section-230-and-the-future-of-online-speech/

In the 27 years since the so-called “26 words that created the internet” became law, rapid technological developments and sharp partisan divides have fueled increasingly complex content moderation dilemmas.

Earlier this year, the Supreme Court tackled Section 230 for the first time through a pair of cases regarding platform liability for hosting and promoting terrorist content. In addition to the court’s ongoing deliberations, Section 230—which protects online intermediaries from liability for third-party content—has recently come under attack from Congress, the White House and multiple state legislatures.

Industry Experts Caution Against Extreme Politicization in Section 230 Debate
https://broadbandbreakfast.com/2023/03/industry-experts-caution-against-extreme-politicization-in-section-230-debate/

WASHINGTON, March 7, 2023 — Congress should reject the heavily politicized rhetoric surrounding Section 230 and instead consider incremental reforms that are narrowly targeted at specific problems, according to industry experts at State of the Net on Monday.

“What I really wish Congress would do, since 230 has become this political football, is put the football down for a second,” said Billy Easley, senior public policy lead at Reddit.

Instead of starting from Section 230, Easley suggested that Congress methodically identify specific problems and consider how each could best be addressed. With many issues, he claimed that there are “a slew of policy options” more effective than changing Section 230.

Much of the discussion about Section 230 is “intentionally being pitted into binaries,” said Yaël Eisenstat, head of the Anti-Defamation League’s Center for Technology and Society. In reality, she continued, many proposals exist somewhere between keeping Section 230 exactly as it is and throwing it out altogether.

Eisenstat expressed skepticism about the often-repeated claim that changing Section 230 will “break the internet.”

“Let’s be frank — the tobacco industry, the automobile industry, the oil and gas industry, the food industry also did not want to be regulated and claimed it would completely destroy them,” she said. “And guess what? They all still exist.”

Joel Thayer, president of the Digital Progress Institute, claimed that many arguments against Section 230 reform are “harkening back to a more libertarian view, which is ‘let’s not touch it because bad things can happen.’”

“I think that’s absurd,” he said. “I think even from a political standpoint, that’s just not the reality.”

Potential reforms should be targeted and consider unintended consequences

While Section 230 has performed “unbelievably well” for a law dating back to 1996, it should at least be “tweaked” to better reflect the present day, said Matt Perault, director of the Center on Technology Policy at the University of North Carolina.

But Perault acknowledged that certain proposed changes would create a significant compliance burden for smaller platforms, unlike large companies with “huge legal teams, huge policy teams, huge communications teams.”

Concerns about the impact of Section 230 reform on small businesses can be addressed by drawing distinct guidelines about which types of companies are included in any given measure, Thayer said.

Easley warned that certain proposals could lead to major unintended consequences. While acknowledging Republican concerns about “censorship” of conservative content on social media platforms, he argued that removing Section 230 protections was not the best way to address the issue — and might completely backfire.

“There’s going to be less speech in other areas,” Easley said. “We saw this with SESTA/FOSTA, we’ve seen this in other sorts of proposals as well, and I just really wish that Congress would keep that in mind.”

Thayer suggested that future legislative efforts start with increasing tech companies’ transparency, building off of the bipartisan momentum from the previous session of Congress.

Easley agreed, adding that increased access to data will allow lawmakers to more effectively target other areas of concern.

TikTok Security Officer Touts New Oversight Framework as Congress Pushes for Ban
https://broadbandbreakfast.com/2023/03/tiktok-security-officer-touts-new-oversight-framework-as-congress-pushes-for-ban/

WASHINGTON, March 7, 2023 — As lawmakers grow increasingly wary of TikTok’s risks to national security, the company is developing a complex framework with significant government and third-party oversight in a bid to continue its United States operations.

“It’s going to be an unprecedented amount of transparency,” said Will Farrell, interim security officer at TikTok, in a keynote address at State of the Net on Monday.

TikTok’s efforts to win U.S. government approval come in the face of growing Congressional hostility toward the platform. Sens. Mark Warner, D-Va., and John Thune, R-S.D., on Tuesday unveiled a bill aimed at giving President Joe Biden the ability to impose a complete ban of the app.

Farrell claimed the new framework would be a comprehensive answer to widespread concerns of unauthorized access to data and Chinese state influence over content. “I can’t explain how hard and complex this is… We’ve been working on this for close to two years,” he said.

TikTok’s U.S. data security initiative — internally named “Project Texas” — is largely a product of the company’s ongoing negotiations with the inter-agency Committee on Foreign Investment in the United States, which first opened an investigation into TikTok’s national security risks in 2019.

‘Project Texas’ will emphasize third-party oversight

The initiative’s title references its partnership with Austin-based software company Oracle, which will house U.S. user data and review TikTok source code.

In June 2022, TikTok wrote in a letter to several senators that all U.S. user data was being routed to Oracle by default and that the company would eventually “delete U.S. users’ protected data from our own systems and fully pivot to Oracle cloud servers located in the U.S.”

Another key component of Project Texas is a new subsidiary entity, TikTok U.S. Data Security, Inc., which will replicate many of TikTok’s existing processes for U.S. users with several additional layers of oversight. USDS will be governed by an independent board of directors, which in turn will report to CFIUS.

Including Oracle, USDS and CFIUS, Farrell said that “at least seven independent third parties” would be overseeing TikTok’s U.S. data security operations.

“We’re breaking new ground here — no one’s ever done anything like this before,” Farrell said. “Essentially what we’re doing is every single line of code… every single line of code has to be inspected by Oracle and another third-party source code inspector approved by the U.S. government.”

Oracle and the third-party inspector will also thoroughly check the moderation models and recommendation algorithms to ensure that they don’t have “a bias or political agenda,” Farrell said.

Many lawmakers still skeptical about TikTok’s data security practices

Despite TikTok’s efforts, the legislation proposed by Warner and Thune sets the stage for a national ban of the platform — and several other members of Congress have previously indicated their potential support.

In February, Sens. Richard Blumenthal, D-Conn., and Jerry Moran, R-Kan., urged CFIUS to “swiftly conclude its investigation and impose strict structural restrictions between TikTok’s American operations and its Chinese parent company, ByteDance.”

In a letter to Treasury Secretary and CFIUS Chair Janet Yellen, the senators expressed “profound concern” about TikTok’s future U.S. operations and warned that the committee “should not put its imprimatur on a deal with TikTok if it cannot fully ensure our personal data and access to information is free from spying and interference from the Chinese government.”

“Moreover, monitoring and hosting requirements will never address the distrust earned from ByteDance’s past conduct,” the senators added.

In December 2022, the chairs of the House Foreign Affairs Committee and the House Armed Services Committee sent a letter to Yellen and other officials saying that the reported negotiations were “deeply concerning.”

“At present, it does not appear the draft agreement reportedly favored by Treasury would require ByteDance, and by extension [People’s Republic of China] authorities, to give up control of its algorithm,” wrote Reps. Michael McCaul, R-Texas, and Mike Rogers, R-Ala.

State of the Net Panelists Clash Over Section 230 Interpretations
https://broadbandbreakfast.com/2023/03/state-of-the-net-panelists-clash-over-section-230-interpretations/

WASHINGTON, March 6, 2023 — Experts at the State of the Net conference on Monday expressed a wide range of viewpoints about how Section 230 should be interpreted in the context of Gonzalez v. Google, an intermediary liability case recently argued before the Supreme Court.

If the justices want to understand Section 230’s original intent, NetChoice CEO Steve DelBianco said, they should turn to the law’s original co-authors — Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, now on the NetChoice board of directors. In January, Wyden and Cox filed an amicus brief urging the Supreme Court to uphold Section 230 of the Communications Decency Act.

But Mary Anne Franks, professor at the University of Miami School of Law, argued that a modern-day interpretation of the law should rest on several factors beyond its authors’ explanations, such as the statute’s actual wording and its legislative history. “The law does not have to be subject to revisionist or self-serving interpretations after the fact,” she said.

Franks emphasized the “Good Samaritan” aspect of Section 230, claiming that the law is supposed to “provide incentives for platforms to actually do the right thing.”

Alex Abdo, litigation director at Columbia University’s Knight First Amendment Institute, said he was sympathetic to Franks’ concerns and agreed that tech companies are generally governed by financial motivations, rather than a dedication to free speech or the public interest. Not only can online platforms be exploited to cause harm, he said, they often amplify sensationalized and provocative speech by design.

However, Abdo maintained that Section 230 played a key role in protecting unpopular online speech — including content posted by human rights activists, government whistleblowers and dissidents — by making it less likely that social media platforms would feel the need to remove it.

DelBianco expressed measured optimism about the justices’ approach to Section 230, noting that Justice Clarence Thomas seemed to reject some of the algorithmic harm claims despite his previously expressed interest in altering Section 230. DelBianco also highlighted Justice Amy Coney Barrett’s line of questioning about whether an individual can be held liable for simply liking or retweeting content, calling it “one of the most surprising questions” of the oral arguments.

But despite their appreciation for certain aspects of the justices’ approach, multiple panelists agreed that changing Section 230 should be a careful and deliberate process, better suited to Congress than the courts. “I would much prefer a scalpel to a sledgehammer,” said Matt Wood, vice president of policy and general counsel at Free Press.

The Senate Judiciary Subcommittee on Privacy, Technology and the Law will hold a hearing on Wednesday to examine platform liability, focusing on Gonzalez.

Supreme Court Considers Liability for Twitter Not Removing Terrorist Content
https://broadbandbreakfast.com/2023/02/supreme-court-considers-liability-for-twitter-not-removing-terrorist-content/

WASHINGTON, February 22, 2023 — In the second of two back-to-back cases considering online intermediary liability, Supreme Court justices on Wednesday sought the precise definitions of two words — “substantial” and “knowingly” — in order to draw lines that could have major implications for the internet as a whole.

The oral arguments in Twitter v. Taamneh closely examined the text of the Anti-Terrorism Act, considering whether the social media platform contributed to a 2017 terrorist attack by hosting terrorist content and failing to remove ISIS-affiliated accounts — despite the absence of a direct link to the attack. The hearing followed Tuesday’s arguments in Gonzalez v. Google, a case stemming from similar facts but primarily focused on Section 230.

Many of Wednesday’s arguments hinged on specific interpretations of the ATA, which states that liability for injuries caused by international terrorism “may be asserted as to any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.”

Seth Waxman, the attorney representing Twitter, argued that Twitter should not be held liable unless it knew that it was substantially assisting the act of terrorism that injured the plaintiff.

“But [it’s] not enough to know that you’re providing substantial assistance to a group that does this kind of thing?” Justice Ketanji Brown Jackson asked.

“Of course not,” Waxman said.

Jackson was unconvinced, saying that she did not see a clear distinction.

Justice Amy Coney Barrett questioned whether the means of communication to individuals planning a terrorist attack would be considered “substantial assistance.” Waxman replied that it would depend on how significant and explicit the communications were.

Clashing interpretations of Anti-Terrorism Act left unresolved

At one point, Justice Neil Gorsuch suggested that Waxman was misreading the law by taking the act of terrorism as the object of the “aiding and abetting” clause, rather than the person who committed the act.

The latter reading would help Twitter, the justice said, because the plaintiff would then have to prove that the company aided a specific person, rather than an abstract occurrence.

However, Waxman doubled down on his original reading.

“Are you sure you want to do that?” Gorsuch asked, drawing laughs from the gallery.

Waxman also pushed back against assertions that he claimed were “combining silence or inaction with affirmative assistance.” If Twitter said that its platform should not be used to support terrorist groups or acts, Waxman argued, the company should not be held liable for any potential terrorist content, even if it did nothing at all to enforce that rule.

Justice Elena Kagan disagreed. “You’re helping by providing your service to those people with the explicit knowledge that those people are using it to advance terrorism,” she said.

Justices expressed concern over broad scope of potential liability

Unlike in the Gonzalez arguments, where the government largely supported increasing platform liability, Deputy Solicitor General Edwin Kneedler defended Twitter, saying that holding the company liable could result in hindering “legitimate and important activities by businesses, charities and others.”

Several justices raised similar concerns about the decision’s potentially far-reaching impacts.

“If we’re not pinpointing cause and effect or proximate cause for specific things, and you’re focused on infrastructure or just the availability of these platforms, then it would seem that every terrorist act that uses this platform would also mean that Twitter is an aider and abettor in those instances,” Justice Clarence Thomas told Eric Schnapper, the attorney representing the plaintiffs.

Schnapper agreed that this would be the case, but proposed setting reasonable boundaries around liability by using a standard of “remoteness in time, weighed together with the volume of activity.”

Justice Samuel Alito proposed a scenario in which a police officer tells phone companies, gas stations, restaurants and other businesses to stop serving individuals who are broadly suspected of committing a crime. Would the businesses have to comply, Alito questioned, to avoid liability for aiding and abetting?

Schnapper did not answer directly. “That’s a difficult question,” he said. “But clearly, at one end of the spectrum… If you provide a gun to someone who you know is a murderer, I think you could be held liable for aiding and abetting.”

Bret Swanson: Censors Target Internet Talkers With AI Truth Scores
https://broadbandbreakfast.com/2023/02/bret-swanson-censors-target-internet-talkers-with-ai-truth-scores/

Elon Musk’s purchase of Twitter may have capped the opening chapter in the Information Wars, where free speech won a small but crucial battle. Full spectrum combat across the digital landscape, however, will only intensify, as a new report from the Brookings Institution, a key player in the censorship industrial complex, demonstrates.

First, a review.

Reams of internal documents, known as the Twitter Files, show that social media censorship in recent years was far broader and more systematic than even we critics suspected. Worse, the files exposed deep cooperation – even operational integration – among Twitter and dozens of government agencies, including the FBI, Department of Homeland Security, DOD, CIA, the Cybersecurity and Infrastructure Security Agency (CISA), Department of Health and Human Services, CDC, and, of course, the White House.

Government agencies also enlisted a host of academic and non-profit organizations to do their dirty work. The Global Engagement Center, housed in the State Department, for example, was originally launched to combat international terrorism but has now been repurposed to target Americans. The U.S. State Department also funded a UK outfit called the Global Disinformation Index, which blacklists American individuals and groups and convinces advertisers and potential vendors to avoid them. Homeland Security created the Election Integrity Partnership (EIP) –  including the Stanford Internet Observatory, the University of Washington’s Center for an Informed Public, and the Atlantic Council’s DFRLab – which flagged for social suppression tens of millions of messages posted by American citizens.

Image of George Orwell

Even former high-ranking U.S. government officials got in on the act – appealing directly (and successfully) to Twitter to ban mischief-making truth-tellers.

With the total credibility collapse of legacy media over the last 15 years, people around the world turned to social media for news and discussion. When social media then began censoring the most pressing topics, such as Covid-19, people increasingly turned to podcasts. Physicians and analysts who’d been suppressed on Twitter, Facebook, and YouTube, and who were of course nowhere to be found in legacy media, delivered via podcasts much of the very best analysis on the broad array of pandemic science and policy.

Which brings us to the new report from Brookings, which concludes that one of the most prolific sources of ‘misinformation’ is now – you guessed it – podcasts. And further, that the under-regulation of podcasts is a grave danger.

In “Audible reckoning: How top political podcasters spread unsubstantiated and false claims,” Valerie Wirtschafter writes:

  • Due in large part to the say-whatever-you-want perceptions of the medium, podcasting offers a critical avenue through which unsubstantiated and false claims proliferate. As the terms are used in this report, the terms “false claims,” “misleading claims,” “unsubstantiated claims” or any combination thereof are evaluations by the research team of the underlying statements and assertions grounded in the methodology laid out below in the research design section and appendices. Such claims, evidence suggests, have played a vital role in shaping public opinion and political behavior. Despite these risks, the podcasting ecosystem and its role in political debates have received little attention for a variety of reasons, including the technical difficulties in analyzing multi-hour, audio-based content and misconceptions about the medium.

To analyze the millions of hours of audio content, Brookings used natural language processing to search for key words and phrases. It then relied on self-styled fact-checking sites Politifact and Snopes – pause for uproarious laughter…exhale – to determine the truth or falsity of these statements. Next, it deployed a ‘cosine similarity’ function to detect similar false statements in other podcasts.
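
The report does not publish its code, but the matching approach it describes – vectorize statements, then flag pairs whose cosine similarity crosses a cutoff – can be sketched in a few lines. The example statements, the TF-IDF features and the 0.4 threshold below are illustrative assumptions for the sake of explanation, not values from the Brookings pipeline.

```python
# Illustrative sketch of cosine-similarity matching of the sort the
# Brookings report describes. The statements and the 0.4 cutoff are
# assumptions for illustration, not the report's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

fact_checked = [
    "the election was stolen through widespread voter fraud",
]
podcast_statements = [
    "they stole the election with massive, widespread voter fraud",
    "the senate committee advanced a broadband bill on tuesday",
]

# Fit a single vocabulary over both sets so the vectors are comparable.
vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(fact_checked + podcast_statements)
claims = matrix[: len(fact_checked)]
statements = matrix[len(fact_checked):]

# cosine_similarity returns one row per claim, one column per statement;
# 1.0 means identical term weighting, 0.0 means no shared terms.
for row in cosine_similarity(claims, statements):
    for text, score in zip(podcast_statements, row):
        if score > 0.4:  # assumed cutoff
            print(f"possible match ({score:.2f}): {text}")
```

Note how blunt the instrument is: the first podcast statement is flagged because it shares weighted vocabulary with the fact-checked claim, not because any model has assessed its truth – which is precisely the false precision this column objects to.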

The result: “conservative podcasters were 11 times more likely than liberal podcasters to share claims fact-checked as false or unsubstantiated.”

One show Brookings misclassified as “conservative” is the Dark Horse science podcast hosted by Bret Weinstein and Heather Heying. Over the past three years, they meticulously explored the complex world of Covid, delivering scintillating insights and humbly correcting their infrequent missteps. Brookings, however, determined 13.8 percent of their shows contained false information.

What would the Brookings methodology, using a different set of fact checkers, spit out if applied to CNN, the Washington Post, the FDA, CDC, or hundreds of blogs, podcasts, TV doctors, and “science communicators,” who got nearly everything wrong?

Speaking on journalist Matt Taibbi’s podcast, novelist Walter Kirn skewered the new A.I. fact-checking scheme. It pretends to turn censorship into a “mathematical, not Constitutional, concern” – or, as he calls it, “sciency, sciency, sciency bullshit.”

The daisy chain of presumptuous omniscience, selection bias, and false precision employed to arrive at these supposedly quantitative conclusions about the vast, diverse, sometimes raucous, and often enlightening world of online audio is preposterous.

And yet it is deadly serious.

The collapse of support for free speech among Western pseudo-elites is the foundation of so many other problems, from medicine to war. Misinformation is the natural state of the world. Open science and vigorous debate are the tools we deploy to become less wrong over time. Individual and collective decision-making depend on them.

Bret Swanson is an analyst of technology & the economy, president of Entropy Economics, fellow at the American Enterprise Institute, and chairman of the Indiana Public Retirement System. This article originally appeared on Infonomena by Bret Swanson on Substack on February 22, 2023, and is reprinted with permission.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

]]>
https://broadbandbreakfast.com/2023/02/bret-swanson-censors-target-internet-talkers-with-ai-truth-scores/feed/ 0 48750
Supreme Court Justices Express Caution About Entering Section 230 Debate https://broadbandbreakfast.com/2023/02/supreme-court-justices-express-caution-about-entering-into-section-230-debate/?utm_source=rss&utm_medium=rss&utm_campaign=supreme-court-justices-express-caution-about-entering-into-section-230-debate https://broadbandbreakfast.com/2023/02/supreme-court-justices-express-caution-about-entering-into-section-230-debate/#respond Thu, 23 Feb 2023 01:01:27 +0000 https://broadbandbreakfast.com/?p=48728 WASHINGTON, February 22, 2023 — Supreme Court justices expressed broad skepticism about removing liability protections for websites that automatically recommend user-generated content, marking a cautious start to a pair of long-awaited cases involving platform liability for terrorist content.

Gonzalez v. Google, argued on Tuesday, hinges on whether YouTube’s use of recommendation algorithms puts it outside the scope of Section 230, which generally provides platforms with immunity for third-party content.

A separate case involving terrorism and social media, Twitter v. Taamneh, was argued on Wednesday. Although the basic circumstances of the cases are similar — both brought against tech companies by the families of terrorist attack victims — the latter focuses on what constitutes “aiding and abetting” under the Anti-Terrorism Act.

Section 230 arguments central to Gonzalez

Section 230 protections are at the heart of Gonzalez. The provision, one of the few surviving components of the 1996 Communications Decency Act, is credited by many experts with facilitating the internet’s development and enabling its daily workings.

But the plaintiffs in Gonzalez argued that online platforms such as YouTube should be held accountable for actively promoting harmful content.

As oral arguments commenced, Justice Elena Kagan repeatedly raised concerns that weakening Section 230 protections could have a wider impact than intended. “Every time anybody looks at anything on the internet, there is an algorithm involved… everything involves ways of organizing and prioritizing material,” she said.

These organization methods are essential for making platforms user-friendly, argued Lisa Blatt, the attorney representing Google. “There are a billion hours of videos watched each day on YouTube, and 500 hours uploaded every minute,” she said.

Justice Brett Kavanaugh pointed to the inclusion of platforms that “pick, choose, analyze or digest content” in the statutory definition of covered entities. Claiming that YouTube forfeited Section 230 protections by using recommendation algorithms, Kavanaugh said, “would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of 230.”

Eric Schnapper, the attorney representing the plaintiffs, argued that the provision in question was only applicable to software providers and YouTube did not qualify.

Justices concerned about unintended impacts of weakening Section 230

Despite Schnapper’s interpretation of the statute’s intent, Kavanaugh maintained his concerns about altering it. “It seems that you continually want to focus on the precise issue that was going on in 1996, but… to pull back now from the interpretation that’s been in place would create a lot of economic dislocation, would really crash the digital economy,” he said.

Weakening Section 230 could also open the door to “a world of lawsuits,” Kagan predicted. “Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit,” she said, pointing to search engines and social media platforms as other services that could be impacted.

Deputy Solicitor General Malcolm Stewart, who primarily sided with the plaintiff, argued that even if such lawsuits were attempted, “they would not be suits that have much likelihood of prevailing.”

Justice Amy Coney Barrett noted that the text of Section 230 explicitly includes users of online platforms in addition to the platforms themselves. If the statute was changed, Barrett questioned, could individual users be held liable for any content that they liked, reposted or otherwise engaged with?

“That’s content you’ve created,” Schnapper replied.

‘Confusion’ about the case and the court’s proper role

Throughout the hearing, several justices expressed confusion at the complexities of the case.

As Schnapper offered an extended explanation of YouTube “thumbnails” — which he described as a “joint creation” because of the platform-provided URLs accompanying user-generated media — Justice Samuel Alito said he was “completely confused by whatever argument you’re making at the present time.”

At another point, Justice Ketanji Brown Jackson said she was “thoroughly confused” by the way that two different questions — whether Google could claim immunity under Section 230 and whether the company aided terrorism — were seemingly being conflated.

Just minutes later, after Stewart presented his argument on behalf of the Justice Department, Justice Clarence Thomas began his line of questioning with, “Well, I’m still confused.”

In addition to frequent references to confusion, multiple justices suggested that some aspects of the case might be better left to Congress.

“I don’t have to accept all of [Google’s] ‘the sky is falling’ stuff to accept… there is a lot of uncertainty about going the way you would have us go, in part just because of the difficulty of drawing lines in this area,” Kagan said. “Isn’t that something for Congress to do, not the court?”

Kavanaugh echoed those concerns, saying that the case would require “a very precise predictive judgment” and expressing uncertainty about whether the court could adequately consider the implications.

But Chief Justice John Roberts seemed equally hesitant to hand off the decision. “The amici suggest that if we wait for Congress to make that choice, the internet will be sunk,” he said.

]]>
https://broadbandbreakfast.com/2023/02/supreme-court-justices-express-caution-about-entering-into-section-230-debate/feed/ 0 48728
Bipartisan Alarm Over Social Media’s Harms to Children Prompts Slew of Proposed Legislation https://broadbandbreakfast.com/2023/02/bipartisan-alarm-over-social-medias-harms-to-children-prompts-slew-of-proposed-legislation/?utm_source=rss&utm_medium=rss&utm_campaign=bipartisan-alarm-over-social-medias-harms-to-children-prompts-slew-of-proposed-legislation https://broadbandbreakfast.com/2023/02/bipartisan-alarm-over-social-medias-harms-to-children-prompts-slew-of-proposed-legislation/#respond Mon, 20 Feb 2023 13:40:17 +0000 https://broadbandbreakfast.com/?p=48684 WASHINGTON, February 20, 2023 — Senators from both sides of the aisle came together on Tuesday to condemn social media platforms’ failure to protect underage users, demonstrating bipartisan collaboration and underscoring a trend of increased government scrutiny toward tech companies.

The Judiciary Committee hearing included discussion of several bills aimed at protecting children online, such as the Kids Online Safety Act, a measure that would create a “duty of care” requirement for platforms to shield children from harmful content. KOSA gained significant bipartisan traction during the previous session of Congress but ultimately failed to pass.

The bill’s co-sponsors — Sens. Richard Blumenthal, D-Conn., and Marsha Blackburn, R-Tenn. — emphasized the urgency of congressional action, pointing to research published Feb. 13 by the Centers for Disease Control and Prevention that showed a sharp increase in youth mental health challenges, particularly among girls and LGBTQ teens.

“It’s a public health emergency egregiously and knowingly exacerbated by Big Tech, aggravated by toxic content on eating disorders, bullying, even suicide — driven by Big Tech’s black box algorithms leading children down dark rabbit holes,” Blumenthal said.

In addition to social media’s impact on mental health, several senators focused on the issue of digital child sexual exploitation. Judiciary Committee Chair Dick Durbin, D-Ill., announced that he would be circulating the draft of a bill aimed at stopping the spread of online child sex abuse material by strengthening victim protection measures and platform reporting requirements. Sen. Lindsey Graham, R-S.C., said he was working with Sen. Elizabeth Warren, D-Mass., on a bill that would create a regulatory commission with the power to shut down digital platforms that failed to implement “best business practices to protect children from sexual exploitation online.”

Graham, the top Republican on the committee, added that he and Warren “have pretty divergent opinions except here — we have to do something, and the sooner the better.”

Bipartisan collaboration was a theme throughout the discussion. “I don’t know if any or all of you realize what you witnessed today, but this Judiciary Committee crosses the political spectrum — not just from Democrats to Republicans, but from real progressives to real conservatives — and what you heard was the unanimity of purpose,” Durbin said toward the end of the hearing.

Broad agreement for repealing Section 230, but not on its replacement

Some of the proposed social media bills discussed Tuesday would directly address the question of online platform immunity for third-party content. Several senators advocated for the EARN IT Act, which would assign platforms more responsibility for finding and removing child sexual abuse material — taking “a meaningful step toward reforming this unconscionably excessive Section 230 shield to Big Tech accountability,” Blumenthal argued.

The senators and witnesses who spoke at Tuesday’s hearing were largely united against Section 230. Witness Kristin Bride — whose son died by suicide after becoming the target of anonymous cyberbullying — said that her lawsuit against the anonymous messaging apps involved was dismissed based on Section 230 immunity.

“I think it is just absolutely vital that we change the law to allow suits like yours to go forward,” Sen. Josh Hawley, R-Mo., told Bride. “And if that means we have to repeal all of Section 230, I’m fine with it.”

However, Sen. Sheldon Whitehouse, D-R.I., noted that the primary barrier to Section 230 reform is disagreement over what should take its place. “I would be prepared to make a bet that if we took a vote on a plain Section 230 repeal, it would clear this committee with virtually every vote,” he said.

The Supreme Court is scheduled to hear a Section 230 case — Gonzalez v. Google — on Tuesday.

Other bills aim to protect kids online through age limits, privacy measures

Beyond the bills discussed at the hearing, several senators have recently proposed legislation aimed at protecting children’s online safety from several different angles.

On Tuesday, Hawley introduced a bill that would enforce a minimum age requirement of 16 for all users of social media platforms, as well as a bill that would commission a report on social media’s effects on underage users.

The former proposal, known as the MATURE Act, would require that users upload an image of government-issued identification in order to make an account on a social media platform, which has raised concerns among digital privacy advocates about the extent of personal data collection required.

Personal data collection was the subject of a different bill introduced the same week by Sen. Mazie Hirono, D-Hawaii, alongside Durbin and Blumenthal. The proposed Clean Slate for Kids Online Act would update the Children’s Online Privacy Protection Act of 1998 by giving individuals the right to demand that internet companies delete all personal information collected about them before the age of 13.

Discussion on the matter comes against the backdrop of a number of developments over the past year and a half, including state attorneys general investigating the impact of TikTok on kids and whistleblower testimony that alleged Facebook knew about the negative mental health impact its photo sharing app Instagram had on kids but didn’t take action on it.

]]>
https://broadbandbreakfast.com/2023/02/bipartisan-alarm-over-social-medias-harms-to-children-prompts-slew-of-proposed-legislation/feed/ 0 48684
Jim Jordan Demands Social Media Documents from Biden Administration https://broadbandbreakfast.com/2023/02/jim-jordan-demands-social-media-documents-from-biden-administration/?utm_source=rss&utm_medium=rss&utm_campaign=jim-jordan-demands-social-media-documents-from-biden-administration https://broadbandbreakfast.com/2023/02/jim-jordan-demands-social-media-documents-from-biden-administration/#respond Thu, 09 Feb 2023 02:30:47 +0000 https://broadbandbreakfast.com/?p=48391 WASHINGTON, February 8, 2023 — House Judiciary Chairman Jim Jordan, R-Ohio, on Wednesday asked the Department of Justice to provide copies of all documents that have been produced in an ongoing lawsuit over alleged government collusion with social media companies.

“Congress has an important interest in protecting and advancing fundamental free speech principles, including by examining how the Executive Branch coordinates with or coerces private actors to suppress First Amendment-protected speech,” Jordan wrote in a letter to Brian Boynton, the principal deputy assistant attorney general in the civil division.

The attorneys general of Missouri and Louisiana filed suit against President Joe Biden and other government officials in May 2022, claiming that the administration had worked with tech companies to “censor free speech and propagandize the masses.”

Other officials named in the lawsuit include former White House Press Secretary Jen Psaki, U.S. Surgeon General Vivek Murthy and former Chief Medical Advisor Anthony Fauci. The suit also names the Department of Homeland Security and the Centers for Disease Control and Prevention, among other individuals and agencies.

Missouri Attorney General Andrew Bailey in January released a series of emails between White House officials and social media companies, arguing that they proved the Biden administration had been attempting to “censor opposing viewpoints on major social media platforms.”

Jordan requested that all other documents produced by the Department of Justice as part of the litigation be provided to the Judiciary Committee no later than Feb. 22.

“As Congress continues to examine how to best protect Americans’ fundamental freedoms, the documents discovered and produced during the Missouri v. Biden litigation are necessary to assist Congress in understanding the problem and evaluating potential legislative reforms,” he wrote.

Jordan is at the forefront of growing Republican hostility toward tech companies. In January, he listed “reining in Big Tech’s censorship of free speech” as a key issue to be addressed by the House Judiciary Committee during the coming year.

And in December, Jordan sent letters to the heads of Apple, Amazon, Alphabet, Meta and Microsoft to “request more information about the nature and extent of your companies’ collusion with the Biden Administration.”

]]>
https://broadbandbreakfast.com/2023/02/jim-jordan-demands-social-media-documents-from-biden-administration/feed/ 0 48391
Automated Content Moderation’s Main Problem is Subjectivity, Not Accuracy, Expert Says https://broadbandbreakfast.com/2023/02/automated-content-moderations-main-problem-is-subjectivity-not-accuracy-expert-says/?utm_source=rss&utm_medium=rss&utm_campaign=automated-content-moderations-main-problem-is-subjectivity-not-accuracy-expert-says https://broadbandbreakfast.com/2023/02/automated-content-moderations-main-problem-is-subjectivity-not-accuracy-expert-says/#respond Thu, 02 Feb 2023 20:51:56 +0000 https://broadbandbreakfast.com/?p=48282 WASHINGTON, February 2, 2023 — The vast quantity of online content generated daily will likely drive platforms to increasingly rely on artificial intelligence for content moderation, making it critically important to understand the technology’s limitations, according to an industry expert.

Despite the ongoing culture war over content moderation, the practice is largely driven by financial incentives — so even companies with “a speech-maximizing set of values” will likely find some amount of moderation unavoidable, said Alex Feerst, CEO of Murmuration Labs, at a Jan. 25 American Enterprise Institute event. Murmuration Labs works with tech companies to develop online trust and safety products, policies and operations.

If a piece of online content could potentially lead to hundreds of thousands of dollars in legal fees, a company is “highly incentivized to err on the side of taking things down,” Feerst said. And even beyond legal liability, if the presence of certain content will alienate a substantial number of users and advertisers, companies have financial motivation to remove it.

However, a major challenge for content moderation is the sheer quantity of user-generated online content — which, on the average day, includes 500 million new tweets, 700 million Facebook comments and 720,000 hours of video uploaded to YouTube.

“The fully loaded cost of running a platform includes making millions of speech adjudications per day,” Feerst said.

“If you think about the enormity of that cost, very quickly you get to the point of, ‘Even if we’re doing very skillful outsourcing with great accuracy, we’re going to need automation to make the number of daily adjudications that we seem to need in order to process all of the speech that everybody is putting online and all of the disputes that are arising.’”
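Feerst’s point about volume can be made concrete with rough arithmetic. The sketch below is a back-of-the-envelope calculation using the daily volumes quoted above; the 10-second review time and eight-hour shift are assumptions for illustration, not figures from the event:

```python
# Back-of-the-envelope scale of human-only review, using the daily volumes
# quoted above. Per-item review time and shift length are assumed figures.
daily_items = 500_000_000 + 700_000_000      # new tweets + Facebook comments
seconds_per_review = 10                      # assumption: average human review
review_seconds = daily_items * seconds_per_review

shift_seconds = 8 * 3600                     # one reviewer's 8-hour workday
reviewers_needed = review_seconds / shift_seconds
print(f"{reviewers_needed:,.0f} full-time reviewers needed per day")  # ~417,000
```

Even under these generous assumptions, text alone would demand hundreds of thousands of full-time reviewers every day, before counting the 720,000 hours of daily YouTube video, which is the arithmetic behind the turn to automation.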

Automated moderation is not just a theoretical future question. In a March 2021 congressional hearing, Meta CEO Mark Zuckerberg testified that “more than 95 percent of the hate speech that we take down is done by an AI and not by a person… And I think it’s 98 or 99 percent of the terrorist content.”

Dealing with subjective content

But although AI can help manage the volume of user-generated content, it can’t solve one of the key problems of moderation: Beyond a limited amount of clearly illegal material, most decisions are subjective.

Much of the debate surrounding automated content moderation mistakenly presents subjectivity problems as accuracy problems, Feerst said.

For example, much of what is generally considered “hate speech” is not technically illegal, but many platforms’ terms of service prohibit such content. With these extrajudicial rules, there is often room for broad disagreement over whether any particular piece of content is a violation.

“AI cannot solve that human subjective disagreement problem,” Feerst said. “All it can do is more efficiently multiply this problem.”

This multiplication becomes problematic when AI models are replicating and amplifying human biases, which was the basis for the Federal Trade Commission’s June 2022 report warning Congress to avoid overreliance on AI.

“Nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement announcing the report. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology — which can be both helpful and dangerous — will take these problems off our hands.”

The FTC’s report pointed to multiple studies revealing bias in automated hate speech detection models, often as a result of being trained on unrepresentative and discriminatory data sets.

As moderation processes become increasingly automated, Feerst predicted that the “trend of those problems being amplified and becoming less possible to discern seems very likely.”

Given those dangers, Feerst emphasized the urgency of understanding and then working to resolve AI’s limitations, noting that the demand for content moderation will not go away. To some extent, speech disputes are “just humans being human… you’re never going to get it down to zero,” he said.

]]>
https://broadbandbreakfast.com/2023/02/automated-content-moderations-main-problem-is-subjectivity-not-accuracy-expert-says/feed/ 0 48282
Must Internet Platforms Host Objectionable Content? Appeals Courts Consider ‘Must Carry’ Rules https://broadbandbreakfast.com/2023/01/must-internet-platforms-host-objectionable-content-appeals-courts-consider-must-carry-rules/?utm_source=rss&utm_medium=rss&utm_campaign=must-internet-platforms-host-objectionable-content-appeals-courts-consider-must-carry-rules https://broadbandbreakfast.com/2023/01/must-internet-platforms-host-objectionable-content-appeals-courts-consider-must-carry-rules/#respond Tue, 31 Jan 2023 00:57:45 +0000 https://broadbandbreakfast.com/?p=48202 WASHINGTON, January 30, 2023 — As the Supreme Court prepares to hear a pair of cases about online platform liability, it is also considering a separate pair of social media lawsuits that aim to push content moderation practices in the opposite direction, adding additional questions about the First Amendment and common carrier status to an already complicated issue.

The “must-carry” laws in Texas and Florida, both aimed at limiting online content moderation, met with mixed decisions in appeals courts after being challenged by tech industry groups NetChoice and the Computer & Communications Industry Association. The outcomes will likely end up “affecting millions of Americans and their ability to express themselves online,” said Chris Marchese, counsel at NetChoice, at a Broadband Breakfast Live Online event on Wednesday.

In September, a federal appeals court in the Fifth Circuit upheld the Texas law, ruling that social media platforms can be regulated as “common carriers,” or required to carry editorial programming as were cable television operators in the Turner Broadcasting System v. FCC decisions from the 1990s.

Dueling appeals court interpretations

By contrast, the judges overturning the Florida ruling held that social media platforms are not common carriers. Even if they were, the 11th Circuit Court judges held, “neither law nor logic recognizes government authority to strip an entity of its First Amendment rights merely by labeling it a common carrier.”

Whether social media platforms should be treated like common carriers is “a fair question to ask,” said Marshall Van Alstyne, Questrom chair professor at Boston University. It would be difficult to reach a broad audience online without utilizing one of the major platforms, he claimed.

However, Marchese argued that in the Texas ruling, the Fifth Circuit “to put it politely, ignored decades of binding precedent.” First Amendment protections have previously been extended to “what we today might think of as common carriers,” he said.

“I think we can safely say that Texas and Florida do not have the ability to force our private businesses to carry political speech or any type of speech that they don’t see fit,” Marchese said.

Ari Cohn, free speech counsel at TechFreedom, disagreed with the common carrier classification altogether, referencing an amicus brief arguing that “social media and common carriage are irreconcilable concepts,” filed by TechFreedom in the Texas case.

Similar ‘must-carry’ laws are gaining traction in other states

While the two state laws have the same general purpose of limiting moderation, their specific restrictions differ. The Texas law would ban large platforms from any content moderation based on “viewpoint.” Critics have argued that the term is so vague that it could prevent moderation entirely.

“In other words, if a social media service allows coverage of Russia’s invasion of Ukraine, it would also be forced to disseminate Russian propaganda about the war,” Marchese said. “So if you allow conversation on a topic, then you must allow all viewpoints on that topic, no matter how horrendous those viewpoints are.”

The Florida law “would require covered entities — including ones that you wouldn’t necessarily think of, like Etsy — to host all or nearly all content from so-called ‘journalistic enterprises,’ which is basically defined as anybody who has a small following on the internet,” Marchese explained. The law also prohibits taking down any speech from political candidates.

The impact of the two cases will likely be felt far beyond those two states, as dozens of similar content moderation bills have already been proposed in states across the country, according to Ali Sternburg, vice president of information policy for the CCIA.

But for now, both laws are blocked while the Supreme Court decides whether to hear the cases. On Jan. 23, the court asked for the U.S. solicitor general’s input on the decision.

“I think this was their chance to buy time because in effect, so many of these cases are actually asking the court to do opposite things,” Van Alstyne said.

Separate set of cases calls for more, not less, moderation

In February, the Supreme Court will hear two cases that effectively argue the reverse of the Texas and Florida laws by alleging that social media platforms are not doing enough to remove harmful content.

The cases were brought against Twitter and Google by family members of terror attack victims, who argue that the platforms knowingly allowed terrorist groups to spread harmful content and coordinate attacks. One case specifically looks at YouTube’s recommendation algorithms, asking whether Google can be held liable for not only hosting but promoting terrorist content.

Algorithms have become “the new boogeyman” in ongoing technology debates, but they essentially act like mirrors, determining content recommendations based on what users have searched for, engaged with and said about themselves, Cohn explained.
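Cohn’s “mirror” description can be illustrated with a toy sketch. This is not any platform’s actual system; it is a minimal example, assuming candidates are scored purely by overlap with the user’s own engagement history (all names and data below are invented):

```python
# Toy engagement-based ranker illustrating the "mirror" point: the output
# is a function of the user's own prior behavior. All data is invented.
from collections import Counter

user_history = ["gardening", "gardening", "broadband", "cooking"]
candidates = {
    "Fiber build-out explained": {"broadband"},
    "Tomato growing tips": {"gardening"},
    "Celebrity gossip roundup": {"celebrity"},
}

interest = Counter(user_history)  # how often the user engaged with each topic

def score(topics: set) -> int:
    # A candidate scores higher the more its topics match past engagement.
    return sum(interest[t] for t in topics)

ranked = sorted(candidates, key=lambda title: score(candidates[title]), reverse=True)
print(ranked)  # gardening content ranks first because the user watched it most
```

The design point matches Cohn’s argument: the ranker injects no viewpoint of its own; whatever the user feeds it is what it reflects back.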

Reese Schonfeld, president of Cable News Network, and Reynelda Nuse, weekend anchorwoman for CNN, stand at one of the many sets at the broadcast center in Atlanta on May 31, 1980. The network, owned by Ted Turner, began its 24-hour-a-day news broadcasts that Sunday afternoon. (AP Photo/Joe Holloway, used with permission.)

“This has been litigated in a number of different contexts, and in pretty much all of them, the courts have said we can’t impose liability for the communication of bad ideas,” Cohn said. “You hold the person who commits the wrongful act responsible, and that’s it. There’s no such thing as negligently pointing to someone to bad information.”

A better alternative to reforming Section 230 would be implementing “more disclosures and transparency specifically around how algorithms are developed and data about enforcement,” said Jessica Dheere, director of Ranking Digital Rights.

Social media platforms have a business incentive to take down terrorist content, and Section 230 is what allows them to do so without over-moderating, Sternburg said. “No one wants to see this horrible extremist content on digital platforms, especially the services themselves.”

Holding platforms liable for all speech that they carry could have a chilling effect on speech by motivating platforms to err on the side of removing content, Van Alstyne said.

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, January 25, 2023, 12 Noon ET – Section 230, Google, Twitter and the Supreme Court

The Supreme Court will soon hear two blockbuster cases involving Section 230 of the Telecommunications Act: Gonzalez v. Google on February 21, and  Twitter v. Taamneh on February 22. Both of these cases ask if tech companies can be held liable for terrorist content on their platforms. Also in play: Laws in Florida and in Texas (both on hold during the course of litigation) that would limit online platforms’ ability to moderate content. In a recent brief, Google argued that denying Section 230 protections for platforms “could have devastating spillover effects.” In advance of Broadband Breakfast’s Big Tech & Speech Summit on March 9, this Broadband Breakfast Live Online event will consider Section 230 and the Supreme Court.

Panelists:

  • Chris Marchese, Counsel, NetChoice
  • Ari Cohn, Free Speech Counsel, TechFreedom
  • Jessica Dheere, Director, Ranking Digital Rights
  • Ali Sternburg, Vice President of Information Policy, Computer & Communications Industry Association
  • Marshall Van Alstyne, Questrom Chair Professor, Boston University
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

Chris Marchese analyzes technology-related legislative and regulatory issues at both the federal and state level. His portfolio includes monitoring and analyzing proposals to amend Section 230 of the Communications Decency Act, antitrust enforcement, and potential barriers to free speech and free enterprise on the internet. Before joining NetChoice in 2019, Chris worked as a law clerk at the U.S. Chamber Litigation Center, where he analyzed legal issues relevant to the business community, including state-court decisions that threatened traditional liability rules.

Ari Cohn is Free Speech Counsel at TechFreedom. A nationally recognized expert in First Amendment law, he was previously the Director of the Individual Rights Defense Program at the Foundation for Individual Rights in Education (FIRE), and has worked in private practice at Mayer Brown LLP and as a solo practitioner, and was an attorney with the U.S. Department of Education’s Office for Civil Rights. Ari graduated cum laude from Cornell Law School, and earned his Bachelor of Arts degree from the University of Illinois at Urbana-Champaign.

Jessica Dheere is the director of Ranking Digital Rights, and co-authored RDR’s spring 2020 report “Getting to the Source of Infodemics: It’s the Business Model.” An affiliate at the Berkman Klein Center for Internet & Society, she is also founder, former executive director, and board member of the Arab digital rights organization SMEX, and in 2019, she launched the CYRILLA Collaborative, which catalogs global digital rights law and case law. She is a graduate of Princeton University and the New School.

Ali Sternburg is Vice President of Information Policy at the Computer & Communications Industry Association, where she focuses on intermediary liability, copyright, and other areas of intellectual property. Ali joined CCIA during law school in 2011, and previously served as Senior Policy Counsel, Policy Counsel, and Legal Fellow. She is also an Inaugural Fellow at the Internet Law & Policy Foundry.

Marshall Van Alstyne (@InfoEcon) is the Questrom Chair Professor at Boston University. His work explores how IT affects firms, innovation, and society with an emphasis on business platforms. He co-authored the international best seller Platform Revolution and his research influence ranks among the top 2% of all scientists globally.

Drew Clark (moderator) is CEO of Breakfast Media LLC. He has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

]]>
https://broadbandbreakfast.com/2023/01/must-internet-platforms-host-objectionable-content-appeals-courts-consider-must-carry-rules/feed/ 0 48202
Section 230 Interpretation Debate Heats Up Ahead of Landmark Supreme Court Case https://broadbandbreakfast.com/2023/01/section-230-interpretation-debate-heats-up-ahead-of-landmark-supreme-court-case/?utm_source=rss&utm_medium=rss&utm_campaign=section-230-interpretation-debate-heats-up-ahead-of-landmark-supreme-court-case https://broadbandbreakfast.com/2023/01/section-230-interpretation-debate-heats-up-ahead-of-landmark-supreme-court-case/#respond Wed, 25 Jan 2023 16:51:16 +0000 https://broadbandbreakfast.com/?p=48083 WASHINGTON, January 25, 2023 — With less than a month to go before the Supreme Court hears a case that could dramatically alter internet platform liability protections, speakers at a Federalist Society webinar on Tuesday were sharply divided over the merits and proper interpretation of Section 230 of the Communications Decency Act.

Gonzalez v. Google, which will go before the Supreme Court on Feb. 21, asks if Section 230 protects Google from liability for hosting terrorist content — and promoting that content via algorithmic recommendations.

If the Supreme Court agrees that “Section 230 does not protect targeted algorithmic recommendations, I don’t see a lot of the current social media platforms and the way they operate surviving,” said Ashkhen Kazaryan, a senior fellow at Stand Together.

Joel Thayer, president of the Digital Progress Institute, argued that the bare text of Section 230(c)(1) does not include any mention of the “immunities” often attributed to the statute, echoing an argument made by several Republican members of Congress.

“All the statute says is that we cannot treat interactive computer service providers or users — in this case, Google’s YouTube — as the publisher or speaker of a third-party post, such as a YouTube video,” Thayer said. “That is all. Warped interpretations from courts… have drastically moved away from the text of the statute to find Section 230(c)(1) as providing broad immunity to civil actions.”

Kazaryan disagreed with this claim, noting that the original co-authors of Section 230 — Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, R-Calif. — have repeatedly said that Section 230 does provide immunity from civil liability under specific circumstances.

Wyden and Cox reiterated this point in a brief filed Thursday in support of Google, explaining that whether a platform is entitled to immunity under Section 230 relies on two prerequisite conditions. First, the platform must not be “responsible, in whole or in part, for the creation or development of” the content in question, as laid out in Section 230(f)(3). Second, the case must be seeking to treat the platform “as the publisher or speaker” of that content, per Section 230(c)(1).

The statute co-authors argued that Google satisfied these conditions and was therefore entitled to immunity, even if their recommendation algorithms made it easier for users to find and consume terrorist content. “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” they wrote.

Despite the support of Wyden and Cox, Randolph May, president of the Free State Foundation, predicted that the case was “not going to be a clean victory for Google.” And in addition to the upcoming Supreme Court cases, both Congress and President Joe Biden could potentially attempt to reform or repeal Section 230 in the near future, May added.

May advocated for substantial reforms to Section 230 that would narrow online platforms’ immunity. He also proposed that a new rule should rely on a “reasonable duty of care” that would both preserve the interests of online platforms and also recognize the harms that fall under their control.

To establish a good replacement for Section 230, policymakers must determine whether there is “a difference between exercising editorial control over content on the one hand, and engaging in conduct relating to the distribution of content on the other hand… and if so, how you would treat those differently in terms of establishing liability,” May said.

No matter the Supreme Court’s decision in Gonzalez v. Google, the discussion is already “shifting the Overton window on how we think about social media platforms,” Kazaryan said. “And we already see proposed regulation legislation on state and federal levels that addresses algorithms in many different ways and forms.”

Texas and Florida have already passed laws that would significantly limit social media platforms’ ability to moderate content, although both have been temporarily blocked pending litigation. Tech companies have asked the Supreme Court to take up the cases, arguing that the laws violate their First Amendment rights by forcing them to host certain speech.

]]>
https://broadbandbreakfast.com/2023/01/section-230-interpretation-debate-heats-up-ahead-of-landmark-supreme-court-case/feed/ 0 48083
Supreme Court Seeks Biden Administration’s Input on Texas and Florida Social Media Laws https://broadbandbreakfast.com/2023/01/supreme-court-seeks-biden-administrations-input-on-texas-and-florida-social-media-laws/?utm_source=rss&utm_medium=rss&utm_campaign=supreme-court-seeks-biden-administrations-input-on-texas-and-florida-social-media-laws https://broadbandbreakfast.com/2023/01/supreme-court-seeks-biden-administrations-input-on-texas-and-florida-social-media-laws/#respond Tue, 24 Jan 2023 16:36:52 +0000 https://broadbandbreakfast.com/?p=48045 WASHINGTON, January 24, 2023 — The Supreme Court on Monday asked for the Joe Biden administration’s input on a pair of state laws that would prevent social media platforms from moderating content based on viewpoint.

The Republican-backed laws in Texas and Florida both stem from allegations that tech companies are censoring conservative speech. The Texas law would restrict platforms with at least 50 million users from removing or demonetizing content based on “viewpoint.” The Florida law places significant restrictions on platforms’ ability to remove any content posted by members of certain groups, including politicians.

Two trade groups — NetChoice and the Computer & Communications Industry Association — jointly challenged both laws, meeting with mixed results in appeals courts. They, alongside many tech companies, argue that the laws would violate platforms’ First Amendment right to decide what speech to host.

Tech companies also warn that the laws would force them to disseminate objectionable and even dangerous content. In an emergency application to block the Texas law from going into effect in May, the trade groups wrote that such content could include “Russia’s propaganda claiming that its invasion of Ukraine is justified, ISIS propaganda claiming that extremism is warranted, neo-Nazi or KKK screeds denying or supporting the Holocaust, and encouraging children to engage in risky or unhealthy behavior like eating disorders.”

The Supreme Court has not yet agreed to hear the cases, but multiple justices have commented on the importance of the issue.

In response to the emergency application in May, Justice Samuel Alito wrote that the case involved “issues of great importance that will plainly merit this Court’s review.” However, he disagreed with the court’s decision to block the law pending review, writing that “whether applicants are likely to succeed under existing law is quite unclear.”

Monday’s request asking Solicitor General Elizabeth Prelogar to weigh in on the cases allows the court to put off the decision for another few months.

“It is crucial that the Supreme Court ultimately resolve this matter: it would be a dangerous precedent to let government insert itself into the decisions private companies make on what material to publish or disseminate online,” CCIA President Matt Schruers said in a statement. “The First Amendment protects both the right to speak and the right not to be compelled to speak, and we should not underestimate the consequences of giving government control over online speech in a democracy.”

The Supreme Court is still scheduled to hear two other major content moderation cases next month, which will decide whether Google and Twitter can be held liable for terrorist content hosted on their respective platforms.

]]>
https://broadbandbreakfast.com/2023/01/supreme-court-seeks-biden-administrations-input-on-texas-and-florida-social-media-laws/feed/ 0 48045
Luke Lintz: The Dark Side of Banning TikTok on College Campuses https://broadbandbreakfast.com/2023/01/luke-lintz-the-dark-side-of-banning-tiktok-on-college-campuses/?utm_source=rss&utm_medium=rss&utm_campaign=luke-lintz-the-dark-side-of-banning-tiktok-on-college-campuses https://broadbandbreakfast.com/2023/01/luke-lintz-the-dark-side-of-banning-tiktok-on-college-campuses/#respond Sat, 21 Jan 2023 00:18:57 +0000 https://broadbandbreakfast.com/?p=48005 In recent months, there have been growing concerns about the security of data shared on the popular social media app TikTok. As a result, a number of colleges and universities have decided to ban the app from their campuses.

While these bans may have been implemented with the intention of protecting students’ data, they could also have a number of negative consequences.

Banning TikTok on college campuses could also have a negative impact on the interconnectedness of the student body. Many students use the app to connect with others who share their interests or come from similar backgrounds. For example, international students may use the app to connect with other students from their home countries, and students from underrepresented groups may use it to find others with similar experiences.

By denying them access to TikTok, colleges may be inadvertently limiting their students’ ability to form diverse and supportive communities. This can have a detrimental effect on the student experience, as students may feel isolated and disconnected from their peers. Additionally, it can also have a negative impact on the wider college community, as the ban may make it more difficult for students from different backgrounds to come together and collaborate.

Furthermore, by banning TikTok, colleges may also be missing out on the opportunity to promote diverse events on their campuses. The app is often used by students to share information about events, clubs and other activities that promote diversity and inclusivity. Without this platform, it may be more difficult for students to learn about these initiatives and for organizations to reach a wide audience.

Lastly, it’s important to note that banning TikTok on college campuses could also have a negative impact on the ability of college administrators to communicate with students. Many colleges and universities have started to use TikTok as a way to connect with students and share important information and updates. The popularity of TikTok makes it the perfect app for students to use to reach large, campus-wide audiences.

TikTok also offers a unique way for college administrators to connect with students in a more informal and engaging way. TikTok allows administrators to create videos that are fun, creative and relatable, which can help to build trust and to heighten interaction with students. Without this platform, it may be more difficult for administrators to establish this type of connection with students.

Banning TikTok from college campuses could have a number of negative consequences for students: limiting their ability to form diverse and supportive communities, cutting them off from opportunities, and making it harder for them to stay informed about what’s happening on campus. College administrators should weigh these potential consequences before deciding whether to ban TikTok from their campuses.

Luke Lintz is a successful businessman, entrepreneur and social media personality. Today, he is the co-owner of HighKey Enterprises LLC, which aims to revolutionize social media marketing. HighKey Enterprises is a highly rated company that has molded its global reputation by servicing high-profile clients that range from A-listers in the entertainment industry to the most successful one percent across the globe. This piece is exclusive to Broadband Breakfast.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

]]>
https://broadbandbreakfast.com/2023/01/luke-lintz-the-dark-side-of-banning-tiktok-on-college-campuses/feed/ 0 48005
Google Defends Section 230 in Supreme Court Terror Case https://broadbandbreakfast.com/2023/01/google-defends-section-230-in-supreme-court-terror-case/?utm_source=rss&utm_medium=rss&utm_campaign=google-defends-section-230-in-supreme-court-terror-case https://broadbandbreakfast.com/2023/01/google-defends-section-230-in-supreme-court-terror-case/#respond Fri, 13 Jan 2023 13:57:54 +0000 https://broadbandbreakfast.com/?p=47756 WASHINGTON, January 13, 2023 – The Supreme Court could trigger a cascade of internet-altering effects that will encourage the proliferation of offensive speech and the suppression of speech and create a “litigation minefield” if it decides Google is liable for the results of terrorist attacks by entities publishing on its YouTube platform, the search engine company argued Thursday.

The high court will hear the case of an American family whose daughter, Nohemi Gonzalez, was killed in an ISIS terrorist attack in Paris in 2015. The family sued Google under the Anti-Terrorism Act over the death, alleging YouTube participated as a publisher of ISIS recruitment videos when it hosted them and its algorithm shared them on the video platform.

But in a brief to the court on Thursday, Google said it is not liable for the content published by third parties on its website under Section 230 of the Communications Decency Act, and that deciding otherwise would effectively gut the platform protection provision and “upend the internet.”

Denying the provision’s protections for platforms “could have devastating spillover effects,” Google argued in the brief. “Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user. If plaintiffs could evade Section 230(c)(1) by targeting how websites sort content or trying to hold users liable for liking or sharing articles, the internet would devolve into a disorganized mess and a litigation minefield.”

It would also “perversely encourage both wide-ranging suppression of speech and the proliferation of more offensive speech,” it added in the brief. “Sites with the resources to take down objectionable content could become beholden to heckler’s vetoes, removing anything anyone found objectionable.

“Other sites, by contrast, could take the see-no-evil approach, disabling all filtering to avoid any inference of constructive knowledge of third-party content,” Google added. “Still other sites could vanish altogether.”

Google rejected the argument that recommendations by its algorithms convey an “implicit message,” arguing that in such a world, “any organized display [as algorithms do] of content ‘implicitly’ recommends that content and could be actionable.”

The Supreme Court is also hearing a similar case simultaneously in Twitter v. Taamneh.

Scrutiny of Section 230 has loomed large since former President Donald Trump was banned from social media platforms for allegedly inciting the Capitol Hill riots in January 2021. Trump and conservatives called for rules limiting that protection in light of the suspensions and bans, while Democrats have not shied away from introducing legislation limiting the provision when certain content continued to flourish on those platforms.

Supreme Court Justice Clarence Thomas early last year issued a statement calling for a reexamination of tech platform immunity protections following a Texas Supreme Court decision that said Facebook was shielded from liability in a trafficking case.

Meanwhile, startups and internet associations have argued for the preservation of the provision.

“These cases underscore how important it is that digital services have the resources and the legal certainty to deal with dangerous content online,” Matt Schruers, president of the Computer and Communications Industry Association, said in a statement when the Supreme Court decided in October to hear the Gonzalez case.

“Section 230 is critical to enabling the digital sector’s efforts to respond to extremist and violent rhetoric online,” he added, “and these cases illustrate why it is essential that those efforts continue.”

]]>
https://broadbandbreakfast.com/2023/01/google-defends-section-230-in-supreme-court-terror-case/feed/ 0 47756
CES 2023: Changing Section 230 Would Jeopardize Startups https://broadbandbreakfast.com/2023/01/ces-2023-changing-section-230-would-jeopardize-startup/?utm_source=rss&utm_medium=rss&utm_campaign=ces-2023-changing-section-230-would-jeopardize-startup https://broadbandbreakfast.com/2023/01/ces-2023-changing-section-230-would-jeopardize-startup/#respond Fri, 06 Jan 2023 20:41:30 +0000 https://broadbandbreakfast.com/?p=47514 LAS VEGAS, January 6, 2023 – Removing Section 230’s protections for online platforms would expose small startups to crippling legal costs, said Kate Tummarello, executive director of Engine, a non-profit that advocates for startups, speaking on a Friday panel at the Consumer Electronics Show.

Section 230 of the Communications Decency Act, which became law in 1996, shields online platforms from civil liability for content posted by third-parties. While proponents say the provision is critical to the existence of platforms, public figures and policymakers on both right and left have, of late, advocated its repeal.

Tummarello argued that Section 230 allows young, resource-poor companies to combat lawsuits more efficiently, noting that the costs of full litigation could put a startup out of business. “Defending against a lawsuit over user content, even with 230 in place, still costs tens of thousands of dollars,” Tummarello said. She stated that even platforms whose actions are legally justified benefit from Section 230, since they could otherwise be subjected to and ruined by a frivolous lawsuit.

Section 230 will likely soon be subjected to judicial interpretation at the Supreme Court in a pair of cases, Gonzalez v. Google and Twitter v. Taamneh. Both cases question whether tech platforms are liable for hosting pro-terrorist third-party content.

Charlotte Slaiman, competition policy director at Public Knowledge, voiced concern over platforms’ content-moderation decisions that, she said, enable online misinformation and harassment. However, she argued that directly regulating content moderation is “fraught,” instead calling for “competition-based” reform that would provide alternative services for users.

]]>
https://broadbandbreakfast.com/2023/01/ces-2023-changing-section-230-would-jeopardize-startup/feed/ 0 47514
CES 2023: Social Media Advertising Should Feel ‘Authentic’ https://broadbandbreakfast.com/2023/01/ces-2023-social-media-advertising-should-feel-authentic/?utm_source=rss&utm_medium=rss&utm_campaign=ces-2023-social-media-advertising-should-feel-authentic https://broadbandbreakfast.com/2023/01/ces-2023-social-media-advertising-should-feel-authentic/#respond Thu, 05 Jan 2023 03:28:40 +0000 https://broadbandbreakfast.com/?p=47443 LAS VEGAS, January 4, 2023 – Brands that advertise in partnership with social media content creators should tailor their advertisements to appeal to those creators’ audiences, experts told an audience Wednesday at the 2023 Consumer Electronics Show.

Brands that partner on advertising campaigns with content creators must decide how much creative control to release, the panelists said, noting that campaigns that strictly adhere to a preexisting formula may seem canned and stale. When the influencer is given freedom to shape the advertisements, however, his or her followers are more likely to perceive the campaign as authentic.

“When we see [the advertising process] work really well, it’s because the brand is creating the outline and letting the influencer fill it in with their own personality, with the way they want to show up in market,” said Anthony Iaffaldano, vice president of sales, marketing, and insights at Fandom. “You want that person to have the freedom to express themselves in the way that got them the followers that they have in the first place,” he added.

Ashley Menschner, senior vice president of media at the Ad Council, said her organization has varied its advertising strategy between various social media platforms. “We’ve really leaned into…both micro and macro-influencers – who’s going to resonate with the audiences on those platforms and then build really integrated campaigns that have an authentic voice on those platforms,” Menschner said.

Danielle Johnsen Karr, the head of media company Team Whistle’s MAGNET agency, touted possibilities for “niche” content creation. Karr said that new technologies give creators freedom to tell stories in innovative ways.

]]>
https://broadbandbreakfast.com/2023/01/ces-2023-social-media-advertising-should-feel-authentic/feed/ 0 47443
Amid Big Tech Controversies, Section 230’s Future is Uncertain https://broadbandbreakfast.com/2022/12/amid-big-tech-controversies-section-230s-future-is-uncertain/?utm_source=rss&utm_medium=rss&utm_campaign=amid-big-tech-controversies-section-230s-future-is-uncertain https://broadbandbreakfast.com/2022/12/amid-big-tech-controversies-section-230s-future-is-uncertain/#respond Wed, 21 Dec 2022 03:46:20 +0000 https://broadbandbreakfast.com/?p=47091 From the 12 Days of Broadband:

The past year has seen many controversial decisions from big tech platforms, but 2022 might end up being the last year that such decisions are shielded by the liability protections of Section 230 of the Telecommunications Act.

Many actors are now calling for the statute’s repeal or reformulation. Conservative populists on the right argue that it enables social media giants to silence conservative speech. Progressives on the left believe it allows platforms to shirk responsibility for moderating hate speech and misinformation.

 Download the complete 12 Days of Broadband report

Of course, Section 230 still has defenders from across the political spectrum. Indeed, none of the many proposed bills for legislative change have garnered much traction. Furthermore, new Twitter CEO Elon Musk’s takeover has demonstrated the pitfalls of a pure “free speech” approach to content moderation: It took just days for his “comedy is legal again” declaration to turn into “tricking people is not OK” — during which time parody tweets reportedly cost advertisers billions. 

And despite Musk’s initially stated intention to allow all legally permissible content, he decided to suspend Ye (formerly Kanye West) from Twitter in December for tweeting a swastika graphic. Later that month, he took still bolder steps, blocking links to competitor platforms as well as suspending the accounts of several tech journalists and an account that tracked his private jet based on public flight data.

On a larger scale, Florida’s attorney general asked the Supreme Court to review a law that would limit online platforms’ ability to moderate content after an appeals court ruled that the law violated the First Amendment. A similar Texas law that forbids content moderation based on “viewpoint” is on hold pending an appeal to the Supreme Court. 

While the Court has not yet taken up those cases, it has agreed to hear two others related to Section 230: Gonzalez v. Google and Twitter v. Taamneh, both of which ask if tech companies can be held liable for terrorist content on their platforms.

Given the Court’s conservative majority, and the fact that at least one justice (Clarence Thomas) has openly argued that social media companies should be regulated as common carriers, Section 230’s 25-year reign might be coming to an end.

]]>
https://broadbandbreakfast.com/2022/12/amid-big-tech-controversies-section-230s-future-is-uncertain/feed/ 0 47091
New FTC Guidelines Propose to Address Deceptive Endorsement Advertising on Social Media https://broadbandbreakfast.com/2022/12/new-ftc-guidelines-proposes-to-address-deceptive-endorsement-advertising-on-social-media/?utm_source=rss&utm_medium=rss&utm_campaign=new-ftc-guidelines-proposes-to-address-deceptive-endorsement-advertising-on-social-media https://broadbandbreakfast.com/2022/12/new-ftc-guidelines-proposes-to-address-deceptive-endorsement-advertising-on-social-media/#respond Mon, 12 Dec 2022 17:13:11 +0000 https://broadbandbreakfast.com/?p=46803 WASHINGTON, December 12, 2022 — The rapidly changing social media landscape has led to significant gray area surrounding endorsement advertising from both well-known household names and internet “microcelebrities,” with widespread deceptive practices being facilitated by vague rules and a lack of enforcement, according to experts on a Center for Data Innovation panel Thursday.

The Federal Trade Commission in July proposed changes to its endorsement guidelines for the first time since 2009, and is currently soliciting public comment on the proposal.

These changes are long overdue, said Christopher Terry, a professor at the University of Minnesota’s Hubbard School of Journalism and Mass Communication. The government would not tolerate the deceptive endorsement practices now common among influencers “from an endorser in any other medium,” he said.

He said one of the primary challenges with endorsement advertising on social media is disclosing the financial relationship between brands and influencers in a way that will be understood by consumers. Recent research has demonstrated that many people cannot correctly identify sponsored content on social media, Terry said.

Children are particularly susceptible to endorsement advertising, added Irene Ly, policy counsel for Common Sense Media. Although the proposed guidelines’ inclusion of a new section about child-specific advertising is a positive step, she said, there is still a lack of specificity that might cause confusion for advertisers.

Existing legal standard for endorsement disclosure

There are different ways to provide disclosures on various platforms, and it can be difficult for influencers to figure out the best method for each, said Po Yi, a partner at Manatt, Phelps & Phillips. However, it is broadly understood that paid posts need some form of disclosure, and most influencers are attempting to comply.

According to the FTC’s “Guides Concerning the Use of Endorsements and Testimonials in Advertising,” Section 255.1, “Endorsements must reflect the honest opinions, findings, beliefs, or experience of the endorser.”

Screenshot of panelists from the Center for Data Innovation event

The bigger challenge is that many influencers, especially those with smaller followings and less access to legal resources, don’t realize that their endorsements must be based on personal experience with the product, Yi said.

In November, Google and iHeartMedia had to pay millions after being sued by the FTC for deceptive endorsement advertising. Google provided iHeartMedia with scripts for on-air personalities and celebrities to endorse the Pixel 4 in ads that aired over 11,000 times, despite the fact that none of the endorsers had ever owned the phone.

So far, enforcement agencies have focused on going after companies rather than individual influencers.

Companies should educate influencers on the disclosure and personal experience requirements, but they also need to consistently monitor influencers to ensure continued compliance, Terry said.

There is often confusion about who is ultimately responsible for compliance, Yi said.

“If something goes wrong, the FTC will probably tell you right away, everyone in that chain is responsible, from the influencers to the media company to the agency to the advertiser,” she said.

Another question of liability arises with fake reviews: Should online platforms be responsible for verifying users’ identity, or does that fall to the brands?

Section 230 currently protects social media platforms from liability for fake reviews, Terry said. However, with new content moderation laws on the horizon, this responsibility could soon shift.

Tech Groups, Free Expression Advocates Support Twitter in Landmark Content Moderation Case https://broadbandbreakfast.com/2022/12/tech-groups-free-expression-advocates-support-twitter-in-landmark-content-moderation-case/ Thu, 08 Dec 2022 20:05:59 +0000 WASHINGTON, December 8, 2022 — Holding tech companies liable for the presence of terrorist content on their platforms risks substantially limiting their ability to effectively moderate content without overly restricting speech, according to several industry associations and civil rights organizations.

The Computer & Communications Industry Association, along with seven other tech associations, filed an amicus brief Tuesday emphasizing the vast amount of online content generated on a daily basis and the existing efforts of tech companies to remove harmful content.

A separate coalition of organizations, including the Electronic Frontier Foundation and the Center for Democracy & Technology, also filed an amicus brief.

Supreme Court to hear two social media cases next year

The briefs were filed in support of Twitter as the Supreme Court prepares to hear Twitter v. Taamneh in 2023, alongside the similar case Gonzalez v. Google. The cases, brought by relatives of ISIS attack victims, argue that social media platforms allow groups like ISIS to publish terrorist content, recruit new operatives and coordinate attacks.

Both cases were initially dismissed, but an appeals court in June 2021 overturned the Taamneh dismissal, holding that the case adequately asserted its claim that tech platforms could be held liable for aiding acts of terrorism. The Supreme Court will now decide whether an online service can be held liable for “knowingly” aiding terrorism if it could have taken more aggressive steps to prevent such use of its platform.

The Taamneh case hinges on the Anti-Terrorism Act, which says that liability for terrorist attacks can be placed on “any person who aids and abets, by knowingly providing substantial assistance.” The case alleges that Twitter did this by allowing terrorists to utilize its communications infrastructure while knowing that such use was occurring.

Gonzalez is more directly focused on Section 230, a provision under the Communications Decency Act that shields platforms from liability for the content their users publish. The case looks at YouTube’s targeted algorithmic recommendations and the amplification of terrorist content, arguing that online platforms should not be protected by Section 230 immunity when they engage in such actions.

Justice Clarence Thomas tips his hand against Section 230

Supreme Court Justice Clarence Thomas wrote in 2020 that the “sweeping immunity” granted by current interpretations of Section 230 could have serious negative consequences, and suggested that the court consider narrowing the statute in a future case.

Experts have long warned that removing Section 230 could have the unintended impact of dramatically increasing the amount of content removed from online platforms, as liability concerns will incentivize companies to err on the side of over-moderation.

Without some form of liability protection, platforms “would be likely to use necessarily blunt content moderation tools to over-restrict speech or to impose blanket bans on certain topics, speakers, or specific types of content,” the EFF and other civil rights organizations argued.

Platforms are already self-motivated to remove harmful content because failing to do so can risk their user base, CCIA and the other tech organizations said.

There is an immense amount of harmful content to be found online, and moderating it is a careful, costly and iterative process, the CCIA brief said, adding that “mistakes and difficult judgement calls will be made given the vast amounts of expression online.”

Twitter Takeover by Elon Musk Forces Conflict Over Free Speech on Social Networks https://broadbandbreakfast.com/2022/11/twitter-takeover-by-elon-musk-forces-conflict-over-free-speech-on-social-networks/ Thu, 24 Nov 2022 02:18:17 +0000 WASHINGTON, November 23, 2022 — As the Supreme Court prepares to hear two cases that may decide the future of content moderation, panelists on a Broadband Breakfast Live Online panel disagreed over the steps that platforms can and should take to ensure fairness and protect free speech.

Mike Masnick, founder and editor of Techdirt, argued that both sides of the aisle were attempting to control speech in one way or another, pointing to laws in California and New York as the liberal counterpoints to the laws in Texas and Florida that are headed to the Supreme Court.

“They’re not as blatant, but they are nudging companies to moderate in a certain way,” he said. “And I think those are equally unconstitutional.”

Censorship posed a greater threat to the ideal of free speech than would a law forcing platforms to carry certain content, said Bret Swanson, a nonresident senior fellow at the American Enterprise Institute.

“Free speech and pluralism, as an ethos for the country and really for the West, are in fact more important than the First Amendment,” he said.

At the same time, content moderation legislation is stalled by a sharp partisan divide, said Mark MacCarthy, a nonresident senior fellow in governance studies at the Brookings Institution’s Center for Technology Innovation.

“Liberals and progressives want action to remove lies and hate speech and misinformation from social media and the conservatives want equal time for conservative voices, so there’s a logjam gridlock that can’t move,” he said. “I think it might be broken if, as I predict, the Supreme Court says that the only way you can regulate social media companies is through transparency.”

Twitter’s past and current practices raise questions about bias and free speech

While talking about Elon Musk’s controversial changes to Twitter’s content moderation practices, panelists also discussed the impact of Musk’s rhetoric surrounding the topic more broadly.

“Declaring yourself as a free speech site without understanding what free speech actually means is something that doesn’t last very long,” Masnick said.

When a social media company like Twitter or Parler declares itself to be a “free speech site,” it is really just sending a signal to some of the worst people and trolls online to begin harassment, abuse and bigotry, he said.

That is not a sustainable business model, Masnick argued.

But Swanson took the opposite approach. He called Musk’s acquisition of Twitter “a real seminal moment in the history and the future of free speech,” and called it an antidote to “the most severe collapse of free speech maybe in American history.”

MacCarthy said he didn’t believe the oft-repeated assertion that Twitter was biased against conservatives before Musk took over. “The only study I’ve seen of political pluralism on Twitter — and it was done by Twitter itself back when they had the staff to do that kind of thing — suggested that Twitter’s amplification and recommendation engines actually favored conservative tweets over liberal ones.”

Masnick agreed, pointing to other academic studies: “They seemed to bend over backwards to often allow conservatives to break the rules more than others,” he said.

Randolph May, president of The Free State Foundation, said that he was familiar with the studies but disagreed with their findings.

Citing the revelations from the laptop of Hunter Biden, a story that the New York Post broke in October 2020 about Joe Biden’s son, May said: “To me, that was a consequential censorship action. Then six months later, before a congressional committee, [Twitter CEO] Jack Dorsey said, ‘Oops, we made a big mistake when we took down the New York Post stories.’”

Multiple possibilities for the future of content moderation

Despite his criticism of current practices, May said he did not believe platforms should eliminate content moderation practices altogether. He drew a distinction between topics subject to legitimate public debate and those posts that encourage terrorism or facilitate sex trafficking. Those kinds of posts should be subject to moderation practices, he said.

May made three suggestions for better content moderation practices: First, platforms should establish a presumption that they will not censor or downgrade material without clear evidence that their terms of service have been violated.

Second, platforms should work to enable tools that facilitate personalization of the user experience.

Finally, the current state of Section 230 immunity should be replaced with a “reasonableness standard,” he said.

Other panelists objected to the subjectivity of such a reasonableness standard. MacCarthy highlighted the Texas social media law, which bans discrimination based on “viewpoint.” “Viewpoint is undefined: What does that mean?” he asked.

“Does it mean you can’t get rid of Nazi speech, you can’t get rid of hate speech, you can’t get rid of racist speech? What does it mean? No one knows. And so here’s a requirement of government that no one can interpret. If I were the Supreme Court, I’d declare that void for vagueness in a moment.”

MacCarthy predicted that the Supreme Court would reject the content-based provisions in the Texas and Florida laws while upholding the transparency standard, opening the door, he argued, for bipartisan transparency legislation.

But to Masnick, even a transparency requirement alone would be an unsatisfactory result: “How would conservatives feel if the government said, ‘Fox News needs to be transparent about how they make their editorial decision making?’”

“I think everyone would recognize immediately that that is a huge First Amendment concern,” he said.

Our Broadband Breakfast Live Online events take place on Wednesdays at 12 Noon ET.

Wednesday, November 23, 2022, 12 Noon ET – Elon and Ye and Donald, Oh My!

With Elon Musk finally taking the reins at Twitter after a tumultuous acquisition process, what additional new changes will come to the world’s de facto public square? The world’s richest man has already reinstated certain banned accounts, including that of former president Donald Trump. Trump has made his own foray into the world of conservative social media, as has politically polarizing rapper Ye, formerly Kanye West, currently in the process of purchasing right-wing alternative platform Parler. Ye is no stranger to testing the limits of controversial speech. With Twitter in the hands of Musk, Parler in the process of selling and Trump’s Truth Social sort-of-kind-of forging ahead in spite of false starts, is a new era of conservative social media upon us?

Panelists

  • Mark MacCarthy, Nonresident Senior Fellow in Governance Studies, Center for Technology Innovation, Brookings Institution
  • Mike Masnick, Founder and Editor, Techdirt
  • Randolph May, President, The Free State Foundation
  • Bret Swanson, Nonresident Senior Fellow, American Enterprise Institute
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources:

Mark MacCarthy is a Nonresident Senior Fellow in Governance Studies at the Center for Technology Innovation at Brookings. He is also adjunct professor at Georgetown University in the Graduate School’s Communication, Culture, & Technology Program and in the Philosophy Department. He teaches courses in the governance of emerging technology, AI ethics, privacy, competition policy for tech, content moderation for social media, and the ethics of speech. He is also a Nonresident Senior Fellow in the Institute for Technology Law and Policy at Georgetown Law.

Mike Masnick is the founder and editor of the popular Techdirt blog as well as the founder of the Silicon Valley think tank, the Copia Institute. In both roles, he explores the intersection of technology, innovation, policy, law, civil liberties, and economics. His writings have been cited by Congress and the EU Parliament. According to a Harvard Berkman Center study, his coverage of the SOPA copyright bill made Techdirt the most linked-to media source throughout the course of that debate.

Randolph May is founder and president of The Free State Foundation, an independent, non-profit, free market-oriented think tank founded in 2006. He has practiced communications, administrative, and regulatory law as a partner at major national law firms. From 1978 to 1981, May served as Assistant General Counsel and Associate General Counsel at the Federal Communications Commission. He is a past Chair of the American Bar Association’s Section of Administrative Law and Regulatory Practice.

Bret Swanson is president of the technology research firm Entropy Economics LLC, a nonresident senior fellow at the American Enterprise Institute, a visiting fellow at the Krach Institute for Tech Diplomacy at Purdue University and chairman of the Indiana Public Retirement System (INPRS). He writes the Infonomena newsletter at infonomena.substack.com.

Drew Clark (moderator) is CEO of Breakfast Media LLC, the Editor and Publisher of BroadbandBreakfast.com and a nationally-respected telecommunications attorney. Under the American Recovery and Reinvestment Act of 2009, he served as head of the State Broadband Initiative in Illinois. Now, in light of the 2021 Infrastructure Investment and Jobs Act, attorney Clark helps fiber-based and wireless clients secure funding, identify markets, broker infrastructure and operate in the public right of way.

Social media controversy has centered around Elon Musk’s Twitter, Ye’s new role in Parler, and former U.S. President Donald Trump

Trump’s Twitter Account Reinstated as Truth Social Gets Merger Extension https://broadbandbreakfast.com/2022/11/trumps-twitter-account-reinstated-as-truth-social-gets-merger-extension/ Tue, 22 Nov 2022 20:44:39 +0000 WASHINGTON, November 22, 2022 — Digital World Acquisition Corp. shareholders voted Tuesday to extend the Dec. 8 deadline for its merger with Truth Social, giving the platform a chance at survival as it faces financial and legal challenges.

The right-wing alternative social media platform championed by former President Donald Trump is currently under federal investigation for potential securities violations, which has delayed the merger and forced Truth Social to operate without $1.3 billion in expected funding.

The DWAC vote was delayed six times in order to raise the necessary support, with the company noting in a securities filing that it would be “forced to liquidate” if the vote was unsuccessful. Private investors have already withdrawn millions in funding.

Trump indicated on Truth Social in September that he was prepared to find alternative funding. “SEC trying to hurt company doing financing (SPAC),” he wrote. “Who knows? In any event, I don’t need financing, ‘I’m really rich!’ Private company anyone???”

Trump’s potential return to Twitter poses another risk for Truth Social

Meanwhile, under the new leadership of Elon Musk, Twitter reinstated Trump’s account, which was banned after then-Twitter executives alleged he stoked the January 6 riot at the Capitol. The reinstatement was made official after Musk asked in a public Twitter poll — which received around 15 million votes — whether he should allow the controversial former president back on the platform.

Trump’s potential return to Twitter could undermine Truth Social’s primary attraction, which could be another blow to the fledgling platform.

On Truth Social, the former president encouraged his followers to vote in the poll while indicating that he would not return to Twitter. But with 87 million followers on Twitter and fewer than 5 million on Truth Social, Trump may be tempted to make use of his newly reinstated account despite statements to the contrary, particularly in light of the official announcement of his 2024 presidential campaign.

The campaign could also allow him to bypass his agreement to first post all social media messages to Truth Social and wait six hours before sharing to other platforms. The agreement makes a specific exception for political messaging and fundraising, according to an SEC filing.

Musk’s decision to bring back Trump was one of many controversial decisions he’s made in his short tenure at the social media company — including a number of high-profile firings and the reinstatement of multiple formerly-banned accounts — which has led several major advertisers to pause spending.

Musk tweeted in October that he would convene a “content moderation council with widely diverse viewpoints” before making any “major content decisions or account reinstatements.” No such council has been publicly announced, and the tweet appeared to have been deleted as of Tuesday.

Ye returns to Twitter while details of Parler acquisition remain uncertain

Trump’s reinstatement seems to have motivated at least one controversial figure to return to Twitter: Ye, formerly Kanye West, whose account was restricted in October after tweeting that he would go “death con 3 on JEWISH PEOPLE.” The restrictions were lifted prior to Musk’s acquisition of Twitter, but the rapper remained silent on the platform until Nov. 20.

“Testing Testing Seeing if my Twitter is unblocked,” he posted.

Right-wing social media platform Parler announced in October that Ye had agreed to purchase the company. Completion of the acquisition is expected by the end of December, but further details, including financial terms, have yet to be announced.

Twitter draws legislative attention, with changes to the social media landscape on the horizon

One of Musk’s first major changes to Twitter attempted to replace the existing verification system with a process through which anyone could pay $8 per month for a verified account. The initial rollout of paid verification sparked a swarm of accounts impersonating brands and public figures such as Sen. Ed Markey, D-Mass., who responded with a letter demanding answers about how the new verification process would prevent future impersonation.

Markey also co-signed a Nov. 17 letter written by Sen. Richard Blumenthal, D-Conn., asking the Federal Trade Commission to investigate Twitter for consumer protection violations in light of “serious, willful disregard for the safety and security of its users.”

Musk responded to the letter by posting a meme that mocked the senators’ priorities, but he later appeared to be rethinking the new verification process.

“Holding off relaunch of Blue Verified until there is high confidence of stopping impersonation,” Musk tweeted on Monday.

Other changes to the platform may be out of Musk’s hands, as state and federal legislators consider an increasing number of proposals for the regulation of digital platforms.

The Computer & Communications Industry Association on Monday released a summary of trends in state legislation regarding content moderation. More than 250 such bills have been introduced during the past two years.

“As a result of the midterm elections, a larger number of states will have one party controlling both chambers of the legislature in addition to the governor’s seat,” CCIA State Policy Director Khara Boender said in a press release. “This, coupled with an increased interest in content moderation issues — on both sides of the aisle — leads us to believe this will be an increasingly hot topic.”
