Big Tech – Broadband Breakfast: Better Broadband, Better Lives

12 Days of Broadband: State Regulations and Children’s Safety Online

January 3, 2024 – A nationwide push to restrict teenagers’ online activity gained ground in 2023 as several states implemented stringent laws targeting social media use among youth.

In March, Utah ventured into uncharted territory when Republican Gov. Spencer Cox signed two measures, H.B. 311 and S.B. 152, mandating parental consent for all minors – 17 and under – before they can register for platforms like TikTok and Meta’s Instagram. For decades, the default standard of the 1998 Children’s Online Privacy Protection Act has been no restrictions on social media use by kids 13 and over.

The pair of bills, which do not go into effect until March 2024, require individuals under 18 to gain parental consent to open a social media account, bar minors from accessing social media platforms between the hours of 10:30 p.m. and 6:30 a.m., and grant parents full access to their child’s social media accounts.

In October, Utah announced a lawsuit against TikTok, alleging that the app deploys addictive features to hook young users. The lawsuit raises additional concerns regarding user data and privacy, citing that TikTok’s China-based parent company, ByteDance, is legally bound to the Chinese Communist Party.

Arkansas, Montana may be following Utah

Soon after, Arkansas took a similar step as Republican Gov. Sarah Huckabee Sanders signed Act 689, named the Social Media Safety Act, in April 2023. The newly approved act, aiming to mandate age verification and parental consent for social media users under 18, was set to come into effect on September 1. 

However, on that very day, U.S. District Judge Timothy Brooks granted a preliminary injunction following a petition from the tech industry trade group NetChoice Litigation Center. The group contended that the new law infringed on the First Amendment’s guarantee of freedom of expression.

In May, Montana Gov. Greg Gianforte signed legislation banning TikTok on all devices statewide, threatening fines of up to $10,000 per violation for app providers like Google and Apple. In late November, before the law could take effect on January 1, U.S. District Judge Donald Molloy blocked the ban, stating that the law exceeds state authority and violates the constitutional rights of users.

TikTok had filed a lawsuit against Montana shortly after the law was signed. Judge Molloy found merit in numerous arguments raised by TikTok, including that the company already has a number of safeguards in place surrounding user data.

Is age verification a First Amendment issue?

Consumer groups, including the American Civil Liberties Union, have objected that many of these bills extend beyond mandating age verification for minors: they require anyone seeking to use social media within those states to verify their age with legal documents.

The issue was much discussed at a Broadband Breakfast Live Online session in November 2023, where child safety advocate Donna Rice Hughes and Tony Allen, executive director of the Age Check Certification Scheme, agreed that age verification systems are far more robust than they were a generation ago, when the Supreme Court struck down one such scheme. They disagreed with civil liberties groups including the Electronic Frontier Foundation.

Thirteen more states enacted bans in 2023 on installing the Chinese-owned TikTok app on government-issued devices, bringing to 34 the total number of states that have banned TikTok on government devices due to national security concerns. Additionally, more than 40 public universities have barred TikTok from their on-campus Wi-Fi and university-owned computers in response to these state-level bans.

See “The Twelve Days of Broadband” on Broadband Breakfast

Diverse Groups File Amicus Briefs Against Florida and Texas Social Media Laws

WASHINGTON, December 8, 2023 – Industry, public interest, and conservative groups filed briefs with the Supreme Court this week arguing against Texas and Florida social media laws.

Drafted to combat what state legislators saw as the unfair treatment of right-wing content online, the 2021 laws would allow residents of those states to sue social media companies for suspending their accounts. Both have been blocked from going into effect after legal challenges from tech industry trade groups. The cases were initially separate, but the Supreme Court agreed in October to hear them together because they raise similar issues.

Industry groups argue the laws violate the First Amendment by forcing platforms to host speech they normally would not. The White House agrees – Solicitor General Elizabeth Prelogar asked the Court in August to take up the issue and strike down Texas’s law.

Consumer protection group Public Knowledge filed an amicus brief on Thursday in support of the tech trade groups, arguing the laws are unconstitutional and “driven by political animus.”

Center-right think tank TechFreedom filed a similar brief on Wednesday. 

“Only the state can ‘censor’ speech,” Corbin Barthold, the group’s director of appellate litigation, said in a statement. “And these states are doing so by trying to co-opt websites’ right to editorial control over the speech they disseminate.”

Both groups also pushed against the states’ move to treat social media platforms as ‘common carrier’ services, a part of both laws. The legal designation, typically applied to services like railroads or voice telephone calls, requires a carrier to serve the public at just rates without unreasonable discrimination.

The states’ move to designate social media platforms as common carriers would make it more difficult for them to refuse their service to users. But the designation, the groups argued, does not map cleanly onto the service social media provides, as the platforms make editorial decisions about content they transmit – through moderation and recommendation – in a way companies like voice providers do not.

In all, at least 40 similar briefs have been filed arguing against the laws, according to the Computer and Communications Industry Association, one of the parties in the case.

A set of 15 states with Republican-led legislatures and former president Donald Trump, who had multiple social media accounts suspended after the January 2021 attack on the Capitol, have filed amicus briefs in support of Texas and Florida. The Court is expected to hear oral arguments in the case sometime in 2024.

Improved Age Verification Allows States to Consider Restricting Social Media

WASHINGTON, November 20, 2023 — A Utah law requiring age verification for social media accounts is likely to face First Amendment lawsuits, experts warned during an online panel Wednesday hosted by Broadband Breakfast.

The law, set to take effect in March 2024, mandates that all social media users in Utah verify their age and imposes additional restrictions on minors’ accounts.

The Utah law raises the same constitutional issues that have led courts to strike down similar laws requiring age verification, said Aaron Mackey, free speech and transparency litigation director at the non-profit Electronic Frontier Foundation.

“What you have done is you have substantially burdened everyone’s First Amendment right to access information online that includes both adults and minors,” Mackey said. “You make no difference between the autonomy and First Amendment rights of older teens and young adults” versus young children, he said.

But Donna Rice Hughes, CEO of Enough is Enough, contended that age verification technology has successfully restricted minors’ access to pornography and could be applied to social media as well.

“Utah was one of the first states [to] have age verification technology in place to keep minor children under the age of 18 off of porn sites and it’s working,” she said.

Tony Allen, executive director of the Age Check Certification Scheme, agreed that age verification systems had progressed considerably from a generation ago, when the Supreme Court, in 2002’s Ashcroft v. American Civil Liberties Union, struck down the 1998 Child Online Protection Act. The law had been designed to shield minors from indecent material, but the court ruled that age-verification methods often failed at that task.

Andrew Zack, policy manager at the Family Online Safety Institute, said that his organization welcomed interest in youth safety policies from Utah.

But Zack said, “We still have some concerns about the potential unintended consequences that come with this law,” particularly for teen privacy and expression rights.

Taylor Barkley, director of technology and innovation at the Center for Growth and Opportunity, highlighted the importance of understanding the specific problems the law aims to address. “Policy solutions have trade-offs,” he said, urging that solutions be tailored to the problems identified.

Panelists generally agreed that comprehensive data privacy legislation could help address social media concerns without facing the same First Amendment hurdles.

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, November 15, 2023 – Social Media for Kids in Utah

In March 2023, Utah became the first state to adopt laws regulating kids’ access to social media. This legislative stride was rapidly followed by several states, including Arkansas, Illinois, Louisiana, and Mississippi, with numerous others contemplating similar measures. For nearly two decades, social media platforms enjoyed unbridled growth and influence. The landscape is now changing as lawmakers become more active in shaping the future of digital communication. This transformation calls for a nuanced evaluation of the current state of social media in the United States, particularly in light of Utah’s pioneering role. Is age verification the right way to go? What are the broader implications of this regulatory trend for the future of digital communication and online privacy across the country?

Panelists

  • Andrew Zack, Policy Manager, Family Online Safety Institute
  • Donna Rice Hughes, President and CEO of Enough Is Enough
  • Taylor Barkley, Director of Technology and Innovation, Center for Growth and Opportunity
  • Tony Allen, Executive Director, Age Check Certification Scheme
  • Aaron Mackey, Free Speech and Transparency Litigation Director, Electronic Frontier Foundation
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources

Andrew Zack is the Policy Manager for the Family Online Safety Institute, leading policy and research work relating to online safety issues, laws, and regulations. He works with federal and state legislatures, relevant federal agencies, and industry leaders to develop and advance policies that promote safe and positive online experiences for families. Andrew joined FOSI after five years in Senator Ed Markey’s office, where he worked primarily on education, child welfare, and disability policies. Andrew studied Government and Psychology at the College of William and Mary.

Donna Rice Hughes, President and CEO of Enough Is Enough is an internationally known Internet safety expert, author, speaker and producer. Her vision, expertise and advocacy helped to birth the Internet safety movement in America at the advent of the digital age. Since 1994, she has been a pioneering leader on the frontlines of U.S. efforts to make the internet safer for children and families by implementing a three-pronged strategy of the public, the technology industry and legal community sharing the responsibility to protect children online.

Taylor Barkley is the Director of Technology and Innovation at the Center for Growth and Opportunity, where he manages the research agenda and strategy and represents the technology and innovation portfolio. His primary research and expertise are at the intersection of culture, technology, and innovation. Prior roles in tech policy have been at Stand Together, the Competitive Enterprise Institute, and the Mercatus Center at George Mason University.

Tony Allen is a Chartered Trading Standards Practitioner and an acknowledged specialist in age-restricted sales law and practice. He is the Chair of the UK Government’s Expert Panel on Age Restrictions and Executive Director of a UKAS-accredited conformity assessment body specialising in age and identity assurance testing and certification. He is the Technical Editor of the current international standard for Age Assurance Systems.

Aaron Mackey is EFF’s Free Speech and Transparency Litigation Director. He helps lead cases advancing free speech, anonymity, and privacy online while also working to increase public access to government records. Before joining EFF in 2015, Aaron was in Washington, D.C., where he worked on speech, privacy, and freedom of information issues at the Reporters Committee for Freedom of the Press and the Institute for Public Representation at Georgetown Law.

Breakfast Media LLC CEO Drew Clark has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

Premium Shipping and Anti-discounting Policies Central to FTC’s Amazon Lawsuit

WASHINGTON, October 20, 2023 – While the Federal Trade Commission may have a hard time proving that Amazon has monopolistic power, some of its policies could be construed as anticompetitive.

That was the message antitrust experts delivered on Tuesday at an Information Technology and Innovation Foundation panel on the FTC’s lawsuit against the online retailer, filed in U.S. District Court in Seattle, Washington.

The agency’s complaint argues that Amazon exerts unlawful monopoly power by forcing third-party sellers to fulfill orders on Amazon and by preventing third parties selling products on Amazon from charging lower prices on other platforms.

The first policy coerces third-parties to fulfill orders on Amazon in order to get the e-commerce giant’s premium two-day shipping, the FTC has argued.

The second policy, dubbed anti-discounting, can be used as a form of price control despite having pro-competitive benefits like discouraging free riding and encouraging investment, said Kathleen Bradish, president of the American Antitrust Institute.

Because Amazon requires merchants to maintain a price point on its marketplace, it can create barriers to entry when other marketplaces cannot attract merchants to sell their products at a lower price, she said.

A debate about anti-discounting

Steve Salop, professor of antitrust law at Georgetown University, added that “what Amazon does is it has algorithms that scrape all the relevant websites and if it discovers that the merchant’s product is being sold at a lower price anywhere else it contacts the merchant and says [that it has to] lower the price to [Amazon] or raise the price to” the consumer.

Herb Hovenkamp, an antitrust professor at the University of Pennsylvania, said that anti-discounting policies “only work on a product-by-product basis.”

When you look at each product Amazon sells, there may not be anticompetitive power impacting each product, said Hovenkamp.

Amazon sells almost 12 million products on its e-commerce site, and its market share varies across those products, he said. That makes it hard to argue that Amazon holds a monopoly for every product it sells.

Hovenkamp noted that while Amazon may succeed in areas such as streaming – which has no offline alternative – it struggles in “markets like try-on clothing, tires, groceries…. Product by product, the question of how much competition Amazon faces from offline sellers varies immensely,” he said.

Bilal Sayyed, senior competition counsel at TechFreedom, a non-profit tech policy group, echoed this point: anti-discounting policies can have anticompetitive consequences, but they can also have pro-competitive benefits.

Sellers may not switch to other fulfillment companies because it does not make sense to do so given the “scale that Amazon has,” Bradish said, even if they prefer to use another e-commerce platform. But she acknowledged that having witnesses testify that those policies have impacted their behavior could favor the FTC’s point.

The role of Amazon’s fulfillment services

Amazon’s fulfillment services apply to several products it sells. But the FTC will need to demonstrate that monopoly prices are a result of those fulfillment services, said Hovenkamp.

“The hard part is going to be for the FTC to convince the fact finder that that’s a grouping of sales that’s capable of sustaining a monopoly markup,” he added. “It may be able to do that.”

While a large-scale operation like Amazon might have a cost advantage with fulfillment services, monopoly power will have to be determined by a finding of fact, he said.

By contrast, Sayyed argued that there is a clear pro-competitive justification for sellers using Amazon’s fulfillment services. That comes from the company’s reputation for quickly delivering goods to consumers.

“This idea that parties should be able to take advantage of the platform and the Amazon brand, but then [sell] their merchandise [through] a third party that may or may not meet the same fulfillment and delivery standards, really strikes me as very dangerous ground for the agency” to argue, said Sayyed.

FTC Chair Warns Artificial Intelligence Industry of Vigorous Enforcement

WASHINGTON, October 2, 2023 – The chair of the Federal Trade Commission warned the artificial intelligence industry Wednesday that the agency is prepared to clamp down on any monopolistic practices, even as she proposed simpler rules to avoid confrontation.

“We’re really firing on all cylinders to make sure that we’re meeting the moment and the enormous and urgent need for robust and vigorous enforcement,” Lina Khan said at the AI and Tech Summit hosted by Politico on Wednesday.

Khan emphasized that the FTC’s statute on consumer protection “prohibits unfair deceptive practices” and that provision extends to AI development.

The comments come as artificial intelligence products advance at a brisk pace. The advent of new chatbots – such as those from OpenAI and Google that are driven by the latest advances in large language models – has meant individuals can use AI to create content from basic text prompts.

Khan stated that working with Congress to administer “more simplicity in rules” to all businesses and market participants could promote a more equal playing field for competitors.

“It’s no secret that there are defendants that are pushing certain arguments about the FTC’s authority,” Khan said. “Historically we’ve seen that the rules that are most successful oftentimes are ones that are clear and that are simple and so a regime where you have bright line rules about what practices are permitted, what practices are prohibited, I think could provide a lot more clarity and also be much more administrable.”

Khan’s comments came the day after the agency and 17 states filed an antitrust lawsuit against Amazon, accusing the e-commerce giant of using anticompetitive practices and unfair strategies to sustain its dominance in the space.

“Obviously we don’t take on these cases lightly,” Khan said. “They are very resource intensive for us and so we think it’s a worthwhile use of those resources given just the significance of this market, the significance of online commerce, and the degree to which the public is being harmed and being deprived of the benefits of competition.”

Since Khan was sworn in in 2021, the FTC has filed antitrust lawsuits against tech giants Meta, Microsoft, and X, formerly known as Twitter.

Senate Commerce Committee Passes Two Bills To Protect Children Online

WASHINGTON, July 27, 2023 – The Senate Commerce Committee on Thursday swiftly passed two pieces of legislation aimed at protecting the safety and privacy of children online, exactly one year after the same bills passed the committee but failed to advance further.

The first bill to clear the committee was the Kids Online Safety Act, which requires social media sites to put in place safeguards protecting users under the age of 17 from content that promotes harmful behaviors, such as suicide and eating disorders. KOSA was first introduced in 2022 by Sen. Richard Blumenthal, D-Conn., and Sen. Marsha Blackburn, R-Tenn. It previously won bipartisan support but ultimately failed to become law.

The current version of the bill was reintroduced in May, gaining traction in several hearings, and picked up more than 30 co-sponsors. Several changes were made to the text, including a specific list of online harms and certain exemptions for support services, such as substance abuse groups that might unintentionally suffer from the bill’s requirements.

The bill was also amended Thursday to include a provision proposed by Sen. John Thune, R-S.D., that would require companies to disclose the use of algorithms for content filtering and give users the choice to opt out.

Critics of the bill, however, said the revised version largely resembled the original one and failed to address issues raised before. These concerns included sections that would require tech companies to collect more data to filter content and verify user age, as well as an infringement on children’s free speech.

Sen. Ted Cruz, R-Texas, supported the bill but agreed that more work needs to be done before it moves to the floor. Since the committee’s last markup of KOSA, several states have approved measures concerning children’s online safety that might be inconsistent with the bill’s provisions, he noted, proposing a preemption provision to ensure the bill would be enforced regardless of state laws.

The Children and Teens’ Online Privacy Protection Act, or COPPA 2.0, introduced by Sen. Edward Markey, D-Mass., and Sen. Bill Cassidy, R-La., was the second bill passed out of the committee. It expands on existing legislation that has been in effect since 2000 to protect children from harmful marketing. The bill would make it illegal for websites to collect data on children under the age of 16, outlaw marketing specifically aimed at kids, and allow parents to erase their kids’ information on the websites.

“It is time for Congress to meet this moment and to act with the urgency that these issues demand,” said Sen. Markey.

These bills are among many that seek to protect children from online harms, none of which has made any headway in Congress so far.

UK’s Online Safety Bill Likely to Impact American User Experience

WASHINGTON, July 21, 2023 – The United Kingdom’s Online Safety Bill will impact American users’ experience on various platforms, said panelists at a Broadband Breakfast Live Online event Wednesday.

The Online Safety Bill is the UK’s response to concerns about the negative impact of various internet platforms and applications. The core of the bill addresses illegal content and content that is harmful to children. It places a duty of care on internet sites, including social media platforms, search engines, and online shopping centers, to provide risk assessments for their content, prevent access to illegal content, protect privacy, and prevent children from accessing harmful content. 

The legislation would apply to any business with a substantial user base in the UK, producing unforeseen impacts on the end-user experience, said Amy Peikoff, Chief Policy Officer of UK-based video-streaming platform BitChute.

Even though the legislation is not U.S. legislation, it will affect the tone and content of discussion on U.S.-owned platforms that wish to continue offering their services in jurisdictions where it is enacted, said Peikoff. Already, the European Union’s Digital Services Act is affecting Twitter, which is “throttling its speech” to turn out statistics saying a certain percentage of its content is “healthy,” she claimed.

Large social media companies as we know them are finished, Peikoff said.  

Ofcom, the UK’s communications regulator, will be responsible for providing guidelines and best practices, as well as conducting investigations and audits. It will be authorized to levy revenue-based fines on companies that fail to adhere to the law, and it may enact rules requiring companies to provide user data to the agency and/or screen user messages for harmful content.

Peikoff claimed that the legislation could set off a chain of events, “namely, that platforms like BitChute would be required to affirmatively, proactively scan every single piece of content – comments, videos, whatever posted to the platform – and keep a record of any flags.” She added that U.S-based communication would not be exempt. 

Meta-owned WhatsApp, a popular messaging app, has warned that it will exit the UK market if the legislation requires it to release data about its users or screen their messages, claiming that doing so would “compromise” the privacy of all users and threaten the encryption on its platform. 

Matthew Lesh, director of public policy and communications at the UK think tank Institute of Economic Affairs, said that the bill is a “recipe for censorship on an industrial, mechanical scale.” He warned that many companies will choose to simply block UK-based users from using their services, harming UK competitiveness globally and discouraging investors.  

In addition, Lesh highlighted privacy concerns introduced by the legislation. By levying fines on platforms that host harmful content accessible by children, companies may have to screen for children by requiring users to present government-issued IDs, presenting a major privacy concern for users.  

The primary issue with the bill and similar policies, said Lesh, is that it applies the same moderation policies to all online platforms, which can limit certain speech and stifle healthy discussion and interaction across political lines.

The bill is currently in the final stages of committee consideration in the House of Lords, the UK’s second chamber of parliament. From there, the bill will return to the House of Commons, where it will either be amended or be accepted and become law. General support for the bill in the UK’s parliament suggests it will be implemented sometime next year.

This follows considerable debate in the United States regarding content moderation, much of which centers on possible reform of Section 230. Section 230 protects platforms from being treated as the publisher or speaker of information originating from a third party, shielding them from liability for such posts.


Wednesday, July 19, 2023 – The UK’s Online Safety Bill

The UK’s Online Safety Bill, which seeks to make the country “the safest place in the world to be online,” has seen as much upheaval as the nation itself in the last four years. Four prime ministers, one Brexit and one pandemic later, it’s just a matter of time until the bill finally passes the House of Lords and eventually becomes law. Several tech companies, including WhatsApp, Signal, and Wikipedia, have argued against its age limitations and its breach of end-to-end encryption. Will this legislation serve as a model for governments worldwide to regulate online harms? What does it mean for the future of U.S. social media platforms?

Panelists

  • Amy Peikoff, Chief Policy Officer, BitChute
  • Matthew Lesh, Director of Public Policy and Communications, Institute of Economic Affairs
  • Drew Clark (moderator), Editor and Publisher, Broadband Breakfast

Panelist resources

Amy Peikoff is Chief Policy Officer for BitChute. She holds a BS in Math/Applied Science and a JD from UCLA, as well as a PhD in Philosophy from University of Southern California, and has focused in her academic work and legal activism on issues related to the proper legal protection of privacy. In 2020, she became Chief Policy Officer for the free speech social media platform, Parler, where she served until Parler was purchased in April 2023.

Matthew Lesh is the Director of Public Policy and Communications at the Institute of Economic Affairs. Matthew often appears on television and radio, is a columnist for London’s CityAM newspaper, and is a regular writer for publications such as The Times, The Telegraph and The Spectator. He is also a Fellow of the Adam Smith Institute and the Institute of Public Affairs.

Drew Clark is CEO of Breakfast Media LLC. He has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.


Illustration from the Spectator

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

]]>
https://broadbandbreakfast.com/2023/07/uks-online-safety-bill-likely-to-impact-american-user-experience/feed/ 0 52497
New Tool Measures Economic Impact of Internet Shutdowns https://broadbandbreakfast.com/2023/07/new-tool-measures-economic-impact-of-internet-shutdowns/?utm_source=rss&utm_medium=rss&utm_campaign=new-tool-measures-economic-impact-of-internet-shutdowns https://broadbandbreakfast.com/2023/07/new-tool-measures-economic-impact-of-internet-shutdowns/#respond Mon, 10 Jul 2023 20:36:06 +0000 https://broadbandbreakfast.com/?p=52234 July 10, 2023 – NetLoss, a new measuring tool launched by the Internet Society, shows the impact of internet shutdowns on economies including those of Iraq, Sudan and Pakistan, where government-mandated outages have cost millions of dollars in a matter of hours or days.

NetLoss, launched on June 28, calculated that a four-hour shutdown in Iraq in July, implemented by the government to prevent cheating during high school exam season, resulted in an estimated loss of $1.6 million. In May, a shutdown in Pakistan cost more than $13 million over the span of four days, while a five-day internet outage in Sudan in April cost the economy more than $4 million and resulted in the loss of 560 jobs.

NetLoss is unique among internet assessment tools in that it also estimates subsequent economic impacts on the unemployment rate, foreign direct investment, and the risk of future shutdowns, according to the advocacy group Internet Society. It provides data on both ongoing and anticipated shutdowns, drawing from a historical dataset covering more than 90 countries dating back to 2019.
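The Internet Society has not published NetLoss’s formula in these materials, but per-hour figures like those above can be approximated with a simple back-of-envelope model: take the share of GDP attributable to the internet economy, pro-rate it to the outage window, and scale by how completely connectivity was cut. The function below is a hypothetical sketch of that arithmetic only, not the Internet Society’s actual methodology; all inputs in the example are made up for illustration.

```python
def estimate_shutdown_loss(annual_gdp_usd: float,
                           internet_economy_share: float,
                           outage_hours: float,
                           intensity: float = 1.0) -> float:
    """Rough cost of an internet shutdown, in U.S. dollars.

    annual_gdp_usd: the country's annual GDP in dollars
    internet_economy_share: fraction of GDP tied to internet activity (0-1)
    outage_hours: duration of the shutdown in hours
    intensity: 1.0 for a total blackout, lower for partial throttling
    """
    # Pro-rate the internet economy's annual output to a single hour,
    # then multiply by how long (and how completely) it was offline.
    hourly_internet_output = annual_gdp_usd * internet_economy_share / (365 * 24)
    return hourly_internet_output * outage_hours * intensity


# Illustrative (made-up) inputs: a $250 billion economy where 2% of GDP
# depends on the internet, fully shut down for four hours.
loss = estimate_shutdown_loss(250e9, 0.02, 4)
print(f"Estimated loss: ${loss:,.0f}")
```

As the article notes, NetLoss layers additional factors on top of any simple pro-rata calculation, such as civil-unrest data, election timing and foreign-investment effects, which is why its published estimates differ from a naive model like this one.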

“The calculator is a major step forward for the community of journalists, policymakers, technologists and other stakeholders who are pushing back against the damaging practice of Internet shutdowns,” said Andrew Sullivan, CEO of the Internet Society. “Its groundbreaking and fully transparent methodology will help show governments around the world that shutting down the Internet is never a solution.”

The tool relies on open-access databases, including the Internet Society Pulse’s Shutdown data, the World Bank’s economic indicators, the Armed Conflict Location and Event Data Project’s civil unrest data, Yale University’s election data, and other relevant socioeconomic factors. To stay up to date with real-time changes, the data will be updated quarterly.

According to the press release, internet shutdowns worldwide peaked in 2022 with governments increasingly blocking internet services due to concerns over civil unrest or cybersecurity threats. These disruptions are extremely damaging to the economy, read the document, as they impede online commercial activities and expose companies and the economy to financial and reputational risks.

]]>
https://broadbandbreakfast.com/2023/07/new-tool-measures-economic-impact-of-internet-shutdowns/feed/ 0 52234
Meta’s New Platform Threads is Called a Potential ‘Twitter-Killer’ https://broadbandbreakfast.com/2023/07/metas-new-platform-threads-is-called-a-potential-twitter-killer/?utm_source=rss&utm_medium=rss&utm_campaign=metas-new-platform-threads-is-called-a-potential-twitter-killer https://broadbandbreakfast.com/2023/07/metas-new-platform-threads-is-called-a-potential-twitter-killer/#respond Fri, 07 Jul 2023 18:27:57 +0000 https://broadbandbreakfast.com/?p=52207 WASHINGTON, July 7, 2023 – Meta’s new social media platform Threads, released on Wednesday, could be the end of Twitter, said panelists at a National Digital Roundtable Advisory Board event on Friday.

The app provides billions of users with an alternative to Twitter amid growing dissatisfaction with the Elon Musk-owned social media platform. Outrage ensued when Musk announced on July 1 that most Twitter users would be limited to reading just 600 tweets per day under a tiered system that caps tweet views based on verification status and length of subscription.

In an official release, Twitter claimed the tweet limit was “to ensure the authenticity of our user base” and to “remove spam and bots from our platform.” The company’s new CEO, Linda Yaccarino, tweeted that “when you have a mission like Twitter – you need to make big moves to keep strengthening the platform.”  

Threads took advantage of Musk’s announcement, and the timing paid off in the number of users who immediately joined, said Kevin Coroneos, director of digital advocacy strategy at the Investment Company Institute.

According to Meta CEO Mark Zuckerberg, 10 million people signed up for Threads within hours of its release. The numbers continue to soar, surpassing 20 million sign-ups and placing the app at number one on both the Google Play Store and the App Store.

Threads bears a resemblance to Twitter in appearance, allowing users to post messages, engage in conversations with others and express appreciation through likes or reposts. However, it has a fundamental difference: each account is intertwined with the user’s Instagram account, meaning that Instagram followers are automatically transferred to Threads.

The blend of two platforms that are typically personal (Instagram) and professional (Twitter) will create a unique platform that is likely to keep growing, said Patrick Kane, head of digital at the British Embassy Washington. It also has the added benefit that new users do not start at square one, but instead come to the unfamiliar platform with connections and followers from their Instagram accounts.

We may see more influencers moving into a world of text-based posts which they didn’t have the platform for before, said Kane.  

Although it is uncertain whether Threads will prove to be the “Twitter-killer” that many predict, its potential to do so will be confirmed if Threads is able to build an advertising-revenue model, said Coroneos.

Twitter is reactive and fast and it will put up a good fight, added Kane. Meta has a good chance as it already has the infrastructure to do content moderation and advertising campaigns as well as an established and engaged user base. 

For some brands, Threads is the advertising platform they were looking for, added Coroneos, suggesting that the platform may take off for companies that rely on text-heavy advertising or that market to an intellectually inclined audience.

Threads does not currently have a large global footprint, as it is not yet approved for use in the European Union and is only available to users in the U.S. and the United Kingdom.

“Our vision is to take the best parts of Instagram and create a new experience for text, ideas and discussing what’s on your mind,” Zuckerberg said in an Instagram post. “I think the world needs this kind of friendly community, and I’m grateful to all of you who are part of Threads from day one.” 

]]>
https://broadbandbreakfast.com/2023/07/metas-new-platform-threads-is-called-a-potential-twitter-killer/feed/ 0 52207
Experts Advocate Federal Agency to Oversee Children Online Safety https://broadbandbreakfast.com/2023/06/experts-advocate-federal-agency-to-oversee-children-online-safety/?utm_source=rss&utm_medium=rss&utm_campaign=experts-advocate-federal-agency-to-oversee-children-online-safety https://broadbandbreakfast.com/2023/06/experts-advocate-federal-agency-to-oversee-children-online-safety/#respond Thu, 15 Jun 2023 19:53:52 +0000 https://broadbandbreakfast.com/?p=51701 WASHINGTON, June 15, 2023 – Kids’ safety experts urged the government to establish a federal agency dedicated to targeting online sexual predators during a Tuesday webinar hosted by the Cato Institute.

The federal agency would be more “effective” than the disjointed, state-by-state legislative approach in addressing the problem of children’s internet safety, according to experts.

Growing concerns about social media’s harms have prompted lawmakers to propose several pieces of legislation to protect children’s safety and privacy on the internet. Most of these proposals, however, have stalled in Congress, leaving no clear path forward for the federal government to address the issue.

Several states have thus taken matters into their own hands. Montana’s TikTok ban will become effective on Jan. 1, 2024. A number of states, including Utah, Arkansas, California and, most recently, Louisiana, have passed laws imposing age limits or requiring parental consent for minors to open accounts on certain platforms.

However, these state bills have come under fire for having wildly varying criteria. Experts also worry they risk infringing on children’s free speech and privacy rights as companies have to collect more data from users to comply with such laws.

“The minute we get a legislation in one state or a judge in another state to weigh in with ideas that really don’t make sense and aren’t enforceable, it’s just going to create more chaos,” said child welfare expert Maureen Flatley during the webinar.

Flatley argued that these measures are mostly “performative” and will not be helpful, since they do not address the underlying criminal activity. She said she believed the problems with child safety lie not with social media companies, but with online predators who take advantage of those platforms to prey on children. To this end, she advocated for a government agency specifically tasked with investigating and prosecuting internet sexual abusers of children.

Andrew Zack, policy manager at the Family Online Safety Institute, echoed that opinion, calling for a “chief online safety officer” to deal with child sexual abuse material online.

“I think that’s where we should be focusing our efforts first and most vociferously and energetically when it comes to safety online for teens and kids,” said Zack.

Earlier in May, the Biden administration announced an interagency task force on kids’ online health and safety, led by the Department of Health and Human Services. It will examine internet threats to minors, recommend methods to address harms, and publish standards for transparency reports and audits by spring 2024.

]]>
https://broadbandbreakfast.com/2023/06/experts-advocate-federal-agency-to-oversee-children-online-safety/feed/ 0 51701
Experts Debate TikTok Ban, Weighing National Security Against Free Speech https://broadbandbreakfast.com/2023/05/experts-debate-tiktok-ban-weighing-national-security-against-free-speech/?utm_source=rss&utm_medium=rss&utm_campaign=experts-debate-tiktok-ban-weighing-national-security-against-free-speech https://broadbandbreakfast.com/2023/05/experts-debate-tiktok-ban-weighing-national-security-against-free-speech/#respond Fri, 26 May 2023 18:19:20 +0000 https://broadbandbreakfast.com/?p=51218 WASHINGTON, May 26, 2023 — With lawmakers ramping up their rhetoric against TikTok, industry and legal experts are divided over whether a ban is the best solution to balance competing concerns about national security and free speech.

Proponents of a TikTok ban argue that the app poses an “untenable threat” because of the amount of data it collects — including user location, search history and biometric data — as well as its relationship with the Chinese government, said Joel Thayer, president of the Digital Progress Institute, at a debate hosted Wednesday by Broadband Breakfast.

These fears have been cited by state and federal lawmakers in a wide range of proposals that would place various restrictions on TikTok, including a controversial bill that would extend to all technologies connected to a “foreign adversary.” More than two dozen states have already banned TikTok on government devices, and Montana recently became the first state to ban the app altogether.

TikTok on Monday sued Montana over the ban, arguing that the “unprecedented and extreme step of banning a major platform for First Amendment speech, based on unfounded speculation about potential foreign government access to user data and the content of the speech, is flatly inconsistent with the Constitution.”

Thayer contested the lawsuit’s claim, saying that “the First Amendment does not prevent Montana or the federal government from regulating non-expressive conduct, especially if it’s illicit.”

However, courts have consistently held that the act of communicating and receiving information cannot be regulated separately from speech, said David Greene, civil liberties director and senior staff attorney at the Electronic Frontier Foundation.

“This is a regulation of expression — it’s a regulation of how people communicate with each other and how they receive communications,” he said.

Stringent regulations could protect privacy without suppressing speech

A complete ban of TikTok suppresses far more speech than is necessary to preserve national security interests, making less intrusive options preferable, said Daniel Lyons, nonresident senior fellow at the American Enterprise Institute.

TikTok is currently engaged in a $1.5 billion U.S. data security initiative that will incorporate several layers of government and private sector oversight into its privacy and content moderation practices, in addition to moving all U.S. user data to servers owned by an Austin-based software company.

This effort, nicknamed Project Texas, “strikes me as a much better alternative that doesn’t have the First Amendment problems that an outright TikTok ban has,” Lyons said.

Greene noted that many online platforms — both within and outside the U.S. — collect and sell significant amounts of user data, creating the potential for foreign adversaries to purchase it.

“Merely focusing on TikTok is an underinclusive way of addressing these concerns about U.S. data privacy,” he said. “It would be really great if Congress would actually take a close look at comprehensive data privacy legislation that would address that problem.”

Greene also highlighted the practical barriers to banning an app, pointing out that TikTok is accessible through a variety of alternative online sources. These sources tend to be much less secure than the commonly used app stores, meaning that a ban focused on app stores is actually “making data more vulnerable to foreign exploitation,” he said.

TikTok risks severe enough to warrant some action, panelists agree

Although concerns about suppressing speech are valid, the immediate national security risks associated with the Chinese government accessing a massive collection of U.S. user data are severe enough to warrant consideration of a ban, said Anton Dahbura, executive director of the Johns Hopkins University Information Security Institute.

“Will it hurt people who are building businesses from it? Absolutely,” he said. “But until we have safeguards in place, we need to be cautious about business as usual.”

These safeguards should include security audits, data flow monitoring and online privacy legislation, Dahbura continued.

Thayer emphasized the difference between excessive data collection practices and foreign surveillance.

“I think we all agree that there should be a federal privacy law,” he said. “That doesn’t really speak to the fact that there are potential backdoors, that there are these potential avenues to continue to surveil… So I say, why not both?”

Lyons agreed that TikTok’s “unique threat” might warrant action beyond a general privacy law, but maintained that a nationwide ban was “far too extreme.”

Even if further action against TikTok is eventually justified, Greene advocated for federal privacy legislation to be the starting point.  “We’re spending a lot of time talking about banning TikTok, which again, is going to affect millions of Americans… and we’re doing nothing about having data broadly collected otherwise,” he said. “At a minimum, our priorities are backwards.”

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, May 24, 2023 – Debate: Should the U.S. Ban TikTok?

Since November, more than two dozen states have banned TikTok on government devices. Montana recently became the first state to pass legislation that would ban the app altogether, and several members of Congress have advocated for extending a similar ban to the entire country. Is TikTok’s billion-dollar U.S. data security initiative a meaningful step forward, or just an empty promise? How should lawmakers navigate competing concerns about national security, free speech, mental health and a competitive marketplace? This special session of Broadband Breakfast Live Online will engage advocates and critics in an Oxford-style debate over whether the U.S. should ban TikTok.

Panelists

Pro-TikTok Ban

  • Anton Dahbura, Executive Director, Johns Hopkins University Information Security Institute
  • Joel Thayer, President, Digital Progress Institute

Anti-TikTok Ban

  • David Greene, Civil Liberties Director and Senior Staff Attorney, Electronic Frontier Foundation
  • Daniel Lyons, Nonresident Senior Fellow, American Enterprise Institute

Moderator

  • Drew Clark, Editor and Publisher, Broadband Breakfast

Anton Dahbura serves as co-director of the Johns Hopkins Institute for Assured Autonomy, and is the executive director of the Johns Hopkins University Information Security Institute. Since 2012, he has been an associate research scientist in the Department of Computer Science. Dahbura is a fellow at the Institute of Electrical and Electronics Engineers, served as a researcher at AT&T Bell Laboratories, was an invited lecturer in the Department of Computer Science at Princeton University and served as research director of the Motorola Cambridge Research Center.

Joel Thayer, president of the Digital Progress Institute, was previously an associate at Phillips Lytle. Before that, he served as Policy Counsel for ACT | The App Association, where he advised on legal and policy issues related to antitrust, telecommunications, privacy, cybersecurity and intellectual property in Washington, D.C. His experience also includes working as a legal clerk for FCC Chairman Ajit Pai and FTC Commissioner Maureen Ohlhausen.

David Greene, senior staff attorney and civil liberties director at the Electronic Frontier Foundation, has significant experience litigating First Amendment issues in state and federal trial and appellate courts. He currently serves on the steering committee of the Free Expression Network, the governing committee of the ABA Forum on Communications Law, and on advisory boards for several arts and free speech organizations across the country. Before joining EFF, David was for twelve years the executive director and lead staff counsel for First Amendment Project.

Daniel Lyons is a professor and the Associate Dean of Academic Affairs at Boston College Law School, where he teaches telecommunications, administrative and cyber law. He is also a nonresident senior fellow at the American Enterprise Institute, where he focuses on telecommunications and internet regulation. Lyons has testified before Congress and state legislatures, and has participated in numerous proceedings at the Federal Communications Commission.

Drew Clark (moderator) is CEO of Breakfast Media LLC. He has led the Broadband Breakfast community since 2008. An early proponent of better broadband, better lives, he initially founded the Broadband Census crowdsourcing campaign for broadband data. As Editor and Publisher, Clark presides over the leading media company advocating for higher-capacity internet everywhere through topical, timely and intelligent coverage. Clark also served as head of the Partnership for a Connected Illinois, a state broadband initiative.

Graphic by SF Freelancer/Adobe Stock used with permission

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

]]>
https://broadbandbreakfast.com/2023/05/experts-debate-tiktok-ban-weighing-national-security-against-free-speech/feed/ 0 51218
Supreme Court Sides With Google and Twitter, Leaving Section 230 Untouched https://broadbandbreakfast.com/2023/05/supreme-court-sides-with-google-and-twitter-leaving-section-230-untouched/?utm_source=rss&utm_medium=rss&utm_campaign=supreme-court-sides-with-google-and-twitter-leaving-section-230-untouched https://broadbandbreakfast.com/2023/05/supreme-court-sides-with-google-and-twitter-leaving-section-230-untouched/#respond Fri, 19 May 2023 02:02:43 +0000 https://broadbandbreakfast.com/?p=51050 WASHINGTON, May 18, 2023 — The Supreme Court on Thursday sided with Google and Twitter in a pair of high-profile cases involving intermediary liability for user-generated content, marking a significant victory for online platforms and other proponents of Section 230.

In Twitter v. Taamneh, the court ruled that Twitter could not be held liable for abetting terrorism by hosting terrorist content. The unanimous decision was written by Justice Clarence Thomas, who had previously signaled interest in curtailing liability protections for online platforms.

“Notably, the two justices who have been most critical of Section 230 and internet platforms said nothing of the sort here,” said Ari Cohn, free speech counsel at TechFreedom.

In a brief unsigned opinion remanding Gonzalez v. Google to the Ninth Circuit, the court declined to address Section 230, saying that the case “appears to state little, if any, plausible claim for relief.”

A wide range of tech industry associations and civil liberties advocates applauded the decision to leave Section 230 untouched.

“Free speech online lives to fight another day,” said Patrick Toomey, deputy director of the ACLU’s National Security Project. “Twitter and other apps are home to an immense amount of protected speech, and it would be devastating if those platforms resorted to censorship to avoid a deluge of lawsuits over their users’ posts.”

John Bergmayer, legal director at Public Knowledge, said that lawmakers should take note of the rulings as they continue to debate potential changes to Section 230.

“Over the past several years, we have seen repeated legislative proposals that would remove Section 230 protections for various platform activities, such as content moderation decisions,” Bergmayer said. “But those activities are fully protected by the First Amendment, and removing Section 230 would at most allow plaintiffs to waste time and money in court, before their inevitable loss.”

Instead of weakening liability protections, Bergmayer argued that Congress should focus on curtailing the power of large platforms by strengthening antitrust law and promoting competition.

“Many complaints about Section 230 and content moderation policies amount to concerns about competition and the outsize influence of major platforms,” he said.

The decision was also celebrated by Sen. Ron Wyden, D-Ore., one of the statute’s original co-authors.

“Despite being unfairly maligned by political and corporate interests that have turned it into a punching bag for everything wrong with the internet, the law Representative [Chris] Cox and I wrote remains vitally important to allowing users to speak online,” Wyden said in a statement. “While tech companies still need to do far better at policing heinous content on their sites, gutting Section 230 is not the solution.”

However, other lawmakers expressed disappointment with the court’s decision, with some — including Rep. Cathy McMorris Rodgers, R-Wash., chair of the House Energy and Commerce Committee — saying that it “underscores the urgency for Congress to enact needed reforms to Section 230.”

]]>
https://broadbandbreakfast.com/2023/05/supreme-court-sides-with-google-and-twitter-leaving-section-230-untouched/feed/ 0 51050
White House Meets AI Leaders, FTC Claims Meta Violated Privacy Order, Graham Targets Section 230 https://broadbandbreakfast.com/2023/05/white-house-meets-ai-leaders-ftc-claims-meta-violated-privacy-order-graham-targets-section-230/?utm_source=rss&utm_medium=rss&utm_campaign=white-house-meets-ai-leaders-ftc-claims-meta-violated-privacy-order-graham-targets-section-230 https://broadbandbreakfast.com/2023/05/white-house-meets-ai-leaders-ftc-claims-meta-violated-privacy-order-graham-targets-section-230/#respond Fri, 05 May 2023 13:40:46 +0000 https://broadbandbreakfast.com/?p=50637 May 5, 2023 — Vice President Kamala Harris and other senior officials on Thursday met with the CEOs of Alphabet, Anthropic, Microsoft and OpenAI to discuss the risks associated with artificial intelligence technologies, following the administration’s announcement of $140 million in funding for national AI research.

President Joe Biden briefly stopped by the meeting, telling the tech leaders that “what you’re doing has enormous potential and enormous danger.”

Government officials emphasized the importance of responsible leadership and called on the CEOs to be more transparent about their AI systems with both policymakers and the general public.

“The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products,” Harris said in a statement after the meeting.

In addition to the new investment in AI research, the White House announced that the Office of Management and Budget would be releasing proposed policy guidance on government usage of AI systems for public comment.

The initiatives announced Thursday are “an important first step,” wrote Adam Conner, vice president of technology policy at the Center for American Progress. “But the White House can and should do more. It’s time for President Joe Biden to issue an executive order that requires federal agencies to implement the Blueprint for an AI Bill of Rights and take other key actions to address the challenges and opportunities of AI.”

FTC claims Facebook violated privacy order

The Federal Trade Commission on Wednesday proposed significant modifications to its 2020 privacy settlement with Facebook, accusing the company of violating children’s privacy protections and improperly sharing user data with third parties.

The suggested changes would include a blanket prohibition against monetizing the data of underage users and limits on the uses of facial recognition technology, among several other constraints.

“Facebook has repeatedly violated its privacy promises,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection. “The company’s recklessness has put young users at risk, and Facebook needs to answer for its failures.”

Although the agency voted unanimously to issue the order, Commissioner Alvaro Bedoya expressed concerns about whether the changes exceeded the FTC’s limited order modification authority. “I look forward to hearing additional information and arguments and will consider these issues with an open mind,” he said.

Meta responded to the FTC’s action with a lengthy statement calling it a “political stunt” and outlining the changes that have been implemented since the original order.

“Let’s be clear about what the FTC is trying to do: usurp the authority of Congress to set industry-wide standards and instead single out one American company while allowing Chinese companies, like TikTok, to operate without constraint on American soil,” wrote Andy Stone, Meta’s director of policy communications, in a statement posted to Twitter.

Meta now has thirty days to respond to the proposed changes. “We will vigorously fight this action and expect to prevail,” Stone said.

Sen. Graham threatens to repeal Section 230 if tech lobby kills EARN IT Act

The Senate Judiciary Committee on Thursday unanimously approved the Eliminating Abusive and Rampant Neglect of Interactive Technologies Act, a controversial bill that would create new carveouts to Section 230 in an attempt to combat online child sexual abuse material.

But Sen. Lindsey Graham, R-S.C., the bill’s cosponsor and ranking member of the committee, expressed doubt about the legislation’s future, claiming that “the political and economic power of social media companies is overwhelming.”

“I have little hope that common-sense proposals like this will ever become law because of the lobbying power these companies have at their disposal,” he said in a statement on Thursday. “My next approach is going to be to sunset Section 230 liability protection for social media companies.”

If Congress fails to pass legislation regulating social media companies, Graham continued, “it’s time to open up the American courtrooms as a way to protect consumers.”

However, large tech companies are not the only critics of the EARN IT Act. The American Civil Liberties Union on Thursday urged Congress to reject the proposed legislation, alongside two other bills related to digital privacy.

“These bills purport to hold powerful companies accountable for their failure to protect children and other vulnerable communities from dangers on their services when, in reality, increasing censorship and weakening encryption would not only be ineffective at solving these concerns, it would in fact exacerbate them,” said Cody Venzke, ACLU senior policy counsel.

]]>
https://broadbandbreakfast.com/2023/05/white-house-meets-ai-leaders-ftc-claims-meta-violated-privacy-order-graham-targets-section-230/feed/ 0 50637
FCC RDOF Penalties, KOSA Reintroduced, Lawmakers Explore AI Regulation https://broadbandbreakfast.com/2023/05/fcc-proposes-rdof-penalties-kosa-reintroduced-to-continued-controversy-lawmakers-explore-ai-regulation/?utm_source=rss&utm_medium=rss&utm_campaign=fcc-proposes-rdof-penalties-kosa-reintroduced-to-continued-controversy-lawmakers-explore-ai-regulation https://broadbandbreakfast.com/2023/05/fcc-proposes-rdof-penalties-kosa-reintroduced-to-continued-controversy-lawmakers-explore-ai-regulation/#respond Tue, 02 May 2023 17:28:22 +0000 https://broadbandbreakfast.com/?p=50555 May 2, 2023 — The Federal Communications Commission on Monday proposed more than $8 million in fines against 22 applicants for the Rural Digital Opportunity Fund Phase I auction, alleging that they violated FCC requirements by defaulting on their bids.

The defaults prevented an estimated 293,128 locations in 31 states from receiving new investments in broadband infrastructure, according to a press release from the FCC.

“When applicants fail to live up to their obligations in a broadband deployment program, it is a setback for all of us,” Commissioner Geoffrey Starks said in a statement. “Defaulting applicants pay a fine, but rural communities that have already waited too long for broadband pay a larger toll.”

The FCC has previously put forward penalties against several other RDOF applicants for defaulting, including a proposed $4.3 million in fines against 73 applicants in July.

These enforcement actions are intended to show that the agency “takes seriously its commitment to hold applicants accountable and ensure the integrity of our universal service funding,” said FCC Chairwoman Jessica Rosenworcel.

Kids Online Safety Act reintroduced

The Kids Online Safety Act was reintroduced on Tuesday by Sens. Marsha Blackburn, R-Tenn., and Richard Blumenthal, D-Conn., sparking a mix of praise and criticism from a broad range of youth health, civil liberties and technology organizations.

Although KOSA ultimately failed to pass in 2022, it won rare bipartisan support, and momentum continued to build ahead of its official reintroduction this session through energetic promotion in both House and Senate hearings.

“We need to hold these platforms accountable for their role in exposing our kids to harmful content, which is leading to declining mental health, higher rates of suicide, and eating disorders… these new laws would go a long way in safeguarding the experiences our children have online,” said Johanna Kandel, CEO of the National Alliance for Eating Disorders, in a Tuesday press release applauding the legislation.

However, KOSA’s opponents expressed disappointment that the reintroduced bill appeared largely similar to the original version, failing to substantially address several previous criticisms.

“KOSA’s sponsors seem determined to ignore repeated warnings that KOSA violates the First Amendment and will in fact harm minors,” said Ari Cohn, free speech counsel at TechFreedom, in a press release. “Their unwillingness to engage with these concerns in good faith is borne out by their superficial revisions that change nothing about the ultimate effects of the bill.”

Cohn also claimed that the bill did not clearly establish what constitutes reason for a platform to know that a user is underage.

“In the face of that uncertainty, platforms will clearly have to age-verify all users to avoid liability — or worse, avoid obtaining any knowledge whatsoever and leave minors without any protections at all,” he said. “The most ‘reasonable’ and risk-averse course remains to block minors from accessing any content related to disfavored subjects, ultimately to the detriment of our nation’s youth.”

In addition, the compliance obligations imposed by KOSA could actually undermine teens’ online privacy, argued Matt Schruers, president of the Computer & Communications Industry Association.

“Governments should avoid compliance requirements that would compel digital services to collect more personal information about their users — such as geolocation information and a government-issued identification — particularly when responsible companies are instituting measures to collect and store less data on customers,” Schruers said in a statement.

Lawmakers introduce series of bills targeting AI

Amid growing calls for federal regulation of artificial intelligence, Rep. Yvette Clarke, D-N.Y., on Tuesday introduced a bill that would require disclosure of AI-generated content in political ads.

“Unfortunately, our current laws have not kept pace with the rapid development of artificial intelligence technologies,” Clarke said in a press release. “If AI-generated content can manipulate and deceive people on a large scale, it can have devastating consequences for our national security and election security.”

Other lawmakers have taken a broader approach to regulating the rapidly evolving technology. Legislation introduced Friday by Sen. Michael Bennet, D-Colo., would create a cabinet-level AI task force to recommend specific legislative and regulatory reforms for AI-related privacy protections, biometric identification standards and risk assessment frameworks.

“As the deployment of AI accelerates, the federal government should lead by example to ensure it uses the technology responsibly,” Bennet said in a press release. “Americans deserve confidence that our government’s use of AI won’t violate their rights or undermine their privacy.”

Earlier in April, Sen. Chuck Schumer, D-N.Y., proposed a high-level AI policy framework focused on ensuring transparency and accountability.

FTC Funding Request Harshly Criticized by Republican Lawmakers

WASHINGTON, April 19, 2023 — House Republicans expressed skepticism about the Federal Trade Commission’s requested budget increase during a Tuesday hearing, accusing the agency of overstepping its jurisdiction in pursuit of a progressive enforcement agenda.

The hearing of the Innovation, Data and Commerce Subcommittee showcased sharp partisan tension over Chair Lina Khan’s aggressive approach to antitrust — heightened by the fact that both Republican seats on the five-member agency remain vacant.

Khan, alongside Democratic Commissioners Rebecca Slaughter and Alvaro Bedoya, argued that the $160 million budget increase was necessary for maintaining existing enforcement efforts as well as “activating additional authorities that Congress has given us.”

But Republican lawmakers seemed unwilling to grant the requested funds, which would bring the agency’s total annual budget to $590 million.

“You seem to be squandering away the resources that we currently give you in favor of pursuing unprecedented progressive legal theories,” said Subcommittee Chair Gus Bilirakis, R-Fla.

“What is clearly needed — before Congress considers any new authorities or funding — are reforms, more guardrails and increased transparency to ensure you are accountable to the American people,” said Rep. Cathy McMorris Rodgers, R-Wash., chair of the Energy and Commerce Committee.

Rep. Frank Pallone, D-N.J., ranking member of the full committee, defended the funding request by saying the FTC has “one of the broadest purviews of any federal agency: fighting deceptive and unfair business practices and anti-competitive conduct across the entire economy.”

“Managing this portfolio with less than fourteen hundred employees is no small feat,” Pallone said, noting that the FTC currently has fewer employees than it did 45 years ago.

FTC highlights potential AI threats, other tech developments

FTC staff and Democratic lawmakers have been flagging concerns about understaffing at the agency for years, arguing that rapid technological and market changes have increased the scope and complexity of the agency’s role.

“The same lawyers who ensure that social media companies have robust privacy and data security programs are making sure labels on bed linens are correct,” testified former Chief Technologist Ashkan Soltani at a Senate hearing in 2021.

In their written testimony, commissioners detailed several emerging priorities related to technological developments — such as combatting online harms to children and protecting sensitive consumer data shared with health websites — and emphasized the corresponding need for increased resources.

The agency is also preparing to pursue violations related to artificial intelligence technologies, Khan said, as the “turbocharging of fraud and scams that could be enabled by these tools are a serious concern.”

But several tech-focused trade groups, including the Computer & Communications Industry Association, have signaled opposition to FTC expansion.

“The FTC can best carry out its mission if it heeds the committee’s call to return its focus to consumer needs and consumer fraud — rather than pursuing cases rooted in novel theories against American companies,” CCIA President Matt Schruers said after the hearing.

The Consumer Technology Association urged lawmakers to reject the requested budget increase in a letter sent Friday.

“In 2022, agency data shows consumers reported losing almost $8.8 billion to scams… Despite this mounting caseload of fraud, identity theft and related cases, the FTC appears more interested in attacking U.S. tech companies, to the detriment of consumers who have benefitted from an unparalleled explosion of innovative, online-based products and services,” CTA President Gary Shapiro wrote.

Google CEO Promotes AI Regulation, GOP Urges TikTok Ban for Congress Members, States Join DOJ Antitrust Suit

April 18, 2023 — Google CEO Sundar Pichai on Sunday called for increased regulation of artificial intelligence, warning that the rapidly developing technology poses broad societal risks.

“The pace at which we can think and adapt as societal institutions compared to the pace at which the technology’s evolving — there seems to be a mismatch,” Pichai said in an interview with CBS News.


Widespread AI applications could lead to a dramatic uptick in online disinformation, as it becomes increasingly easy to create and spread fake news, images and videos, Pichai warned.

Google recently released a series of recommendations for regulating AI, advocating for “a sectoral approach that builds on existing regulation” and cautioning against “over-reliance on human oversight as a solution to AI issues.”

But the document also noted that “while self-regulation is vital, it is not enough.”

Pichai emphasized this point, calling for broad multisector collaboration to best determine the shape of AI regulation.

“The development of this needs to include not just engineers, but social scientists, ethicists, philosophers and so on,” he said. “And I think these are all things society needs to figure out as we move along — it’s not for a company to decide.”

Republicans call to ban members of Congress from personal TikTok use

A group of Republican lawmakers on Monday urged the House and Senate rules committees to ban members of Congress from using TikTok, citing national security risks and the need to “lead by example.”

Congress banned use of the app on government devices in late 2022, but several elected officials have maintained accounts on their personal devices.

In Monday’s letter, Republican lawmakers argued that the recent hearing featuring TikTok CEO Shou Zi Chew made it “blatantly clear to the public that the China-based app is mining data and potentially spying on American citizens.”

“It is troublesome that some members continue to disregard these clear warnings and are even encouraging their constituents to use TikTok to interface with their elected representatives – especially since some of these users are minors,” the letter continued.

TikTok is facing hostility from the other side of the aisle as well. On Thursday, Rep. Frank Pallone, D-N.J., sent Chew a list of questions about the app’s privacy and safety practices that House Democrats claimed were left unanswered at the March hearing.

Meanwhile, Montana lawmakers voted Friday to ban TikTok on all personal devices, becoming the first state to pass such legislation. The bill now awaits the signature of Gov. Greg Gianforte — who was one of several state leaders last year to mimic Congress in banning TikTok from government devices.

Nine additional states join DOJ’s antitrust lawsuit against Google

The Justice Department announced on Monday that nine additional states joined its antitrust lawsuit over Google’s alleged abuse of the digital advertising market.

The attorneys general of Arizona, Illinois, Michigan, Minnesota, Nebraska, New Hampshire, North Carolina, Washington and West Virginia joined the existing coalition of California, Colorado, Connecticut, New Jersey, New York, Rhode Island, Tennessee and Virginia.

“We look forward to litigating this important case alongside our state law enforcement partners to end Google’s long-running monopoly in digital advertising technology markets,” said Doha Mekki, principal deputy assistant attorney general of the Justice Department’s Antitrust Division.

The lawsuit alleges that Google monopolizes digital advertising technologies used for both buying and selling ads, said Jonathan Kanter, assistant attorney general of the Justice Department’s Antitrust Division, when the suit was filed in January.

“Our complaint sets forth detailed allegations explaining how Google engaged in 15 years of sustained conduct that had — and continues to have — the effect of driving out rivals, diminishing competition, inflating advertising costs, reducing revenues for news publishers and content creators, snuffing out innovation, and harming the exchange of information and ideas in the public sphere,” Kanter said.

House Democrats Interrogate TikTok as Montana Moves Toward Complete Ban

WASHINGTON, April 13, 2023 — Rep. Frank Pallone, D-N.J., on Thursday requested additional information about TikTok’s privacy and safety practices, claiming that TikTok CEO Shou Zi Chew left many Democrats’ questions unanswered during his recent appearance before Congress.

The March 23 hearing of the House Energy and Commerce Committee, on which Pallone serves as the ranking member, marked the TikTok executive’s first Congressional testimony — and a pivotal moment in the growing bipartisan push for the app to be nationally banned.

In a letter addressed to Chew, Pallone wrote that the hearing “reinforced Americans’ fears that social media platforms, including TikTok, have been collecting, using, sharing, and selling their data without meaningful limits… These industry-wide concerns are heightened when it comes to TikTok given your China-based parent company and its susceptibility to the Chinese Communist Party’s influence.”

The letter included a list of thirty questions, mostly regarding TikTok’s data collection practices, underage user protections and content moderation policies — particularly in regard to Spanish language disinformation. Pallone also asked for more information about Project Texas, TikTok’s $1.5 billion U.S. data security initiative.

Many lawmakers used the hearing as an opportunity to air their grievances with TikTok, often leaving Chew little time to speak. Committee Chair Cathy McMorris Rodgers, R-Wash., consistently denied Chew’s requests to answer questions or respond to allegations.

“Shou came prepared to answer questions from Congress, but, unfortunately, the day was dominated by political grandstanding,” TikTok spokesperson Brooke Oberwetter said in a statement after the hearing.

Montana nears complete TikTok ban, sparking practical and ideological opposition

While Congress considers legislation that would grant the Commerce Department broad authority to restrict tech platforms that threaten national security, some state lawmakers are targeting TikTok more directly.

The Montana State House on Thursday held a second hearing on legislation that would ban TikTok from operating in the state, advancing it to a final vote. The bill has already passed the State Senate.

The proposed legislation would target both TikTok and any app stores carrying it by instituting a $10,000 penalty per violation — defined as “each time that a user accesses TikTok, is offered the ability to access TikTok or is offered the ability to download TikTok.”

Tech companies and industry groups have raised pragmatic concerns about the bill’s implementation. At a March hearing, a representative from trade group TechNet claimed that it would be impossible for app stores to restrict TikTok on a state-by-state basis.

AT&T lobbyists successfully pushed for the removal of language that would have extended liability for facilitating TikTok access to internet service providers. But otherwise, attempts at weakening the bill — including extensive efforts from TikTok itself — have failed.

On Tuesday, a coalition of free speech and civil rights organizations urged Montana lawmakers to oppose the bill, arguing that it constituted “censorship” and a violation of the First Amendment.

“The government cannot impose a total ban on a communications platform like TikTok unless it is necessary to prevent extremely serious, immediate harm to national security,” the coalition wrote. “But there’s no public evidence of harm that would meet the high bar set by the U.S. and Montana Constitutions, and a total ban would not be the only option for addressing such harm if it did exist.”

Narrowing Section 230 Could Destroy Smaller Platforms, Warns Nextdoor

WASHINGTON, April 4, 2023 — Narrowing Section 230 protections for online services could have significant economic repercussions, particularly for smaller platforms that rely on content curation as a business model, according to experts at a panel hosted by the Computer & Communications Industry Association Research Center on Tuesday.

“There’s really unintended consequences for the smaller players if you take a ‘one size fits all’ approach here,” said Laura Bisesto, global head of policy, privacy and regulatory compliance for Nextdoor.

Many small to mid-sized platforms operate on a business model that relies on content moderation, Bisesto explained. For example, Reddit hosts thousands of active forums that are each dedicated to a stated topic, and consumers join specific forums for the purpose of seeing content related to those topics.

Similarly, Bisesto claimed that Nextdoor’s proximity-based content curation is what makes the platform competitive.

“We want to make sure you’re seeing relevant, very hyper-local content that’s very timely as well,” she said. “It’s really important to us to be able to continue to use algorithms to provide useful content that’s relevant, and any narrowing of Section 230 could really impede that ability.”

Algorithmic organization is also crucial for large platforms that host a broad range of content, said Ginger Zhe Jin, a professor of economics at the University of Maryland. The sheer volume of content on platforms such as YouTube — which sees 500 hours of new video uploaded each minute — would make it “impossible for consumers to choose and consume without an algorithm to sort and list.”

Without Section 230, some platforms might choose to forgo the use of algorithms altogether, which Jin argued would “undermine the viability of the internet businesses themselves.”

The alternative would be for companies to broadly remove any content that could potentially generate controversy or be misinterpreted.

“Either way, we’re going to see maybe less content creation and less content consumption,” Jin said. “This would be a dire situation, in my opinion, and would reduce the economic benefits the internet has brought to many players.”

Who should be updating Section 230?

In February, the Section 230 debate finally reached the Supreme Court in a long-awaited case centered on intermediary liability. But some industry experts — and even multiple Supreme Court justices — have cast doubt on whether the court is the right venue for altering the foundational internet law.

Bisesto argued that the question should be left to Congress. “They drafted the law, and I think if it needs to be changed, they should be the ones to look at it,” she said.

However, she expressed skepticism about whether lawmakers would be able to reach a consensus, highlighting the “fundamental disagreement” between the general Republican aim of leaving more content up and Democratic aim of taking more content down.

If the Supreme Court refrains from major changes, “pressure will increase for Congress to do something as the 50 different states are passing different statutes on content moderation,” said Sarah Oh Lam, a senior fellow at the Technology Policy Institute.

Congress Grills TikTok CEO Over Risks to Youth Safety and China

WASHINGTON, March 24, 2023 — TikTok CEO Shou Zi Chew faced bipartisan hostility from House lawmakers during a high-profile hearing on Thursday, struggling to alleviate concerns about the platform’s safety and security risks amid growing calls for the app to be banned from the United States altogether.

For more than five hours, members of the House Energy and Commerce Committee lobbed criticisms at TikTok, often leaving Chew little or no time to address their critiques.

“TikTok has repeatedly chosen the path for more control, more surveillance and more manipulation,” Chair Cathy McMorris Rodgers, R-Wash., told Chew at the start of the hearing. “Your platform should be banned. I expect today you’ll say anything to avoid this outcome.”

“Shou came prepared to answer questions from Congress, but, unfortunately, the day was dominated by political grandstanding,” TikTok spokesperson Brooke Oberwetter said in a statement after the hearing.

In a viral TikTok video posted Tuesday, and again in his opening statement, Chew noted that the app has over 150 million active monthly users in the United States. TikTok has also become a place where “close to 5 million American businesses — mostly small businesses — go to find new customers and to fuel their growth,” he said.

But McMorris Rodgers argued that the platform’s significant reach only “emphasizes the urgency for Congress to act.”

Lawmakers condemn TikTok’s impact on youth safety and mental health

One of the top concerns highlighted by both Republicans and Democrats was the risk TikTok poses to the wellbeing of children and teens.

“Research has found that TikTok’s addictive algorithms recommend videos to teens that create and exacerbate feelings of emotional distress, including videos promoting suicide, self-harm and eating disorders,” said Ranking Member Frank Pallone, D-N.J.

Chew emphasized TikTok’s commitment to removing explicitly harmful or violative content. The company is also working with entities such as the Boston Children’s Hospital to find models for content that might harm young viewers if shown too frequently, even if the content is not inherently negative — for example, videos of extreme fitness regimens, Chew explained.

In addition, Chew listed several safeguards that TikTok has recently implemented for underage users, such as daily default time limits and the prevention of private messaging for users under 16.

However, few lawmakers seemed interested in these measures, with some noting that they appeared to lack enforceability. Others emphasized the tangible costs of weak safety policies, pointing to multiple youth deaths linked to the app.

Rep. Gus Bilirakis, R-Fla., shared the story of a 16-year-old boy who died by suicide after being served hundreds of TikTok videos glorifying suicidal ideation, self-harm and depression — even though such content was unrelated to his search history, according to a lawsuit filed by his parents against the platform.

At the hearing, Bilirakis underscored his concern by playing a series of TikTok videos with explicit descriptions of suicide, accompanied by messages such as “death is a gift” and “Player Tip: K!ll Yourself.”

“Your company destroyed their lives,” Bilirakis told Chew, gesturing toward the teen’s parents. “Your technology is literally leading to death, Mr. Chew.”


Other lawmakers noted that this death was not an isolated incident. “There are those on this committee, including myself, who believe that the Chinese Communist Party is engaged in psychological warfare through Tik Tok to deliberately influence U.S. children,” said Rep. Buddy Carter, R-Ga.

TikTok CEO emphasizes U.S. operations, denies CCP ties

Listing several viral “challenges” encouraging dangerous behaviors and substance abuse, Carter questioned why TikTok “consistently fails to identify and moderate these kinds of harmful videos” — and claimed that no such content was present on Douyin, the version of the app available in China.

Screenshot of Rep. Buddy Carter courtesy of CSPAN

Chew urged legislators to compare TikTok’s practices with those of other U.S. social media companies, rather than a version of the platform operating in an entirely different regulatory environment. “This is an industry challenge for all of us here,” he said.

Douyin heavily restricts political and controversial content in order to comply with China’s censorship regime, while the U.S. currently grants online platforms broad immunity from liability for third-party content.

In response to repeated accusations of CCP-driven censorship, particularly regarding the Chinese government’s human rights abuses against the Uyghur population, Chew maintained that related content “is available on our platform — you can go and search it.”

“We do not promote or remove content at the request of the Chinese government,” he repeatedly stated.

A TikTok search for “Uyghur genocide” on Thursday morning primarily displayed videos that were critical of the Chinese government, Broadband Breakfast found. The search also returned a brief description stating that China “has committed a series of ongoing human rights abuses against Uyghurs and other ethnic and religious minorities,” drawn from Wikipedia and pointing users to the U.S.-based website’s full article on the topic.

TikTok concerns bolster calls for Section 230 reform

Although much of the hearing was specifically targeted toward TikTok, some lawmakers used those concerns to bolster an ongoing Congressional push for Section 230 reform.

“Last year, a federal judge in Pennsylvania found that Section 230 protected TikTok from being held responsible for the death of a 10-year-old girl who participated in a blackout challenge,” said Rep. Bob Latta, R-Ohio. “This company is a picture-perfect example of why this committee in Congress needs to take action immediately to amend Section 230.”

In response, Chew referenced Latta’s earlier remarks about Section 230’s historical importance for online innovation and growth.

“As you pointed out, 230 has been very important for freedom of expression on the internet,” Chew said. “[Free expression] is one of the commitments we have given to this committee and our users, and I do think it’s important to preserve that. But companies should be raising the bar on safety.”

Rep. John Curtis, R-Utah, asked if TikTok’s use of algorithmic recommendations should forfeit the company’s Section 230 protections — echoing the question at the core of Gonzalez v. Google, which was argued before the Supreme Court in February.

Other inquiries were more pointed. Chew declined to answer a question from Rep. Randy Weber, R-Texas, about whether “censoring history and historical facts and current events should be protected by Section 230’s good faith requirement.”

Weber’s question seemed to incorrectly suggest that the broad immunity provided by Section 230(c)(1) is conditioned on the “good faith” referenced in part (c)(2)(A) of the statute.

Ranking member says ongoing data privacy initiative is unacceptable

Chew frequently pointed to TikTok’s “Project Texas” initiative as a solution to a wide range of data privacy concerns. “The bottom line is this: American data, stored on American soil, by an American company, overseen by American personnel,” he said.

All U.S. user data is now routed by default to Texas-based company Oracle, Chew added, and the company aims to delete legacy data currently stored in Virginia and Singapore by the end of the year.

Several lawmakers pointed to a Thursday Wall Street Journal article in which China’s Commerce Ministry reportedly said that a sale of TikTok would require exporting technology, and therefore would be subject to approval from the Chinese government.

When asked if Chinese government approval was required for Project Texas, Chew replied, “We do not believe so.”

But many legislators remained skeptical. “I still believe that the Beijing communist government will still control and have the ability to influence what you do, and so this idea — this ‘Project Texas’ — is simply not acceptable,” Pallone said.

Additional Content Moderation for Section 230 Protection Risks Reducing Speech on Platforms: Judge

WASHINGTON, March 13, 2023 – Requiring companies to moderate more content as a condition of Section 230 legal liability protections runs the risk of alienating users from platforms and discouraging communications, argued a judge of the District of Columbia Court of Appeals last week.

“The criteria for deletion are vague and difficult to parse,” Douglas Ginsburg, a Ronald Reagan appointee, said at a Federalist Society event on Wednesday. “Some of the terms are inherently difficult to define and policing what qualifies as hate speech is often a subjective determination.”

“If content moderation became very rigorous, it is obvious that users would depart from platforms that wouldn’t run their stuff,” Ginsburg added. “And they will try to find more platforms out there that will give them a voice. So, we’ll have more fragmentation and even less communication.”

Ginsburg noted that the large technology platforms already moderate a massive amount of content, and that requiring additional moderation would be quite challenging.

“Twitter, YouTube and Facebook remove millions of posts and videos based on those criteria alone,” Ginsburg noted. “YouTube gets 500 hours of video uploaded every minute, 30,000 minutes of video coming online every minute. So the task of moderating this is obviously very challenging.”

John Samples, a member of Meta’s Oversight Board – which provides direction for the company on content – suggested Thursday that out-of-court dispute institutions for content moderation may become the preferred method of settlement.

The United States may adopt European processes in the future as Europe takes the lead in regulating big tech, claimed Samples.

“It would largely be a private system,” he said, one that could unify and centralize social media moderation across platforms and around the world. He was referring to the European Union’s Digital Services Act, which went into effect in November 2022 and requires platforms to remove illegal content and ensure that users can contest removal of their content.

Panel Disagrees on Antitrust Bills’ Promotion of Competition
https://broadbandbreakfast.com/2023/03/panel-dissents-on-antitrust-bills-promotion-of-competition/ – Mon, 13 Mar 2023 14:22:24 +0000

WASHINGTON, March 10, 2023 – In a fiery debate Thursday, panelists at Broadband Breakfast’s Big Tech and Speech Summit disagreed on the effect of bills intended to promote competition and innovation in the Big Tech platform space, particularly for search engines.

One such innovation is new artificial intelligence technology designed to pull everything a user searches for into a single page, said Cheyenne Hunt-Majer, big tech accountability advocate with Public Citizen. It is built to keep users on the site and will drastically change competition in the search engine space, she said, touting the advancement of two bills currently awaiting a Senate vote.

Photo of Adam Kovacevich of Chamber of Progress, Berin Szoka of TechFreedom, Cheyenne Hunt-Majer of Public Citizen, Sacha Haworth of Tech Oversight Project, Christine Bannan of Proton (left to right)

The first, the American Innovation and Choice Online Act, would prohibit tech companies from self-preferencing their own products on their platforms over third-party competition. The second, the Open App Markets Act, would prevent app stores from requiring private app developers to use the app stores’ in-app payment system. 

Hunt-Majer said she believes that the bills would benefit consumers by kindling more innovation in big tech. “Perfect should not be the enemy of change,” she said, claiming that Congress must start somewhere, even if the bills are not perfect. 

“We are seeing a jump ahead in a woefully unprepared system to face these issues and the issues it is going to pose for a healthy market of competition and innovation,” said Hunt-Majer. 

It is good for consumers to be able to find other ways to search that Google isn’t currently providing, agreed Christine Bannan, U.S. public policy manager at privacy-focused email service Proton. The fundamental goal of these bills is directly at odds with the interests of big companies, she said, which underscores their importance in curbing anti-competitive behavior.

No need to rewrite or draft new laws for competition

While competition concerns are valid, said Berin Szoka, president of non-profit technology organization TechFreedom, the Federal Trade Commission is best equipped to deal with disputes without the need to rewrite or draft new laws. Congress must legislate carefully to avoid unintended consequences that fundamentally harm businesses, and no legislation to date has met that standard, he said.

Both bills have broad anti-discrimination provisions which will affect Big Tech partnerships, Szoka continued. 

Not all experts believe that AI will replace search engines, however. Google has already adopted specialized search results that directly answer search queries, such as math problems, instead of returning several links to related webpages, said Adam Kovacevich, CEO of Chamber of Progress, a center-left tech policy coalition.

Kovacevich said he believes that some search queries demand direct answers while others demand a wide range of sources, answers, and opinions. He predicts that there will be a market for both AI and traditional search engines like Google. 

To watch the full videos join the Broadband Breakfast Club below. We are currently offering a Free 30-Day Trial: No credit card required!

Preview the Start of Broadband Breakfast’s Big Tech & Speech Summit
https://broadbandbreakfast.com/2023/03/preview-the-start-of-broadband-breakfasts-big-tech-speech-summit/ – Fri, 10 Mar 2023 20:45:45 +0000

WASHINGTON, March 10, 2023 – Watch the beginning of the Big Tech & Speech Summit from Thursday, March 9, 2023.

This is the first 10 minutes. To see the full stream, register for a free trial of the Breakfast Club.

Photo of House Energy and Commerce Subcommittee Chairman Gus Bilirakis by Tim Su.

High-resolution videos will be available soon.


Panelists Recommend More Concentrated Focus on Federal Privacy Legislation

Creating Institutions for Resolving Content Moderation Disputes Out-of-Court

Section 230 Shuts Down Conversation on First Amendment, Panel Hears

Congress Should Focus on Tech Regulation, Said Former Tech Industry Lobbyist

Congress Should Focus on Tech Regulation, Said Former Tech Industry Lobbyist
https://broadbandbreakfast.com/2023/03/congress-should-focus-on-tech-regulation-said-former-tech-industry-lobbyist/ – Fri, 10 Mar 2023 20:31:59 +0000

WASHINGTON, March 9, 2023 – Congress should focus on technology regulation, particularly for emerging technology, rather than speech debates, said Adam Conner, vice president of technology policy at American Progress, at Broadband Breakfast’s Big Tech and Speech Summit Thursday.

Conner challenged the view of many in industry who assume that any change to current laws, including Section 230, would only make the internet worse.

Conner, who aims to build a progressive technology policy platform and agenda, spent the past 15 years working as a Washington employee for several Silicon Valley companies, including Slack Technologies and Brigade. In 2007, Conner founded Facebook’s Washington office.

This mindset, Conner argued, traps industry leaders in the assumption that the internet is currently the best it could ever be. That is a fallacy, he claimed. To avoid it, Conner suggested that the industry focus on regulating new and emerging technology like artificial intelligence.

Recent AI innovations, like ChatGPT, create the most human-like AI experience ever made through text, images and video, Conner said. The penetration of AI will completely change the discussion about protecting free speech, he said, urging Congress to draft laws now to ensure its safe use in the United States.

Congress should start its AI regulation with privacy, antitrust and child safety laws, he said. Doing so will prove to American citizens that the internet can, in fact, be better than it is now and will promote future policy improvements, he said.


Experts Clash Over Federal Preemption and State Laws on Privacy
https://broadbandbreakfast.com/2023/03/experts-clash-over-federal-preemption-and-state-laws-on-privacy/ – Fri, 10 Mar 2023 14:15:00 +0000

WASHINGTON, March 9, 2023 – After a panel of experts recommended Thursday that Congress focus on passing federal privacy legislation, another group of experts wrangled with whether federal and state privacy laws can feasibly coexist.

A number of states have already passed or are in the midst of passing their own privacy laws, with versions in California, Colorado, Connecticut, Utah, and Virginia having already come into effect – or coming into effect – this year. Simultaneously, Congress is occupied with pushing forth one blanket law for the entire country, with commitments from members of an innovation subcommittee to resurrect such a law. All of these laws address the collection, use, storage and sharing of data.

Photo of “Regulating Data Privacy” panelists Shane Tews, India McKinney, Alan Butler, Carl Szabo, Sara Collins and John Verdi (moderator) by Drew Clark

The last federal privacy proposal, called the American Data Privacy and Protection Act, did not pass both chambers before legislative turnover. One sticking point for lawmakers was the law’s possible pre-emption of state laws, including California’s; that state’s members of Congress were vocal in their opposition on that front. Rep. Anna Eshoo, D-Calif., proposed her own amendment that would make the federal law a baseline on top of which states could add provisions.

On Thursday at the Big Tech & Speech Summit, panelists grappled with the implications of that. On one side was the Electronic Frontier Foundation, represented by director of federal affairs India McKinney, who argued for more state consumer protection laws. McKinney suggested that a layering of privacy laws would actually increase consumer protections. The EFF has argued for a floor, not a ceiling, for any version of a federal privacy law.

The organization has argued that some provisions in the aforementioned state laws are stronger than those in the ADPPA. Part of the argument is that states are knowledgeable about their own problems and should be able to fix them themselves.

Differing perspectives from ‘pro-privacy’ panelists

On the other side were Carl Szabo, vice president and general counsel for free speech and enterprise trade association NetChoice, and Shane Tews, nonresident senior fellow at the think tank American Enterprise Institute. Szabo pressed for a federal privacy law with pre-emption because of what he argued was a problematic patchwork of various laws that create regulatory and financial burdens on businesses.

Szabo piggybacked off a point made by Cathy Gellis, a lawyer and moderator for another panel, who argued that having different state laws would put her clients in a difficult spot because she wouldn’t be able to advise them in different jurisdictions in which she isn’t licensed to practice. In other words, the client would have to see more lawyers to ensure compliance.

Dane Snowden, senior advisor at telecom-focused law firm Wilkinson Barker Knauer, argued in an earlier panel that it is unfeasible for a product traveling through multiple jurisdictions to have to comply with various different privacy laws.

Szabo said having lots of privacy laws in the country is good for lining the pockets of lawyers, but bad for businesses.

Tews, who also argued for a federal privacy law, touched on the issue from a consumer perspective.

“Consistency, I think, is one thing that’s very important,” Tews said. “From a consumer’s perspective…it is that the information is consistent…we need to have a consistent national perspective on this.”

Tews also touched on this being an international trade issue.

Years ago, for example, the European Union made a fuss about its trading partners not having sufficient data privacy protections for its citizens. Its data protection law, the General Data Protection Regulation, was leveraged as a trade issue to ensure its citizens had data privacy protections for goods and services transacting across borders.

“We need to think about where this information is flowing and where this is actually ending at the end of the day,” Tews said.


Section 230 Shuts Down Conversation on First Amendment, Panel Hears
https://broadbandbreakfast.com/2023/03/section-230-shuts-down-conversation-on-first-amendment-panel-hears/ – Fri, 10 Mar 2023 02:10:38 +0000

WASHINGTON, March 9, 2023 – Section 230 as written shuts down conversation about the First Amendment, claimed experts in a debate at Broadband Breakfast’s Big Tech & Speech Summit Thursday.

Matthew Bergman, founder of the Social Media Victims Law Center, suggested that Section 230 forecloses discussion of the appropriate weighing of costs and benefits involved in granting big tech companies litigation immunity for moderation decisions on their platforms.

We need to talk about what level of First Amendment protection is appropriate in a new world of technology, said Bergman. That discussion happens primarily through an open litigation process, he said, which is not currently available to those harmed by these products.

Photo of Ron Yokubaitis of Texas.net, Ashley Johnson of Information Technology and Innovation Foundation, Emma Llanso of Center for Democracy and Technology, Matthew Bergman of Social Media Victims Law Center, and Chris Marchese of Netchoice (left to right)

All companies must exercise reasonable care, Bergman argued. Opening the door to litigation doesn’t mean that all claims are necessarily viable, only that the process should work itself out in the courts of law, he said.

Eliminating Section 230 could lead online services to “overcorrect” in moderating speech, which could suffocate social reform movements organized on those platforms, argued Ashley Johnson of the Information Technology and Innovation Foundation, a research institution.

Furthermore, the burden of litigation would fall disproportionately on the companies with the fewest resources to defend themselves, she continued.

Bergman responded, “if a social media platform is facing a lot of lawsuits because there are a lot of kids who have been hurt through the negligent design of that platform, why is that a bad thing?” People who are injured have the right by law to seek redress against the entity that caused that injury, Bergman said. 

Emma Llanso of the Center for Democracy and Technology suggested that platforms would fundamentally change the way they operate to avoid the threat of litigation if Section 230 were reformed or abolished, which could threaten freedom of speech for their users.

It is necessary for the protection of the first amendment that the internet consists of many platforms with different content moderation policies to ensure that all people have a voice, she said. 

To this, Bergman argued that there is a distinction between algorithms that suggest content users do not want to see – even content that exists unbeknownst to the user – and ensuring speech is not censored.

It is a question of balancing the faulty design of a product against protecting speech, and courts are where that balancing act should take place, said Bergman.

This comes days after law professionals urged Congress to amend the statute to specify that it applies only to free speech, rather than the negligent design of product features that promote harmful speech. The discussion followed the Supreme Court’s hearing of arguments on whether Google is immune from liability for recommending terrorist videos on its video platform YouTube.


Creating Institutions for Resolving Content Moderation Disputes Out-of-Court
https://broadbandbreakfast.com/2023/03/creating-institutions-for-resolving-content-moderation-disputes-out-of-court/ – Fri, 10 Mar 2023 02:07:02 +0000

WASHINGTON, March 9, 2023 – John Samples, a member of Meta’s oversight board, suggested at Broadband Breakfast’s Big Tech & Speech Summit Thursday that out-of-court dispute institutions may become the preferred method of settling content moderation disputes.

Meta’s oversight board was created by the company to support free speech by upholding or reversing Facebook’s content moderation decisions. It works independently of the company and hosts 40 members around the world.  

The European Union’s Digital Services Act, which came into force in November of 2022, requires platforms to remove illegal content and ensure that users can contest removal of their content. It clarifies that platforms are only liable for users’ unlawful behavior if they are aware of it and fail to remove it. 

The Act specifies illegal speech to include speech that harms the electoral system, hate speech, and speech that violates fundamental rights. The appeals process allows citizens to go directly to the company, to the national courts, or to out-of-court dispute resolution institutions – the last of which do not yet exist in Europe.

According to Samples, the Act opens the way for private organizations like the oversight board to play a part in moderation disputes. “Meta has a tremendous advantage here as a first mover,” said Samples, “and the model of the oversight board may well spread to Europe and perhaps other places.” 

The United States may adopt European processes in the future as Europe takes the lead in regulating big tech, claimed Samples. “It would largely be a private system,” he said, one that could unify and centralize social media moderation across platforms and around the world.

The private option of self-regulation has worked well, said Samples. “It may well be expanding throughout much of the world. If it goes to Europe, it could go throughout.” 

Currently, of the media that Meta reviews for moderation, only one percent is restricted, either by taking down the content or reducing the size of the audience exposed to it, said Samples. The oversight board primarily rules against Meta’s decisions and accepts comments from independent interests.  


Panelists Recommend More Concentrated Focus on Federal Privacy Legislation
https://broadbandbreakfast.com/2023/03/panelists-recommend-more-concentrated-focus-on-federal-privacy-legislation/ – Thu, 09 Mar 2023 18:27:27 +0000

WASHINGTON, March 9, 2023 – The party stalemate in Washington over addressing Section 230 gives lawmakers an opportunity to focus on crafting federal privacy legislation, according to panelists at Broadband Breakfast’s Big Tech and Speech Summit on Thursday.

The Democrats and the Republicans have taken opposite positions on what to do with the liability provision under the Communications Decency Act, which shields technology platforms from the legal consequences of what their users post. That division – to allow more or less content moderation on platforms – coupled with the Republicans taking back the House, means the issue may not be resolved in a timely manner – if ever.

That’s why a focus on federal privacy legislation should grip lawmakers to avoid the negative effects of a patchwork of different state laws with different interpretations of privacy, according to panelists at the event Thursday.

Photo of Subcommittee Chair Gus Bilirakis at Big Tech & Speech Summit Thursday by Tim Su

“You cannot have 50 different regimes to manage the privacy and data breach regulations of the companies,” said Steve DelBianco, president and CEO of NetChoice, a trade association for free speech and enterprise. “I am not so worried about Section 230 because the two parties that run this country have completely opposite aims in mind for 230.” NetChoice has been one of the main opponents to social media laws in Florida and Texas that would restrict certain moderation practices by tech platforms.

Dane Snowden, senior advisor at telecom law firm Wilkinson Barker Knauer, also noted that there’s no common definition of the Section 230 problem. “The challenge that we have right now is there’s not a common definition of the problem that you’re trying to fix…until you have that, you’re going to have both parties going in opposite directions on 230.

“I think privacy is the number one thing we should focus on – we need to have a national privacy framework.” Snowden illustrated the problem by using the example of a product that must go through multiple jurisdictions to get to its destination. He said this is unfeasible when those jurisdictions have different laws.

But Eli Noam, the director of Columbia University’s Institute for Tele-Information – who gave a keynote speech on the use of artificial intelligence for the metaverse – said there may be some upside to state privacy laws because they would allow states to explore and experiment with privacy rules.

On Section 230, Amy Peikoff, head of policy and legal of social media company Parler, noted she’s glad there’s a stalemate because she said “any amendment that would come forth right now would make it worse.”

Earlier this month, members of the House Innovation, Data and Commerce subcommittee reiterated their support for federal privacy legislation and discussed how to build on the American Data Privacy and Protection Act, which was introduced before the midterm-induced turnover in Congress.

The ADPPA addressed algorithmic bias testing, limits on targeted advertising to kids, and a pre-emption provision that would allow the federal law to supersede state law.

Subcommittee Chair Gus Bilirakis opened the summit with remarks about the need to amend Section 230 to address problems associated with kids’ use of social media, including suicidal ideations.


Watch the Webinar of Big Tech & Speech Summit for $9 and Receive Our Breakfast Club Report
https://broadbandbreakfast.com/2023/03/watch-the-webinar-of-big-tech-speech-summit-for-9-and-receive-our-breakfast-club-report/ – Thu, 09 Mar 2023 04:45:54 +0000

WASHINGTON, March 8, 2023 – The Big Tech & Speech Summit is Thursday, March 9. Broadband Breakfast is making the live webinar of the summit available for ONLY $9. Follow the discussion and tweet at #BTSS.

The Big Tech & Speech Summit is an exclusive forum addressing the red-hot controversies impacting Big Tech in Washington. In-person registration for the event at Clyde’s of Gallery Place is still available from 8:30 a.m. to 3:30 p.m. The full-day program is available for $299, with breakfast and lunch included in the price. In person registrants will receive unlimited access to the event videos and two months’ complimentary membership in the Breakfast Club.

For those who can’t make it to Washington, Broadband Breakfast is making a webinar of the summit available for ONLY $9. Webinar registrants will also receive access to the Breakfast Club’s March report on Content Moderation, Section 230 and the Future of Online Speech.

This comprehensive report examines the extremely timely issue of content moderation and Section 230 from multiple angles.

Conference sessions include four panels, a keynote by the chairman of the House Innovation, Data and Commerce Subcommittee, and three special addresses

Rep. Gus M. Bilirakis, R-Florida, will offer the keynote address soon after the conference begins at 8:30 a.m. ET. Bilirakis, who represents Florida’s 12th Congressional District, serves as a senior member of the Energy and Commerce Committee and chairman of the Innovation, Data and Commerce Subcommittee. He will be followed by Eli Noam, director of the Columbia Institute for Tele-Information, at 9 a.m.

Panel 1, The Big Picture for Big Tech, will be moderated by Broadband Breakfast Editor and Publisher Drew Clark, and includes a diverse slate of panelists: Steve DelBianco, President and CEO, NetChoice; Willmary Escoto, U.S. Policy Analyst, Access Now; Amy Peikoff, Head of Policy and Legal, Parler; and Dane Snowden, Senior Advisor, Wilkinson Barker Knauer.

Next up will be a special address by John Samples, vice president of the Cato Institute and a member of Facebook’s independent Oversight Board, which provides final and binding decisions on whether specific content should be allowed or removed from Facebook and Instagram.

The next session, Panel 2 at 10:45 a.m., The Fragility of Section 230, will be moderated by attorney Cathy Gellis. Her panel is certain to feature disagreements among Matthew Bergman, Founding Attorney, Social Media Victims Law Center; Ashley Johnson, Senior Policy Analyst, Information Technology and Innovation Foundation; Emma Llansó, Director, Free Expression Project, Center for Democracy & Technology; Chris Marchese, Counsel, NetChoice; and Ron Yokubaitis, Founder, Texas.net, Inc.

Following lunch, we’ll hear our third special address, by Adam Conner, vice president for technology policy at American Progress. Conner founded Facebook’s Washington office and spent several years on the public policy team, and will speak on a progressive technology policy platform.

The afternoon features back-to-back panels on data privacy and competition. Panel 3, Regulating Data Privacy at 1 p.m., will be moderated by John Verdi, Senior Vice President of Policy, Future of Privacy Forum, with panelists Alan Butler, Executive Director and President, Electronic Privacy Information Center; Sara Collins, Senior Policy Counsel, Public Knowledge; India McKinney, Director of Federal Affairs, Electronic Frontier Foundation; Carl Szabo, Vice President & General Counsel, NetChoice; and Shane Tews, Nonresident Senior Fellow, American Enterprise Institute.

Innovation, Competition and Future Tech, Panel 4 at 2:15 p.m., will be moderated by Sara Morrison, Senior Reporter, Recode by Vox, and feature Christine Bannan, U.S. Public Policy Manager, Proton; Sacha Haworth, Executive Director, Tech Oversight Project; Cheyenne Hunt-Majer, Big Tech Accountability Advocate, Public Citizen; Adam Kovacevich, CEO, Chamber of Progress; and Berin Szóka, President, TechFreedom Foundation.

See the web page for the event.

Register in person, or online for $9

The registration page for the summit permits you to purchase a ticket to attend in person at Clyde’s of Gallery Place at 707 7th Street NW, Washington, DC 20006.

It will also allow you to purchase a livestream ticket for $9.

Alternatively, the registration page for the summit now has a button: “Join Webinar for $9.” This takes you to a Zoom registration page, allowing you to pay $9 and join the webinar at 8:30 a.m. ET on Thursday.

We hope you will take advantage of this opportunity to learn about the red-hot controversies impacting Big Tech in Washington from the experts.

Congress Should Amend Section 230, Senate Subcommittee Hears
https://broadbandbreakfast.com/2023/03/congress-should-amend-section-230-senate-subcommittee-hears/ – Thu, 09 Mar 2023 00:08:28 +0000

WASHINGTON, March 8, 2023 – Law professionals at a Senate Subcommittee on Privacy, Technology and the Law hearing on Wednesday urged Congress to amend Section 230 to specify that it applies only to free speech, rather than the promotion of misinformation.

Section 230 protects platforms from being treated as the publisher or speaker of information originating from a third party, thus shielding them from liability for their users’ posts. Mary Anne Franks, professor of law at the University of Miami School of Law, argued that there is a difference between protecting free speech and protecting the harmful dissemination of information.

Hany Farid, professor at the University of California, Berkeley, argued that there should be a distinction between a negligently designed product feature and a core component of the platform’s business. For example, YouTube’s video recommendation system is a product feature rather than an essential function, as it is designed solely to maximize advertising revenue by keeping users on the platform, he said.

YouTube claims that its recommendation algorithm is unable to distinguish between two different videos. This, argued Farid, should be considered a negligently designed feature, as YouTube knew or reasonably should have known that the feature could lead to harm.

Section 230, said Farid, was written to immunize tech companies from defamation litigation, not to immunize them from any wrongdoing, including the negligent design of their features.

Returning the statute to its original intention, Franks said, would “at a minimum” require “amending the statute to make clear that the law’s protections only apply to speech and to make clear that platforms that knowingly promote harmful content are ineligible for immunity.”

At a State of the Net conference earlier this month, Franks emphasized the “Good Samaritan” aspect of the law, claiming that it is supposed to “provide incentives for platforms to actually do the right thing.” Instead, she argued, the law does not incentivize platforms to moderate their content.

Jennifer Bennett of national litigation boutique Gupta Wessler suggested that Congress uphold what is known as the Henderson framework, which would hold a company liable if it materially contributes to what makes content unlawful, including the recommendation and dissemination of the content.

Unfortunately, lamented Eric Schnapper, professor of law at the University of Washington School of Law, Section 230 has barred Americans from seeking redress when they have been harmed by Big Tech. “Absolute immunity breeds absolute irresponsibility,” he said.

Senator Richard Blumenthal, D-Connecticut, warned tech companies at the outset of the hearing that “reform is coming.”

The hearing comes weeks after the Supreme Court heard oral arguments on whether Google can be held liable for recommending terrorist videos on its video platform YouTube, a case that has divided industry on whether Section 230 protects algorithmic recommendations.

]]>
https://broadbandbreakfast.com/2023/03/congress-should-amend-section-230-senate-subcommittee-hears/feed/ 0 49135
Content Moderation, Section 230 and the Future of Online Speech https://broadbandbreakfast.com/2023/03/content-moderation-section-230-and-the-future-of-online-speech/?utm_source=rss&utm_medium=rss&utm_campaign=content-moderation-section-230-and-the-future-of-online-speech https://broadbandbreakfast.com/2023/03/content-moderation-section-230-and-the-future-of-online-speech/#respond Wed, 08 Mar 2023 23:23:43 +0000 https://broadbandbreakfast.com/?p=49138

In the 27 years since the so-called “26 words that created the internet” became law, rapid technological developments and sharp partisan divides have fueled increasingly complex content moderation dilemmas.

Earlier this year, the Supreme Court tackled Section 230 for the first time through a pair of cases regarding platform liability for hosting and promoting terrorist content. In addition to the court’s ongoing deliberations, Section 230—which protects online intermediaries from liability for third-party content—has recently come under attack from Congress, the White House and multiple state legislatures.

Members of the Breakfast Club also have access to high-resolution videos from the Big Tech & Speech Summit!

Join to receive your copy of the Breakfast Club Exclusive Report!

]]>
https://broadbandbreakfast.com/2023/03/content-moderation-section-230-and-the-future-of-online-speech/feed/ 0 49138
Industry Experts Caution Against Extreme Politicization in Section 230 Debate https://broadbandbreakfast.com/2023/03/industry-experts-caution-against-extreme-politicization-in-section-230-debate/?utm_source=rss&utm_medium=rss&utm_campaign=industry-experts-caution-against-extreme-politicization-in-section-230-debate https://broadbandbreakfast.com/2023/03/industry-experts-caution-against-extreme-politicization-in-section-230-debate/#respond Tue, 07 Mar 2023 21:59:01 +0000 https://broadbandbreakfast.com/?p=49096 WASHINGTON, March 7, 2023 — Congress should reject the heavily politicized rhetoric surrounding Section 230 and instead consider incremental reforms that are narrowly targeted at specific problems, according to industry experts at State of the Net on Monday.

“What I really wish Congress would do, since 230 has become this political football, is put the football down for a second,” said Billy Easley, senior public policy lead at Reddit.

Don’t miss the Big Tech & Speech Summit on Thursday, March 9 from 8:30 a.m. to 3:30 p.m. Broadband Breakfast is making a webinar of the summit available. Registrants and webinar participants receive two months’ complimentary membership in the Broadband Breakfast Club.

Instead of starting from Section 230, Easley suggested that Congress methodically identify specific problems and consider how each could best be addressed. With many issues, he claimed that there are “a slew of policy options” more effective than changing Section 230.

Much of the discussion about Section 230 is “intentionally being pitted into binaries,” said Yaël Eisenstat, head of the Anti-Defamation League’s Center for Technology and Society. In reality, she continued, many proposals exist somewhere between keeping Section 230 exactly as it is and throwing it out altogether.

Eisenstat expressed skepticism about the often-repeated claim that changing Section 230 will “break the internet.”

“Let’s be frank — the tobacco industry, the automobile industry, the oil and gas industry, the food industry also did not want to be regulated and claimed it would completely destroy them,” she said. “And guess what? They all still exist.”

Joel Thayer, president of the Digital Progress Institute, claimed that many arguments against Section 230 reform are “harkening back to a more libertarian view, which is ‘let’s not touch it because bad things can happen.’”

“I think that’s absurd,” he said. “I think even from a political standpoint, that’s just not the reality.”

Potential reforms should be targeted and consider unintended consequences

While Section 230 has performed “unbelievably well” for a law dating back to 1996, it should at least be “tweaked” to better reflect the present day, said Matt Perault, director of the Center on Technology Policy at the University of North Carolina.

But Perault acknowledged that certain proposed changes would create a significant compliance burden for smaller platforms, unlike large companies with “huge legal teams, huge policy teams, huge communications teams.”

Concerns about the impact of Section 230 reform on small businesses can be addressed by drawing distinct guidelines about which types of companies are included in any given measure, Thayer said.

Easley warned that certain proposals could lead to major unintended consequences. While acknowledging Republican concerns about “censorship” of conservative content on social media platforms, he argued that removing Section 230 protections was not the best way to address the issue — and might completely backfire.

“There’s going to be less speech in other areas,” Easley said. “We saw this with SESTA/FOSTA, we’ve seen this in other sorts of proposals as well, and I just really wish that Congress would keep that in mind.”

Thayer suggested that future legislative efforts start with increasing tech companies’ transparency, building off of the bipartisan momentum from the previous session of Congress.

Easley agreed, adding that increased access to data will allow lawmakers to more effectively target other areas of concern.

]]>
https://broadbandbreakfast.com/2023/03/industry-experts-caution-against-extreme-politicization-in-section-230-debate/feed/ 0 49096
TikTok Security Officer Touts New Oversight Framework as Congress Pushes for Ban https://broadbandbreakfast.com/2023/03/tiktok-security-officer-touts-new-oversight-framework-as-congress-pushes-for-ban/?utm_source=rss&utm_medium=rss&utm_campaign=tiktok-security-officer-touts-new-oversight-framework-as-congress-pushes-for-ban https://broadbandbreakfast.com/2023/03/tiktok-security-officer-touts-new-oversight-framework-as-congress-pushes-for-ban/#respond Tue, 07 Mar 2023 19:16:20 +0000 https://broadbandbreakfast.com/?p=49081 WASHINGTON, March 7, 2023 — As lawmakers grow increasingly wary of TikTok’s risks to national security, the company is developing a complex framework with significant government and third-party oversight in a bid to continue its United States operations.

“It’s going to be an unprecedented amount of transparency,” said Will Farrell, interim security officer at TikTok, in a keynote address at State of the Net on Monday.

Don’t miss the Big Tech & Speech Summit on Thursday, March 9 from 8:30 a.m. to 3:30 p.m. Broadband Breakfast is making a webinar of the summit available. Registrants and webinar participants receive two months’ complimentary membership in the Broadband Breakfast Club.

TikTok’s efforts to win U.S. government approval come in the face of growing Congressional hostility toward the platform. Sens. Mark Warner, D-Va., and John Thune, R-S.D., on Tuesday unveiled a bill aimed at giving President Joe Biden the ability to impose a complete ban of the app.

Farrell claimed the new framework would be a comprehensive answer to widespread concerns of unauthorized access to data and Chinese state influence over content. “I can’t explain how hard and complex this is… We’ve been working on this for close to two years,” he said.

TikTok’s U.S. data security initiative — internally named “Project Texas” — is largely a product of the company’s ongoing negotiations with the inter-agency Committee on Foreign Investment in the United States, which first opened an investigation into TikTok’s national security risks in 2019.

‘Project Texas’ will emphasize third-party oversight

The initiative’s title references its partnership with Austin-based software company Oracle, which will house U.S. user data and review TikTok source code.

In June 2022, TikTok wrote in a letter to several senators that all U.S. user data was being routed to Oracle by default and that the company would eventually “delete U.S. users’ protected data from our own systems and fully pivot to Oracle cloud servers located in the U.S.”

Another key component of Project Texas is a new subsidiary entity, TikTok U.S. Data Security, Inc., which will replicate many of TikTok’s existing processes for U.S. users with several additional layers of oversight. USDS will be governed by an independent board of directors, which in turn will report to CFIUS.

Including Oracle, USDS and CFIUS, Farrell said that “at least seven independent third parties” would be overseeing TikTok’s U.S. data security operations.

“We’re breaking new ground here — no one’s ever done anything like this before,” Farrell said. “Essentially what we’re doing is every single line of code… every single line of code has to be inspected by Oracle and another third-party source code inspector approved by the U.S. government.”

Oracle and the third-party inspector will also thoroughly check the moderation models and recommendation algorithms to ensure that they don’t have “a bias or political agenda,” Farrell said.

Many lawmakers still skeptical about TikTok’s data security practices

Despite TikTok’s efforts, the legislation proposed by Warner and Thune sets the stage for a national ban of the platform — and several other members of Congress have previously indicated their potential support.

In February, Sens. Richard Blumenthal, D-Conn., and Jerry Moran, R-Kan., urged CFIUS to “swiftly conclude its investigation and impose strict structural restrictions between TikTok’s American operations and its Chinese parent company, ByteDance.”

In a letter to Treasury Secretary and CFIUS Chair Janet Yellen, the senators expressed “profound concern” about TikTok’s future U.S. operations and warned that the committee “should not put its imprimatur on a deal with TikTok if it cannot fully ensure our personal data and access to information is free from spying and interference from the Chinese government.”

“Moreover, monitoring and hosting requirements will never address the distrust earned from ByteDance’s past conduct,” the senators added.

In December 2022, the chairs of the House Foreign Affairs Committee and the House Armed Services Committee sent a letter to Yellen and other officials saying that the reported negotiations were “deeply concerning.”

“At present, it does not appear the draft agreement reportedly favored by Treasury would require ByteDance, and by extension [People’s Republic of China] authorities, to give up control of its algorithm,” wrote Reps. Michael McCaul, R-Texas, and Mike Rogers, R-Ala.

]]>
https://broadbandbreakfast.com/2023/03/tiktok-security-officer-touts-new-oversight-framework-as-congress-pushes-for-ban/feed/ 0 49081
State of the Net Panelists Clash Over Section 230 Interpretations https://broadbandbreakfast.com/2023/03/state-of-the-net-panelists-clash-over-section-230-interpretations/?utm_source=rss&utm_medium=rss&utm_campaign=state-of-the-net-panelists-clash-over-section-230-interpretations https://broadbandbreakfast.com/2023/03/state-of-the-net-panelists-clash-over-section-230-interpretations/#respond Mon, 06 Mar 2023 21:19:55 +0000 https://broadbandbreakfast.com/?p=49063 Gonzalez v. Google.]]> WASHINGTON, March 6, 2023 — Experts at the State of the Net conference on Monday expressed a wide range of viewpoints about how Section 230 should be interpreted in the context of Gonzalez v. Google, an intermediary liability case recently argued before the Supreme Court.

If the justices want to understand Section 230’s original intent, NetChoice CEO Steve DelBianco said, they should turn to the law’s original co-authors — Sen. Ron Wyden, D-Ore., and former Rep. Chris Cox, now on the NetChoice board of directors. In January, Wyden and Cox filed an amicus brief urging the Supreme Court to uphold Section 230 of the Communications Decency Act.

Don’t miss the Big Tech & Speech Summit on Thursday, March 9 from 8:30 a.m. to 3:30 p.m. Broadband Breakfast is making a webinar of the summit available. Registrants and webinar participants receive two months’ complimentary membership in the Broadband Breakfast Club.

But Mary Anne Franks, professor at the University of Miami School of Law, argued that a modern-day interpretation of the law should be based on several factors beyond its authors’ explanations, such as the statute’s actual wording and its legislative history. “The law does not have to be subject to revisionist or self-serving interests of interpretations after the fact,” she said.

Franks emphasized the “Good Samaritan” aspect of Section 230, claiming that the law is supposed to “provide incentives for platforms to actually do the right thing.”

Alex Abdo, litigation director at Columbia University’s Knight First Amendment Institute, said he was sympathetic to Franks’ concerns and agreed that tech companies are generally governed by financial motivations, rather than a dedication to free speech or the public interest. Not only can online platforms be exploited to cause harm, he said, they often amplify sensationalized and provocative speech by design.

However, Abdo maintained that Section 230 played a key role in protecting unpopular online speech — including content posted by human rights activists, government whistleblowers and dissidents — by making it less likely that social media platforms would feel the need to remove it.

DelBianco expressed measured optimism about the justices’ approach to Section 230, noting that Justice Clarence Thomas seemed to reject some of the algorithmic harm claims despite his previously expressed interest in altering Section 230. DelBianco also highlighted Justice Amy Coney Barrett’s line of questioning about whether an individual can be held liable for simply liking or retweeting content, calling it “one of the most surprising questions” of the oral arguments.

But despite their appreciation for certain aspects of the justices’ approach, multiple panelists agreed that changing Section 230 should be a careful and deliberate process, better suited to Congress than the courts. “I would much prefer a scalpel to a sledgehammer,” said Matt Wood, vice president of policy and general counsel at Free Press.

The Senate Judiciary Subcommittee on Privacy, Technology and the Law will hold a hearing on Wednesday to examine platform liability, focusing on Gonzalez.

]]>
https://broadbandbreakfast.com/2023/03/state-of-the-net-panelists-clash-over-section-230-interpretations/feed/ 0 49063
House Innovation, Data, and Commerce Chairman Gus Bilirakis to Keynote Big Tech & Speech Summit https://broadbandbreakfast.com/2023/03/house-innovation-data-and-commerce-chairman-gus-bilirakis-to-keynote-big-tech-speech-summit/?utm_source=rss&utm_medium=rss&utm_campaign=house-innovation-data-and-commerce-chairman-gus-bilirakis-to-keynote-big-tech-speech-summit https://broadbandbreakfast.com/2023/03/house-innovation-data-and-commerce-chairman-gus-bilirakis-to-keynote-big-tech-speech-summit/#respond Thu, 02 Mar 2023 19:52:38 +0000 https://broadbandbreakfast.com/?p=48969 WASHINGTON, March 2, 2023 – House Innovation, Data and Commerce Subcommittee Chairman Gus Bilirakis, R-Fla., will provide the keynote address at Broadband Breakfast’s Big Tech & Speech Summit on March 9, Breakfast Media LLC announced Thursday.

Bilirakis, who represents Florida’s 12th Congressional District, has served in Congress since 2007 and has been a member of the House Energy and Commerce Committee for the past 10 years. He became chairman of the subcommittee in January after Republicans won the House majority.

The Congressman will speak at 8:45 a.m. at the summit, taking place at Clyde’s of Gallery Place, 707 7th Street NW in Washington. He joins an impressive list of speakers and panelists, including Cato Institute Vice President John Samples and NetChoice CEO Steve DelBianco as well as experts from the Center for Democracy & Technology, Information Technology and Innovation Foundation, Future of Privacy Forum, American Enterprise Institute and more.

See the complete lineup at the Big Tech & Speech Summit.

Register now for the Big Tech & Speech Summit.

Bilirakis has long been an advocate for holding Big Tech accountable. “Make no mistake about it, Big Tech has been given far too much leeway… They have proven themselves incapable or unwilling to act appropriately, especially when it comes to protecting children, and it is obvious that Congressional action is necessary,” he said in July 2021, after sending a series of letters urging social media platforms to do more to keep underage users off their services.

In 2022, Bilirakis helped lead the Big Tech Task Force in unveiling a legislative package aimed at combatting social media platforms’ alleged negative impact on youth mental health. He also worked to strengthen coordination between tech companies and law enforcement entities in order to better combat illegal online activities.

Now leading the Innovation, Data and Commerce Subcommittee, Bilirakis continues to be a leading voice in tech policy issues. “It is imperative that this committee establishes foundational frameworks for deploying emerging technologies,” he said at a February hearing. “We came close last Congress when we passed the bipartisan and bicameral American Data Privacy and Protection Act, but this Congress we need to ensure it gets across the finish line.”

Speaking at another hearing on Wednesday, Bilirakis emphasized the importance of consumer choice when it comes to online privacy, pointing to the increasing prevalence of targeted advertising. “To some, these practices may be viewed as more convenient for their shopping or useful for how they digest information, but others may find this practice invasive and unsolicited,” he said.

At the same time, Bilirakis cautioned against placing an undue burden on businesses. “Companies, especially small startups, shouldn’t be subject to random or punitive letters in the mail notifying them that certain practices could be unfair or deceptive,” he said. “It is essential that the [Federal Trade Commission] enforce the laws that we as a Congress enact and specifically authorize, but not go rogue beyond the rules of the road we provide.”

]]>
https://broadbandbreakfast.com/2023/03/house-innovation-data-and-commerce-chairman-gus-bilirakis-to-keynote-big-tech-speech-summit/feed/ 0 48969
Supreme Court Considers Liability for Twitter Not Removing Terrorist Content https://broadbandbreakfast.com/2023/02/supreme-court-considers-liability-for-twitter-not-removing-terrorist-content/?utm_source=rss&utm_medium=rss&utm_campaign=supreme-court-considers-liability-for-twitter-not-removing-terrorist-content https://broadbandbreakfast.com/2023/02/supreme-court-considers-liability-for-twitter-not-removing-terrorist-content/#respond Thu, 23 Feb 2023 22:55:58 +0000 https://broadbandbreakfast.com/?p=48774 Twitter v. Taamneh hinged on specific interpretations of the Anti-Terrorism Act.]]> WASHINGTON, February 22, 2023 — In the second of two back-to-back cases considering online intermediary liability, Supreme Court justices on Wednesday sought the precise definitions of two words — “substantial” and “knowingly” — in order to draw lines that could have major implications for the internet as a whole.

The oral arguments in Twitter v. Taamneh closely examined the text of the Anti-Terrorism Act, considering whether the social media platform contributed to a 2017 terrorist attack by hosting terrorist content and failing to remove ISIS-affiliated accounts — despite the absence of a direct link to the attack. The hearing followed Tuesday’s arguments in Gonzalez v. Google, a case stemming from similar facts but primarily focused on Section 230.

Many of Wednesday’s arguments hinged on specific interpretations of the ATA, which states that liability for injuries caused by international terrorism “may be asserted as to any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism.”

Seth Waxman, the attorney representing Twitter, argued that Twitter should not be held liable unless it knew that it was substantially assisting the act of terrorism that injured the plaintiff.

“But [it’s] not enough to know that you’re providing substantial assistance to a group that does this kind of thing?” Justice Ketanji Brown Jackson asked.

“Of course not,” Waxman said.

Jackson was unconvinced, saying that she did not see a clear distinction.

Justice Amy Coney Barrett questioned whether the means of communication to individuals planning a terrorist attack would be considered “substantial assistance.” Waxman replied that it would depend on how significant and explicit the communications were.

Clashing interpretations of Anti-Terrorism Act left unresolved

At one point, Justice Neil Gorsuch suggested that Waxman was misreading the law by taking the act of terrorism as the object of the “aiding and abetting” clause, rather than the person who committed the act.

The latter reading would help Twitter, the justice said, because the plaintiff would then have to prove that the company aided a specific person, rather than an abstract occurrence.

However, Waxman doubled down on his original reading.

“Are you sure you want to do that?” Gorsuch asked, drawing laughs from the gallery.

Waxman also pushed back against assertions that he claimed were “combining silence or inaction with affirmative assistance.” If Twitter said that its platform should not be used to support terrorist groups or acts, Waxman argued, the company should not be held liable for any potential terrorist content, even if it did nothing at all to enforce that rule.

Justice Elena Kagan disagreed. “You’re helping by providing your service to those people with the explicit knowledge that those people are using it to advance terrorism,” she said.

Justices expressed concern over broad scope of potential liability

Unlike in the Gonzalez arguments, where the government largely supported increasing platform liability, Deputy Solicitor General Edwin Kneedler defended Twitter, saying that holding the company liable could result in hindering “legitimate and important activities by businesses, charities and others.”

Several justices raised similar concerns about the decision’s potentially far-reaching impacts.

“If we’re not pinpointing cause and effect or proximate cause for specific things, and you’re focused on infrastructure or just the availability of these platforms, then it would seem that every terrorist act that uses this platform would also mean that Twitter is an aider and abettor in those instances,” Justice Clarence Thomas told Eric Schnapper, the attorney representing the plaintiffs.

Schnapper agreed that this would be the case, but proposed setting reasonable boundaries around liability by using a standard of “remoteness in time, weighed together with the volume of activity.”

Justice Samuel Alito proposed a scenario in which a police officer tells phone companies, gas stations, restaurants and other businesses to stop serving individuals who are broadly suspected of committing a crime. Would the businesses have to comply, Alito questioned, to avoid liability for aiding and abetting?

Schnapper did not answer directly. “That’s a difficult question,” he said. “But clearly, at one end of the spectrum… If you provide a gun to someone who you know is a murderer, I think you could be held liable for aiding and abetting.”

]]>
https://broadbandbreakfast.com/2023/02/supreme-court-considers-liability-for-twitter-not-removing-terrorist-content/feed/ 0 48774
Bret Swanson: Censors Target Internet Talkers With AI Truth Scores https://broadbandbreakfast.com/2023/02/bret-swanson-censors-target-internet-talkers-with-ai-truth-scores/?utm_source=rss&utm_medium=rss&utm_campaign=bret-swanson-censors-target-internet-talkers-with-ai-truth-scores https://broadbandbreakfast.com/2023/02/bret-swanson-censors-target-internet-talkers-with-ai-truth-scores/#respond Thu, 23 Feb 2023 14:44:09 +0000 https://broadbandbreakfast.com/?p=48750 Elon Musk’s purchase of Twitter may have capped the opening chapter in the Information Wars, where free speech won a small but crucial battle. Full spectrum combat across the digital landscape, however, will only intensify, as a new report from the Brookings Institution, a key player in the censorship industrial complex, demonstrates.

First, a review.

Reams of internal documents, known as the Twitter Files, show that social media censorship in recent years was far broader and more systematic than even we critics suspected. Worse, the files exposed deep cooperation – even operational integration – among Twitter and dozens of government agencies, including the FBI, Department of Homeland Security, DOD, CIA, Cybersecurity Infrastructure Security Agency (CISA), Department of Health and Human Services, CDC, and, of course, the White House.

Government agencies also enlisted a host of academic and non-profit organizations to do their dirty work. The Global Engagement Center, housed in the State Department, for example, was originally launched to combat international terrorism but has now been repurposed to target Americans. The U.S. State Department also funded a UK outfit called the Global Disinformation Index, which blacklists American individuals and groups and convinces advertisers and potential vendors to avoid them. Homeland Security created the Election Integrity Partnership (EIP) –  including the Stanford Internet Observatory, the University of Washington’s Center for an Informed Public, and the Atlantic Council’s DFRLab – which flagged for social suppression tens of millions of messages posted by American citizens.


Even former high government U.S. officials got in on the act – appealing directly (and successfully) to Twitter to ban mischief-making truth-tellers.

With the total credibility collapse of legacy media over the last 15 years, people around the world turned to social media for news and discussion. When social media then began censoring the most pressing topics, such as Covid-19, people increasingly turned to podcasts. Physicians and analysts who’d been suppressed on Twitter, Facebook, and YouTube, and who were of course nowhere to be found in legacy media, delivered via podcasts much of the very best analysis on the broad array of pandemic science and policy.

Which brings us to the new report from Brookings, which concludes that one of the most prolific sources of ‘misinformation’ is now – you guessed it – podcasts. And further, that the under-regulation of podcasts is a grave danger.

In “Audible reckoning: How top political podcasters spread unsubstantiated and false claims,” Valerie Wirtschafter writes:

  • Due in large part to the say-whatever-you-want perceptions of the medium, podcasting offers a critical avenue through which unsubstantiated and false claims proliferate. As the terms are used in this report, the terms “false claims,” “misleading claims,” “unsubstantiated claims” or any combination thereof are evaluations by the research team of the underlying statements and assertions grounded in the methodology laid out below in the research design section and appendices. Such claims, evidence suggests, have played a vital role in shaping public opinion and political behavior. Despite these risks, the podcasting ecosystem and its role in political debates have received little attention for a variety of reasons, including the technical difficulties in analyzing multi-hour, audio-based content and misconceptions about the medium.

To analyze the millions of hours of audio content, Brookings used natural language processing to search for key words and phrases. It then relied on self-styled fact-checking sites Politifact and Snopes – pause for uproarious laughter…exhale – to determine the truth or falsity of these statements. Next, it deployed a ‘cosine similarity’ function to detect similar false statements in other podcasts.
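The matching step described above — flag a transcript passage when it closely resembles a claim already labeled false by a fact-checker — can be sketched with a simple bag-of-words cosine similarity. This is an illustrative reconstruction only, not the report’s actual code: the claim text, the snippets, the 0.5 threshold, and the raw term-frequency weighting are all invented for the example, and Brookings’ pipeline used more sophisticated language processing.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words term-frequency vectors."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Hypothetical fact-checked claim and two podcast transcript snippets.
FACT_CHECKED_FALSE = "the election was stolen by widespread voter fraud"
snippets = [
    "they say the election was stolen through widespread voter fraud",
    "today we review the new broadband infrastructure funding rules",
]

THRESHOLD = 0.5  # illustrative cutoff, not the report's actual parameter
flagged = [s for s in snippets
           if cosine_similarity(FACT_CHECKED_FALSE, s) > THRESHOLD]
print(flagged)  # only the snippet paraphrasing the false claim is flagged
```

The weakness the surrounding critique points at is visible even in this toy: similarity is measured against whatever the chosen fact-checkers labeled false, so the output inherits every judgment call in that label set.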

The result: “conservative podcasters were 11 times more likely than liberal podcasters to share claims fact-checked as false or unsubstantiated.”

One show Brookings misclassified as “conservative” is the Dark Horse science podcast hosted by Bret Weinstein and Heather Heying. Over the past three years, they meticulously explored the complex world of Covid, delivering scintillating insights and humbly correcting their infrequent missteps. Brookings, however, determined 13.8 percent of their shows contained false information.

What would the Brookings methodology, using a different set of fact checkers, spit out if applied to CNN, the Washington Post, the FDA, CDC, or hundreds of blogs, podcasts, TV doctors, and “science communicators,” who got nearly everything wrong?

Speaking on journalist Matt Taibbi’s podcast, novelist Walter Kirn skewered the new A.I. fact-checking scheme. It pretends to turn censorship into a “mathematical, not Constitutional, concern” – or, as he calls it, “sciency, sciency, sciency bullshit.”

The daisy chain of presumptuous omniscience, selection bias, and false precision employed to arrive at these supposedly quantitative conclusions about the vast, diverse, sometimes raucous, and often enlightening world of online audio is preposterous.

And yet it is deadly serious.

The collapse of support for free speech among Western pseudo-elites is the foundation of so many other problems, from medicine to war. Misinformation is the natural state of the world. Open science and vigorous debate are the tools we deploy to become less wrong over time. Individual and collective decision-making depend on them.

Bret Swanson is an analyst of technology & the economy, president of Entropy Economics, fellow at the American Enterprise Institute, and chairman of the Indiana Public Retirement System. This article originally appeared on Infonomena by Bret Swanson on Substack on February 22, 2023, and is reprinted with permission.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views reflected in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

]]>
https://broadbandbreakfast.com/2023/02/bret-swanson-censors-target-internet-talkers-with-ai-truth-scores/feed/ 0 48750
Supreme Court Justices Express Caution About Entering Section 230 Debate https://broadbandbreakfast.com/2023/02/supreme-court-justices-express-caution-about-entering-into-section-230-debate/?utm_source=rss&utm_medium=rss&utm_campaign=supreme-court-justices-express-caution-about-entering-into-section-230-debate https://broadbandbreakfast.com/2023/02/supreme-court-justices-express-caution-about-entering-into-section-230-debate/#respond Thu, 23 Feb 2023 01:01:27 +0000 https://broadbandbreakfast.com/?p=48728 WASHINGTON, February 22, 2023 — Supreme Court justices expressed broad skepticism about removing liability protections for websites that automatically recommend user-generated content, marking a cautious start to a pair of long-awaited cases involving platform liability for terrorist content.

Gonzalez v. Google, argued on Tuesday, hinges on whether YouTube’s use of recommendation algorithms puts it outside the scope of Section 230, which generally provides platforms with immunity for third-party content.

A separate case involving terrorism and social media, Twitter v. Taamneh, was argued on Wednesday. Although the basic circumstances of the cases are similar — both brought against tech companies by the families of terrorist attack victims — the latter focuses on what constitutes “aiding and abetting” under the Anti-Terrorism Act.

Section 230 arguments central to Gonzalez

Section 230 protections are at the heart of Gonzalez. The provision, one of the few surviving components of the 1996 Communications Decency Act, is credited by many experts with facilitating the internet’s development and enabling its daily workings.

But the plaintiffs in Gonzalez argued that online platforms such as YouTube should be held accountable for actively promoting harmful content.

As oral arguments commenced, Justice Elena Kagan repeatedly raised concerns that weakening Section 230 protections could have a wider impact than intended. “Every time anybody looks at anything on the internet, there is an algorithm involved… everything involves ways of organizing and prioritizing material,” she said.

These organization methods are essential for making platforms user-friendly, argued Lisa Blatt, the attorney representing Google. “There are a billion hours of videos watched each day on YouTube, and 500 hours uploaded every minute,” she said.

Justice Brett Kavanaugh pointed to the inclusion of platforms that “pick, choose, analyze or digest content” in the statutory definition of covered entities. Claiming that YouTube forfeited Section 230 protections by using recommendation algorithms, Kavanaugh said, “would mean that the very thing that makes the website an interactive computer service also means that it loses the protection of 230.”

Eric Schnapper, the attorney representing the plaintiffs, argued that the provision in question was only applicable to software providers and YouTube did not qualify.

Justices concerned about unintended impacts of weakening Section 230

Despite Schnapper’s interpretation of the statute’s intent, Kavanaugh maintained his concerns about altering it. “It seems that you continually want to focus on the precise issue that was going on in 1996, but… to pull back now from the interpretation that’s been in place would create a lot of economic dislocation, would really crash the digital economy,” he said.

Weakening Section 230 could also open the door to “a world of lawsuits,” Kagan predicted. “Really, anytime you have content, you also have these presentational and prioritization choices that can be subject to suit,” she said, pointing to search engines and social media platforms as other services that could be impacted.

Deputy Solicitor General Malcolm Stewart, who primarily sided with the plaintiff, argued that even if such lawsuits were attempted, “they would not be suits that have much likelihood of prevailing.”

Justice Amy Coney Barrett noted that the text of Section 230 explicitly includes users of online platforms in addition to the platforms themselves. If the statute were changed, Barrett questioned, could individual users be held liable for any content that they liked, reposted or otherwise engaged with?

“That’s content you’ve created,” Schnapper replied.

‘Confusion’ about the case and the court’s proper role

Throughout the hearing, several justices expressed confusion at the complexities of the case.

During an extended explanation of YouTube “thumbnails” — which Schnapper described as a “joint creation” because of the platform-provided URLs accompanying user-generated media — Justice Samuel Alito said he was “completely confused by whatever argument you’re making at the present time.”

At another point, Justice Ketanji Brown Jackson said she was “thoroughly confused” by the way that two different questions — whether Google could claim immunity under Section 230 and whether the company aided terrorism — were seemingly being conflated.

Just minutes later, after Stewart presented his argument on behalf of the Justice Department, Justice Clarence Thomas began his line of questioning with, “Well, I’m still confused.”

In addition to frequent references to confusion, multiple justices suggested that some aspects of the case might be better left to Congress.

“I don’t have to accept all of [Google’s] ‘the sky is falling’ stuff to accept… there is a lot of uncertainty about going the way you would have us go, in part just because of the difficulty of drawing lines in this area,” Kagan said. “Isn’t that something for Congress to do, not the court?”

Kavanaugh echoed those concerns, saying that the case would require “a very precise predictive judgment” and expressing uncertainty about whether the court could adequately consider the implications.

But Chief Justice John Roberts seemed equally hesitant to hand off the decision. “The amici suggest that if we wait for Congress to make that choice, the internet will be sunk,” he said.

]]>
https://broadbandbreakfast.com/2023/02/supreme-court-justices-express-caution-about-entering-into-section-230-debate/feed/ 0 48728
Bipartisan Alarm Over Social Media’s Harms to Children Prompts Slew of Proposed Legislation https://broadbandbreakfast.com/2023/02/bipartisan-alarm-over-social-medias-harms-to-children-prompts-slew-of-proposed-legislation/?utm_source=rss&utm_medium=rss&utm_campaign=bipartisan-alarm-over-social-medias-harms-to-children-prompts-slew-of-proposed-legislation https://broadbandbreakfast.com/2023/02/bipartisan-alarm-over-social-medias-harms-to-children-prompts-slew-of-proposed-legislation/#respond Mon, 20 Feb 2023 13:40:17 +0000 https://broadbandbreakfast.com/?p=48684 WASHINGTON, February 20, 2023 — Senators from both sides of the aisle came together on Tuesday to condemn social media platforms’ failure to protect underage users, demonstrating bipartisan collaboration and underscoring a trend of increased government scrutiny toward tech companies.

The Judiciary Committee hearing included discussion of several bills aimed at protecting children online, such as the Kids Online Safety Act, a measure that would create a “duty of care” requirement for platforms to shield children from harmful content. KOSA gained significant bipartisan traction during the previous session of Congress but ultimately failed to pass.

The bill’s co-sponsors — Sens. Richard Blumenthal, D-Conn., and Marsha Blackburn, R-Tenn. — emphasized the urgency of congressional action, pointing to research published Feb. 13 by the Centers for Disease Control and Prevention that showed a sharp increase in youth mental health challenges, particularly among girls and LGBTQ teens.

“It’s a public health emergency egregiously and knowingly exacerbated by Big Tech, aggravated by toxic content on eating disorders, bullying, even suicide — driven by Big Tech’s black box algorithms leading children down dark rabbit holes,” Blumenthal said.

In addition to social media’s impact on mental health, several senators focused on the issue of digital child sexual exploitation. Judiciary Committee Chair Dick Durbin, D-Ill., announced that he would be circulating the draft of a bill aimed at stopping the spread of online child sex abuse material by strengthening victim protection measures and platform reporting requirements. Sen. Lindsey Graham, R-S.C., said he was working with Sen. Elizabeth Warren, D-Mass., on a bill that would create a regulatory commission with the power to shut down digital platforms that failed to implement “best business practices to protect children from sexual exploitation online.”

Graham, the top Republican on the committee, added that he and Warren “have pretty divergent opinions except here — we have to do something, and the sooner the better.”

Bipartisan collaboration was a theme throughout the discussion. “I don’t know if any or all of you realize what you witnessed today, but this Judiciary Committee crosses the political spectrum — not just from Democrats to Republicans, but from real progressives to real conservatives — and what you heard was the unanimity of purpose,” Durbin said toward the end of the hearing.

Broad agreement on repealing Section 230, but not on its replacement

Some of the proposed social media bills discussed Tuesday would directly address the question of online platform immunity for third-party content. Several senators advocated for the EARN IT Act, which would assign platforms more responsibility for finding and removing child sexual abuse material — taking “a meaningful step toward reforming this unconscionably excessive Section 230 shield to Big Tech accountability,” Blumenthal argued.

The senators and witnesses who spoke at Tuesday’s hearing were largely united against Section 230. Witness Kristen Bride — whose son died by suicide after becoming the target of anonymous cyberbullying — said that her lawsuit against the anonymous messaging apps was dismissed based on Section 230 immunity.

“I think it is just absolutely vital that we change the law to allow suits like yours to go forward,” Sen. Josh Hawley, R-Mo., told Bride. “And if that means we have to repeal all of Section 230, I’m fine with it.”

However, Sen. Sheldon Whitehouse, D-R.I., noted that the primary barrier to Section 230 reform is disagreement over what should take its place. “I would be prepared to make a bet that if we took a vote on a plain Section 230 repeal, it would clear this committee with virtually every vote,” he said.

The Supreme Court is scheduled to hear a Section 230 case — Gonzalez v. Google — on Tuesday.

Other bills aim to protect kids online through age limits, privacy measures

Beyond the bills discussed at the hearing, several senators have recently proposed legislation aimed at protecting children’s online safety from several different angles.

On Tuesday, Hawley introduced a bill that would enforce a minimum age requirement of 16 for all users of social media platforms, as well as a bill that would commission a report on social media’s effects on underage users.

The former proposal, known as the MATURE Act, would require that users upload an image of government-issued identification in order to make an account on a social media platform, which has raised concerns among digital privacy advocates about the extent of personal data collection required.

Personal data collection was the subject of a different bill introduced the same week by Sen. Mazie Hirono, D-Hawaii, alongside Durbin and Blumenthal. The proposed Clean Slate for Kids Online Act would update the Children’s Online Privacy Protection Act of 1998 by giving individuals the right to demand that internet companies delete all personal information collected about them before the age of 13.

Discussion on the matter comes against the backdrop of several developments over the past year and a half, including state attorneys general investigating TikTok’s impact on kids and whistleblower testimony alleging that Facebook knew its photo-sharing app Instagram harmed kids’ mental health but failed to act.

]]>
https://broadbandbreakfast.com/2023/02/bipartisan-alarm-over-social-medias-harms-to-children-prompts-slew-of-proposed-legislation/feed/ 0 48684
Growing Lineup of Divergent Speakers, Panelists and Sponsors at Big Tech & Speech Summit https://broadbandbreakfast.com/2023/02/growing-lineup-of-divergent-speakers-panelists-and-sponsors-at-big-tech-speech-summit/?utm_source=rss&utm_medium=rss&utm_campaign=growing-lineup-of-divergent-speakers-panelists-and-sponsors-at-big-tech-speech-summit https://broadbandbreakfast.com/2023/02/growing-lineup-of-divergent-speakers-panelists-and-sponsors-at-big-tech-speech-summit/#respond Tue, 14 Feb 2023 17:59:03 +0000 https://broadbandbreakfast.com/?p=48479 WASHINGTON, February 14, 2023 — The future of online speech and technological innovation in the United States is very much in flux. Everyone wants to be on the side of “free speech.” But online platforms are increasingly viewed as a source of real-world harms and a threat to individual privacy.

That’s why now — during a two-day flash sale – is the best time to register to attend Broadband Breakfast’s upcoming Big Tech & Speech Summit. The event will take place from 8:30 a.m. to 3:30 p.m. on Thursday, March 9, 2023 in Washington, D.C.

Broadband Breakfast is excited to announce its first round of speakers, panelists and sponsors, showcasing the diversity of opinion represented at this must-attend event. 

Panelists will include Amy Peikoff, head of policy and legal for Parler, and Steve DelBianco, president and CEO of NetChoice, as well as experts from the Center for Democracy & Technology, American Enterprise Institute, Future of Privacy Forum, Electronic Frontier Foundation and more.

The event will also feature addresses from Eli Noam, director of the Columbia Institute for Tele-Information, and John Samples, vice president of the Cato Institute and a member of Facebook’s Oversight Board.

Sponsors of the event include both NetChoice and Texas.net, Inc., two organizations that take highly divergent approaches to questions of Section 230 and tech platforms’ approach to free speech.

Take advantage of the ‘flash sale’ and get access to the Breakfast Club

Those who register for the Big Tech & Speech Summit by February 15 at 11:59 p.m. PST will be able to purchase a day-long pass to the summit — including breakfast and lunch — for only $99. That’s just a third of the full $299 registration cost.

Additionally, registered attendees (including “flash sale” registrants) will receive two months’ access to the Breakfast Club, a $198 value.

The Breakfast Club is Broadband Breakfast’s membership service for tech industry professionals, offering premium content, in-depth reports, group coaching and more. Membership also includes exclusive unlimited access to video recordings of the Big Tech & Speech Summit, as well as past and future conferences.

About the Big Tech & Speech Summit

In a January 11 op-ed, President Joe Biden made the case that “The risks Big Tech poses for ordinary Americans are clear,” and alleged widespread harms including cyberstalking, child sexual exploitation, worsening mental health and “toxic online echo chambers.” The President presented a three-fold challenge to Big Tech on Section 230, privacy and competition.

Broadband Breakfast’s Big Tech & Speech Summit will tackle all three of these subjects, beginning with an introductory Panel 1 on the big picture involving Big Tech. This will be followed by sessions on Section 230, privacy and competition.

The agenda as of February 14 is below. Visit the Big Tech & Speech Summit page for the updated program, speakers and sponsors.

Welcome and Introduction (8:30 a.m.) – Drew Clark, Editor and Publisher, Broadband Breakfast

PANEL 1: THE BIG PICTURE FOR BIG TECH (9:30 a.m.)

  • Ellery Roberts Biddle, Senior Editor, Coda Media
  • Amy Peikoff, Head of Policy and Legal, Parler
  • Steve DelBianco, President & CEO, NetChoice
  • Willmary Escoto, U.S. Policy Analyst, Access Now
  • Others have been invited

Mini Keynotes

  • Eli Noam, Director, Columbia Institute for Tele-Information
  • John Samples, Vice President, Cato Institute and Member of Facebook’s Oversight Board

PANEL 2: THE FRAGILITY OF SECTION 230 (10:45 a.m.)

  • Cathy Gellis, Attorney
  • Ashley Johnson, Senior Policy Analyst, Information Technology and Innovation Foundation
  • Emma Llansó, Director, Free Expression Project, Center for Democracy & Technology
  • Ron Yokubaitis, Founder, Texas.net, Inc.
  • Others have been invited

PANEL 3: REGULATING DATA PRIVACY (1:00 p.m.)

  • John Verdi, Senior Vice President of Policy, Future of Privacy Forum (moderator)
  • Shane Tews, Nonresident Senior Fellow, American Enterprise Institute
  • India McKinney, Director of Federal Affairs, Electronic Frontier Foundation
  • Others have been invited

PANEL 4: INNOVATION, COMPETITION AND FUTURE TECH (2:15 p.m.)

  • Adam Kovacevich, CEO, Chamber of Progress
  • Christine Bannan, U.S. Public Policy Manager, Proton
  • Berin Szóka, President, TechFreedom Foundation
  • Others have been invited

Full bios of the confirmed speakers, panelists and sponsors are available on the Big Tech & Speech Summit page.

]]>
https://broadbandbreakfast.com/2023/02/growing-lineup-of-divergent-speakers-panelists-and-sponsors-at-big-tech-speech-summit/feed/ 0 48479
Jim Jordan Demands Social Media Documents from Biden Administration https://broadbandbreakfast.com/2023/02/jim-jordan-demands-social-media-documents-from-biden-administration/?utm_source=rss&utm_medium=rss&utm_campaign=jim-jordan-demands-social-media-documents-from-biden-administration https://broadbandbreakfast.com/2023/02/jim-jordan-demands-social-media-documents-from-biden-administration/#respond Thu, 09 Feb 2023 02:30:47 +0000 https://broadbandbreakfast.com/?p=48391 WASHINGTON, February 8, 2023 — House Judiciary Chairman Jim Jordan, R-Ohio, on Wednesday asked the Department of Justice to provide copies of all documents that have been produced in an ongoing lawsuit over alleged government collusion with social media companies.

“Congress has an important interest in protecting and advancing fundamental free speech principles, including by examining how the Executive Branch coordinates with or coerces private actors to suppress First Amendment-protected speech,” Jordan wrote in a letter to Brian Boynton, the principal deputy assistant attorney general in the civil division.

The attorneys general of Missouri and Louisiana filed suit against President Joe Biden and other government officials in May 2022, claiming that the administration had worked with tech companies to “censor free speech and propagandize the masses.”

Other officials named in the lawsuit include former White House Press Secretary Jen Psaki, U.S. Surgeon General Vivek Murthy and former Chief Medical Advisor Anthony Fauci. The suit also names the Department of Homeland Security and the Centers for Disease Control and Prevention, among other individuals and agencies.

Missouri Attorney General Andrew Bailey in January released a series of emails between White House officials and social media companies, arguing that they proved the Biden administration had been attempting to “censor opposing viewpoints on major social media platforms.”

Jordan requested that all other documents produced by the Department of Justice as part of the litigation be provided to the Judiciary Committee no later than Feb. 22.

“As Congress continues to examine how to best protect Americans’ fundamental freedoms, the documents discovered and produced during the Missouri v. Biden litigation are necessary to assist Congress in understanding the problem and evaluating potential legislative reforms,” he wrote.

Jordan is at the forefront of growing Republican hostility toward tech companies. In January, he listed “reining in Big Tech’s censorship of free speech” as a key issue to be addressed by the House Judiciary Committee during the coming year.

And in December, Jordan sent letters to the heads of Apple, Amazon, Alphabet, Meta and Microsoft to “request more information about the nature and extent of your companies’ collusion with the Biden Administration.”

]]>
https://broadbandbreakfast.com/2023/02/jim-jordan-demands-social-media-documents-from-biden-administration/feed/ 0 48391