Artificial Intelligence – Broadband Breakfast (https://broadbandbreakfast.com) – Better Broadband, Better Lives

CES 2024: Senators Talk Priorities on AI, Broadband Connectivity
https://broadbandbreakfast.com/2024/01/ces-2024-senators-talk-priorities-on-ai-broadband-connectivity/
Fri, 12 Jan 2024

LAS VEGAS, January 12, 2024 – U.S. senators highlighted their tech policy priorities on artificial intelligence and broadband connectivity at CES on Friday.

Sens. Ben Ray Luján, D-New Mexico; Cynthia Lummis, R-Wyoming; and John Hickenlooper, D-Colorado, sat on a panel moderated by Sen. Jacky Rosen, D-Nevada.

Promise and perils of AI

The lawmakers highlighted their focus on mitigating the potential risks of implementing AI. 

Hickenlooper touted the AI Research, Innovation and Accountability Act, which he introduced in November with Luján and other members of the Senate Commerce, Science and Transportation Committee.

That bill would require businesses deploying AI in relation to critical infrastructure operation, biometric data collection, criminal justice, and other “critical-impact” uses to submit risk assessments to the Commerce Department. The National Institute of Standards and Technology, housed in the department, would be tasked with developing standards for authenticating human- and AI-generated content online.

“AI is everywhere,” Hickenlooper said. “And every application comes with incredible opportunity, but also remarkable risks.”

Connectivity

Luján and Rosen expressed support for recent legislation introduced to extend the Affordable Connectivity Program. The fund, which provides a $30 monthly internet subsidy to 23 million low-income households, is set to dry up in April 2024 without more money from Congress.

The ACP Extension Act would provide $7 billion to keep the program afloat through 2024. It was first stood up with $14 billion from the Infrastructure Act in late 2021. 

“There are a lot of us working together,” Luján said, to keep the program alive for “people across America who could not connect, not because they didn’t have a connection to their home or business, but because they couldn’t afford it.”

Lawmakers, advocates, the Biden administration, and industry groups have been calling for months for additional funding, but the bill faces an uncertain future as House Republicans look to cut back on domestic spending.

Luján also stressed the need to reinstate the Federal Communications Commission’s spectrum auction authority.

“I’m ashamed to say it’s lapsed, but we need to get this done,” he said.

The Commission’s authority to auction off and issue licenses for the commercial use of electromagnetic spectrum expired for the first time in March 2023 after Congress failed to renew it. A stopgap law permitting the agency to issue already purchased licenses passed in December, but efforts at blanket reauthorization have stalled.

12 Days: Is ChatGPT Artificial General Intelligence or Not?
https://broadbandbreakfast.com/2023/12/12-days-is-chatgpt-artificial-general-intelligence-or-not/
Thu, 21 Dec 2023

December 21, 2023 – Just over one year ago, most people in the technology and internet world would talk about passing the Turing test as if it were something far in the future.

This “test,” originally called the imitation game by computer scientist Alan Turing in 1950, is a hypothetical test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

The year 2023 – and the explosive economic, technological, and societal force unleashed by OpenAI since the release of its ChatGPT on November 30, 2022 – makes those days only 13 months ago seem quaint.

For example, users of large language models like ChatGPT, Anthropic’s Claude, Meta’s Llama and many others interact daily with machines as if they were simply very smart humans.

Yes, yes, informed users understand that chatbots like these are simply using neural networks with very powerful predictive algorithms to come up with the probabilistic “next word” in a sequence begun by the questioner’s inquiry. And, yes, users understand the propensity of such machines to “hallucinate” information that isn’t quite accurate, or isn’t accurate at all.

Which makes the chatbots seem, well, a little bit more human.
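The “next word” mechanism described above can be sketched in a few lines. The words and scores below are invented stand-ins for a real model’s vocabulary and logits; an actual language model computes scores over tens of thousands of tokens with billions of parameters.

```python
import math

def softmax(logits):
    # Turn raw model scores into a probability distribution over words.
    m = max(logits.values())
    exps = {w: math.exp(s - m) for w, s in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical scores for candidate next words after "The cat sat on the".
logits = {"mat": 4.0, "roof": 2.5, "keyboard": 1.0}
probs = softmax(logits)

# A greedy chatbot would emit the most probable word; real systems
# usually sample from the distribution, which is one source of variety
# (and of hallucination).
next_word = max(probs, key=probs.get)
```

The model never “knows” facts in this loop; it only ranks continuations, which is why a fluent answer can still be wrong.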

Drama at OpenAI

At a Broadband Breakfast Live Online event on November 22, 2023, marking the one-year anniversary of ChatGPT’s public launch, our expert panelists focused on the regulatory uncertainty bequeathed by a much-accelerated form of artificial intelligence.

The event took place days after Sam Altman, CEO of OpenAI, was fired – before rejoining the company that Wednesday with a new board of directors. The board members who forced Altman out (all since replaced, except one) had clashed with him over the company’s safety efforts.

More than 700 OpenAI employees then signed a letter threatening to quit if the board did not agree to resign.

In the backdrop, in other words, there was a policy angle to the corporate boardroom battle that was itself one of the big tech stories of the year.

“This [was] accelerationism versus de-celerationism,” said Adam Thierer, a senior fellow at the R Street Institute, during the event.

Washington and the FCC wake up to AI

And it’s not that Washington is closing its eyes to the potentially life-altering – literally – consequences of artificial intelligence.

In October, the Biden administration issued an executive order on AI safety that includes measures aimed at both ensuring safety and spurring innovation, with directives for federal agencies to develop safety and AI identification standards, as well as grants for researchers and small businesses looking to use the technology.

But it’s not clear which side legislators on Capitol Hill might take in the future.

One notable application of AI in telecom highlighted by FCC chief Jessica Rosenworcel is AI-driven spectrum sharing optimization. Rosenworcel said in a July hearing that AI-enabled radios could collaborate autonomously, enhancing spectrum use without a central authority, an advancement poised for implementation.

AI’s potential contribution to enhancing broadband mapping efforts was explored in a November House hearing. The FCC had initially regarded AI as having strong potential for aiding broadband mapping, but it faced skepticism from experts who argued that in rural areas, where data is scarce and of inferior quality, machine learning would struggle to identify potential inaccuracies.

Also in November, the FCC voted to launch a formal inquiry into the potential impact of AI on robocalls and robotexts. The agency believes illegal robocalls can be addressed with AI that flags calling patterns deemed suspicious and analyzes voice biometrics for synthesized voices.
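The FCC’s inquiry doesn’t specify an algorithm, but the kind of pattern-flagging described above can be illustrated with a toy example: flag originating numbers whose outbound call volume in a short window far exceeds the norm. The numbers and threshold here are invented for illustration; real carrier systems would use trained statistical models over many more features.

```python
from collections import Counter

def flag_suspicious(call_log, threshold=100):
    # Count outbound calls per originating number and flag heavy hitters,
    # a crude stand-in for the pattern analysis a carrier might deploy.
    counts = Counter(number for number, _ in call_log)
    return sorted(n for n, c in counts.items() if c >= threshold)

# Toy log of (originating number, destination) pairs:
# one number blasts 150 calls, two others make a single call each.
log = [("555-0100", f"555-{i:04d}") for i in range(150)]
log += [("555-0199", "555-0001"), ("555-0198", "555-0002")]

suspects = flag_suspicious(log)
```

Voice-biometric analysis for synthesized speech is a much harder problem, but the flagging logic follows the same shape: score a behavior, compare against a baseline, escalate outliers.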

But isn’t ChatGPT a form of artificial general intelligence?

As we’ve learned through an intensive focus on AI over the course of the year, somewhere still beyond passing the Turing test lies the much-discussed concept of “artificial general intelligence.” That presumably means something a little bit smarter than ChatGPT-4.

Previously, OpenAI had defined AGI as “AI systems that are generally smarter than humans.” But apparently sometime recently, the company redefined this to mean “a highly autonomous system that outperforms humans at most economically valuable work.”

Some, including Rumman Chowdhury, CEO of the tech accountability nonprofit Humane Intelligence, argue that by framing AGI in economic terms, OpenAI recast its mission as building things to sell, a far cry from its original vision of using intelligent AI systems to benefit all.

AGI, as ChatGPT-4 told this reporter, “refers to a machine’s ability to understand, learn, and apply its intelligence to solve any problem, much like a human being. ChatGPT, while advanced, is limited to tasks within the scope of its training and programming. It excels in language-based tasks but does not possess the broad, adaptable intelligence that AGI implies.”

That sounds like something an AGI-capable machine would very much want the world to believe.

Additional reporting on this story provided by reporter Jericho Casper.

See “The Twelve Days of Broadband” on Broadband Breakfast

Sam Altman to Rejoin OpenAI, Tech CEOs Subpoenaed, EFF Warns About Malware
https://broadbandbreakfast.com/2023/11/sam-altman-to-rejoin-openai-tech-ceos-subpoenaed-eff-warns-about-malware/
Wed, 22 Nov 2023

November 22, 2023 – OpenAI announced in an X post early Wednesday morning that Sam Altman will be rejoining the company that built ChatGPT as CEO, after he was fired on Friday.

Altman confirmed his intention to rejoin OpenAI in an X post Wednesday morning, saying that he was looking forward to returning to OpenAI with support from the new board.

Former company president Greg Brockman also said Wednesday he will return to the AI company.

Altman and Brockman will return under a newly formed board, which includes former Salesforce co-CEO Bret Taylor as chair, former U.S. Treasury Secretary Larry Summers, and Quora CEO Adam D’Angelo, who previously held a position on the OpenAI board.

Satya Nadella, the CEO of OpenAI backer Microsoft, echoed support for both Brockman and Altman rejoining OpenAI, adding that he is looking forward to continuing building a relationship with the OpenAI team in order to best deliver AI services to customers. 

OpenAI had received backlash from several hundred employees who threatened to leave and join Microsoft under Altman and Brockman unless the board of directors agreed to resign.

Tech CEOs subpoenaed to attend hearing

Sens. Dick Durbin, D-Illinois, and Lindsey Graham, R-South Carolina, announced Monday that tech giants Snap, Discord and X have been issued subpoenas for their appearance at the Senate Judiciary Committee on December 6 in relation to concerns over child sexual exploitation online. 

Snap CEO Evan Spiegel, X CEO Linda Yaccarino and Discord CEO Jason Citron have been asked to address how or if they’ve worked to confront that issue. 

Durbin said in a press release that the committee “promised Big Tech that they’d have their chance to explain their failures to protect kids. Now’s that chance. Hearing from the CEOs of some of the world’s largest social media companies will help inform the Committee’s efforts to address the crisis of online child sexual exploitation.” 

Durbin noted in the press release that both X and Discord initially refused to accept the subpoenas, requiring the U.S. Marshals Service to personally deliver the documents.

The committee is looking to have Meta CEO Mark Zuckerberg and TikTok CEO Shou Zi Chew testify as well but has not received confirmation regarding their attendance.

Several bipartisan bills have been brought forth to address that kind of exploitation, including the Earn It Act, proposed by Sens. Richard Blumenthal, D-Connecticut, and Graham, which would hold platforms liable under child sexual abuse material laws.

EFF urging FTC to sanction sellers of malware-containing devices

The Electronic Frontier Foundation, a non-profit digital rights group, asked the Federal Trade Commission in a November 14 letter to sanction resellers like Amazon and AliExpress following allegations that mobile devices and Android TV boxes purchased from their stores contain malware.

The letter explained that once the devices were turned on and connected to the internet, they would begin “communicating with botnet command and control (C2) servers. From there, these devices connect to a vast click-fraud network which a report by HUMAN Security recently dubbed BADBOX.”

The EFF added that this malware often operates unbeknownst to the consumer, and that without advanced technical knowledge there is nothing consumers can do to remedy it themselves.

“These devices put buyers at risk not only by the click-fraud they routinely take part in, but also the fact that they facilitate using the buyers’ internet connections as proxies for the malware manufacturers or those they sell access to,” explained the letter. 

EFF said the devices containing malware included ones manufactured by Chinese companies Allwinner and RockChip, which the EFF has previously reported shipping products with malware.

Sam Altman to Join Microsoft, New FCC Broadband Map, Providers Form 4.9 GHz Coalition
https://broadbandbreakfast.com/2023/11/sam-altman-to-join-microsoft-new-fcc-broadband-map-providers-form-4-9-ghz-coalition/
Mon, 20 Nov 2023

November 20, 2023 – Microsoft CEO Satya Nadella announced in an X post Monday that former OpenAI CEO Sam Altman will be joining Microsoft after being fired from the machine learning company.

Over the course of the last four days, OpenAI has undergone several shifts in leadership, including OpenAI investor Microsoft hiring OpenAI president and chairman Greg Brockman to lead an AI research team alongside Altman.

Brockman, who had concurrently been removed from his role as chairman of the OpenAI board, announced his resignation Friday via X upon learning that the board had decided to fire Altman.

OpenAI said in a blog post Friday that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

OpenAI then told The Information on Saturday that Emmett Shear, co-founder of the streaming site Twitch, would serve as CEO, after CTO Mira Murati had held the role in the interim.

Following Nadella’s announcement Monday morning, nearly 500 of OpenAI’s roughly 700 employees signed a letter threatening to leave and work under Altman and Brockman at Microsoft unless all of the current board members resigned.

On Monday, OpenAI board member Ilya Sutskever posted a message of regret on X regarding the board’s decision to remove Altman and Brockman. The phrase “OpenAI is nothing without its people” began appearing on employees’ X accounts.

FCC announces new national broadband map

The head of the Federal Communications Commission announced Friday the third iteration of its national broadband map, showing just over 7.2 million locations lack access to high-speed internet.

That is down from the 8.3 million identified in May.

FCC Chairwoman Jessica Rosenworcel noted that the map data are fluctuating less between iterations, a sign of improving map accuracy.

Previous iterations of the national broadband map had been criticized for not accurately depicting areas with and without service, with widespread concern that inaccuracies would impact the allocation of Broadband Equity, Access and Deployment funding.

The map outlines where adequate broadband service is and is not available throughout the nation and provides viewers with information on the providers who service those areas and the technology used to do so. 

Providers form spectrum advocacy coalition 

A group of telecom industry players including Verizon and T-Mobile announced Thursday the formation of the Coalition for Emergency Response and Critical Infrastructure to advocate for select use of the 4.9 gigahertz (GHz) spectrum band.

The coalition supports prioritizing state and local public safety agencies as the main users of the 4.9 GHz band, while ensuring that non-public safety licensees operating on the band avoid causing interference.

“Public Safety agencies have vastly different needs from jurisdiction to jurisdiction, and they should decide what compatible non-public-safety use means within their jurisdictions,” read the coalition’s letter.  

In January of this year, the FCC adopted a report to manage the use of the 4.9 GHz band, while seeking comment on the role a band manager would play in facilitating license allocation between public safety and non-public safety entities. 

It had proposed two methods of operation for the band manager: either lease access rights from public-safety entities and sublease them to non-public safety entities, or facilitate direct subleasing between public safety operators and external parties.

In its letter to the FCC, the coalition announced support for the second of those methods, stressing that it would allow public safety license holders to retain authority over whom they sublease their spectrum to.

FCC Cybersecurity Pilot Program, YouTube AI Regulations, Infrastructure Act Anniversary
https://broadbandbreakfast.com/2023/11/fcc-cybersecurity-pilot-program-youtube-ai-regulations-infrastructure-act-anniversary/
Wed, 15 Nov 2023

November 15, 2023 – The Federal Communications Commission proposed Monday a cybersecurity pilot program for schools and libraries that would invest $200 million over three years in ways to best protect K-12 students from cyberattacks.

In addition to assessing what kinds of cybersecurity services are best suited to students’ and schools’ needs, the program would subsidize the cost of those services in schools.

The program would stand as a new Universal Service Fund program, separate from E-Rate, the existing school internet subsidy program.

“This pilot program is an important pathway for hardening our defenses against sophisticated cyberattacks on schools and ransomware attacks that harm our students and get in the way of their learning,” said FCC Chairwoman Jessica Rosenworcel.

The proposal would be part of the larger Learn Without Limits initiative, which supports internet connectivity in schools to help close the homework gap by enabling kids’ access to digital learning.

YouTube rolling out AI content regulations 

Alphabet’s video sharing platform YouTube announced in a blog post Tuesday that it will roll out AI guidelines over the next few months to inform viewers when they are interacting with “synthetic” or AI-generated content.

The rules will require creators to disclose when a video contains AI-generated content. Creators who don’t disclose that information could see their work flagged and removed, and they may be suspended from the platform or subject to other penalties.

For viewers, tags will appear in the description panel of videos indicating whether the video is synthetic or AI-generated. YouTube noted that for videos dealing with more sensitive topics, it may use more prominent labels.

YouTube’s AI guidelines come at a time when members of Congress and industry leaders are calling for increased effort toward AI regulatory reform, and shortly after President Joe Biden signed an executive order on AI in October.

Two-year anniversary of the Infrastructure Investment and Jobs Act

Thursday marked the second anniversary of the Infrastructure Investment and Jobs Act, which prompted a $400-billion investment into the US economy.

The IIJA funded a variety of programs and initiatives, with over 40,000 sector-specific projects having received funding, several of them working to improve the broadband sector.

The IIJA invested $65 billion in improving connectivity, helping to establish the $14-billion Affordable Connectivity Program, which has so far helped more than 20 million US households get affordable internet through a $30 monthly subsidy ($75 on tribal lands).

Outside of the ACP, the IIJA called on the National Telecommunications and Information Administration to develop the Broadband Equity, Access and Deployment program, a $42.5-billion investment in high-speed broadband deployment across all 50 states.

Currently, states are in the process of submitting their BEAD draft proposals, which outline how each state will administer the funding it receives, account for funding it already has, and use broadband mapping data.

Will Rinehart: Unpacking the Executive Order on Artificial Intelligence
https://broadbandbreakfast.com/2023/11/will-rinehart-unpacking-the-executive-order-on-artificial-intelligence/
Wed, 15 Nov 2023

If police are working on an investigation and want to tap your phone lines, they’ll effectively need to get a warrant. They will also need to get a warrant to search your home, your business, and your mail.

But if they want to access your email, all they need to do is wait 180 days.

Because of a 1986 law called the Electronic Communications Privacy Act, people using third-party email providers, like Gmail, only get 180 days of warrant protection. It’s an odd quirk of the law that only exists because no one in 1986 could imagine holding onto emails longer than 180 days. There simply wasn’t space for it back then!¹
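The 180-day cutoff is simple arithmetic: from the date a message lands in a third-party inbox, ECPA’s warrant protection lapses 180 days later. A sketch, with an arbitrary example date:

```python
from datetime import date, timedelta

ECPA_WINDOW = timedelta(days=180)

def warrant_protection_lapses(received: date) -> date:
    # Day on which a stored email can be obtained without a warrant
    # under the 1986 ECPA's 180-day stored-communications rule.
    return received + ECPA_WINDOW

# An email received on New Year's Day 2023 loses warrant protection
# at the end of June of the same year.
lapse = warrant_protection_lapses(date(2023, 1, 1))  # date(2023, 6, 30)
```

Every message in a years-old Gmail archive is far past that window, which is exactly the mismatch between 1986 assumptions and modern storage habits.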

ECPA is a stark illustration of a consistent phenomenon in government: policy choices, especially technical requirements, have durable and long-lasting effects. There are more mundane examples as well. GPS could be dramatically more accurate, but when the optical system was recently upgraded, it was held back by a technical requirement in the Federal Enterprise Architecture Framework (FEAF) of 1999. More accurate headlights have been shown to reduce night crashes, yet adaptive headlights were only approved last year, nearly 16 years after Europe, because of technical requirements in FMVSS 108. All it takes is one law or regulation to crystallize an idea into an enduring framework that fails to keep up with developments.

I fear the approach pushed by the White House in its recent executive order on AI might represent another crystallization moment. ChatGPT has been public for a year, the models on which it is based are only five years old, and yet the administration is already working to set the terms for regulation.

The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” is sprawling. It spans 13 sections, extends over 100 pages, and lays out nearly 100 deliverables for every major agency. While there are praiseworthy elements to the document, there is also a lot of cause for concern.

Among the biggest changes is the new authority the White House has claimed over newly designated “dual use foundation models.” As the EO defines it, a dual-use foundation model is

  • an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.

While the designation seems to be common sense, it is new and without provenance: until last week, no one had talked about dual-use foundation models. The designation does, however, comport with the power the president has over the export of military tech.

As the EO explains it, the administration is especially interested in those models with the potential to

  • lower the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear weapons;
  • enable powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
  • permit the evasion of human control or oversight through means of deception or obfuscation

The White House is justifying its regulation of these models under the Defense Production Act, a federal law first enacted in 1950 to respond to the Korean War. Modeled after World War II’s War Powers Acts, the DPA was part of a broad civil defense and war mobilization effort that gave the President the power to requisition materials and property, expand government and private defense production capacity, ration consumer goods, and fix wage and price ceilings, among other powers.

The DPA is reauthorized every five years, which has allowed Congress to expand the set of presidential powers in the DPA. Today, the allowable use of DPA extends far beyond U.S. military preparedness and includes domestic preparedness, response, and recovery from hazards, terrorist attacks, and other national emergencies. The DPA has long been intended to address market failures and slow procurement processes in times of crisis. Now the Biden Administration is using DPA to force companies to open up their AI models.

The administration’s invocation of the Defense Production Act is clearly a strategic maneuver to utilize the maximum extent of its DPA power in service of Biden’s AI policy agenda. The difficult part of this process now sits with the Department of Commerce, which has 90 days to issue regulations.

In turn, the Department will likely use the DPA’s industrial base assessment power to force companies to disclose various aspects of their AI models. Soon enough, makers of dual-use foundation models will have to report test results to the government based on guidance developed by the National Institute of Standards and Technology (NIST). But that guidance won’t be available for another 270 days. In other words, Commerce will regulate companies without knowing what they will be beholden to.
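The mismatch is easy to see by computing both deadlines from the EO’s October 30, 2023 signing date (assuming, as the EO’s deliverable deadlines generally do, that both clocks start at issuance):

```python
from datetime import date, timedelta

EO_SIGNED = date(2023, 10, 30)  # "Safe, Secure, and Trustworthy" AI EO issued

commerce_regs_due = EO_SIGNED + timedelta(days=90)   # late January 2024
nist_guidance_due = EO_SIGNED + timedelta(days=270)  # late July 2024

# The reporting rules come due roughly six months before the NIST
# guidance they are supposed to rest on.
gap = nist_guidance_due - commerce_regs_due
```

Whatever the exact dates, the 90-day rulemaking clock runs out long before the 270-day guidance clock does, which is the crux of the sequencing problem.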

Recent news from the United Kingdom suggests that all of the major players in AI are going to be included in the new regulation. In closing out a two-day summit on AI, British Prime Minister Rishi Sunak announced that eight companies would give deeper access to their models under an agreement signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, Korea, Singapore, the U.S. and the U.K. Those eight companies were Amazon Web Services, Anthropic, Google (as well as its subsidiary DeepMind), Inflection AI, Meta, Microsoft, Mistral AI, and OpenAI.

Thankfully, the administration isn’t pushing for a pause on AI development, isn’t denouncing more advanced models, and isn’t suggesting that AI needs to be licensed. But this is probably because doing so would face a tough legal challenge. Indeed, it seems little appreciated by the AI community that the demand to report on models is a kind of compelled speech, which has typically triggered First Amendment scrutiny. But the courts have occasionally recognized that compelled commercial speech may actually advance First Amendment interests more than undermine them.

The EO clearly marks a shift in AI regulation because of what will come next. In addition to the countless deliverables, the EO encourages agencies to use their full power to advance rulemaking.

For example, the EO explains that,

  • the Federal Trade Commission is encouraged to consider, as it deems appropriate, whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act, 15 U.S.C. 41 et seq., to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.

Innocuous as it may seem, the Federal Trade Commission, as well as all of the other agencies the administration has encouraged to use their power, could come under court scrutiny. In West Virginia v. EPA, the Supreme Court made it more difficult for agencies to expand their power when it formalized the major questions doctrine. This line of legal reasoning takes an ax to agency delegation: unless there’s explicit, clear-cut authority granted by Congress, an agency cannot regulate a major economic or political issue. Agency efforts to push rules on AI could get caught up in the courts.

To be fair, there are a lot of positive actions that this EO advances.² But details matter, and it will take time for the critical details to emerge.

Meanwhile, we need to be attentive to the creep of power. As Adam Thierer described this catch-22,

  • While there is nothing wrong with federal agencies being encouraged through the EO to use NIST’s AI Risk Management Framework to help guide sensible AI governance standards, it is crucial to recall that the framework is voluntary and meant to be highly flexible and iterative—not an open-ended mandate for widespread algorithmic regulation. The Biden EO appears to empower agencies to gradually convert that voluntary guidance and other amorphous guidelines into a sort of back-door regulatory regime (a process made easier by the lack of congressional action on AI issues).

In all, the EO is a mixed bag that will take time to shake out. On this, my colleague Neil Chilson is right: some of it is good, some is bad, and some is downright ugly.

Still, the path we are currently navigating with the Executive Order on AI parallels the paths of ECPA, GPS, and adaptive headlights. It underscores a fundamental truth about legal decisions: even the technical rules we set today will shape the landscape for years, perhaps decades, to come. As we move forward, we must tread carefully, ensuring that our legal frameworks are adaptable and resilient, capable of evolving alongside the very technologies they seek to regulate.

Will Rinehart is a senior research fellow at the Center for Growth and Opportunity, where he specializes in telecommunication, internet and data policy, with a focus on emerging technologies and innovation. He was formerly the Director of Technology and Innovation Policy at the American Action Forum and before that a research fellow at TechFreedom and the director of operations at the International Center for Law & Economics. This piece originally appeared in the Exformation Newsletter on November 9, 2023, and is reprinted with permission.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

Senators Pitch New Agency for Tech Regulation to Address FTC Shortcomings
https://broadbandbreakfast.com/2023/11/senators-pitch-new-agency-for-tech-regulation-to-address-ftc-shortcomings/
Thu, 02 Nov 2023 17:46:18 +0000

WASHINGTON, November 2, 2023 – Sen. Michael Bennet, D-Colorado, and Sen. Peter Welch, D-Vermont, reiterated at a Brookings event Tuesday the need for the United States to form a new agency to oversee tech regulation.

The senators, alongside former Federal Communications Commission Chairman Tom Wheeler, argued that the government’s approach to regulating AI, social media and big tech does not match the speed at which those industries are changing.

Bennet and Welch both outlined how the Federal Trade Commission and the Department of Justice, two entities that are heavily involved in regulating large tech companies, govern so broadly that they are unable to properly deal with specific cases.

The two added that those agencies lack the tech-specific expertise needed to address key issues.

“Despite their work to enforce existing antitrust and consumer protection laws, they lack the expert staff and resources necessary for robust oversight,” Bennet said in an earlier press release. “Moreover, both bodies are limited by existing statutes to react to case-specific challenges raised by digital platforms, when proactive, long-term rules for the sector are required.”

The conversation comes after the two senators introduced a digital technology regulatory bill in May of 2023 outlining how a new proposed agency would regulate the tech industry in consultation with the FTC and the DOJ.

Their proposed bill would establish a five-person agency to address tech regulation and antitrust cases, as well as protections against dangers like harmful algorithms.

“For far too long, these companies have largely escaped regulatory scrutiny, but that can’t continue. It’s time to establish an independent agency to provide comprehensive oversight of social media companies,” said Welch in the same press release.

Wheeler, who moderated the event, echoed their concerns. His book Techlash argues that innovators drive tech development and that the government follows their lead in regulation.

U.S. and Singapore to Strengthen AI and Tech Partnership
https://broadbandbreakfast.com/2023/10/u-s-and-singapore-to-strengthen-ai-and-tech-partnership/
Fri, 13 Oct 2023 23:02:37 +0000

WASHINGTON, October 13, 2023 – The United States and Singapore announced on Thursday a new partnership to strengthen ties on artificial intelligence and other technological research. The nations launched the initiative, called the Critical Emerging Technology Dialogue, in D.C. on the same day.

Building on a 2022 meeting between U.S. President Joe Biden and Singaporean Prime Minister Lee Hsien Loong, senior officials from both governments – including Deputy Prime Minister Lawrence Wong from Singapore and National Security Advisor Jake Sullivan from the U.S. – met in Washington for discussions on six areas of focus.

Artificial intelligence

The countries intend to launch a joint AI governance group, according to a White House statement. The group would focus on ensuring “safe, trustworthy, and responsible AI innovation,” the statement said.

The Commerce Department’s National Institute of Standards and Technology recently completed an exercise with the Singapore Infocomm Media Development Authority on AI risk management. Both nations are looking to expand on that and collaborate on research into AI security, the statement said.

AI regulation has been a subject of discussion in Washington. Biden announced in September he plans to issue an executive order on the issue by the end of the year, and a group of Congressional Democrats pushed him on Thursday to use their proposed AI Bill of Rights to inform that policy.

Quantum computing

American and Singaporean agencies are planning to collaborate on post-quantum cryptography methods and standards. While current quantum computers are rudimentary, the technology is in theory capable of cracking current encryption methods. 

Biotechnology

The countries plan to convene universities, private and public research institutions, and government agencies on advancing research into gene therapies and delivery systems for those therapies. The nations also expressed an intent to connect their biotechnology startup communities to exchange best practices on scaling, as well as research and development.

Officials also discussed defense technology, data governance, and climate resilience. The next CET Dialogue is planned for 2024 in Singapore.

Still Learning About Artificial Intelligence, Legislators Say Congress Must Act
https://broadbandbreakfast.com/2023/09/still-learning-about-artificial-intelligence-legislators-say-congress-must-act/
Sat, 30 Sep 2023 14:47:03 +0000

WASHINGTON, September 30, 2023 – Although Congress is still learning key aspects of artificial intelligence, senators and representatives speaking at an AI summit on Wednesday said they believed the urgency of the moment required the passage of “some narrow pieces” of legislation.

The same day that Sen. Ed Markey, D-Mass., sent a letter to Meta CEO Mark Zuckerberg urging him to halt the release of AI-powered chatbots the social media giant plans to integrate into its platforms, he also urged the Federal Trade Commission to protect minors from AI-powered software.

Markey, speaking at Politico’s AI and Tech Summit, cited suicide rates among minors using social media and a recent warning from the Surgeon General about social media and adolescent mental health.

“We’re not going to be able to handle devices talking to young people in our society without understanding what the safeguards are going to be,” Markey said.

His message to Big Tech was: “Don’t deploy it until we get the answers to what the safeguards are going to be for the young people in our society.”

Similarly, Sen. Todd Young, R-Indiana, said he believed it was “very likely” that Congress would pass “some narrow pieces” of a regime regulating AI.

“I hope we go wider and consider a host of different legislative proposals because our innovators, our entrepreneurs, our researchers, our national security committee, they all say that we need to act in this space and we continue to lead the way of the world and manage the many risks that are out there around the financial markets,” Young said.

Other legislators proposed specific facets of AI regulation.

Congressman Ted Lieu, D-Calif., proposed a law to prevent AI from autonomously using nuclear weapons. He also suggested a national AI commission.

Such a commission would help create a public record about how and why AI should be regulated, an approach Lieu suggested would be preferable to the closed-door briefings on the topic that Senate Majority Leader Chuck Schumer, D-N.Y., has been hosting with tech giants.

“AI is innovating so quickly that I think it’s important that we have the national AI commission experts,” Lieu said. “There’s quite a lot of legislation to work on that, that can make recommendations from Congress asking what kind of AI we might want to regulate, how we might want to do about doing so and also provide some time for AI to be developed.”

Rep. Jay Obernolte, R-Calif., vice chair of the Congressional Artificial Intelligence Caucus, said Congress is doing a “great job” educating itself on AI, but that a properly defined human-centric framework for legislation is still needed.

“By framework, I don’t mean a bunch of buzzwords flying in close formation, right?” Obernolte said. “What does it mean for AI to be human centered? What role does government have in making sure that they are human centered?”

Companies Must Be Transparent About Their Use of Artificial Intelligence
https://broadbandbreakfast.com/2023/09/companies-must-be-transparent-about-their-use-of-artificial-intelligence/
Wed, 20 Sep 2023 21:34:04 +0000

WASHINGTON, September 20, 2023 – Researchers at an artificial intelligence workshop Tuesday said companies should be transparent about their use of algorithmic AI in things like hiring processes and content writing.

Andrew Bell, a fellow at the New York University Center for Responsible AI, said that making the use of AI known is key to addressing any pitfalls AI might have. 

Algorithmic AI is behind systems like chatbots which can generate texts and answers to questions. It is used in hiring processes to quickly screen resumes or in journalism to write articles. 

According to Bell, ‘algorithmic transparency’ is the idea that “information about decisions made by algorithms should be visible to those who use, regulate, and are affected by the systems that employ those algorithms.”

The need for this kind of transparency comes after events like Amazon’s old AI recruiting tool showing bias against women in the hiring process, or OpenAI, the company that created ChatGPT, being probed by the FTC for generating misinformation.

Incidents like these have brought the topic of regulating AI and making sure it is transparent to the forefront of Senate conversations.

Senate committee hears need for AI regulation

The Senate’s subcommittee on consumer protection on September 12 heard proposals to make AI use more transparent, including disclosing when AI is being used and developing tools to predict and understand the risks associated with different AI models.

Similar transparency methods were mentioned by Bell and his supervisor Julia Stoyanovich, the Director of the Center for Responsible AI at New York University, a research center that explores how AI can be made safe and accessible as the technology evolves. 

According to Bell, a transparency label on algorithmic AI would “[provide] insight into ingredients of an algorithm.” Similar to a nutrition label, a transparency label would identify all the factors that go into algorithmic decision making.  

Data visualization was another option suggested by Bell, one that would require a company to publish a public-facing document explaining how its AI works and how it generates the decisions it produces.

Adding in those disclaimers creates a better ecosystem between AI and AI users, increasing levels of trust between all stakeholders involved, explained Bell.
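Bell’s nutrition-label analogy can be pictured as a short machine-readable disclosure. The sketch below is purely illustrative: every field name and value is our own invention, not drawn from the Algorithm Transparency Playbook.

```python
# Hypothetical transparency label for an AI resume-screening system.
# Field names and values are illustrative, not a published standard.
resume_screener_label = {
    "purpose": "rank job applicants for recruiter review",
    "inputs": ["resume text", "years of experience", "education"],
    "excluded_inputs": ["name", "age", "gender", "home address"],
    "training_data": "historical hiring outcomes, 2015-2022",
    "bias_audit": {"last_run": "2023-06", "result": "passed disparate-impact test"},
    "human_oversight": "a recruiter reviews every automated rejection",
    "contact": "transparency@example.com",  # hypothetical address
}

# Like a nutrition label, the point is that every ingredient of the
# decision is visible to users, regulators, and affected applicants.
print(resume_screener_label["excluded_inputs"])
```

Publishing such a label would not by itself prevent bias, but it tells applicants and auditors what the system claims to ignore, and gives them something concrete to check it against.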

Bell and his supervisor built their workshop around an Algorithm Transparency Playbook, a document they published that has straightforward guidelines on why transparency is important and ways companies can go about it. 

However, tech lobbying groups like the Computer and Communications Industry Association, which represents Big Tech companies, have spoken out in the past against Senate regulation of AI, claiming it could stifle innovation.

Congress Should Mandate AI Guidelines for Transparency and Labeling, Say Witnesses
https://broadbandbreakfast.com/2023/09/congress-should-mandate-ai-guidelines-for-transparency-and-labeling-say-witnesses/
Wed, 13 Sep 2023 00:20:12 +0000

WASHINGTON, September 12, 2023 – The United States should enact legislation mandating transparency from companies making and using artificial intelligence models, experts told the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security on Tuesday.

It was one of two AI policy hearings on the Hill Tuesday, alongside a Senate Judiciary Committee hearing, as well as a meeting of the National AI Advisory Committee, an executive branch body.

The Senate Commerce subcommittee asked witnesses how AI-specific regulations should be implemented and what lawmakers should keep in mind when drafting potential legislation. 

“The unwillingness of leading vendors to disclose the attributes and provenance of the data they’ve used to train models needs to be urgently addressed,” said Ramayya Krishnan, dean of Carnegie Mellon University’s college of information systems and public policy.

Addressing problems with transparency of AI systems

Addressing the lack of transparency might look like standardized documentation outlining data sources and bias assessments, Krishnan said. That documentation could be verified by auditors and function “like a nutrition label” for users.

Witnesses from both private industry and human rights advocacy agreed legally binding guidelines – both for transparency and risk management – will be necessary. 

Victoria Espinel, CEO of the Business Software Alliance, a trade group representing software companies, said the AI risk management framework developed in March by the National Institute of Standards and Technology was important, “but we do not think it is sufficient.”

“We think it would be best if legislation required companies in high-risk situations to be doing impact assessments and have internal risk management programs,” she said.

Those mandates – along with other transparency requirements discussed by the panel – should look different for companies that develop AI models and those that use them, and should only apply in the most high-risk applications, panelists said.

That last suggestion is in line with legislation being discussed in the European Union, which would apply differently depending on the assessed risk of a model’s use.

“High-risk” uses of AI, according to the witnesses, are situations in which an AI model is making consequential decisions, like in healthcare, hiring processes, and driving. Less consequential machine-learning models like those powering voice assistants and autocorrect would be subject to less government scrutiny under this framework.

Labeling AI-generated content

The panel also discussed the need to label AI-generated content.

“It is unreasonable to expect consumers to spot deceptive yet realistic imagery and voices,” said Sam Gregory, director of human rights advocacy group WITNESS. “Guidance to look for a six fingered hand or spot virtual errors in a puffer jacket do not help in the long run.”

With elections in the U.S. approaching, panelists agreed mandating labels on AI-generated images and videos will be essential. They said those labels will have to be more comprehensive than visual watermarks, which can be easily removed, and might take the form of cryptographically bound metadata.

Labeling content as being AI-generated will also be important for developers, Krishnan noted, as generative AI models become much less effective when trained on writing or images made by other AIs.

Privacy around these content labels was a concern for panelists. Some protocols for verifying the origins of a piece of content with metadata require the personal information of human creators.

“This is absolutely critical,” said Gregory. “We have to start from the principle that these approaches do not oblige personal information or identity to be a part of them.”
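The “cryptographically bound metadata” the panelists describe can be sketched in miniature: hash the content, fold the hash into the label, and sign the result so any tampering is detectable. This is only an illustration under our own assumptions; real provenance standards such as C2PA use signed manifests and certificate chains rather than a shared HMAC key, and every name below is invented.

```python
import hashlib
import hmac
import json

def bind_label(content: bytes, label: dict, key: bytes) -> dict:
    """Bind an AI-generation label to content so tampering is detectable."""
    record = dict(label, content_sha256=hashlib.sha256(content).hexdigest())
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

def verify_label(content: bytes, record: dict, key: bytes) -> bool:
    claimed = dict(record)
    signature = claimed.pop("signature")
    # Fails if the content no longer matches the hash baked into the label.
    if claimed.get("content_sha256") != hashlib.sha256(content).hexdigest():
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)
```

Unlike a visual watermark, such a label survives only while both the content and the metadata are intact; editing either one breaks verification, which is the tamper-evidence the witnesses were after.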

Separately, the executive branch committee that met Tuesday was established under the National AI Initiative Act of 2020 and is tasked with advising the president on AI-related matters. The NAIAC gathers representatives from the Departments of State, Defense, Energy and Commerce, together with the Attorney General, Director of National Intelligence, and Director of Science and Technology Policy.

Tech Policy Group CCIA Speaks Out Against AI Regulation
https://broadbandbreakfast.com/2023/09/tech-policy-group-ccia-speaks-out-against-ai-regulation/
Tue, 12 Sep 2023 19:16:06 +0000

WASHINGTON, September 12, 2023 – A policy director at the Computer and Communications Industry Association spoke out on Tuesday against impending artificial intelligence regulations in the European Union and United States.

The CCIA represents some of the biggest tech companies in the world, with members including Amazon, Google, Meta, and Apple.

“The E.U. approach will focus very much on the technology itself, rather than the use of it, which is highly problematic,” said Boniface de Champris, CCIA’s Europe policy manager, at a panel hosted by the Cato Institute. “The requirements would basically inhibit the development and use of cutting edge technology in the E.U.”

This echoes de Champris’s American counterparts, who have argued in front of Congress that AI-specific laws would stifle innovation.

The European Parliament is aiming to reach an agreement by the end of the year on the AI Act, which would put regulations on all AI systems based on their assessed risk level. 

The E.U.’s Digital Services Act, legislation that tightens privacy rules and expands transparency requirements, also came into effect for the largest platforms in August. Under the law, users can opt to turn off artificial intelligence-enabled content recommendation.

U.S. President Joe Biden announced in July that seven major AI and tech companies – including CCIA members Amazon, Meta, and Google – made voluntary commitments to various AI safeguards, including information sharing and security testing.

Multiple U.S. agencies are exploring more binding AI regulation. Both the Senate Judiciary Committee and the Senate consumer protection subcommittee held hearings on potential AI policy later on Tuesday, with the Judiciary hearing featuring testimony from Microsoft President Brad Smith and William Dally, chief scientist at AI and graphics company NVIDIA.

The House Energy and Commerce Committee passed in July the Artificial Intelligence Accountability Act, which gives the National Telecommunications and Information Administration a mandate to study accountability measures for artificial intelligence systems used by telecom companies.

Rep. Suzan DelBene: Want Protection From AI? The First Step Is a National Privacy Law
https://broadbandbreakfast.com/2023/08/rep-suzan-delbene-want-protection-from-ai-the-first-step-is-a-national-privacy-law/
Wed, 30 Aug 2023 11:00:38 +0000

In the six months since a new chatbot confessed its love for a reporter before taking a darker turn, the world has woken up to how artificial intelligence can dramatically change our lives and how it can go awry. AI is quickly being integrated into nearly every aspect of our economy and daily lives. However, in our nation’s capital, laws aren’t keeping up with the rapid evolution of technology.

Policymakers have many decisions to make around artificial intelligence, such as how it can be used in sensitive areas such as financial markets, health care, and national security. They will need to decide intellectual property rights around AI-created content. There will also need to be guardrails to prevent the dissemination of mis- and disinformation. But before we build the second and third story of this regulatory house, we need to lay a strong foundation and that must center around a national data privacy standard.

To understand this bedrock need, it’s important to look at how artificial intelligence was developed. AI needs an immense quantity of data. The generative language tool ChatGPT was trained on 45 terabytes of data, or the equivalent of over 200 days’ worth of HD video. That information may have included our posts on social media and online forums that have likely taught ChatGPT how we write and communicate with each other. That’s because this data is largely unprotected and widely available to third-party companies willing to pay for it. AI developers do not need to disclose where they get their input data from because the U.S. has no national privacy law.
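The 45-terabyte comparison can be sanity-checked with back-of-the-envelope arithmetic; the bitrate below is our assumption of a typical high-quality HD stream, not a figure from the article.

```python
# How many days of HD video fit in 45 TB? Assumes a 20 Mbit/s stream.
TB = 10**12                          # decimal terabyte, in bytes
bitrate_bits_per_sec = 20_000_000    # assumed high-quality HD bitrate
bytes_per_day = bitrate_bits_per_sec / 8 * 86_400
days = 45 * TB / bytes_per_day
print(round(days))                   # roughly 208 days at this bitrate
```

At lower bitrates the same 45 terabytes stretches to many more days of footage, so “over 200 days” is the conservative end of the comparison.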

While data studies have existed for centuries and can have major benefits, they are often centered around consent to use that information. Medical studies often use patient health data and outcomes, but that information needs the approval of the study participants in most cases. That’s because in the 1990s Congress gave health information a basic level of protection, but that law only protects data shared between patients and their health care providers. The same is not true for other health platforms, like fitness apps, or most other data we generate today, including our conversations online and geolocation information.

Currently, the companies that collect our data are in control of it. Google for years scanned Gmail inboxes to sell users targeted ads, before abandoning the practice. Zoom recently had to update its data collection policy after it was accused of using customers’ audio and video to train its AI products. We’ve all downloaded an app on our phone and immediately accepted the terms and conditions window without actually reading it. Companies can and often do change the terms regarding how much of our information they collect and how they use it. A national privacy standard would ensure a baseline set of protections no matter where someone lives in the U.S. and restrict companies from storing and selling our personal data.

Ensuring there’s transparency and accountability in what data goes into AI is also important for a quality and responsible product. If input data is biased, we’re going to get a biased outcome, or, better put, ‘garbage in, garbage out.’ Facial recognition is one application of artificial intelligence, and those systems have largely been trained on data from white people. That’s led to clear biases when communities of color interact with this technology.

The United States must be a global leader on artificial intelligence policy but other countries are not waiting as we sit still. The European Union has moved faster on AI regulations because it passed its privacy law in 2018. The Chinese government has also moved quickly on AI but in an alarmingly anti-democratic way. If we want a seat at the international table to set the long-term direction for AI that reflects our core American values, we must have our own national data privacy law to start.

The Biden administration has taken some encouraging steps to begin putting guardrails around AI but it is constrained by Congress’ inaction. The White House recently announced voluntary artificial intelligence standards, which include a section on data privacy. Voluntary guidelines don’t come with accountability and the federal government can only enforce the rules on the books, which are woefully outdated.

That’s why Congress needs to step up and set the rules of the road. Strong national standards like privacy must be uniform throughout the country, rather than the state-by-state approach we have now. It has to put people back in control of their information instead of companies. It must also be enforceable so that the government can hold bad actors accountable. These are the components of the legislation I have introduced over the past few Congresses and the bipartisan proposal the Energy & Commerce Committee advanced last year.

As with all things in Congress, it comes down to a matter of priorities. With artificial intelligence expanding so fast, we can no longer wait to take up this issue. We were behind on technology policy already, but we fall further behind as other countries take the lead. We must act quickly and set a robust foundation. That has to include a strong, enforceable national privacy standard.

Congresswoman Suzan K. DelBene represents Washington’s 1st District in the United States House of Representatives. This piece was originally published in Newsweek, and is reprinted with permission. 

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

 

Newsrooms Should Engage Responsibly with Artificial Intelligence, Say Journalists
https://broadbandbreakfast.com/2023/08/newsrooms-should-engage-responsibly-with-artificial-intelligence-say-journalists/
Tue, 29 Aug 2023 11:22:43 +0000

WASHINGTON, August 28, 2023 – Newsrooms should take an active role in crafting artificial intelligence practices and policies, experts said on August 17 at a webinar hosted by the Knight Center for Journalism in the Americas.

Waiting too long to institute policies around the application of AI in the newsgathering process and the use of newsroom data and content for AI research could allow tech companies to dictate those policies on their own terms, said Amy Rinehart, a senior program manager for local news and AI at the Associated Press.

“Big tech came in and told us how the internet was going to work, and we have abided by the rules they’ve set up,” she said. “If we don’t get in there and experiment, they’re going to write the rules.”

Seven tech companies met with the White House in July to work out terms of a voluntary commitment to public safety measures in their AI research and products.

Increased AI literacy will improve future coverage of the technology, according to Rinehart. She said coverage has largely been sensational because of the news industry’s discomfort with the potential automation of some of their work.

Sil Hamilton, an artificial intelligence researcher at McGill University, said this scenario is still far from what the technology is truly capable of.

The current trajectory of large language models – the systems behind chatbots like ChatGPT – “is to simply be coworking with us,” he said. “It won’t entirely automate jobs away.”

Rinehart emphasized the importance of staying informed about the technology and how it might affect the news industry from both inside and outside the newsroom.

“This is pushing us in a direction that some of us don’t like,” she said. “But if we don’t experiment together we’re going to end up on the other side of something that is unrecognizable.”

U.S. Government is Eyeing AI to Improve Emergency Alerts, Outreach
https://broadbandbreakfast.com/2023/08/u-s-government-is-eyeing-ai-to-improve-emergency-alerts-outreach/
Mon, 28 Aug 2023 16:20:05 +0000

WASHINGTON, August 25, 2023 – United States government agencies are eyeing artificial intelligence to aid emergency alerts and other outreach services, experts said on Thursday.

The National Oceanic and Atmospheric Administration is looking to use AI to do new kinds of analysis on storm and wildfire data, improving alert system accuracy as climate change makes natural disasters more common, said NOAA Chief Technology Officer Frank Indiviglio.

“Things you see on your local weather channel are good,” Indiviglio said at a Technology Spotlight event hosted by NextGov, “but really understanding ahead of these events: Am I at risk? Is my family at risk? That’s what we’re working toward.”

Emergency weather alerts from NOAA have been broadcast since the 1970s over the agency’s radio network, which otherwise continuously transmits forecasts. Cable TV stations broadcast the audio of their local NOAA radio station in emergencies.

The alerts warn listeners of severe weather events in their area. Coverage can be hindered by mountains, but the agency says that more than 95% of Americans live in areas covered by the system as of July 2023.

The agency’s forecasts, and thus emergency alerts, are based on data collected by physical sensors and the outputs of several mathematical models designed to give the agency a picture of what’s happening on the ground, according to NOAA technical procedures.

People have complained about other emergency alerts, administered by the FCC, warning them of severe weather and other emergencies not in their area. More computationally intensive analysis aided by AI would help the agency issue these warnings with more precision, Indiviglio said.

Patty Delafuente, a data scientist at AI hardware and software company NVIDIA, said fielding help desk calls and providing other customer services is another common use case among the company’s government clients.

Language models that have ingested huge amounts of information can help government employees serve people asking what programs they qualify for, especially as more experienced workers retire, she said.

U.S. government spending on AI has exceeded $7 billion in the last three fiscal years.

U.S. Chip Export Restrictions Will Be ‘Huge Roadblock’ for Chinese AI Competitiveness: Expert

WASHINGTON, August 24, 2023 – China’s ability to remain competitive in the global artificial intelligence race will depend on its capacity to produce its own chips, as U.S. restrictions on chip exports to the adversarial nation will hobble its progress, experts said Thursday.

“U.S. chip export sanctions are a huge roadblock” for AI development in China, said Qiheng Chen, a senior analyst at consulting firm Compass Lexecon.

The ability to manufacture advanced chips domestically will be essential for the country to continue researching and implementing AI, Chen added at the AI event hosted by the Asia Society Policy Institute.

In October 2022, the Commerce Department imposed restrictions on exports of advanced semiconductors and chip manufacturing equipment to China and required U.S. citizens to obtain a permit before working with Chinese chip manufacturers.

The move was designed to limit China’s ability to compete with the U.S. by curbing its access to hardware required for cutting-edge military technology. It also makes AI research and development, a highly chip-dependent process, more difficult.

Other panelists at Thursday’s event emphasized chip making as a top priority of the Chinese government.

The country has already moved toward independence from the U.S. in other areas, like satellites and fiber optics, as a response to Trump administration policies.

This has continued under President Joe Biden, with a 2021 executive order restricting investment in Chinese firms drawing criticism from Huawei, the Chinese telecom company.

Experts have previously said the threat of restricting access to global trade even further could make China hesitant to retaliate for the sanctions. This is because advanced chip manufacturing requires materials, components, and processes that would be difficult for a single nation to source entirely within its borders.

“It’s too complex, too global, too interdependent for one country to be able to produce all these technologies on their own,” said Jimmy Goodrich, vice president of Global Policy at the Semiconductor Industry Association, at a conference earlier this year.

A Huawei spokesperson estimated at a conference following the investment ban that it would take three to five years for Chinese chip manufacturing to become self-sufficient and rely less on American components and investments.

Biden signed the CHIPS and Science Act into law last year, two months before the export restrictions went into effect. It allocates $52 billion for American semiconductor manufacturing and gives tax credits for investments in the industry.

Open Access to Training Data Vital for AI Safety and Innovation: Expert

WASHINGTON, August 23, 2023 – An open ecosystem providing public access to artificial intelligence data is vital for the development of safe and innovative AI systems, an expert said at a forum on Monday.

Instead of the current “black box” approach to AI training, developers should adopt a transparent “glass box” approach, where they provide not only the data but also the models and step-by-step guidance for model replication, said Ali Farhadi, CEO of Allen Institute for Artificial Intelligence. This approach would enable developers to learn from each other’s mistakes, thus reducing the occurrence of repeated errors and associated costs, he explained.

The accessible dataset would also serve as a critical “traceability” factor to assist lawmakers in crafting legal frameworks and safeguards against the multitude of risks posed by AI, ranging from misinformation and deepfakes to child safety concerns and workforce-related challenges.

“Looking back at the history of how software has been developed, whenever we actually opened up a piece of technology, the progress outpaced the malicious acts,” he added.

His argument found support among other speakers, including Senate Commerce Committee Chairwoman Maria Cantwell, D-Washington, who agreed that an “open architecture” has the potential to encourage a “public-private partnership” that could facilitate further advancements in AI development.

“We’ve been really working since the 2020 bill on understanding ways that we can accelerate our process to come to faster resolution of some of the issues that come to the table,” said Cantwell, who spearheaded the “Future of AI” Act to convene leaders across academia, the federal government and the private sector to examine the opportunities and consequences of AI technology.

“I believe the government must continue to partner with industry and academia,” she added. “And public private partnership is the right direction for us to keep going.”

The forum, hosted by Cantwell, joined other lawmakers’ efforts to gain a deeper understanding of AI. The White House announced in August a competition with prizes of up to $20 million to incentivize developers to bolster the capabilities of AI systems. In late July, the administration also secured commitments from leading AI companies to oversee the safe and transparent development of the technology.

These initiatives are part of Washington’s effort to take the lead in the development of AI and maintain its technological competitiveness, especially as counterparts in Brussels and Beijing have been racing ahead in terms of regulations.

Office of National Intelligence Adopting AI for Data Processing

WASHINGTON, August 7, 2023 – The Office of the Director of National Intelligence is adopting artificial intelligence for data processing, Principal Deputy Director of National Intelligence Stacy Dixon said at an Intelligence and National Security Alliance discussion Thursday.

“We are excited for the technology and where it can take us,” she said, but warned that because the technology is so widespread, the barriers to entry are lower, and adversaries have better access to more harmful technologies. 

Non-state actors and terrorists have no business with AI, claimed Dixon. But unfortunately, the threat is out there, and the country has to protect its democratic ideals, she said. For this reason, the ODNI is implementing AI to stay ahead of bad actors.

Dixon said the agency will work to implement AI “incrementally” and in a “smart way” to improve cooperation and trust between the private and public sectors. For the ODNI, the first step in AI implementation is making sure its data is ready for AI and establishing a workforce that understands the data and how to write the necessary algorithms, she said.

The ODNI is an independent agency established by Congress in 2004 to assist the director of national intelligence, a cabinet-level government official. The ODNI’s goal is to integrate foreign, military and domestic intelligence in defense of the United States and its interests abroad.  

According to Dixon, the agency is already using AI in some automation use cases, but adoption is not yet widespread enough to improve efficiency across the agency and stay ahead of adversaries. It is important to think of the agency as a data organization rather than simply an intelligence organization, she said.

The agency is building civil liberty protections into the AI models while simultaneously increasing AI use internally, Dixon added. 

Other federal agencies are evaluating how artificial intelligence can be implemented to improve internal processes. In July, the Federal Communications Commission joined with the National Science Foundation to discuss how AI can be used to improve dynamic spectrum sharing, protect against harmful robocalls and improve the national broadband map.

In July, the House Energy and Commerce Committee passed a bill to the House floor that directs the National Telecommunications and Information Administration to conduct a study on accountability measures for artificial intelligence. 

Congress Should Not Create AI-Specific Regulation, Say Techies

WASHINGTON, July 28, 2023 – Artificial intelligence experts said at a Congressional Internet Caucus Academy panel Friday that Congress should not craft AI-specific legislation to protect against potential harms.

AI harms and risks are already addressed by existing laws, said Joshua Landau, senior counsel of innovation policy at nonprofit advocacy organization the Computer and Communications Industry Association. 

Landau urged Congress to write laws that address harms rather than laws that specifically regulate AI usage. He warned that differentiating between crimes committed with AI and those committed by humans would only create legal loopholes that incentivize unlawful behavior, which in turn would shape where the industry directs its research and development. The exception, he said, is laws that delineate liability for harmful actions when AI is involved.

His comments follow an opinion expressed Tuesday by Richard Wiley, former chairman of the Federal Communications Commission, who said that now is not the right time to regulate AI and urged lawmakers to slow down their efforts to regulate the technology.

The desire for perfect policy has held Congress back from developing AI regulation, added Evi Fuelle, global policy director at Credo AI. She urged Congress to implement transparency mandates for both large and small AI companies.

Voluntary commitments will fail to show results if Congress does not mandate them, said Fuelle, referring to the seven AI companies that committed to the White House’s AI goals last week. The commitments included steps to ensure safety, transparency and trustworthiness of the technology. 

Nick Garcia, policy counsel at Public Knowledge, cautioned against policies calling for a pause or halt in AI research and development, saying such a pause is not a sustainable solution. He also urged Congress to address AI issues without neglecting equally important concerns surrounding social media regulation.

In October, the Biden Administration announced a blueprint for a first-ever AI Bill of Rights that identifies five principles that should guide the design, use and deployment of AI systems in order to protect American citizens. According to the White House, federal agencies have “ramped up their efforts” to protect American citizens from risks posed by AI technology.   

In May, Biden signed an executive order directing federal agencies to root out bias in the design of AI technology and protect the public from algorithmic discrimination. Thursday, a House Committee passed legislation that would direct the National Telecommunications and Information Administration to conduct research on accountability measures for AI. 

Former FCC Commissioners Disagree on Future of AI Regulation

WASHINGTON, July 26, 2023 – Former chairs of the Federal Communications Commission urged lawmakers to slow down in regulating artificial intelligence at a Multicultural Media, Telecom and Internet Council event Tuesday.

Richard Wiley, chair of the agency under Presidents Nixon, Ford and Carter, said that now is not the right time to regulate AI, and that the FCC is not the right agency to do the job. He urged lawmakers to wait until the technology is better developed before writing long-lasting regulations.

“AI is the future of technology in many respects,” said Wiley. “It will provide a great amount of innovation for our country.” He believes that it should not be regulated to allow for innovation. 

Former acting FCC Chairwoman Mignon Clyburn disagreed, warning that Congress should not work too slowly on AI regulation. AI evolution will not slow down, she said: “we can’t sleep on this.” She did not specify how the technology should be regulated.

Clyburn served as acting chairwoman under President Obama, until the confirmation of Tom Wheeler.

AI legislation has already been introduced in 17 states, said Clyburn. “Things will happen whether we [federal agencies] move or not,” she said, warning against a patchwork of state laws that could complicate compliance for tech companies.

Clyburn added that artificial intelligence will make potentially dangerous material more accessible to vulnerable populations, including children and vulnerable adults. Regulation is a balance between encouraging good innovation and protecting those who could be further harmed by AI, she said: “we cannot stall” on these conversations.

Wiley argued that children’s protection should be in the hands of parents. He suggested that tech developers could provide parents with a set of best practices to help them understand the threats revolving around AI. 

Jonathan Adelstein, former commissioner at the FCC from 2002 to 2009, expressed hope that AI will provide a revenue stream for 5G networks. He said that laws should encourage tech development of AI while ensuring that citizens are protected against potential dangers. “It’s a delicate balance, and I’m not sure the FCC is the right place to do it,” he said. 

The FCC is currently considering how AI can be used to make congestion control decisions on dynamic spectrum sharing applications. AI has been flagged as a major opportunity for the United States to improve its competitiveness with China. Last week, seven AI companies pledged to uphold key principles that the White House believes are fundamental to the safe future of AI.  

Seven Tech Companies at White House Commit to Prevent AI Risks

WASHINGTON, July 21, 2023 – President Joe Biden announced at the White House Friday that his administration has secured voluntary commitments from leading artificial intelligence companies to manage the risks posed by the technology.

“Artificial intelligence promises an enormous promise of both risk to our society and our economy and national security but also incredible opportunities,” began Biden in his remarks. Attending the event were President of Microsoft Brad Smith, President of Google Kent Walker, President of Meta Nick Clegg and President of OpenAI Greg Brockman, among other tech leaders. 

Biden and Vice President Kamala Harris met with tech leaders two months ago to “underscore the responsibility of making sure that products they are producing are safe.” Seven companies – Amazon, AI safety and research company Anthropic, Google, AI startup Inflection, Meta, Microsoft, and OpenAI – agreed to commitments that will be implemented immediately to “help move toward safe, secure, and transparent development of AI technology.” 

The commitments seek to uphold key principles that the White House believes are “fundamental to the future of AI,” namely safety, security and trust.  

The companies commit to ensuring products are safe before introducing them to the public by running AI systems through internal and external security testing before release. The testing will be carried out in part by independent experts and will protect the public against the most significant AI risks, including biosecurity and cybersecurity. Included in this commitment is an assurance that the companies will share information across the industry and with government and academia on best practices for AI safety, attempts to circumvent safeguards, and technical collaboration.

Furthermore, the companies commit to putting security first by investing in cybersecurity safeguards and facilitating third-party discovery and reporting of vulnerabilities in AI systems. 

Finally, the companies commit to earning the public’s trust by developing robust technical mechanisms to ensure that users know when content is AI generated to reduce dangers of fraud and deception. The companies will also publicly report their AI systems’ capabilities, limitations, and appropriate uses to address bias and fairness. They will also prioritize research on the societal risks that the AI systems can pose and develop and deploy advanced AI systems to address society’s greatest challenges. 

“From cancer prevention to mitigating climate change to so much in between, AI – if properly managed – can contribute enormously to the prosperity, equality and security of all,” read the announcement. 

“These commitments are real and they are concrete,” said Biden. “They are going to help fulfill industry fundamental obligation to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and shared values.” He expressed his hope that AI will transform and improve the lives of Americans, saying that he will work with federal agencies to take the necessary steps to ensure AI makes a positive impact.

The White House has consulted with 21 different governments around the world about the voluntary commitments. 

In October, the Biden Administration announced a blueprint for a first-ever AI Bill of Rights that identifies five principles that should guide the design, use and deployment of AI systems in order to protect American citizens. According to the White House, federal agencies have “ramped up their efforts” to protect American citizens from risks posed by AI technology.  

In May, Biden signed an executive order directing federal agencies to root out bias in the design of AI technology and protect the public from algorithmic discrimination. 

The White House also announced that it is developing an executive order and will pursue bipartisan legislation to “help America lead the way in responsible innovation.”

Increase US Competitiveness with China Through AI and Spectrum, Experts Urge

WASHINGTON, July 20, 2023 – Maintaining U.S. competitiveness with China requires leveraging artificial intelligence for supply chain monitoring and allocating mid-band spectrum for commercial use, experts said Thursday.

It is critical that the United States reduce its dependency on China in key areas including microelectronics, electric vehicles, solar panels, pharmaceutical ingredients and rare earth minerals processing, said Rep. Mike Gallagher, R-Wisconsin, at a Punchbowl News event. He added that it is essential that American companies and governments be aware of their own supply chain risks and vulnerable areas.

Artificial intelligence can be deployed to understand vulnerabilities in the supply chain, said Carrie Wibben, president of government solutions at supply chain management software company Exiger. 

American adversaries have long used AI to understand where to penetrate the American supply chain ecosystem to gain a strategic advantage over the country, said Wibben. She reported that the Department of Defense is moving quickly to increase visibility into its supply chain and implement new technology.

AI and supply chains are the two fronts on which the U.S. competes to maintain global dominance, said Wibben. She encouraged coordinating the two into a strategy to preserve U.S. global competitiveness and increase national security.

A major concern in Congress is the nation’s reliance on China for its supply chain, added Rep. Raja Krishnamoorthi, D-Illinois. He said that the best solution is diversifying in the private sector, meaning that companies have redundant suppliers.  

In many cases, this can be done without government intervention, but where the private sector doesn’t have the knowledge base to replicate these systems, it is essential that the government step in and provide incentives, Krishnamoorthi said. Congress has passed several laws, including the Inflation Reduction Act and the CHIPS and Science Act, that invest billions of dollars in American-made clean energy and semiconductors.

Krishnamoorthi said that the White House is doing what it can to prevent aggression from the People’s Republic of China from materializing into conflict.

Need more spectrum 

Allocating more licensed spectrum for commercial use to support 5G is essential to maintaining US competitiveness with China, said panelists at a separate American Enterprise Institute event Thursday.  

5G, the next generation of wireless mobile networks, enables higher speeds with lower latency and greater reliability. For a democratic state, 5G will enable more expression, innovation, human freedom and opportunities to solve world challenges in health and climate, said Clete Johnson, senior fellow at the Center for Strategic and International Studies. For an authoritarian state, the same technology will enable policing of citizens, social control and an overarching understanding of what people are doing, he said.

If the U.S. is behind China in allocating the spectrum that 5G rides on, then China will dominate cyber and information operations, including force projections and more capable weaponry, warned Johnson. “If we don’t lead, China will.” 

“Commercial strength is national security,” said Johnson, referring to the need to allocate spectrum for commercial use.  

China recognizes the value of 5G and how this kind of foundation will enable industrial and commercial activity, said Peter Rysavy, president of wireless consultancy Rysavy Research. The country has allocated three times as much mid-band spectrum for commercial use as the U.S. has, he said.

No amount of spectrum efficiency and sharing mechanisms will replace having more spectrum available, added Paroma Sanyal, principal at economic consultancy Brattle Group. The U.S. government needs to get more spectrum into the pipeline, she said. 

A former administrator of the National Telecommunications and Information Administration said on a panel last week that national security depends on commercial access to spectrum. “If you take economic security out of the national security equation, you damage national security and vice versa,” John Kneuer said. 

Kneuer suggested that giving the commercial sector access to more spectrum serves this goal: it spurs innovation as a byproduct of increased economic activity, which can then spill back into federal agencies as new capabilities they would not otherwise have had.

The Federal Communications Commission is evaluating how artificial intelligence can be used in dynamic spectrum sharing to optimize traffic and prevent harmful interference. AI can be used to make congestion control decisions and sense when federal agencies are using the bands to allow commercial use on federally owned spectrum without disrupting high-priority use. 

This comes as the FCC faces spectrum availability concerns. In its June open meeting, the FCC issued a proposed rulemaking that explores how the 42–42.5 GHz spectrum band might be made available on a shared basis. The agency’s spectrum auction authority, however, expired earlier this year.

The head of the NTIA announced this week that the national spectrum strategy is set to be complete by the end of the year. It will represent a government-wide approach to maximizing the potential of the nation’s spectrum resources and takes into account input from government agencies and the private sector. 

Rep. Doris Matsui, D-Calif., is sponsoring two bills, the Spectrum Relocation Enhancement Act and the Spectrum Coexistence Act, which would update the spectrum relocation fund that compensates federal agencies for clearing spectrum for commercial use and would require the NTIA to review federal receiver technology to support more intensive use of limited spectrum.

Artificial Intelligence for Spectrum Sharing ‘Not Far Off,’ Says FCC Chair Rosenworcel

WASHINGTON, July 13, 2023 – The Federal Communications Commission is evaluating how artificial intelligence can be used to improve dynamic spectrum sharing, protect against harmful robocalls and refine the national broadband map.

The FCC joined with the National Science Foundation in a forum Thursday to discuss how AI can be used to improve agency operations. Chairwoman Jessica Rosenworcel said that the points and solutions discussed during the event will inform the FCC’s August open meeting.

She pointed to spectrum sharing optimization as a major improvement possible through AI optimization. “Smarter radios using AI can work with each other without a central authority dictating the best use of spectrum in every environment,” she said, claiming that the technology is “not far off.”  

AI can be used to make congestion control decisions, a major opportunity for dynamic spectrum sharing, said Ness Shroff, director of an NSF AI institute, in a panel discussion. It can also be used to sense when federal agencies are using spectrum bands, allowing commercial use of federally owned spectrum without disrupting high-priority use.
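The sensing step described above, detecting whether an incumbent is active in a band before admitting commercial traffic, can be illustrated with a toy energy detector. This is a simplified sketch, not the FCC's or NSF's actual approach; the 6 dB threshold, sample counts and signal model are arbitrary assumptions for the example:

```python
# Toy illustration of spectrum sensing by energy detection: a simplified
# stand-in for the kind of incumbent detection the article describes.
import math
import random

def band_energy(samples):
    """Average power of a block of received samples."""
    return sum(s * s for s in samples) / len(samples)

def band_occupied(samples, noise_floor, threshold_db=6.0):
    """Declare the band occupied if power exceeds the noise floor by threshold_db."""
    ratio_db = 10 * math.log10(band_energy(samples) / noise_floor)
    return ratio_db > threshold_db

random.seed(0)
noise_floor = 1.0  # assumed pre-calibrated average noise power

# Quiet band: receiver noise only.
quiet = [random.gauss(0, 1) for _ in range(4096)]
# Active band: noise plus a strong incumbent transmission (modeled as a tone).
active = [random.gauss(0, 1) + 3 * math.sin(0.2 * i) for i in range(4096)]

print(band_occupied(quiet, noise_floor))   # False: nothing above the noise floor
print(band_occupied(active, noise_floor))  # True: incumbent detected, defer commercial use
```

A sharing system would admit commercial traffic only when the detector reports the band clear; the AI angle in the article is about learning better detection and coordination policies than a fixed threshold like this one.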

This comes as the FCC is facing spectrum availability concerns. In its June open meeting, the FCC issued proposed rulemaking that explores how the 42 – 42.5 GHz spectrum band might be made available on a shared basis. 

As research progresses, we will see more uses of AI for the FCC and in the telecom field in general, Shroff concluded. 

Lisa Guess, senior vice president of solutions engineering at telecom Ericsson, said that AI can be an important tool for getting a more granular national broadband map by analyzing areas that are likely to be overreported and analyzing the data submitted for accuracy and consistency.  

Shroff added that AI could analyze federal grant programs to determine how successful they are and find solutions for problem areas. 

Illegal robocalls can also be addressed through AI, which can flag calling patterns deemed suspicious and analyze voice biometrics for synthesized voices, said Alisa Valentin, senior director of technology and telecommunications policy at the National Urban League. Unfortunately, AI also makes it easier for bad actors to appear legitimate, she said, which is why the FCC needs to address new concerns as they appear.
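The pattern-flagging Valentin describes can be illustrated, in a deliberately simplified form, by rules over call-detail records. This toy sketch is not any carrier's or the FCC's actual system, and the thresholds are invented; it flags numbers placing large volumes of very short calls, a signature typical of autodialers:

```python
# Illustrative sketch of flagging suspicious calling patterns; a toy
# stand-in for the AI-based analysis described in the article.
from collections import Counter

def flag_suspicious_callers(call_records, max_calls=100, max_short_ratio=0.8):
    """Flag numbers with high call volume where most calls last only seconds."""
    volume = Counter()
    short = Counter()
    for caller, duration_sec in call_records:
        volume[caller] += 1
        if duration_sec < 10:
            short[caller] += 1
    return {
        caller for caller, n in volume.items()
        if n > max_calls and short[caller] / n > max_short_ratio
    }

# Hypothetical records: (caller_id, call duration in seconds).
records = [("555-0100", 4)] * 150 + [("555-0199", 120)] * 20
print(flag_suspicious_callers(records))  # {'555-0100'}: 150 four-second calls
```

A real system would combine many more features (time-of-day bursts, spoofed caller IDs, voice biometrics) and learn the thresholds rather than hard-coding them, which is where the machine learning the panelists discuss comes in.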

Harold Feld, senior vice president of consumer advocacy group Public Knowledge, added that the FCC needs to recognize that AI is a tool to be utilized but also a source of potential concern the agency must anticipate. He urged the FCC to develop regulations now to prohibit its misuse in the future.

Rosenworcel expressed her optimism about the future of AI in opening remarks. “Every day I see how communications networks power our world. I know how their expansion and evolution can change commercial and civic life. I also know the power of those communications networks can grow exponentially when we can use AI to understand how to increase the efficiency and effectiveness of our networks,” she said. 

Commissioner Nathan Simington added his support, emphasizing the need to maintain American headway as the technology leader of the world. “Most visions for a shared spectral future depend on one or another implementation of machine learning in automated frequency coordination,” he said. 

Simington cautioned, however, against crafting regulatory solutions for problems that do not yet exist, which may “be worse than the disease.”

AI in telecommunications 

Not only is AI a game changer for the FCC, but it can also transform the way telecommunications companies run their businesses, said Jorge Amar, senior partner at global management consulting firm McKinsey and Company. AI can provide companies hyper-personalized consumer experiences, improve labor productivity and improve internal network operations.

Generative AI “has potential to continue to disrupt how AI transforms telecom companies,” added Amar. Almost every telecom company is starting to work with AI, which is increasing the value of the industry, he said — “it is here to stay.” 

In fact, AI has a unique customer experience application for people with disabilities: predicting the likelihood that a particular customer will call customer service and preempting the call by reaching out to the consumer directly to address their pain points, said Amar.

An easy application of AI already being deployed is chatbots that respond to consumers’ concerns in real time and limit the time spent waiting on hold or conversing with an employee, he added.

Rosenworcel highlighted network resiliency in her remarks, saying that AI “can help proactively diagnose difficulties, orchestrate solutions, and heal networks on its own,” especially in response to weather events that create unforeseen technical problems. “That means operators can fix problems before they reach customers, and design them with radically improved intelligence and efficiency.” 

The House Subcommittee on Communications and Technology advanced a bill Wednesday that would require the NTIA to examine accountability standards for AI systems used in communications networks, part of a greater push to enhance transparency around the government’s use of AI to communicate with the public. 

]]>
https://broadbandbreakfast.com/2023/07/artificial-intelligence-for-spectrum-sharing-not-far-off-says-fcc-chair-rosenworcel/feed/ 0 52320
Senator Calls for Global Cooperation on Artificial Intelligence Regulation to Compete with China https://broadbandbreakfast.com/2023/06/senator-calls-for-global-cooperation-on-artificial-intelligence-regulation-to-compete-with-china/?utm_source=rss&utm_medium=rss&utm_campaign=senator-calls-for-global-cooperation-on-artificial-intelligence-regulation-to-compete-with-china https://broadbandbreakfast.com/2023/06/senator-calls-for-global-cooperation-on-artificial-intelligence-regulation-to-compete-with-china/#respond Tue, 20 Jun 2023 13:15:16 +0000 https://broadbandbreakfast.com/?p=51721 WASHINGTON, June 20, 2023 – Sen. Mark Warner, D-VA, called on western allies to collaborate on regulating artificial intelligence, warning China has gained a significant head start on that front.

China is “very much ahead of the game,” even surpassing Europe in implementing AI regulations, Warner said Thursday in a video interview for Politico’s Global Tech Day. China reportedly launched its national AI development plan in 2017.

“Many of us believe that we are in an enormous technology competition, particularly with China, and that national security means who wins the battle around AI” and other emerging technologies, he said, adding China might employ “inappropriate means to use AI on an offensive basis or on a misinformation or deceptive basis against the balance of the world.”

He proposed that the United States collaborate with its global allies, particularly the European Union, the United Kingdom and Japan, to establish a universal framework for regulating artificial intelligence. The EU recently passed a draft law known as the A.I. Act, while Senate witnesses have called on senators to address AI transparency.

Earlier in June, Warner joined Sens. Michael Bennet, D-CO, and Todd Young, R-IN, in introducing legislation to form an agency charged with increasing American competitiveness in the global tech arena, including the field of artificial intelligence.

Jonathan Berry, UK minister for AI and intellectual property, reiterated the call for a unified approach toward AI regulations, emphasizing the need to “arrive at the same landing zone” later during the summit.

“From a UK’s perspective, we are very keen to offer thought leadership in this space,” he said.

The capacity of generative AI to quickly produce responses by accessing information from unregulated online datasets has raised concerns regarding data privacy, content bias and ethical applications. Legislators, tech leaders, and academics have all called on Congress to adopt guidelines for the safe and responsible development of AI.

]]>
https://broadbandbreakfast.com/2023/06/senator-calls-for-global-cooperation-on-artificial-intelligence-regulation-to-compete-with-china/feed/ 0 51721
Academics Call for Dedicated Agency for AI Regulation https://broadbandbreakfast.com/2023/06/academics-call-for-dedicated-agency-for-ai-regulation/?utm_source=rss&utm_medium=rss&utm_campaign=academics-call-for-dedicated-agency-for-ai-regulation https://broadbandbreakfast.com/2023/06/academics-call-for-dedicated-agency-for-ai-regulation/#respond Mon, 12 Jun 2023 19:04:57 +0000 https://broadbandbreakfast.com/?p=51643 WASHINGTON, June 12, 2023 – Panelists at an event last week recommended a dedicated government agency to oversee the regulation of artificial intelligence.

Ben Shneiderman, professor at the University of Maryland’s department of computer science, said he sees government agencies as the primary entities to take the lead in internet and AI regulation. He encouraged the involvement of accounting firms and insurance companies in auditing and regulating AI systems, emphasizing the need for collaboration among different players to address the complex challenges associated with AI.  

“The history of regulation shows that it can be very positive and a great trigger of innovation,” Shneiderman said at an event hosted by the Center for Data Innovation and the R Street Institute. “It’s a big job. It’s going to take our attention for the next 50 years. And we need lots of players to participate.” 

Participants at the event discussed how agencies like the FAA, FTC and SEC are capable and well placed to know the domains in which AI regulations would apply. Still, they agreed that a dedicated agency could ensure the safety and effectiveness of AI systems through stringent regulations before their deployment. 

“There’s a lot of expertise with the current agencies,” said Lee Tiedrich, faculty fellow in ethical technology at Duke University. She said she wished the government would optimize current agencies and administrative structures before creating a new one. 

Generative AI creates original content using deep learning algorithms, mimicking human creativity by learning from data provided by humans.

Since the launch of OpenAI’s ChatGPT in November 2022, AI technology has advanced with more sophisticated language models and has been implemented across industries.

Experts are concerned about the technology’s impact on ethics, privacy, bias and accountability as AI becomes more integrated into society.

]]>
https://broadbandbreakfast.com/2023/06/academics-call-for-dedicated-agency-for-ai-regulation/feed/ 0 51643
Bennet, Young, and Warner Propose Legislation to Enhance U.S. Technology Competitiveness https://broadbandbreakfast.com/2023/06/bennet-young-and-warner-propose-legislation-to-enhance-u-s-technology-competitiveness/?utm_source=rss&utm_medium=rss&utm_campaign=bennet-young-and-warner-propose-legislation-to-enhance-u-s-technology-competitiveness https://broadbandbreakfast.com/2023/06/bennet-young-and-warner-propose-legislation-to-enhance-u-s-technology-competitiveness/#respond Fri, 09 Jun 2023 15:05:36 +0000 https://broadbandbreakfast.com/?p=51627 WASHINGTON, June 9, 2023 – Citing threats from China, two Democratic and one Republican senator have introduced the Global Technology Leadership Act that would create an Office of Global Competition Analysis.

The new office would be tasked with assessing U.S. leadership in science, technology and innovation in advanced manufacturing, workforce development, supply chain resilience and research and development initiatives.

“We cannot afford to lose our competitive edge in strategic technologies like semiconductors, quantum computing, and artificial intelligence to competitors like China,” said Sen. Michael Bennet, D-Colo., one of the three sponsors, together with Mark Warner, D-Virginia, and Todd Young, R-Indiana.

The office’s priorities would be determined periodically by the director of the Office of Science and Technology Policy, the presidential assistants for economic policy and national security, and the heads of whatever agencies OSTP and the White House deem appropriate.

Bennet said that the office’s assessments would inform policymakers and help enhance American leadership in strategic innovation.

]]>
https://broadbandbreakfast.com/2023/06/bennet-young-and-warner-propose-legislation-to-enhance-u-s-technology-competitiveness/feed/ 0 51627
U.S. Must Take Lead on Global AI Regulations: State Department Official https://broadbandbreakfast.com/2023/05/u-s-must-take-lead-on-global-ai-regulations-state-department-official/?utm_source=rss&utm_medium=rss&utm_campaign=u-s-must-take-lead-on-global-ai-regulations-state-department-official https://broadbandbreakfast.com/2023/05/u-s-must-take-lead-on-global-ai-regulations-state-department-official/#respond Wed, 31 May 2023 19:30:02 +0000 https://broadbandbreakfast.com/?p=51286 WASHINGTON, May 31, 2023 – A State Department official is calling for a United States-led global coalition to set artificial intelligence regulations.

“This is the exact moment where the US needs to show leadership,” Jennifer Bachus, assistant secretary of state for Cyberspace and Digital Policy, said last week on a panel discussing international principles on responsible AI. “This is a shared problem and we need a shared solution.”

She opposed pitting the U.S. and China against one another in the AI race, saying it would “ultimately always lead to a problem.” Instead, Bachus called for an alliance of the United States, the European Union, and Japan to take the lead in creating a legal framework to govern artificial intelligence.

The introduction of OpenAI’s ChatGPT earlier this year sent tech companies rushing to create their own generative AI chatbot systems. Competition between tech giants has heated up with the recent release of Google’s Bard and Microsoft’s Bing chatbot. Built on large language models similar to ChatGPT’s, these chatbots can also access data from the internet to answer queries or carry out tasks.

Experts are concerned about the dangers posed by this unprecedented technology. On Tuesday, hundreds of tech experts and industry leaders, including OpenAI CEO Sam Altman, signed a one-sentence statement calling the existential threats presented by AI a “global priority” on par with “pandemics and nuclear conflicts.” Earlier in March, Elon Musk joined several AI experts in signing another open letter urging a pause on “giant AI experiments.”

Despite the pressing concerns about generative AI, there is rising criticism that policymakers have been slow to put forth adequate legislation for this nascent technology. Panelists argued this is partly because legislators have difficulty understanding technological innovations. Michelle Giuda, director of the Krach Institute for Tech Diplomacy, argued for a more proactive contribution from the academic community and tech firms.

“There is a risk of relying too much on the government to regulate ahead of where innovation is going and providing the clarity that’s needed,” said Giuda. “We all know that the government isn’t going to stay ahead of the innovation curve, but this is an ongoing dialogue between tech companies, governments and civil society.”

Microsoft’s Chief Responsible AI Officer, Natasha Crampton, agreed that developers and experts in the field must play a central role in crafting and implementing legislation pertaining to artificial intelligence. She did, however, mention that businesses using AI technology should also share part of the responsibility.

“It is our job to make sure that safety and responsibility is baked into these systems from the very beginning,” said Crampton. “Making sure that you are really holding developers to very high standards but also deployers of technology in some aspects as well.”

Earlier in May, Sens. Michael Bennet, D-Colo., and Peter Welch, D-Vt., introduced a bill to establish a government agency to oversee artificial intelligence. The Biden administration also announced $140 million in funding to establish seven new National AI Research Institutes, increasing the total number of institutes in the nation to 25.

]]>
https://broadbandbreakfast.com/2023/05/u-s-must-take-lead-on-global-ai-regulations-state-department-official/feed/ 0 51286
AI is a Key Component in Effectively Managing the Energy Grid https://broadbandbreakfast.com/2023/05/ai-is-a-key-component-in-effectively-managing-the-energy-grid/?utm_source=rss&utm_medium=rss&utm_campaign=ai-is-a-key-component-in-effectively-managing-the-energy-grid https://broadbandbreakfast.com/2023/05/ai-is-a-key-component-in-effectively-managing-the-energy-grid/#respond Tue, 30 May 2023 18:29:04 +0000 https://broadbandbreakfast.com/?p=51257 WASHINGTON, May 30, 2023 – Artificial intelligence will be required to effectively manage and optimize a more complex energy grid, said experts at a United States Energy Association event Tuesday. 

Renewable energy technologies such as solar panels, electric vehicles, and power walls add large amounts of energy storage to the grid, said Jeremy Renshaw, senior technical executive at the Electric Power Research Institute. Utility companies are required to manage many bidirectional resources that both store and use energy, he said. 

Learn more about the smart grid, clean energy and the U.S.-China tech race at Broadband Breakfast’s Made in America Summit on June 27.

“The grid of the future is going to be significantly more complicated,” said Renshaw. Having humans operate the grid will be economically infeasible, he continued, claiming that AI will drastically improve operations. 

The ability to balance the grid’s supply and demand in real time will become extremely complex with the adoption of these new technologies, added Marc Spieler, leader for global business development at AI hardware and software supplier Nvidia. 

Utility companies will need to redirect traffic in real time to support the incoming demand, he said. AI enables real time redirecting of traffic and an understanding of the capacity of the grid at any point, said Spieler.  

Moreover, AI can identify what changes need to be made to avoid waste from overgenerating electricity and blackouts from undergenerating, he said. AI can also predict and plan for extreme weather that can be hazardous to electrical infrastructure and identify bottleneck areas where infrastructure needs to be updated, said Spieler. 

Human management will still be required to ensure that systems are operated responsibly, said John Savage, professor of computer science at Brown University. Utility companies should avoid allowing AI to make unsupervised decisions especially for unforeseen scenarios, he said. 

The panelists agreed that they envision AI as a decision-support mechanism to help humans make more informed decisions. The technology will replace jobs that deal with mundane and repetitive tasks but will ultimately create more jobs in new positions, said Renshaw. 

This comes several weeks after industry experts urged Congress to implement federal AI regulation. 

]]>
https://broadbandbreakfast.com/2023/05/ai-is-a-key-component-in-effectively-managing-the-energy-grid/feed/ 0 51257
Experts Debate Artificial Intelligence Licensing Legislation https://broadbandbreakfast.com/2023/05/experts-debate-artificial-intelligence-licensing-legislation/?utm_source=rss&utm_medium=rss&utm_campaign=experts-debate-artificial-intelligence-licensing-legislation https://broadbandbreakfast.com/2023/05/experts-debate-artificial-intelligence-licensing-legislation/#respond Tue, 23 May 2023 20:50:53 +0000 https://broadbandbreakfast.com/?p=51128 WASHINGTON, May 23, 2023 – Experts on artificial intelligence disagree on whether licensing is the proper legislation for the technology. 

If adopted, licensing rules would require companies to obtain a federal license before developing AI technology. Last week, OpenAI CEO Sam Altman testified that Congress should consider a series of licensing and testing requirements for AI models above a threshold of capability. 

At a Public Knowledge event Monday, Aalok Mehta, head of U.S. public policy at OpenAI, added that licensing is a means of ensuring AI developers put safety practices in place. Establishing licensing rules, he said, creates external validation tools that will improve the consumer experience. 

Generative AI — the model used by chatbots including OpenAI’s widely popular ChatGPT and Google’s Bard — is AI designed to produce content rather than simply process information, which could have widespread effects on copyright disputes and disinformation, experts have said. Many industry experts have called for more federal AI regulation, claiming that widespread AI applications could lead to broad societal risks including an uptick in online disinformation, technological displacement, algorithmic discrimination and other harms. 

Some industry leaders, however, are concerned that calls for licensing are a way of shutting the door to competition and new startups by large companies like OpenAI and Google.  

B Cavello, director of emerging technologies at the Aspen Institute, said Monday that licensing requirements place burdens on competition, particularly small start-ups. 

Implementing licensing requirements can create a threshold that defines one set of players allowed to participate in the AI space and another that is not, B said. Licensing can make it more difficult for smaller players to gain traction in the competitive space, B said.  

The resources required to support these systems already create a barrier that can be tough to break through, B continued. While there should be mandates for greater testing and transparency, licensing can also present unique challenges we should seek to avoid, B said.  

Austin Carson, founder and president of SeedAI, said a licensing model would not get to the heart of the issue, which is to make sure AI developers are consciously testing and measuring their own models. 

The most important thing is to support the development of an ecosystem that revolves around assurance and testing, said Carson. Although no mechanisms currently exist for wide-scale testing, it will be critical to the support of this technology, he said. 

Base-level testing at this scale will require that all parties participate, Carson emphasized. We need all parties to feel a sense of accountability for the systems they host, he said. 

In her testimony last week, Christina Montgomery, AI ethics board chair at IBM, urged Congress to adopt a “precision regulation” approach that would govern AI in specific use cases rather than regulating the technology itself.  

]]>
https://broadbandbreakfast.com/2023/05/experts-debate-artificial-intelligence-licensing-legislation/feed/ 0 51128
Senate Witnesses Call For AI Transparency https://broadbandbreakfast.com/2023/05/senate-witnesses-call-for-ai-transparency/?utm_source=rss&utm_medium=rss&utm_campaign=senate-witnesses-call-for-ai-transparency https://broadbandbreakfast.com/2023/05/senate-witnesses-call-for-ai-transparency/#respond Tue, 16 May 2023 21:20:01 +0000 https://broadbandbreakfast.com/?p=50952 WASHINGTON, May 16, 2023 – Congress should increase regulatory requirements for transparency in artificial intelligence while adopting the technology in federal agencies, said witnesses at a Senate Homeland Security and Governmental Affairs Committee hearing on Tuesday. 

Many industry experts have called for more federal AI regulation, claiming that widespread AI applications could lead to broad societal risks including an uptick in online disinformation, technological displacement, algorithmic discrimination and other harms. 

The hearing addressed implementing AI in federal agencies. Congress is concerned about ensuring that the United States government is prepared to capitalize on the capabilities afforded by AI technology while also protecting the constitutional rights of citizens, said Sen. Gary Peters, D-Michigan.   

The United States “is suffering from a lack of leadership and prioritization on these topics,” stated Lynne Parker, director of AI Tennessee Initiative at the University of Tennessee in her comments. 

In a separate hearing Tuesday, OpenAI CEO Sam Altman said it is “essential that powerful AI is developed with democratic values in mind, which means US leadership is critical.”

Applications of AI are immensely beneficial, said Altman. However, “we think that regulatory intervention by governments will be crucial to mitigate the risks of increasingly powerful models.”

To do so, Altman suggested that the U.S. government consider a combination of licensing and testing requirements for the development and release of AI models above a certain threshold of capability.

Companies like OpenAI can partner with governments to ensure AI models adhere to a set of safety requirements, facilitate efficient processes, and examine opportunities for global coordination, he said.

Building accountability into AI systems

Seizing this moment to modernize the government’s systems will strengthen the country, said Daniel Ho, professor at Stanford Law School, who encouraged Congress to lead by example in implementing accountable AI practices.  

An accountable system ensures that agencies are answerable to the public and to those directly affected by AI algorithms, added Richard Eppink of the American Civil Liberties Union of Idaho Foundation. 

A serious risk of implementing AI is that it can conceal how the systems work, including the bad data they may be trained on, said Eppink. This can prevent accountability to the public and puts citizens’ constitutional rights at risk, he said. 

To prevent this, the federal government should implement transparency requirements and governance standards that would include transparency during the implementation process, said Eppink. Citizens have the right to the same information that the government has so we can maintain accountability, he concluded.  

Parker suggested that Congress appoint a Chief AI Director at each agency to help develop AI strategies, and establish an interagency Chief AI Council to govern the use of the technology in the federal government. 

Getting technical talent into the workforce is the predicate to a range of issues we are facing today, agreed Ho, claiming that less than two percent of AI personnel work in government. He urged Congress to establish pathways and trajectories for technical agencies to attract AI talent to public service.   

Congress considers AI regulation

Congress’s attention has been captured by growing AI regulatory concerns.  

In April, Sen. Chuck Schumer, D-N.Y., proposed a high-level AI policy framework focused on ensuring transparency and accountability by requiring companies to allow independent experts to review and test AI technologies and make the results publicly available. 

Later in April, Representative Yvette Clarke, D-N.Y., introduced a bill that would require the disclosure of AI-generated content in political ads. 

The Biden administration announced on May 4 that it will invest $140 million in funding to launch seven new National AI Research Institutes, an investment that will bring the total number of institutes across the country to 25.  

]]>
https://broadbandbreakfast.com/2023/05/senate-witnesses-call-for-ai-transparency/feed/ 0 50952
‘Watershed Moment’ Has Experts Calling for Increased Federal Regulation of AI https://broadbandbreakfast.com/2023/04/watershed-moment-has-experts-calling-for-increased-federal-regulation-of-ai/?utm_source=rss&utm_medium=rss&utm_campaign=watershed-moment-has-experts-calling-for-increased-federal-regulation-of-ai https://broadbandbreakfast.com/2023/04/watershed-moment-has-experts-calling-for-increased-federal-regulation-of-ai/#respond Fri, 28 Apr 2023 21:04:34 +0000 https://broadbandbreakfast.com/?p=50488 WASHINGTON, April 28, 2023 — As artificial intelligence technologies continue to rapidly develop, many industry leaders are calling for increased federal regulation to address potential technological displacement, algorithmic discrimination and other harms — while other experts warn that such regulation could stifle innovation.

“It’s fair to say that this is a watershed moment,” said Reggie Townsend, vice president of the data ethics practice at the SAS Institute, at a panel hosted Wednesday by the Brookings Institution. “But we have to be honest about this as well, which is to say, there will be displacement.”

Screenshot of Reggie Townsend, vice president of the data ethics practice at the SAS Institute, at the Brookings Institution event

While some AI displacement is comparable to previous technological advances that popularized self-checkout machines and ATMs, Townsend argued that the current moment “feels a little bit different… because of the urgency attached to it.”

Recent AI developments have the potential to impact job categories that have traditionally been considered safe from technological displacement, agreed Cameron Kerry, a distinguished visiting fellow at Brookings.

In order to best equip people for the coming changes, experts emphasized the importance of increasing public knowledge of how AI technologies work. Townsend compared this goal to the general baseline knowledge that most people have about electricity. “We’ve got to raise our level of common understanding about AI similar to the way we all know not to put a fork in the sockets,” he said.

Some potential harms of AI may be mitigated by public education, but a strong regulatory framework is critical to ensure that industry players adhere to responsible development practices, said Susan Gonzales, founder and CEO at AIandYou.

“Leaders of certain companies are coming out and they’re communicating their commitment to trustworthy and responsible AI — but then meanwhile, the week before, they decimated their ethical AI departments,” Gonzales added.

Some experts caution against overregulation in low-risk use cases

However, some experts warn that the regulations themselves could cause harm. Overly strict regulations could hamper further AI innovation and limit the benefits that have already emerged — which range from increasing workplace productivity to more effectively detecting certain types of cancer, said Daniel Castro, director of the Center for Data Innovation, at a Broadband Breakfast event on Wednesday.

“We should want to see this technology being deployed,” Castro said. “There are areas where it will likely have lifesaving impacts; it will have very positive impacts on the economy. And so part of our policy conversation should also be, not just how do we make sure things don’t go wrong, but how do we make sure things go right.”

Effective AI oversight should distinguish between the different risk levels of various AI use cases before determining the appropriate regulatory approaches, said Aaron Cooper, vice president of global policy for the software industry group BSA.

“The AI system for [configuring a] router doesn’t have the same considerations as the AI system for an employment case, or even in a self-driving vehicle,” he said.

There are already laws that govern many potential cases of AI-related harms, even if those laws do not specifically refer to AI, Cooper noted.

“We just think that in high-risk situations, there are some extra steps that the developer and the deployer of the AI system can take to help mitigate that risk and limit the possibility of it happening in the first place,” he said.

Multiple entities considering AI governance

Very little legislation currently governs the use of AI in the United States, but the issue has recently garnered significant attention from Congress, the Federal Trade Commission, the National Telecommunications and Information Administration and other federal entities.

The National Artificial Intelligence Advisory Committee on Tuesday released a draft report detailing recommendations based on its first year of research, concluding that AI “requires immediate, significant and sustained government attention.”

One of the report’s most important action items is increasing sociotechnical research on AI systems and their impacts, said EqualAI CEO Miriam Vogel, who chairs the committee.

Throughout the AI development process, Vogel explained, each human touchpoint presents the risk of incorporating the developer’s biases — as well as a crucial opportunity for identifying and fixing these issues before they become embedded.

Vogel also countered the idea that regulation would necessarily stifle future AI development.

“If we don’t have more people participating in the process, with a broad array of perspectives, our AI will suffer,” she said. “There are study after study that show that the broader diversity in who is… building your AI, the better your AI system will be.”

Our Broadband Breakfast Live Online events take place on Wednesday at 12 Noon ET. Watch the event on Broadband Breakfast, or REGISTER HERE to join the conversation.

Wednesday, April 26, 2023, 12 Noon ET – Should AI Be Regulated?

The recent explosion in artificial intelligence has generated significant excitement, but it has also amplified concerns about how the powerful technology should be regulated — and highlighted the lack of safeguards currently in place. What are the potential risks associated with artificial intelligence deployment? Which concerns are likely just fearmongering? And what are the respective roles of government and industry players in determining future regulatory structures?

Panelists

  • Daniel Castro, Vice President, Information Technology and Innovation Foundation and Director, Center for Data Innovation
  • Aaron Cooper, Vice President of Global Policy, BSA | The Software Alliance
  • Rebecca Klar (moderator), Technology Policy Reporter, The Hill

Panelist resources

 

Daniel Castro is vice president at the Information Technology and Innovation Foundation and director of ITIF’s Center for Data Innovation. Castro writes and speaks on a variety of issues related to information technology and internet policy, including privacy, security, intellectual property, Internet governance, e-government and accessibility for people with disabilities. In 2013, Castro was named to FedScoop’s list of the “top 25 most influential people under 40 in government and tech.”

Aaron Cooper serves as vice president of Global Policy for BSA | The Software Alliance. In this role, Cooper leads BSA’s global policy team and contributes to the advancement of BSA members’ policy priorities around the world that affect the development of emerging technologies, including data privacy, cybersecurity, AI regulation, data flows and digital trade. He testifies before Congress and is a frequent speaker on data governance and other issues important to the software industry.

Rebecca Klar is a technology policy reporter at The Hill, covering data privacy, antitrust law, online disinformation and other issues facing the evolving tech world. She is a native New Yorker and graduated from Binghamton University. She previously covered local news at The York Dispatch in York, Pa. and The Island Now in Nassau County, N.Y.

Graphic from Free-Vectors.Net used with permission

WATCH HERE, or on YouTube, Twitter and Facebook.

As with all Broadband Breakfast Live Online events, the FREE webcasts will take place at 12 Noon ET on Wednesday.

SUBSCRIBE to the Broadband Breakfast YouTube channel. That way, you will be notified when events go live. Watch on YouTube, Twitter and Facebook.

See a complete list of upcoming and past Broadband Breakfast Live Online events.

]]>
https://broadbandbreakfast.com/2023/04/watershed-moment-has-experts-calling-for-increased-federal-regulation-of-ai/feed/ 0 50488
Google CEO Promotes AI Regulation, GOP Urges TikTok Ban for Congress Members, States Join DOJ Antitrust Suit https://broadbandbreakfast.com/2023/04/google-ceo-promotes-ai-regulation-gop-urges-tiktok-ban-for-congress-members-states-join-doj-antitrust-suit/?utm_source=rss&utm_medium=rss&utm_campaign=google-ceo-promotes-ai-regulation-gop-urges-tiktok-ban-for-congress-members-states-join-doj-antitrust-suit https://broadbandbreakfast.com/2023/04/google-ceo-promotes-ai-regulation-gop-urges-tiktok-ban-for-congress-members-states-join-doj-antitrust-suit/#respond Tue, 18 Apr 2023 21:35:24 +0000 https://broadbandbreakfast.com/?p=50323 April 18, 2023 — Google CEO Sundar Pichai on Sunday called for increased regulation of artificial intelligence, warning that the rapidly developing technology poses broad societal risks.

“The pace at which we can think and adapt as societal institutions compared to the pace at which the technology’s evolving — there seems to be a mismatch,” Pichai said in an interview with CBS News.

Watch Broadband Breakfast on April 26, 2023 – Should AI Be Regulated?
What are the risks associated with artificial intelligence deployment, and which concerns are just fearmongering?

Widespread AI applications could lead to a dramatic uptick in online disinformation, as it becomes increasingly easy to create and spread fake news, images and videos, Pichai warned.

Google recently released a series of recommendations for regulating AI, advocating for “a sectoral approach that builds on existing regulation” and cautioning against “over-reliance on human oversight as a solution to AI issues.”

But the directive also noted that “while self-regulation is vital, it is not enough.”

Pichai emphasized this point, calling for broad multisector collaboration to best determine the shape of AI regulation.

“The development of this needs to include not just engineers, but social scientists, ethicists, philosophers and so on,” he said. “And I think these are all things society needs to figure out as we move along — it’s not for a company to decide.”

Republicans call to ban members of Congress from personal TikTok use

A group of Republican lawmakers on Monday urged the House and Senate rules committees to ban members of Congress from using TikTok, citing national security risks and the need to “lead by example.”

Congress banned use of the app on government devices in late 2022, but several elected officials have maintained accounts on their personal devices.

In Monday’s letter, Republican lawmakers argued that the recent hearing featuring TikTok CEO Shou Zi Chew made it “blatantly clear to the public that the China-based app is mining data and potentially spying on American citizens.”

“It is troublesome that some members continue to disregard these clear warnings and are even encouraging their constituents to use TikTok to interface with their elected representatives – especially since some of these users are minors,” the letter continued.

TikTok is facing hostility from the other side of the aisle as well. On Thursday, Rep. Frank Pallone, D-N.J., sent Chew a list of questions about the app’s privacy and safety practices that House Democrats claimed were left unanswered at the March hearing.

Meanwhile, Montana lawmakers voted Friday to ban TikTok on all personal devices, becoming the first state to pass such legislation. The bill now awaits the signature of Gov. Greg Gianforte — who was one of several state leaders last year to mimic Congress in banning TikTok from government devices.

Nine additional states join DOJ’s antitrust lawsuit against Google

The Justice Department announced on Monday that nine additional states joined its antitrust lawsuit over Google’s alleged abuse of the digital advertising market.

The Attorneys General of Arizona, Illinois, Michigan, Minnesota, Nebraska, New Hampshire, North Carolina, Washington and West Virginia joined the existing coalition of California, Colorado, Connecticut, New Jersey, New York, Rhode Island, Tennessee and Virginia.

“We look forward to litigating this important case alongside our state law enforcement partners to end Google’s long-running monopoly in digital advertising technology markets,” said Doha Mekki, principal deputy assistant attorney general of the Justice Department’s Antitrust Division.

The lawsuit alleges that Google monopolizes digital advertising technologies used for both buying and selling ads, said Jonathan Kanter, assistant attorney general of the Justice Department’s Antitrust Division, when the suit was filed in January.

“Our complaint sets forth detailed allegations explaining how Google engaged in 15 years of sustained conduct that had — and continues to have — the effect of driving out rivals, diminishing competition, inflating advertising costs, reducing revenues for news publishers and content creators, snuffing out innovation, and harming the exchange of information and ideas in the public sphere,” Kanter said.

]]>
Sen. Bennet Urges Companies to Consider ‘Alarming’ Child Safety Risks in AI Chatbot Race https://broadbandbreakfast.com/2023/03/sen-bennet-urges-companies-to-consider-alarming-child-safety-risks-in-ai-chatbot-race/?utm_source=rss&utm_medium=rss&utm_campaign=sen-bennet-urges-companies-to-consider-alarming-child-safety-risks-in-ai-chatbot-race https://broadbandbreakfast.com/2023/03/sen-bennet-urges-companies-to-consider-alarming-child-safety-risks-in-ai-chatbot-race/#respond Wed, 22 Mar 2023 17:21:12 +0000 https://broadbandbreakfast.com/?p=49820 WASHINGTON, March 22, 2023 — Sen. Michael Bennet, D-Colo., on Tuesday urged the companies behind generative artificial intelligence products to anticipate and mitigate the potential harms that AI-powered chatbots pose to underage users.

“The race to deploy generative AI cannot come at the expense of our children,” Bennet wrote in a letter to the heads of Google, OpenAI, Meta, Microsoft and Snap. “Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk and mitigate harm.”

In response to the explosive popularity of OpenAI’s ChatGPT, several leading tech companies have rushed to launch their own AI-powered applications. Microsoft recently released an AI-powered version of its Bing search engine, and Google has announced plans to make a conversational AI service “widely available to the public in the coming weeks.”

Social media platforms have followed suit, with Meta CEO Mark Zuckerberg saying the company plans to “turbocharge” its AI development the same day Snapchat launched a GPT-powered chatbot called My AI.

These chatbots have already demonstrated “alarming” interactions, Bennet wrote. In response to a researcher posing as a child, My AI gave instructions for lying to parents about an upcoming trip with a 31-year-old man and for covering up a bruise ahead of a visit from Child Protective Services.

A Snap Newsroom post announcing the chatbot acknowledged that “as with all AI-powered chatbots, My AI is prone to hallucination and can be tricked into saying just about anything.”

Bennet criticized the company for deploying My AI despite knowledge of its shortcomings, noting that 59 percent of teens aged 13 to 17 use Snapchat. “Younger users are at an earlier stage of cognitive, emotional, and intellectual development, making them more impressionable, impulsive, and less equipped to distinguish fact from fiction,” he wrote.

These concerns are compounded by an escalating youth mental health crisis, Bennet added. In 2021, more than half of teen girls reported feeling persistently sad or hopeless and one in three seriously contemplated suicide, according to a recent report from the Centers for Disease Control and Prevention.

“Against this backdrop, it is not difficult to see the risk of exposing young people to chatbots that have at times engaged in verbal abuse, encouraged deception and suggested self-harm,” the senator wrote.

Bennet’s letter comes as lawmakers from both parties are expressing growing concerns about technology’s impact on young users. Legislation aimed at safeguarding children’s online privacy has gained broad bipartisan support, and several other measures — ranging from a minimum age requirement for social media usage to a slew of regulations for tech companies — have been proposed.

Many industry experts have also called for increased AI regulation, noting that very little legislation currently governs the powerful technology.

Oversight Committee Members Concerned About New AI, As Witnesses Propose Some Solutions https://broadbandbreakfast.com/2023/03/oversight-committee-members-concerned-about-new-ai-as-witnesses-propose-some-solutions/?utm_source=rss&utm_medium=rss&utm_campaign=oversight-committee-members-concerned-about-new-ai-as-witnesses-propose-some-solutions https://broadbandbreakfast.com/2023/03/oversight-committee-members-concerned-about-new-ai-as-witnesses-propose-some-solutions/#respond Tue, 14 Mar 2023 21:59:49 +0000 https://broadbandbreakfast.com/?p=49329 WASHINGTON, March 14, 2023 – In response to lawmakers’ concerns over the impacts of certain artificial intelligence technologies, experts said at an oversight subcommittee hearing on Wednesday that more government regulation would be necessary to stem the technologies’ negative effects.

Relatively new machine learning technology known as generative AI, which is designed to create content on its own, has taken the world by storm. Specific applications such as the recently released ChatGPT, which can write out entire novels from basic user inputs, have drawn both marvel and concern.

Such AI technology can be used to encourage cheating in academia as well as to harm people through deepfakes, which use AI to superimpose a person’s likeness into a video. Deepfakes can be used to produce “revenge pornography” to harass, silence and blackmail victims.

Aleksander Mądry, the Cadence Design Systems Professor of Computing at the Massachusetts Institute of Technology, told the subcommittee that AI is a very fast-moving technology, meaning the government needs to step in to examine companies’ objectives and whether their algorithms match societal benefits and values. These generative AI technologies are often limited by their human programming and can also display biases.

Rep. Marjorie Taylor Greene, R-Georgia, raised concerns about this type of AI replacing human jobs. Eric Schmidt, former Google CEO and now chair of the AI development initiative known as the Special Competitive Studies Project, said that if this AI can be well-directed, it can aid people in obtaining higher incomes and actually creating more jobs.

To that point, Rep. Stephen Lynch, D-Massachusetts, raised the question of how much progress the government has made, or still needs to make, in AI development.

Schmidt said governments across the country need to look at bolstering the labor force to keep up.

“I just don’t see the progress in government to reform the way of hiring and promoting technical people,” he said. “This technology is too new. You need new students, new ideas, new invention – I think that’s the fastest way.

“On the federal level, the easiest thing to do is to come up with some program that’s ministered by the state or by leading universities and getting them money so that they can build these programs.”

Schmidt urged lawmakers last year to create a digital service academy to train more young American students on AI, cybersecurity and cryptocurrency, reported Axios.

Congress Should Focus on Tech Regulation, Said Former Tech Industry Lobbyist https://broadbandbreakfast.com/2023/03/congress-should-focus-on-tech-regulation-said-former-tech-industry-lobbyist/?utm_source=rss&utm_medium=rss&utm_campaign=congress-should-focus-on-tech-regulation-said-former-tech-industry-lobbyist https://broadbandbreakfast.com/2023/03/congress-should-focus-on-tech-regulation-said-former-tech-industry-lobbyist/#respond Fri, 10 Mar 2023 20:31:59 +0000 https://broadbandbreakfast.com/?p=49188 WASHINGTON, March 9, 2023 – Congress should focus on technology regulation, particularly of emerging technology, rather than speech debates, said Adam Conner, vice president of technology policy at the Center for American Progress, at Broadband Breakfast’s Big Tech and Speech Summit on Thursday.

Conner challenged the view of many in industry who assume that any change to current laws, including Section 230, would only make the internet worse.

Conner, who aims to build a progressive technology policy platform and agenda, spent the past 15 years working as a Washington employee for several Silicon Valley companies, including Slack Technologies and Brigade. In 2007, Conner founded Facebook’s Washington office.

This mindset, Conner argued, traps industry leaders in the assumption that the internet is currently the best it could ever be. Calling that a fallacy, he suggested that the industry instead focus on regulation for new and emerging technologies like artificial intelligence.

Recent AI innovations, like ChatGPT, create the most human-like AI experience ever made through text, images and videos, Conner said. The penetration of AI will completely change the discussion about protecting free speech, he said, urging Congress to draft laws now to ensure its safe use in the United States.

Congress should start its AI regulation with privacy, antitrust and child safety laws, he said. Doing so will prove to American citizens that the internet can, in fact, be better than it is now and will promote future policy amendments, he said.

To watch the full videos join the Broadband Breakfast Club below. We are currently offering a Free 30-Day Trial: No credit card required!

As ChatGPT’s Popularity Skyrockets, Some Experts Call for AI Regulation https://broadbandbreakfast.com/2023/02/as-chatgpts-popularity-skyrockets-some-experts-call-for-ai-regulation/?utm_source=rss&utm_medium=rss&utm_campaign=as-chatgpts-popularity-skyrockets-some-experts-call-for-ai-regulation https://broadbandbreakfast.com/2023/02/as-chatgpts-popularity-skyrockets-some-experts-call-for-ai-regulation/#respond Fri, 03 Feb 2023 14:49:36 +0000 https://broadbandbreakfast.com/?p=48297 WASHINGTON, February 3, 2023 — Just two months after its viral launch, ChatGPT reached 100 million monthly users in January, reportedly making it the fastest-growing consumer application in history — and raising concerns, both internal and external, about the lack of regulation for generative artificial intelligence.

Many of the potential problems with generative AI models stem from the datasets used to train them. The models will reflect whatever biases, inaccuracies and otherwise harmful content was present in their training data, but too much dataset filtering can detract from performance.

OpenAI has grappled with these concerns for years while developing powerful, publicly available tools such as DALL·E — an AI system that generates realistic images and original art from text descriptions — said Anna Makanju, OpenAI’s head of public policy, at a Federal Communications Bar Association event on Friday.

“We knew right off the bat that nonconsensual sexual imagery was going to be a problem, so we thought, ‘Why don’t we just try to go through the dataset and remove any sexual imagery so people can’t generate it,’” Makanju said. “And when we did that, the model could no longer generate women, because it turns out most of the visual images that are available to train a dataset on women are sexual in nature.”

Despite rigorous testing before ChatGPT’s release, early users quickly discovered ways to evade some of the guardrails intended to prevent harmful uses.

The model would not generate offensive content in response to direct requests, but one user found a loophole by asking it to write from the perspective of someone holding racist views — resulting in several paragraphs of explicitly racist text. When some users asked ChatGPT to write code using race and gender to determine whether someone would be a good scientist, the bot replied with a function that only selected white men. Still others were able to use the tool to generate phishing emails and malicious code.

OpenAI quickly responded with adjustments to the model’s filtering algorithms, as well as increased monitoring.

“So far, the approach we’ve taken is we just try to stay away from areas that can be controversial, and we ask the model not to speak to those areas,” Makanju said.

The company has also attempted to limit certain high-impact uses, such as automated hiring. “We don’t feel like at this point we know enough about how our systems function and biases that may impact employment, or if there’s enough accuracy for there to be an automated decision about hiring without a human in the loop,” Makanju explained.

However, Makanju noted that future generative language models will likely reach a point where users can significantly customize them based on personal worldviews. At that point, strong guardrails will need to be in place to prevent the model from behaving in certain harmful ways — for example, encouraging self-harm or giving incorrect medical advice.

Those guardrails should probably be established by external bodies or government agencies, Makanju said. “We recognize that we — a pretty small company in Silicon Valley — are not the best place to make a decision of how this will be used in every single domain, as hard as we try to think about it.”

Little AI regulation currently exists

So far, the U.S. has very little legislation governing the use of AI, although some states regulate automated hiring tools. On Jan. 26, the National Institute of Standards and Technology released the first version of its voluntary AI risk management framework, developed at the direction of Congress.

This regulatory crawl is being rapidly outpaced by the speed of generative AI research. Google reportedly declared a “code red” in response to ChatGPT’s release, speeding the development of multiple AI tools. Chinese tech company Baidu is planning to launch its own AI chatbot in March.

Not every company will respond to harmful uses as quickly as OpenAI, and some may not even attempt to stop them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI. PAI is a nonprofit coalition that develops tools and recommendations for AI governance.

Various private organizations, including PAI, have laid out their own ethical frameworks and policy recommendations. There is ongoing discussion about the extent to which these organizations, government agencies and tech companies should be determining AI regulation, Leibowicz said.

“What I’m interested in is, who’s involved in that risk calculus?” she asked. “How are we making those decisions? What types of actual affected communities are we talking to in order to make that calculus? Or is it a group of engineers sitting in a room trying to forecast for the whole world?”

Leibowicz advocated for transparency measures such as requiring standardized “nutrition labels” that would disclose the training dataset for any given AI model — a proposal similar to the label mandate announced in November for internet service providers.

A regulatory framework should be implemented while these technologies are still being created, rather than in response to a future crisis, Makanju said. “It’s very clear that this technology is going to be incorporated into every industry in some way in the coming years, and I worry a little bit about where we are right now in getting there.”

Automated Content Moderation’s Main Problem is Subjectivity, Not Accuracy, Expert Says https://broadbandbreakfast.com/2023/02/automated-content-moderations-main-problem-is-subjectivity-not-accuracy-expert-says/?utm_source=rss&utm_medium=rss&utm_campaign=automated-content-moderations-main-problem-is-subjectivity-not-accuracy-expert-says https://broadbandbreakfast.com/2023/02/automated-content-moderations-main-problem-is-subjectivity-not-accuracy-expert-says/#respond Thu, 02 Feb 2023 20:51:56 +0000 https://broadbandbreakfast.com/?p=48282 WASHINGTON, February 2, 2023 — The vast quantity of online content generated daily will likely drive platforms to increasingly rely on artificial intelligence for content moderation, making it critically important to understand the technology’s limitations, according to an industry expert.

Despite the ongoing culture war over content moderation, the practice is largely driven by financial incentives — so even companies with “a speech-maximizing set of values” will likely find some amount of moderation unavoidable, said Alex Feerst, CEO of Murmuration Labs, at a Jan. 25 American Enterprise Institute event. Murmuration Labs works with tech companies to develop online trust and safety products, policies and operations.

If a piece of online content could potentially lead to hundreds of thousands of dollars in legal fees, a company is “highly incentivized to err on the side of taking things down,” Feerst said. And even beyond legal liability, if the presence of certain content will alienate a substantial number of users and advertisers, companies have financial motivation to remove it.

However, a major challenge for content moderation is the sheer quantity of user-generated online content — which, on the average day, includes 500 million new tweets, 700 million Facebook comments and 720,000 hours of video uploaded to YouTube.

“The fully loaded cost of running a platform includes making millions of speech adjudications per day,” Feerst said.

“If you think about the enormity of that cost, very quickly you get to the point of, ‘Even if we’re doing very skillful outsourcing with great accuracy, we’re going to need automation to make the number of daily adjudications that we seem to need in order to process all of the speech that everybody is putting online and all of the disputes that are arising.’”

Automated moderation is not just a theoretical future question. In a March 2021 congressional hearing, Meta CEO Mark Zuckerberg testified that “more than 95 percent of the hate speech that we take down is done by an AI and not by a person… And I think it’s 98 or 99 percent of the terrorist content.”

Dealing with subjective content

But although AI can help manage the volume of user-generated content, it can’t solve one of the key problems of moderation: Beyond a limited amount of clearly illegal material, most decisions are subjective.

Much of the debate surrounding automated content moderation mistakenly presents subjectivity problems as accuracy problems, Feerst said.

For example, much of what is generally considered “hate speech” is not technically illegal, but many platforms’ terms of service prohibit such content. With these extrajudicial rules, there is often room for broad disagreement over whether any particular piece of content is a violation.

“AI cannot solve that human subjective disagreement problem,” Feerst said. “All it can do is more efficiently multiply this problem.”

This multiplication becomes problematic when AI models are replicating and amplifying human biases, which was the basis for the Federal Trade Commission’s June 2022 report warning Congress to avoid overreliance on AI.

“Nobody should treat AI as the solution to the spread of harmful online content,” said Samuel Levine, director of the FTC’s Bureau of Consumer Protection, in a statement announcing the report. “Combatting online harm requires a broad societal effort, not an overly optimistic belief that new technology — which can be both helpful and dangerous — will take these problems off our hands.”

The FTC’s report pointed to multiple studies revealing bias in automated hate speech detection models, often as a result of being trained on unrepresentative and discriminatory data sets.

As moderation processes become increasingly automated, Feerst predicted that the “trend of those problems being amplified and becoming less possible to discern seems very likely.”

Given those dangers, Feerst emphasized the urgency of understanding and then working to resolve AI’s limitations, noting that the demand for content moderation will not go away. To some extent, speech disputes are “just humans being human… you’re never going to get it down to zero,” he said.

AI Should Complement and Not Replace Humans, Says Stanford Expert https://broadbandbreakfast.com/2022/11/ai-should-compliment-and-not-replace-humans-says-stanford-expert/?utm_source=rss&utm_medium=rss&utm_campaign=ai-should-compliment-and-not-replace-humans-says-stanford-expert https://broadbandbreakfast.com/2022/11/ai-should-compliment-and-not-replace-humans-says-stanford-expert/#respond Fri, 04 Nov 2022 13:21:35 +0000 https://broadbandbreakfast.com/?p=45311 WASHINGTON, November 4, 2022 – Artificial intelligence should be developed primarily to augment the performance of, not replace, humans, said Erik Brynjolfsson, director of the Stanford Digital Economy Lab, at a Wednesday web event hosted by the Brookings Institution.

AI that complements human efforts can increase wages by driving up worker productivity, Brynjolfsson argued. AI that strictly imitates human behavior, he said, can make workers superfluous – thereby lowering the demand for workers and concentrating economic and political power in the hands of employers – in this case the owners of the AI.

“Complementarity (AI) implies that people remain indispensable for value creation and retain bargaining power in labor markets and in political decision-making,” he wrote in an essay earlier this year.

What’s more, designing AI to mimic existing human behaviors limits innovation, Brynjolfsson argued Wednesday.

“If you are simply taking what’s already being done and using a machine to replace what the human’s doing, that puts an upper bound on how good you can get,” he said. “The bigger value comes from creating an entirely new thing that never existed before.”

Brynjolfsson argued that AI should be crafted to reflect desired societal outcomes. “The tools we have now are more powerful than any we had before, which almost by definition means we have more power to change the world, to shape the world in different ways,” he said.

The AI Bill of Rights

In October, the White House released a blueprint for an “AI Bill of Rights.” The document condemned algorithmic discrimination on the basis of race, sex, religion, or age and emphasized the importance of user privacy. It also endorsed system transparency with users and suggested the use of human alternatives to AI when feasible.

To fully align with the blueprint’s standards, Russell Wald, policy director for Stanford’s Institute for Human-Centered Artificial Intelligence, argued at a recent Brookings event that the nation must develop a larger AI workforce.

Workforce Training Needed to Address Artificial Intelligence Bias, Researchers Suggest https://broadbandbreakfast.com/2022/10/workforce-training-needed-to-address-artificial-intelligence-bias-researchers-suggest/?utm_source=rss&utm_medium=rss&utm_campaign=workforce-training-needed-to-address-artificial-intelligence-bias-researchers-suggest https://broadbandbreakfast.com/2022/10/workforce-training-needed-to-address-artificial-intelligence-bias-researchers-suggest/#respond Wed, 26 Oct 2022 00:39:38 +0000 https://broadbandbreakfast.com/?p=45024 WASHINGTON, October 24, 2022 – To align with the newly released White House guide on artificial intelligence, a Stanford University policy director said at a Brookings Institution event last week that more social and technical workforce training is needed to address artificial intelligence biases.

Released on October 4, the Blueprint for an AI Bill of Rights framework by the White House’s Office of Science and Technology Policy is a guide for companies to follow five principles to ensure the protection of consumer rights from automated harm.

AI algorithms rely on learning a user’s behavior and disclosed information to customize services and advertising. Due to the nature of this process, algorithms carry the potential to send targeted information or enforce discriminatory eligibility practices based on race or class status, according to critics.

Risk mitigation, which prevents algorithm-based discrimination in AI technology, is listed as an “expectation of an automated system” under the “safe and effective systems” section of the White House framework.

Experts at the Brookings virtual event said that workforce development is the starting point for teaching professionals to identify risk and for building the capacity to fulfill this need.

“We don’t have the talent available to do this type of investigative work,” Russell Wald, policy director for Stanford’s Institute for Human-Centered Artificial Intelligence, said at the event.

“We just don’t have a trained workforce ready, and so what we really need to do is … I think we should invest in the next generation now and start giving people tools and access and the ability to learn how to do this type of work.”

Nicol Turner-Lee, senior fellow at the Brookings Institution, agreed with Wald, recommending sociologists, philosophers and technologists get involved in the process of AI programming to align with algorithmic discrimination protections – another core principle of the framework.

Core principles and protections suggested in this framework would require lawmakers to create new policies or include them in current safety requirements or civil rights laws. Each principle includes three sections on principles, automated systems and practice by government entities.

In July, Adam Thierer, senior research fellow at the Mercatus Center at George Mason University, said that he is “a little skeptical that we should create a regulatory AI structure,” and instead proposed educating workers on how to set best practices for risk management, calling it an “educational institution approach.”

Deepfakes Pose National Security Threat, Private Sector Tackles Issue https://broadbandbreakfast.com/2022/07/deepfakes-pose-national-security-threat-as-private-sector-tackles-issue/?utm_source=rss&utm_medium=rss&utm_campaign=deepfakes-pose-national-security-threat-as-private-sector-tackles-issue https://broadbandbreakfast.com/2022/07/deepfakes-pose-national-security-threat-as-private-sector-tackles-issue/#respond Wed, 20 Jul 2022 19:43:32 +0000 https://broadbandbreakfast.com/?p=42984 WASHINGTON, July 20, 2022 – Content manipulation techniques known as deepfakes are concerning policy makers and forcing the public and private sectors to work together to tackle the problem, a Center for Democracy and Technology event heard on Wednesday.

A deepfake is a technical method of generating synthetic media in which a person’s likeness is inserted into a photograph or video in such a way that creates the illusion that they were actually there. Policymakers are concerned that deepfakes could pose a threat to the country’s national security as the technology is being increasingly offered to the general population.

Deepfake concerns that policymakers have identified, said participants at Wednesday’s event, include misinformation from authoritarian governments, faked compromising and abusive images, and illegal profiting from faked celebrity content.

“We should not and cannot have our guard down in the cyberspace,” said Rep. John Katko, R-N.Y., ranking member of the House Committee on Homeland Security.

Adobe pitches technology to identify deepfakes

Software company Adobe released an open-source toolkit to counter deepfake concerns earlier this month, said Dana Rao, executive vice president of Adobe. The company’s Content Credentials feature is a technology developed over three years that tracks changes made to images, videos, and audio recordings.

Content Credentials is now an opt-in feature in the company’s photo editing software Photoshop that it says will help establish credibility for creators by adding “robust, tamper-evident provenance data about how a piece of content was produced, edited, and published,” read the announcement.

Adobe’s Content Authenticity Initiative is dedicated to addressing the problem of establishing trust amid the damage caused by deepfakes. “Once we stop believing in true things, I don’t know how we are going to be able to function in society,” said Rao. “We have to believe in something.”

As part of its initiative, Adobe is working with the public sector in supporting the Deepfake Task Force Act, which was introduced in August of 2021. If adopted, the bill would establish a National Deepfake and Digital Provenance Task Force composed of members from the private sector, public sector and academia to address disinformation.

For now, said Cailin Crockett, senior advisor to the White House Gender Policy Council, it is important to educate the public on the threat of disinformation.
