
Artificial Intelligence

Congress Should Mandate AI Guidelines for Transparency and Labeling, Say Witnesses

Transparency around data collection and risk assessments should be mandated by law, especially in high-risk applications of AI.


Screenshot of the Business Software Alliance's Victoria Espinel at the Commerce subcommittee hearing

WASHINGTON, September 12, 2023 – The United States should enact legislation mandating transparency from companies making and using artificial intelligence models, experts told the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security on Tuesday.

It was one of two AI policy hearings on the Hill that day: the Senate Judiciary Committee also heard testimony on AI, and the National AI Advisory Committee, an executive branch body, met as well.

The Senate Commerce subcommittee asked witnesses how AI-specific regulations should be implemented and what lawmakers should keep in mind when drafting potential legislation. 

“The unwillingness of leading vendors to disclose the attributes and provenance of the data they’ve used to train models needs to be urgently addressed,” said Ramayya Krishnan, dean of Carnegie Mellon University’s college of information systems and public policy.

Addressing problems with transparency of AI systems

Addressing the lack of transparency might look like standardized documentation outlining data sources and bias assessments, Krishnan said. That documentation could be verified by auditors and function “like a nutrition label” for users.

Witnesses from both private industry and human rights advocacy agreed that legally binding guidelines – both for transparency and risk management – will be necessary.

Victoria Espinel, CEO of the Business Software Alliance, a trade group representing software companies, said the AI risk management framework released in January by the National Institute of Standards and Technology was important, “but we do not think it is sufficient.”

“We think it would be best if legislation required companies in high-risk situations to be doing impact assessments and have internal risk management programs,” she said.

Those mandates – along with other transparency requirements discussed by the panel – should look different for companies that develop AI models and those that use them, and should only apply in the most high-risk applications, panelists said.

That last suggestion is in line with legislation being discussed in the European Union, which would apply differently depending on the assessed risk of a model’s use.

“High-risk” uses of AI, according to the witnesses, are situations in which an AI model is making consequential decisions, like in healthcare, hiring processes, and driving. Less consequential machine-learning models like those powering voice assistants and autocorrect would be subject to less government scrutiny under this framework.

Labeling AI-generated content

The panel also discussed the need to label AI-generated content.

“It is unreasonable to expect consumers to spot deceptive yet realistic imagery and voices,” said Sam Gregory, director of the human rights advocacy group WITNESS. “Guidance to look for a six-fingered hand or spot virtual errors in a puffer jacket does not help in the long run.”

With elections in the U.S. approaching, panelists agreed mandating labels on AI-generated images and videos will be essential. They said those labels will have to be more comprehensive than visual watermarks, which can be easily removed, and might take the form of cryptographically bound metadata.

Labeling content as being AI-generated will also be important for developers, Krishnan noted, as generative AI models become much less effective when trained on writing or images made by other AIs.

Privacy around these content labels was a concern for panelists. Some protocols for verifying the origins of a piece of content with metadata require the personal information of human creators.

“This is absolutely critical,” said Gregory. “We have to start from the principle that these approaches do not oblige personal information or identity to be a part of them.”

Separately, the executive branch committee that met Tuesday was established under the National AI Initiative Act of 2020 and is tasked with advising the president on AI-related matters. The NAIAC gathers representatives from the Departments of State, Defense, Energy and Commerce, together with the Attorney General, Director of National Intelligence, and Director of Science and Technology Policy.

Reporter Jake Neenan, who covers broadband infrastructure and broadband funding, is a recent graduate of the Columbia Journalism School. Previously, he reported on state prison conditions in New York and Massachusetts. He is also a devoted cat parent.



CES 2024: Senators Talk Priorities on AI, Broadband Connectivity

Lawmakers called for guardrails on AI systems and more ACP funding.


Photo of the panel by Jake Neenan

LAS VEGAS, January 12, 2024 – U.S. senators highlighted their tech policy priorities on artificial intelligence and broadband connectivity at CES on Friday.

Sens. Ben Ray Luján, D-New Mexico, Cynthia Lummis, R-Wyoming, and John Hickenlooper, D-Colorado, sat on a panel moderated by Sen. Jacky Rosen, D-Nevada.

Promise and perils of AI

The lawmakers highlighted their focus on mitigating the potential risks of implementing AI. 

Hickenlooper touted the AI Research, Innovation and Accountability Act, which he introduced in November with Luján and other members of the Senate Commerce, Science and Transportation Committee.

That bill would require businesses deploying AI in relation to critical infrastructure operation, biometric data collection, criminal justice, and other “critical-impact” uses to submit risk assessments to the Commerce Department. The National Institute of Standards and Technology, housed in the department, would be tasked with developing standards for authenticating human- and AI-generated content online.

“AI is everywhere,” Hickenlooper said. “And every application comes with incredible opportunity, but also remarkable risks.”

Connectivity

Luján and Rosen expressed support for recent legislation introduced to extend the Affordable Connectivity Program. The fund, which provides a $30 monthly internet subsidy to 23 million low-income households, is set to dry up in April 2024 without more money from Congress.

The ACP Extension Act would provide $7 billion to keep the program afloat through 2024. It was first stood up with $14 billion from the Infrastructure Act in late 2021. 

“There are a lot of us working together,” Luján said, to keep the program alive for “people across America who could not connect, not because they didn’t have a connection to their home or business, but because they couldn’t afford it.”

Lawmakers, advocates, the Biden administration, and industry groups have been calling for months for additional funding, but the bill faces an uncertain future as House Republicans look to cut back on domestic spending.

Luján also stressed the need to reinstate the Federal Communications Commission’s spectrum auction authority.

“I’m ashamed to say it’s lapsed, but we need to get this done,” he said.

The Commission’s authority to auction off and issue licenses for the commercial use of electromagnetic spectrum expired for the first time in March 2023 after Congress failed to renew it. A stopgap law permitting the agency to issue already purchased licenses passed in December, but efforts at blanket reauthorization have stalled.


12 Days of Broadband

12 Days: Is ChatGPT Artificial General Intelligence or Not?

On the First Day of Broadband, my true love sent to me: One Artificial General Intelligence


Illustration by DALL-E

December 21, 2023 – Just over one year ago, most people in the technology and internet world would talk about passing the Turing test as if it were something far in the future.

This “test,” originally called the imitation game by computer scientist Alan Turing in 1950, is a hypothetical test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

The year 2023 – and the explosive economic, technological, and societal force unleashed by OpenAI since the release of ChatGPT on November 30, 2022 – makes those days, only 13 months ago, seem quaint.

For example, users of large language models like ChatGPT, Anthropic’s Claude, Meta’s Llama and many others interact daily with these machines as if they were simply very smart humans.

Yes, yes, informed users understand that chatbots like these are simply neural networks using very powerful predictive algorithms to come up with the probabilistic “next word” in a sequence begun by the questioner’s inquiry. And, yes, users understand the propensity of such machines to “hallucinate” information that isn’t quite accurate, or isn’t accurate at all.

Which makes the chatbots seem, well, a little bit more human.

Drama at OpenAI

At a Broadband Breakfast Live Online event on November 22, 2023, marking the one-year anniversary of ChatGPT’s public launch, our expert panelists focused on the regulatory uncertainty bequeathed by a much-accelerated form of artificial intelligence.

The event took place days after Sam Altman, CEO of OpenAI, was fired – and before he rejoined the company that Wednesday with a new board of directors. The board members who forced Altman out (all since replaced, except one) had clashed with him over the company’s safety efforts.

More than 700 OpenAI employees then signed a letter threatening to quit if the board did not agree to resign.

In the backdrop, in other words, there was a policy angle behind one of the corporate boardroom battles that was itself one of the big tech stories of the year.

“This [was] accelerationism versus de-celerationism,” said Adam Thierer, a senior fellow at the R Street Institute, during the event.

Washington and the FCC wake up to AI

And it’s not that Washington is closing its eyes to the potentially life-altering – literally – consequences of artificial intelligence.

In October, the Biden administration issued an executive order on AI safety that includes measures aimed at both ensuring safety and spurring innovation, with directives for federal agencies to develop safety and AI identification standards, as well as grants for researchers and small businesses looking to use the technology.

But it’s not clear which side legislators on Capitol Hill might take in the future.

One notable application of AI in telecom highlighted by FCC chief Jessica Rosenworcel is AI-driven spectrum sharing optimization. Rosenworcel said in a July hearing that AI-enabled radios could collaborate autonomously, enhancing spectrum use without a central authority, an advancement poised for implementation.

AI’s potential contribution to enhancing broadband mapping efforts was explored in a November House hearing. Though the FCC had initially regarded AI as having strong potential for aiding in broadband mapping, experts there were skeptical, arguing that machine learning would struggle to identify potential inaccuracies in rural areas where data is scarce and of inferior quality.

Also in November, the FCC voted to launch a formal inquiry on the potential impact of AI on robocalls and robotexts. The agency believes that illegal robocalls can be addressed through AI which can flag certain patterns that are deemed suspicious and analyze voice biometrics for synthesized voices.

But isn’t ChatGPT a form of artificial general intelligence?

As we’ve learned through an intensive focus on AI over the course of the year, somewhere still beyond passing the Turing test lies the much-discussed concept of “artificial general intelligence.” That presumably means something a little bit smarter than ChatGPT-4.

Previously, OpenAI had defined AGI as “AI systems that are generally smarter than humans.” But apparently sometime recently, the company redefined this to mean “a highly autonomous system that outperforms humans at most economically valuable work.”

Some, including Rumman Chowdhury, CEO of the tech accountability nonprofit Humane Intelligence, argue that by framing AGI in economic terms, OpenAI recast its mission as building things to sell, a far cry from its original vision of using intelligent AI systems to benefit all.

AGI, as ChatGPT-4 told this reporter, “refers to a machine’s ability to understand, learn, and apply its intelligence to solve any problem, much like a human being. ChatGPT, while advanced, is limited to tasks within the scope of its training and programming. It excels in language-based tasks but does not possess the broad, adaptable intelligence that AGI implies.”

That sounds like something that an AGI-capable machine would very much want the world to believe.

Additional reporting provided on this story by Reporter Jericho Casper.

See “The Twelve Days of Broadband” on Broadband Breakfast


Sam Altman to Rejoin OpenAI, Tech CEOs Subpoenaed, EFF Warns About Malware

Altman was brought back to OpenAI only days after being fired.


Photo of Snap CEO Evan Spiegel, taken in 2019, used with permission.

November 22, 2023 – OpenAI announced in an X post early Wednesday morning that Sam Altman will be rejoining the company that built ChatGPT as CEO, after he was fired on Friday.

Altman confirmed his intention to rejoin OpenAI in an X post Wednesday morning, saying that he was looking forward to returning to OpenAI with support from the new board.

Former company president Greg Brockman also said Wednesday he will return to the AI company.

Altman and Brockman will join with a newly formed board, which includes former Salesforce co-CEO Bret Taylor as the chair, former US Treasury Secretary Larry Summers, and Quora CEO Adam D’Angelo, who previously held a position on the OpenAI board.

Satya Nadella, the CEO of OpenAI backer Microsoft, echoed support for both Brockman and Altman rejoining OpenAI, adding that he is looking forward to continuing building a relationship with the OpenAI team in order to best deliver AI services to customers. 

OpenAI received backlash from several hundred employees who threatened to leave and join Microsoft under Altman and Brockman unless the current board of directors agreed to resign.  

Tech CEOs subpoenaed to attend hearing

Sens. Dick Durbin, D-Illinois, and Lindsey Graham, R-South Carolina, announced Monday that tech giants Snap, Discord and X have been issued subpoenas for their appearance at the Senate Judiciary Committee on December 6 in relation to concerns over child sexual exploitation online. 

Snap CEO Evan Spiegel, X CEO Linda Yaccarino and Discord CEO Jason Citron have been asked to address how or if they’ve worked to confront that issue. 

Durbin said in a press release that the committee “promised Big Tech that they’d have their chance to explain their failures to protect kids. Now’s that chance. Hearing from the CEOs of some of the world’s largest social media companies will help inform the Committee’s efforts to address the crisis of online child sexual exploitation.” 

Durbin noted in a press release that both X and Discord initially refused to accept the subpoenas, which required the U.S. Marshals Service to personally deliver those documents.

The committee is looking to have Meta CEO Mark Zuckerberg and TikTok CEO Shou Zi Chew testify as well but has not received confirmation regarding their attendance.

Several bipartisan bills have been brought forth to address that kind of exploitation, including the Earn It Act, proposed by Sens. Richard Blumenthal, D-Connecticut, and Graham, which would hold platforms liable under child sexual abuse material laws.

EFF urging FTC to sanction sellers of malware-containing devices

The Electronic Frontier Foundation, a non-profit digital rights group, asked the Federal Trade Commission in a November 14 letter to sanction resellers like Amazon and AliExpress following allegations that mobile devices and Android TV boxes purchased from their stores contain malware.

The letter explained that once the devices were turned on and connected to the internet, they would begin “communicating with botnet command and control (C2) servers. From there, these devices connect to a vast click-fraud network which a report by HUMAN Security recently dubbed BADBOX.”

The EFF added that this malware often operates unbeknownst to the consumer, and that without advanced technical knowledge, there is nothing consumers can do to remedy it themselves.

“These devices put buyers at risk not only by the click-fraud they routinely take part in, but also the fact that they facilitate using the buyers’ internet connections as proxies for the malware manufacturers or those they sell access to,” explained the letter. 

EFF said that the devices containing malware included ones manufactured by Chinese companies AllWinner and RockChip, which the EFF has previously reported on for shipping products with malware.
