Artificial Intelligence

As ChatGPT’s Popularity Skyrockets, Some Experts Call for AI Regulation

As generative AI models grow more sophisticated, they present increasing risks.

Photo by Tada Images/Adobe Stock, used with permission

WASHINGTON, February 3, 2023 — Just two months after its viral launch, ChatGPT reached 100 million monthly users in January, reportedly making it the fastest-growing consumer application in history — and raising concerns, both internal and external, about the lack of regulation for generative artificial intelligence.

Many of the potential problems with generative AI models stem from the datasets used to train them. The models will reflect whatever biases, inaccuracies and otherwise harmful content was present in their training data, but too much dataset filtering can detract from performance.

OpenAI has grappled with these concerns for years while developing powerful, publicly available tools such as DALL·E — an AI system that generates realistic images and original art from text descriptions, said Anna Makanju, OpenAI’s head of public policy, at a Federal Communications Bar Association event on Friday.

“We knew right off the bat that nonconsensual sexual imagery was going to be a problem, so we thought, ‘Why don’t we just try to go through the dataset and remove any sexual imagery so people can’t generate it,’” Makanju said. “And when we did that, the model could no longer generate women, because it turns out most of the visual images that are available to train a dataset on women are sexual in nature.”

Despite rigorous testing before ChatGPT’s release, early users quickly discovered ways to evade some of the guardrails intended to prevent harmful uses.

The model would not generate offensive content in response to direct requests, but one user found a loophole by asking it to write from the perspective of someone holding racist views — resulting in several paragraphs of explicitly racist text. When some users asked ChatGPT to write code using race and gender to determine whether someone would be a good scientist, the bot replied with a function that only selected white men. Still others were able to use the tool to generate phishing emails and malicious code.

OpenAI quickly responded with adjustments to the model’s filtering algorithms, as well as increased monitoring.

“So far, the approach we’ve taken is we just try to stay away from areas that can be controversial, and we ask the model not to speak to those areas,” Makanju said.

The company has also attempted to limit certain high-impact uses, such as automated hiring. “We don’t feel like at this point we know enough about how our systems function and biases that may impact employment, or if there’s enough accuracy for there to be an automated decision about hiring without a human in the loop,” Makanju explained.

However, Makanju noted that future generative language models will likely reach a point where users can significantly customize them based on personal worldviews. At that point, strong guardrails will need to be in place to prevent the model from behaving in certain harmful ways — for example, encouraging self-harm or giving incorrect medical advice.

Those guardrails should probably be established by external bodies or government agencies, Makanju said. “We recognize that we — a pretty small company in Silicon Valley — are not the best place to make a decision of how this will be used in every single domain, as hard as we try to think about it.”

Little AI regulation currently exists

So far, the U.S. has very little legislation governing the use of AI, although some states regulate automated hiring tools. On Jan. 26, the National Institute of Standards and Technology released the first version of its voluntary AI risk management framework, developed at the direction of Congress.

This regulatory crawl is being rapidly outpaced by the speed of generative AI research. Google reportedly declared a “code red” in response to ChatGPT’s release, speeding the development of multiple AI tools. Chinese tech company Baidu is planning to launch its own AI chatbot in March.

Not every company will respond to harmful uses as quickly as OpenAI, and some may not even attempt to stop them, said Claire Leibowicz, head of AI and media integrity at the Partnership on AI. PAI is a nonprofit coalition that develops tools and recommendations for AI governance.

Various private organizations, including PAI, have laid out their own ethical frameworks and policy recommendations. There is ongoing discussion about the extent to which these organizations, government agencies and tech companies should be determining AI regulation, Leibowicz said.

“What I’m interested in is, who’s involved in that risk calculus?” she asked. “How are we making those decisions? What types of actual affected communities are we talking to in order to make that calculus? Or is it a group of engineers sitting in a room trying to forecast for the whole world?”

Leibowicz advocated for transparency measures such as requiring standardized “nutrition labels” that would disclose the training dataset for any given AI model — a proposal similar to the label mandate announced in November for internet service providers.
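To make the idea concrete, here is a rough sketch, in Python, of the kind of information such a label might carry. Every field name and value below is invented for illustration; no standard format for AI “nutrition labels” has been adopted.

# A hypothetical, illustrative "nutrition label" for an AI model.
# All fields and values are invented for illustration and do not
# reflect any actual or proposed labeling standard.
model_label = {
    "model_name": "example-model-1",  # hypothetical model
    "training_data_sources": [
        "public web crawl (2022 snapshot)",
        "licensed news archives",
    ],
    "known_limitations": [
        "may reflect biases present in training data",
        "may generate inaccurate statements",
    ],
    "intended_uses": ["drafting text", "summarization"],
    "discouraged_uses": ["automated hiring decisions"],
}

Standardizing even a handful of fields like these would let regulators and users compare models’ training data disclosures at a glance, much as ISP labels aim to make broadband pricing comparable.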

A regulatory framework should be implemented while these technologies are still being created, rather than in response to a future crisis, Makanju said. “It’s very clear that this technology is going to be incorporated into every industry in some way in the coming years, and I worry a little bit about where we are right now in getting there.”

Reporter Em McPhie studied communication design and writing at Washington University in St. Louis, where she was a managing editor for the student newspaper. In addition to agency and freelance marketing experience, she has reported extensively on Section 230, big tech, and rural broadband access. She is a founding board member of Code Open Sesame, an organization that teaches computer programming skills to underprivileged children.

Artificial Intelligence

CES 2024: Senators Talk Priorities on AI, Broadband Connectivity

Lawmakers called for guardrails on AI systems and more ACP funding.

Photo of the panel by Jake Neenan

LAS VEGAS, January 12, 2024 – U.S. senators highlighted their tech policy priorities on artificial intelligence and broadband connectivity at CES on Friday.

Sens. Ben Ray Luján, D-New Mexico, Cynthia Lummis, R-Wyoming, and John Hickenlooper, D-Colorado, sat on a panel moderated by Sen. Jacky Rosen, D-Nevada.

Promise and perils of AI

The lawmakers highlighted their focus on mitigating the potential risks of implementing AI. 

Hickenlooper touted the AI Research, Innovation and Accountability Act, which he introduced in November with Luján and other members of the Senate Commerce, Science and Transportation Committee.

That bill would require businesses deploying AI in relation to critical infrastructure operation, biometric data collection, criminal justice, and other “critical-impact” uses to submit risk assessments to the Commerce Department. The National Institute of Standards and Technology, housed in the department, would be tasked with developing standards for authenticating human- and AI-generated content online.

“AI is everywhere,” Hickenlooper said. “And every application comes with incredible opportunity, but also remarkable risks.”

Connectivity

Luján and Rosen expressed support for recent legislation introduced to extend the Affordable Connectivity Program. The fund, which provides a $30 monthly internet subsidy to 23 million low-income households, is set to dry up in April 2024 without more money from Congress.

The ACP Extension Act would provide $7 billion to keep the program afloat through 2024. It was first stood up with $14 billion from the Infrastructure Act in late 2021. 

“There are a lot of us working together,” Luján said, to keep the program alive for “people across America who could not connect, not because they didn’t have a connection to their home or business, but because they couldn’t afford it.”

Lawmakers, advocates, the Biden administration, and industry groups have been calling for months for additional funding, but the bill faces an uncertain future as House Republicans look to cut back on domestic spending.

Luján also stressed the need to reinstate the Federal Communications Commission’s spectrum auction authority.

“I’m ashamed to say it’s lapsed, but we need to get this done,” he said.

The Commission’s authority to auction off and issue licenses for the commercial use of electromagnetic spectrum expired for the first time in March 2023 after Congress failed to renew it. A stopgap law permitting the agency to issue already purchased licenses passed in December, but efforts at blanket reauthorization have stalled.

12 Days of Broadband

12 Days: Is ChatGPT Artificial General Intelligence or Not?

On the First Day of Broadband, my true love sent to me: One Artificial General Intelligence

Illustration by DALL-E

December 21, 2023 – Just over one year ago, most people in the technology and internet world would talk about passing the Turing test as if it were something far in the future.

This “test,” originally called the imitation game by computer scientist Alan Turing in 1950, is a hypothetical test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

The year 2023 – and the explosive economic, technological, and societal force unleashed by OpenAI since the release of ChatGPT on November 30, 2022 – makes those days only 13 months ago seem quaint.

For example, users of large language models like ChatGPT, Anthropic’s Claude, Meta’s Llama and many others interact daily with machines as if they were simply very smart humans.

Yes, yes, informed users understand that chatbots like these are simply using neural networks with very powerful predictive algorithms to come up with the probabilistic “next word” in a sequence begun by the questioner’s inquiry. And, yes, users understand the propensity of such machines to “hallucinate” information that isn’t quite accurate, or isn’t accurate at all.

Which makes the chatbots seem, well, a little bit more human.
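For readers curious what a probabilistic “next word” means mechanically, the short Python sketch below samples one word from a made-up probability distribution. The vocabulary and scores are invented for illustration; real models compute such scores with billions of learned parameters over vocabularies of tens of thousands of tokens.

import numpy as np

# Illustrative only: a toy vocabulary and hypothetical scores
# ("logits") that a trained network might assign to each
# candidate next word.
vocab = ["the", "internet", "regulation", "breakfast"]
logits = np.array([2.0, 0.5, 1.0, 3.0])

# Softmax turns raw scores into a probability distribution.
probs = np.exp(logits) / np.exp(logits).sum()

# Sample the probabilistic "next word" from that distribution.
next_word = np.random.choice(vocab, p=probs)
print(next_word)  # e.g. "breakfast"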

Drama at OpenAI

At a Broadband Breakfast Live Online event on November 22, 2023, marking the one-year anniversary of ChatGPT’s public launch, our expert panelists focused on the regulatory uncertainty bequeathed by a much-accelerated form of artificial intelligence.

The event took place days after Sam Altman, CEO of OpenAI, was fired – before rejoining the company that Wednesday with a new board of directors. The board members who forced Altman out (all since replaced, except one) had clashed with him over the company’s safety efforts.

More than 700 OpenAI employees then signed a letter threatening to quit if the board did not agree to resign.

In other words, behind the corporate boardroom battle was a policy angle that was itself one of the big tech stories of the year.

“This [was] accelerationism versus de-celerationism,” said Adam Thierer, a senior fellow at the R Street Institute, during the event.

Washington and the FCC wake up to AI

And it’s not that Washington is closing its eyes to the potentially life-altering – literally – consequences of artificial intelligence.

In October, the Biden administration issued an executive order on AI safety that includes measures aimed at both ensuring safety and spurring innovation, with directives for federal agencies to generate safety and AI identification standards as well as grants for researchers and small businesses looking to use the technology.

But it’s not clear which side legislators on Capitol Hill might take in the future.

One notable application of AI in telecom highlighted by FCC chief Jessica Rosenworcel is AI-driven spectrum sharing optimization. Rosenworcel said in a July hearing that AI-enabled radios could collaborate autonomously, enhancing spectrum use without a central authority, an advancement poised for implementation.

AI’s potential contribution to enhancing broadband mapping efforts was explored in a November House hearing. Although the FCC initially regarded AI as having strong potential for aiding in broadband mapping, experts at the hearing were skeptical, arguing that in rural areas where data is scarce and of inferior quality, machine learning would struggle to identify potential inaccuracies.

Also in November, the FCC voted to launch a formal inquiry into the potential impact of AI on robocalls and robotexts. The agency believes that illegal robocalls can be addressed through AI, which can flag call patterns deemed suspicious and analyze voice biometrics for synthesized voices.

But isn’t ChatGPT a form of artificial general intelligence?

As we’ve learned through an intensive focus on AI over the course of the year, somewhere still beyond passing the Turing test lies the vaunted concept of “artificial general intelligence.” Presumably that means something a little bit smarter than ChatGPT-4.

Previously, OpenAI had defined AGI as “AI systems that are generally smarter than humans.” But apparently sometime recently, the company redefined this to mean “a highly autonomous system that outperforms humans at most economically valuable work.”

Some, including Rumman Chowdhury, CEO of the tech accountability nonprofit Humane Intelligence, argue that by framing AGI in economic terms, OpenAI recast its mission as building things to sell, a far cry from its original vision of using intelligent AI systems to benefit all.

AGI, as ChatGPT-4 told this reporter, “refers to a machine’s ability to understand, learn, and apply its intelligence to solve any problem, much like a human being. ChatGPT, while advanced, is limited to tasks within the scope of its training and programming. It excels in language-based tasks but does not possess the broad, adaptable intelligence that AGI implies.”

That sounds like something that an AGI-capable machine would very much want the world to believe.

Additional reporting on this story provided by Reporter Jericho Casper.

See “The Twelve Days of Broadband” on Broadband Breakfast

Artificial Intelligence

Sam Altman to Rejoin OpenAI, Tech CEOs Subpoenaed, EFF Warns About Malware

Altman was brought back to OpenAI only days after being fired.

Photo of Snap CEO Evan Spiegel, taken in 2019, used with permission

November 22, 2023 – OpenAI announced in an X post early Wednesday morning that Sam Altman will rejoin the company behind ChatGPT as CEO after he was fired on Friday.

Altman confirmed his intention to rejoin OpenAI in an X post Wednesday morning, saying that he was looking forward to returning to OpenAI with support from the new board.

Former company president Greg Brockman also said Wednesday he will return to the AI company.

Altman and Brockman will be joined by a newly formed board, which includes former Salesforce co-CEO Bret Taylor as chair, former US Treasury Secretary Larry Summers, and Quora CEO Adam D’Angelo, who previously held a position on the OpenAI board.

Satya Nadella, the CEO of OpenAI backer Microsoft, echoed support for both Brockman and Altman rejoining OpenAI, adding that he is looking forward to continuing to build a relationship with the OpenAI team in order to best deliver AI services to customers.

OpenAI received backlash from several hundred employees who threatened to leave and join Microsoft under Altman and Brockman unless the current board of directors agreed to resign.  

Tech CEOs subpoenaed to attend hearing

Sens. Dick Durbin, D-Illinois, and Lindsey Graham, R-South Carolina, announced Monday that tech giants Snap, Discord and X have been issued subpoenas for their appearance at the Senate Judiciary Committee on December 6 in relation to concerns over child sexual exploitation online. 

Snap CEO Evan Spiegel, X CEO Linda Yaccarino and Discord CEO Jason Citron have been asked to address how or if they’ve worked to confront that issue. 

Durbin said in a press release that the committee “promised Big Tech that they’d have their chance to explain their failures to protect kids. Now’s that chance. Hearing from the CEOs of some of the world’s largest social media companies will help inform the Committee’s efforts to address the crisis of online child sexual exploitation.” 

Durbin noted in a press release that both X and Discord initially refused to accept the subpoenas, requiring the US Marshals Service to deliver the documents personally.

The committee is looking to have Meta CEO Mark Zuckerberg and TikTok CEO Shou Zi Chew testify as well but has not received confirmation regarding their attendance.

Several bipartisan bills have been brought forth to address that kind of exploitation, including the Earn It Act, proposed by Sens. Richard Blumenthal, D-Connecticut, and Graham, which would hold platforms liable under child sexual abuse material laws.

EFF urging FTC to sanction sellers of malware-containing devices

The Electronic Frontier Foundation, a non-profit digital rights group, has asked the Federal Trade Commission in a November 14 letter to sanction resellers like Amazon and AliExpress following allegations that mobile devices and Android TV boxes purchased from their stores contain malware.

The letter explained that once the devices were turned on and connected to the internet, they would begin “communicating with botnet command and control (C2) servers. From there, these devices connect to a vast click-fraud network which a report by HUMAN Security recently dubbed BADBOX.”

The EFF added that this malware often operates unbeknownst to the consumer, and that without advanced technical knowledge there is nothing consumers can do to remedy it themselves.

“These devices put buyers at risk not only by the click-fraud they routinely take part in, but also the fact that they facilitate using the buyers’ internet connections as proxies for the malware manufacturers or those they sell access to,” explained the letter. 

EFF said the devices containing malware included ones manufactured by Chinese companies AllWinner and RockChip, both of which EFF has previously reported shipping products with malware.
