
Artificial Intelligence

FCC Cybersecurity Pilot Program, YouTube AI Regulations, Infrastructure Act Anniversary

The FCC has proposed a pilot program to help schools and libraries protect against cyberattacks.

Photo of fourth grade computer lab, taken 2009, permission.

November 15, 2023 – The Federal Communications Commission proposed Monday a cybersecurity pilot program for schools and libraries that would invest $200 million over three years in protecting K-12 students from cyberattacks.

In addition to assessing which cybersecurity services best suit student and school needs, the program would subsidize the cost of those services.

The program would operate as a new Universal Service Fund program, separate from E-Rate, the existing school internet subsidy program.

“This pilot program is an important pathway for hardening our defenses against sophisticated cyberattacks on schools and ransomware attacks that harm our students and get in the way of their learning,” said FCC Chairwoman Jessica Rosenworcel.

The proposal would be part of the larger Learn Without Limits initiative, which supports internet connectivity in schools to help close the homework gap by enabling kids' access to digital learning.

YouTube rolling out AI content regulations 

Alphabet’s video sharing platform YouTube announced in a blog post Tuesday it will be rolling out AI guidelines over the next few months, which will inform viewers about when they are interacting with “synthetic” or AI-generated content. 

The rules will require creators to disclose when a video contains AI-generated content. Creators who don't disclose that information could see their work flagged and removed, and they may be suspended from the platform or subject to other penalties.

For viewers, tags will appear in a video's description panel indicating whether the video is synthetic or AI-generated. YouTube noted that for videos dealing with more sensitive topics, it may use more prominent labels.

YouTube's AI guidelines come at a time when members of Congress and industry leaders are calling for increased effort toward AI regulatory reform, and after President Joe Biden signed an executive order on AI in October.

Two-year anniversary of the Infrastructure Investment and Jobs Act

Thursday marked the second anniversary of the Infrastructure Investment and Jobs Act, which prompted a $400-billion investment into the US economy.

The IIJA funded a variety of programs and initiatives, with over 40,000 sector-specific projects having received funding – several of them working to improve the broadband sector.

The IIJA invested $65 billion in improving connectivity, helping to establish the $14-billion Affordable Connectivity Program, which has so far helped more than 20 million US households get affordable internet through monthly subsidies of $30, or $75 on tribal lands.

Outside of the ACP, the IIJA called on the National Telecommunications and Information Administration to develop the Broadband Equity, Access and Deployment program, a $42.5-billion investment into high-speed broadband deployment across all 50 states.

Currently, states are in the process of submitting their BEAD draft proposals, which outline how each state will administer the funding it receives, account for funding it already has, and use broadband mapping data.

Reporter Hanna Agro studied journalism at Columbia University focused on news reporting and video production. For Broadband Breakfast, she has covered broadband deployment, rural area investment and artificial intelligence. She has also done culture reporting and documentary production.


Artificial Intelligence

CES 2024: Senators Talk Priorities on AI, Broadband Connectivity

Lawmakers called for guardrails on AI systems and more ACP funding.


Photo of the panel by Jake Neenan

LAS VEGAS, January 12, 2024 – U.S. senators highlighted their tech policy priorities on artificial intelligence and broadband connectivity at CES on Friday.

Sens. Ben Ray Luján, D-New Mexico, Cynthia Lummis, R-Wyoming, and John Hickenlooper, D-Colorado, sat on a panel moderated by Sen. Jacky Rosen, D-Nevada.

Promise and perils of AI

The lawmakers highlighted their focus on mitigating the potential risks of implementing AI. 

Hickenlooper touted the AI Research, Innovation and Accountability Act, which he introduced in November with Luján and other members of the Senate Commerce, Science and Transportation Committee.

That bill would require businesses deploying AI in relation to critical infrastructure operation, biometric data collection, criminal justice, and other "critical-impact" uses to submit risk assessments to the Commerce Department. The National Institute of Standards and Technology, housed in the department, would be tasked with developing standards for authenticating human- and AI-generated content online.

“AI is everywhere,” Hickenlooper said. “And every application comes with incredible opportunity, but also remarkable risks.”

Connectivity

Luján and Rosen expressed support for recent legislation introduced to extend the Affordable Connectivity Program. The fund, which provides a $30 monthly internet subsidy to 23 million low-income households, is set to dry up in April 2024 without more money from Congress.

The ACP Extension Act would provide $7 billion to keep the program afloat through 2024. It was first stood up with $14 billion from the Infrastructure Act in late 2021. 

“There are a lot of us working together,” Luján said, to keep the program alive for “people across America who could not connect, not because they didn’t have a connection to their home or business, but because they couldn’t afford it.”

Lawmakers, advocates, the Biden administration, and industry groups have been calling for months for additional funding, but the bill faces an uncertain future as House Republicans look to cut back on domestic spending.

Luján also stressed the need to reinstate the Federal Communications Commission’s spectrum auction authority.

“I’m ashamed to say it’s lapsed, but we need to get this done,” he said.

The Commission’s authority to auction off and issue licenses for the commercial use of electromagnetic spectrum expired for the first time in March 2023 after Congress failed to renew it. A stopgap law permitting the agency to issue already purchased licenses passed in December, but efforts at blanket reauthorization have stalled.


12 Days of Broadband

12 Days: Is ChatGPT Artificial General Intelligence or Not?

On the First Day of Broadband, my true love sent to me: One Artificial General Intelligence


Illustration by DALL-E

December 21, 2023 – Just over one year ago, most people in the technology and internet world would talk about passing the Turing test as if it were something far in the future.

This “test,” originally called the imitation game by computer scientist Alan Turing in 1950, is a hypothetical test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

The year 2023 – and the explosive economic, technological, and societal force unleashed by OpenAI since the release of ChatGPT on November 30, 2022 – made those days only 13 months ago seem quaint.

For example, users of large language models like ChatGPT, Anthropic's Claude, Meta's Llama and many others interact daily with machines as if they were simply very smart humans.

Yes, yes, informed users understand that chatbots like these simply use neural networks with very powerful predictive algorithms to come up with the probabilistic "next word" in a sequence begun by the questioner's inquiry. And, yes, users understand the propensity of such machines to "hallucinate" information that isn't quite accurate, or isn't accurate at all.

Which makes the chatbots seem, well, a little bit more human.
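For readers curious what "predicting the probabilistic next word" means in practice, here is a deliberately tiny sketch. A real large language model uses a neural network with billions of parameters; this toy stands in a hypothetical, hand-built frequency table, but the sampling step – weighting candidate next words by likelihood – is the same idea.

```python
import random

# Hypothetical word-pair frequency table standing in for a trained model:
# maps the last two words of a sequence to candidate next words and counts.
counts = {
    ("the", "cat"): {"sat": 3, "ran": 1},
    ("cat", "sat"): {"on": 4},
    ("sat", "on"): {"the": 4},
    ("on", "the"): {"mat": 2, "rug": 2},
}

def next_token(context):
    """Sample a probabilistic 'next word' given the last two tokens."""
    options = counts.get(tuple(context[-2:]), {})
    if not options:
        return None  # model has never seen this context
    tokens, weights = zip(*options.items())
    # Higher-count words are proportionally more likely to be chosen.
    return random.choices(tokens, weights=weights)[0]

# Extend a prompt one sampled word at a time, as an LLM does with tokens.
prompt = ["the", "cat"]
for _ in range(3):
    token = next_token(prompt)
    if token is None:
        break
    prompt.append(token)
```

Hallucination falls out of the same mechanism: the sampler always emits a statistically plausible word, whether or not the resulting sentence is true.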

Drama at OpenAI

At a Broadband Breakfast Live Online event on November 22, 2023, marking the one-year anniversary of ChatGPT’s public launch, our expert panelists focused on the regulatory uncertainty bequeathed by a much-accelerated form of artificial intelligence.

The event took place days after Sam Altman, CEO of OpenAI, was fired – before rejoining the company that Wednesday with a new board of directors. The board members who forced Altman out (all since replaced, except one) had clashed with him over the company's safety efforts.

More than 700 OpenAI employees then signed a letter threatening to quit if the board did not agree to resign.

In the backdrop, in other words, there was a policy angle behind the corporate boardroom battle that was itself one of the big tech stories of the year.

“This [was] accelerationism versus de-celerationism,” said Adam Thierer, a senior fellow at the R Street Institute, during the event.

Washington and the FCC wake up to AI

And it’s not that Washington is closing its eyes to the potentially life-altering – literally – consequences of artificial intelligence.

In October, the Biden administration issued an executive order on AI safety that includes measures aimed at both ensuring safety and spurring innovation, with directives for federal agencies to generate safety and AI identification standards, as well as grants for researchers and small businesses looking to use the technology.

But it’s not clear which side legislators on Capitol Hill might take in the future.

One notable application of AI in telecom highlighted by FCC chief Jessica Rosenworcel is AI-driven spectrum sharing optimization. Rosenworcel said in a July hearing that AI-enabled radios could collaborate autonomously, enhancing spectrum use without a central authority, an advancement poised for implementation.

AI's potential contribution to enhancing broadband mapping efforts was explored at a November House hearing. Although the FCC had initially regarded AI as having strong potential to aid broadband mapping, the technology faced skepticism from experts, who argued that in rural areas where data is scarce and of inferior quality, machine learning would struggle to identify potential inaccuracies.

Also in November, the FCC voted to launch a formal inquiry into the potential impact of AI on robocalls and robotexts. The agency believes illegal robocalls can be addressed through AI, which can flag calling patterns deemed suspicious and analyze voice biometrics for synthesized voices.

But isn’t ChatGPT a form of artificial general intelligence?

As we've learned through an intensive focus on AI over the course of the year, somewhere still beyond passing the Turing test lies the much-discussed concept of "artificial general intelligence." That presumably means something a little bit smarter than ChatGPT-4.

Previously, OpenAI had defined AGI as “AI systems that are generally smarter than humans.” But apparently sometime recently, the company redefined this to mean “a highly autonomous system that outperforms humans at most economically valuable work.”

Some, including Rumman Chowdhury, CEO of the tech accountability nonprofit Humane Intelligence, argue that by framing AGI in economic terms, OpenAI recast its mission as building things to sell – a far cry from its original vision of using intelligent AI systems to benefit all.

AGI, as ChatGPT-4 told this reporter, “refers to a machine’s ability to understand, learn, and apply its intelligence to solve any problem, much like a human being. ChatGPT, while advanced, is limited to tasks within the scope of its training and programming. It excels in language-based tasks but does not possess the broad, adaptable intelligence that AGI implies.”

That sounds like something an AGI-capable machine would very much want the world to believe.

Additional reporting provided on this story by Reporter Jericho Casper.

See “The Twelve Days of Broadband” on Broadband Breakfast


Artificial Intelligence

Sam Altman to Rejoin OpenAI, Tech CEOs Subpoenaed, EFF Warns About Malware

Altman was brought back to OpenAI only days after being fired.


Photo of Snap CEO Evan Spiegel, taken 2019, permission.

November 22, 2023 – OpenAI announced in an X post early Wednesday morning that Sam Altman will be rejoining the company that built ChatGPT as CEO, after he was fired on Friday.

Altman confirmed his intention to rejoin OpenAI in an X post Wednesday morning, saying that he was looking forward to returning to OpenAI with support from the new board.

Former company president Greg Brockman also said Wednesday he will return to the AI company.

Altman and Brockman will return alongside a newly formed board, which includes former Salesforce co-CEO Bret Taylor as chair, former US Treasury Secretary Larry Summers, and Quora CEO Adam D'Angelo, who previously held a position on the OpenAI board.

Satya Nadella, the CEO of OpenAI backer Microsoft, echoed support for both Brockman and Altman rejoining OpenAI, adding that he is looking forward to continuing to build a relationship with the OpenAI team in order to best deliver AI services to customers.

OpenAI had faced backlash from several hundred employees who threatened to leave and join Microsoft under Altman and Brockman unless the board of directors agreed to resign.

Tech CEOs subpoenaed to attend hearing

Sens. Dick Durbin, D-Illinois, and Lindsey Graham, R-South Carolina, announced Monday that tech giants Snap, Discord and X have been issued subpoenas to appear before the Senate Judiciary Committee on December 6 in relation to concerns over child sexual exploitation online.

Snap CEO Evan Spiegel, X CEO Linda Yaccarino and Discord CEO Jason Citron have been asked to address how or if they’ve worked to confront that issue. 

Durbin said in a press release that the committee “promised Big Tech that they’d have their chance to explain their failures to protect kids. Now’s that chance. Hearing from the CEOs of some of the world’s largest social media companies will help inform the Committee’s efforts to address the crisis of online child sexual exploitation.” 

Durbin noted in a press release that X and Discord initially refused to accept the subpoenas, requiring the US Marshals Service to deliver the documents personally.

The committee is also looking to have Meta CEO Mark Zuckerberg and TikTok CEO Shou Zi Chew testify, but has not received confirmation regarding their attendance.

Several bipartisan bills have been brought forth to address that kind of exploitation, including the Earn It Act, proposed by Sens. Richard Blumenthal, D-Connecticut, and Graham, which would hold platforms liable under child sexual abuse material laws.

EFF urging FTC to sanction sellers of malware-containing devices

The Electronic Frontier Foundation, a non-profit digital rights group, asked the Federal Trade Commission in a November 14 letter to sanction resellers like Amazon and AliExpress following allegations that mobile devices and Android TV boxes purchased from their stores contain malware.

The letter explained that once the devices were turned on and connected to the internet, they would begin "communicating with botnet command and control (C2) servers. From there, these devices connect to a vast click-fraud network which a report by HUMAN Security recently dubbed BADBOX."

The EFF added that this malware often operates without the consumer's knowledge, and that without advanced technical knowledge there is nothing consumers can do to remedy it themselves.

“These devices put buyers at risk not only by the click-fraud they routinely take part in, but also the fact that they facilitate using the buyers’ internet connections as proxies for the malware manufacturers or those they sell access to,” explained the letter. 

EFF said the devices containing malware included ones manufactured by Chinese companies AllWinner and RockChip, both of which the EFF has previously reported shipping products with malware.

