Innovation – Broadband Breakfast (https://broadbandbreakfast.com) – Better Broadband, Better Lives

CES 2024: Senators Talk Priorities on AI, Broadband Connectivity
https://broadbandbreakfast.com/2024/01/ces-2024-senators-talk-priorities-on-ai-broadband-connectivity/

LAS VEGAS, January 12, 2024 – U.S. senators highlighted their tech policy priorities on artificial intelligence and broadband connectivity at CES on Friday.

Sens. Ben Ray Luján, D-New Mexico, Cynthia Lummis, R-Wyoming, and John Hickenlooper, D-Colorado, sat on a panel moderated by Sen. Jacky Rosen, D-Nevada.

Promise and perils of AI

The lawmakers highlighted their focus on mitigating the potential risks of implementing AI. 

Hickenlooper touted the AI Research, Innovation and Accountability Act, which he introduced in November with Luján and other members of the Senate Commerce, Science and Transportation Committee.

The bill would require businesses deploying AI in relation to critical infrastructure operation, biometric data collection, criminal justice, and other “critical-impact” uses to submit risk assessments to the Commerce Department. The National Institute of Standards and Technology, housed in the department, would be tasked with developing standards for authenticating human- and AI-generated content online.

“AI is everywhere,” Hickenlooper said. “And every application comes with incredible opportunity, but also remarkable risks.”

Connectivity

Luján and Rosen expressed support for recent legislation introduced to extend the Affordable Connectivity Program. The fund, which provides a $30 monthly internet subsidy to 23 million low-income households, is set to dry up in April 2024 without more money from Congress.

The ACP Extension Act would provide $7 billion to keep the program afloat through 2024. It was first stood up with $14 billion from the Infrastructure Act in late 2021. 

“There are a lot of us working together,” Luján said, to keep the program alive for “people across America who could not connect, not because they didn’t have a connection to their home or business, but because they couldn’t afford it.”

Lawmakers, advocates, the Biden administration, and industry groups have been calling for months for additional funding, but the bill faces an uncertain future as House Republicans look to cut back on domestic spending.

Luján also stressed the need to reinstate the Federal Communications Commission’s spectrum auction authority.

“I’m ashamed to say it’s lapsed, but we need to get this done,” he said.

The Commission’s authority to auction off and issue licenses for the commercial use of electromagnetic spectrum expired for the first time in March 2023 after Congress failed to renew it. A stopgap law permitting the agency to issue already purchased licenses passed in December, but efforts at blanket reauthorization have stalled.

CES 2024: Siemens Announces New Partnerships with AWS, Sony
https://broadbandbreakfast.com/2024/01/ces-2024-siemens-announces-new-partnerships-with-aws-sony/

LAS VEGAS, January 9, 2024 – Technology company Siemens announced two new partnerships on the opening night of CES: generative AI integration in its development platform with Amazon, and a mixed-reality headset for engineers and creators with Sony.

Both are part of the company’s broader vision for the “industrial metaverse” – a more detailed and immersive version of the simulations companies use to test products and equipment before expending real-world resources.

The Amazon partnership will bring the company’s generative AI service, Amazon Bedrock, to Siemens’ low-code development platform, Mendix.

Low-code refers to platforms that allow users to develop software with a visual interface rather than by writing code line-by-line. With Amazon Bedrock, Mendix users will have access to a myriad of generative AI models, each tailored for a specific use.

“Through Mendix you can access with just a few clicks different artificial intelligence models, which are good at natural language processing, predicting elements, that can manage automation,” said AWS Vice President of Product Matt Wood. “All without needing to know anything about the machine learning models themselves.”
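For a rough sense of the plumbing that a low-code integration like this abstracts away, below is a minimal, hypothetical sketch of calling a text model on Amazon Bedrock directly with the AWS Python SDK. The model ID, prompt, and response field names are assumptions for illustration and vary by model family.

```python
# Hypothetical sketch: invoking a Bedrock text model directly via boto3.
# A low-code platform is meant to hide exactly this kind of plumbing.
import json
import boto3

client = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "inputText": "Summarize the maintenance log for machine 42.",       # hypothetical prompt
    "textGenerationConfig": {"maxTokenCount": 256, "temperature": 0.2},
})

response = client.invoke_model(
    modelId="amazon.titan-text-express-v1",  # assumed model ID; availability varies by region
    contentType="application/json",
    accept="application/json",
    body=body,
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])    # response fields assumed for this model family
```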

On the Sony front, the companies announced an augmented/virtual reality headset that will make use of Siemens’ Xcelerator design software. The headset comes with handheld accessories including a ring and a pointer tool to manipulate 3D models.

The headset is intended to allow engineers to interact naturally with virtual prototypes and components – or “digital twins,” another part of Siemens’ pitch for computationally aided manufacturing.

“Anyone can actually put on the headset and be an engineer, and design and collaborate with anyone in the world,” said Cedrik Neike, CEO of Siemens Digital Industries. “Even me.”

The headset is expected to hit the market later this year.

12 Days: Biden’s Signature CHIPS Act Spurs Investments and China Concerns
https://broadbandbreakfast.com/2023/12/12-days-bidens-signature-chips-act-spurs-investments-and-china-concerns/

December 27, 2023 — August 2023 marked the one-year anniversary of President Joe Biden’s signature law, the CHIPS and Science Act. On that occasion, the White House touted $166 billion in new semiconductor investments and manufacturing projects in the United States.

As both Biden and Commerce Secretary Gina Raimondo have been quick to note, American ingenuity invented the semiconductor. But the U.S. now produces only 12 percent of the world’s supply – none of it the most advanced chips – down from 40 percent in 1990.

The CHIPS Act provides $52 billion to incentivize chip companies to build factories in the U.S., aiming to reduce reliance on Asia for the crucial components used in everyday electronics. Over the summer, during the one-year anniversary, Biden administration officials touted investment commitments from companies like Micron, IBM and Wolfspeed.

The influx of cash is a relief for an industry disrupted by pandemic-related shutdowns’ impact on global supply chains. Automakers were especially impacted by the chip shortage, forcing production cuts and inventory reductions.

“The innovation and technology funded in the CHIPS Act is how we plan to expand the technological and national security advantages of America and our allies; these guardrails will help ensure we stay ahead of adversaries for decades to come,” Raimondo said.

White House takes a victory lap

Alongside the law’s 25 percent tax credit for capital investments in semiconductor manufacturing, the administration cited how companies have announced more than $166 billion in semiconductor and electronics manufacturing, and how at least 50 community colleges in 19 states have announced new or expanded programming to help American workers access jobs in the semiconductor industry.

In August, the Commerce Department announced the first round of grants under CHIPS to support the development of open and interoperable wireless networks, and the National Science Foundation and the Energy, Commerce, and Defense Departments announced progress toward establishing the National Semiconductor Technology Center.

Other milestones touted by the administration include:

  • Supporting U.S. semiconductor manufacturing through $39 billion in manufacturing incentives.
  • Receiving more than 460 statements of interest from companies interested in CHIPS funding for projects across 42 states.
  • Standing up CHIPS for America, a Commerce Department team of more than 140 people working to support implementation of all aspects of the CHIPS incentives program.
  • Issuing, through the Treasury Department in March, a proposed rule providing guidance on the Advanced Manufacturing Investment Credit, the 25 percent investment tax credit.

Outstanding questions and labor shortage issues

There are also outstanding questions about whether the incentives in the law are sufficient to help level the playing field for U.S. companies versus lower building and operating costs in Asia.

The legislation requires companies receiving funds to commit to certain wage and labor requirements, including offering childcare benefits — measures some Republican legislators have criticized. Tensions between the U.S. and China also continue around supply chains for critical minerals needed for chip production.

For example, South Korea requested in May that the U.S. reassess the guardrail provisions it adopted in the CHIPS Act. South Korean companies Samsung and SK Hynix represent two of the world’s top manufacturers of memory chips and have invested billions of dollars in Chinese chip factories. The country is a leading chipmaker and also a major investor in the U.S.’s chip sector.

At Broadband Breakfast’s “Made in America” Summit on June 27, panelists raised concerns about workforce shortages in the country’s pursuit to become more independent in the sourcing of semiconductor chips.

In fact, they said, the industry could face a shortage of about 70,000 to 90,000 workers over the next few years.

Sign up for the Broadband Breakfast Club to access the complete videos from the Made in America Summit.

Maryam Rofougaran, cofounder and CEO of 5G chip manufacturer Movandi Corporation, pointed to a decrease in interest from high schoolers and college students in the field that is leading to a lack of skilled American workers in the development of the semiconductors.

Rofougaran called for immigration policies to be more friendly as America continues to look for highly skilled people in the semiconductor field, citing her own personal journey of immigration from Iran. “Immigration has been one of the greatest things for the U.S.,” she said.

Gene Irisari, head of semiconductor policy at Samsung, asked, “Where are all these workers going to come from? They can’t just come from the clusters where the semiconductor fabs are being created.”

How will the CHIPS Act impact the AI race?

Indeed, in the chips race, China is both an ally and competitor. “China is a large supplier of raw materials needed for manufacturing and a large consumer of microchips,” said Shawn Muma, director of supply chain innovation and emerging technologies at the Digital Supply Chain Institute, speaking at the “Made in America” Summit.

But the CHIPS Act could also be a major front in the artificial intelligence race: with U.S. restrictions hobbling chip exports to the adversarial nation, China’s ability to remain competitive depends on its ability to produce its own chips.

“U.S. chip export sanctions are a huge roadblock” for AI development in China, Qiheng Chen, a senior analyst at consulting firm Compass Lexecon, said at an August 2023 event hosted by the Asia Society Policy Institute.

And former National Security Advisor Robert O’Brien said that the United States needs to collaborate more with its allies to ensure semiconductor supply chain resilience.

Speaking at a Hudson Institute event in September 2023, O’Brien, chairman of strategic advisory firm American Global Strategies, said it was necessary to collaborate with allies to “onshore” manufacturing plants, moving them onto domestic soil, and to “friend-shore” them, moving them into allied countries.

Failing to do so will subject the U.S. and its allies to additional risks in the future, he said.

See “The Twelve Days of Broadband” on Broadband Breakfast

12 Days: Is ChatGPT Artificial General Intelligence or Not?
https://broadbandbreakfast.com/2023/12/12-days-is-chatgpt-artificial-general-intelligence-or-not/

December 21, 2023 – Just over one year ago, most people in the technology and internet world would talk about passing the Turing test as if it were something far in the future.

This “test,” originally called the imitation game by computer scientist Alan Turing in 1950, is a hypothetical test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.

The year 2023 – and the explosive economic, technological, and societal force unleashed by OpenAI since the release of its ChatGPT on November 30, 2022 – makes those days, only 13 months ago, seem quaint.

For example, users of large language models like ChatGPT, Anthropic’s Claude, Meta’s Llama and many others interact daily with machines as if they were simply very smart humans.

Yes, yes, informed users understand that chatbots like these are simply using neural networks with very powerful predictive algorithms to come up with the probabilistic “next word” in a sequence begun by the questioner’s inquiry. And, yes, users understand the propensity of such machines to “hallucinate” information that isn’t quite accurate, or isn’t accurate at all.

Which makes the chatbots seem, well, a little bit more human.
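For readers curious about the “probabilistic next word” mechanics described above, here is a deliberately tiny, toy sketch of the idea. The vocabulary and scores are made up; a real model works over tens of thousands of tokens with billions of parameters.

```python
# Toy sketch of next-word prediction: scores -> probabilities -> pick a word.
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]
logits = np.array([0.1, 2.3, 0.5, 1.1, 2.9, 0.2])  # pretend the network produced these scores

probs = np.exp(logits - logits.max())
probs /= probs.sum()                 # softmax turns scores into probabilities

next_id = int(np.argmax(probs))      # greedy choice; real chatbots usually sample instead
print(vocab[next_id], round(float(probs[next_id]), 3))
```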

Drama at OpenAI

At a Broadband Breakfast Live Online event on November 22, 2023, marking the one-year anniversary of ChatGPT’s public launch, our expert panelists focused on the regulatory uncertainty bequeathed by a much-accelerated form of artificial intelligence.

The event took place days after Sam Altman, CEO of OpenAI, was fired – before rejoining the company that Wednesday with a new board of directors. The board members who forced Altman out (all replaced, except one) had clashed with him on the company’s safety efforts.

More than 700 OpenAI employees then signed a letter threatening to quit if the board did not agree to resign.

In the backdrop, in other words, there was a policy angle behind the corporate boardroom battle that was in itself one of the big tech stories of the year.

“This [was] accelerationism versus de-celerationism,” said Adam Thierer, a senior fellow at the R Street Institute, during the event.

Washington and the FCC wake up to AI

And it’s not that Washington is closing its eyes to the potentially life-altering – literally – consequences of artificial intelligence.

In October, the Biden administration issued an executive order on AI safety that includes measures aimed at both ensuring safety and spurring innovation, with directives for federal agencies to generate safety and AI identification standards as well as grants for researchers and small businesses looking to use the technology.

But it’s not clear which side legislators on Capitol Hill might take in the future.

One notable application of AI in telecom highlighted by FCC chief Jessica Rosenworcel is AI-driven spectrum sharing optimization. Rosenworcel said in a July hearing that AI-enabled radios could collaborate autonomously, enhancing spectrum use without a central authority, an advancement poised for implementation.
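As a loose, hypothetical illustration of what decentralized spectrum sharing means in practice (not the FCC’s or any vendor’s actual method), the sketch below has radios repeatedly hop to the least crowded channel using only local observations, with no central authority assigning channels.

```python
# Toy sketch: radios spread themselves across channels without a central coordinator.
import random

NUM_CHANNELS = 4
radios = [random.randrange(NUM_CHANNELS) for _ in range(12)]  # initial channel per radio

for _ in range(10):                                   # a few rounds of local adjustment
    for i, ch in enumerate(radios):
        occupancy = [radios.count(c) for c in range(NUM_CHANNELS)]
        occupancy[ch] -= 1                            # don't count ourselves
        best = min(range(NUM_CHANNELS), key=lambda c: occupancy[c])
        if occupancy[best] < occupancy[ch]:
            radios[i] = best                          # hop to a less crowded channel

print([radios.count(c) for c in range(NUM_CHANNELS)])  # load ends up roughly even
```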

AI’s potential contribution to enhancing broadband mapping efforts was explored in a November House hearing. While the FCC initially regarded AI as having strong potential for aiding in broadband mapping, experts at the hearing were skeptical, arguing that in rural areas where data is scarce and of inferior quality, machine learning would struggle to identify potential inaccuracies.

Also in November, the FCC voted to launch a formal inquiry on the potential impact of AI on robocalls and robotexts. The agency believes that illegal robocalls can be addressed through AI, which can flag calling patterns deemed suspicious and analyze voice biometrics for synthesized voices.
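To make the “flag suspicious patterns” idea concrete, here is a minimal, hypothetical sketch that scores calling numbers on simple call-detail features. The features and thresholds are invented for illustration; they are not the FCC’s or any carrier’s actual criteria.

```python
# Toy robocall pattern flagging: many calls with very short durations looks suspicious.
from collections import defaultdict

calls = [  # (caller, duration in seconds) - made-up call-detail records
    ("+15550001", 4), ("+15550001", 3), ("+15550001", 5), ("+15550001", 2),
    ("+15550002", 180), ("+15550002", 240),
]

stats = defaultdict(lambda: {"count": 0, "total": 0})
for caller, duration in calls:
    stats[caller]["count"] += 1
    stats[caller]["total"] += duration

for caller, s in stats.items():
    avg = s["total"] / s["count"]
    suspicious = s["count"] >= 4 and avg < 10        # high volume, consistently short calls
    print(caller, "FLAG" if suspicious else "ok", round(avg, 1))
```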

But isn’t ChatGPT a form of artificial general intelligence?

As we’ve learned through an intensive focus on AI over the course of the year, somewhere still beyond passing the Turing test is the much-discussed concept of “artificial general intelligence.” That presumably means something a little bit smarter than ChatGPT-4.

Previously, OpenAI had defined AGI as “AI systems that are generally smarter than humans.” But apparently sometime recently, the company redefined this to mean “a highly autonomous system that outperforms humans at most economically valuable work.”

Some, including Rumman Chowdhury, CEO of the tech accountability nonprofit Humane Intelligence, argue that by framing AGI in economic terms, OpenAI recast its mission as building things to sell, a far cry from its original vision of using intelligent AI systems to benefit all.

AGI, as ChatGPT-4 told this reporter, “refers to a machine’s ability to understand, learn, and apply its intelligence to solve any problem, much like a human being. ChatGPT, while advanced, is limited to tasks within the scope of its training and programming. It excels in language-based tasks but does not possess the broad, adaptable intelligence that AGI implies.”

That sounds like something that an AGI-capable machine would very much want the world to believe.

Additional reporting on this story was provided by reporter Jericho Casper.

See “The Twelve Days of Broadband” on Broadband Breakfast

Sam Altman to Rejoin OpenAI, Tech CEOs Subpoenaed, EFF Warns About Malware
https://broadbandbreakfast.com/2023/11/sam-altman-to-rejoin-openai-tech-ceos-subpoenaed-eff-warns-about-malware/

November 22, 2023 – OpenAI announced in an X post early Wednesday morning that Sam Altman will be rejoining the ChatGPT maker as CEO after he was fired on Friday.

Altman confirmed his intention to rejoin OpenAI in an X post Wednesday morning, saying that he was looking forward to returning to OpenAI with support from the new board.

Former company president Greg Brockman also said Wednesday he will return to the AI company.

Altman and Brockman will return alongside a newly formed board, which includes former Salesforce co-CEO Bret Taylor as chair, former US Treasury Secretary Larry Summers, and Quora CEO Adam D’Angelo, who previously held a position on the OpenAI board.

Satya Nadella, the CEO of OpenAI backer Microsoft, echoed support for both Brockman and Altman rejoining OpenAI, adding that he is looking forward to continuing building a relationship with the OpenAI team in order to best deliver AI services to customers. 

OpenAI received backlash from several hundred employees who threatened to leave and join Microsoft under Altman and Brockman unless the current board of directors agreed to resign.  

Tech CEOs subpoenaed to attend hearing

Sens. Dick Durbin, D-Illinois, and Lindsey Graham, R-South Carolina, announced Monday that tech giants Snap, Discord and X have been issued subpoenas for their appearance at the Senate Judiciary Committee on December 6 in relation to concerns over child sexual exploitation online. 

Snap CEO Evan Spiegel, X CEO Linda Yaccarino and Discord CEO Jason Citron have been asked to address how or if they’ve worked to confront that issue. 

Durbin said in a press release that the committee “promised Big Tech that they’d have their chance to explain their failures to protect kids. Now’s that chance. Hearing from the CEOs of some of the world’s largest social media companies will help inform the Committee’s efforts to address the crisis of online child sexual exploitation.” 

Durbin noted in a press release that both X and Discord refused to initially accept subpoenas, which required the US Marshal Service to personally deliver those respective documents. 

The committee is looking to have Meta CEO Mark Zuckerberg and TikTok CEO Shou Zi Chew testify as well but have not received confirmation regarding their attendance.  

Several bipartisan bills have been brought forth to address that kind of exploitation, including the EARN IT Act, proposed by Sens. Richard Blumenthal, D-Connecticut, and Graham, which would hold tech companies liable under child sexual abuse material laws.

EFF urging FTC to sanction sellers of malware-containing devices

The Electronic Frontier Foundation, a non-profit digital rights group, has asked the Federal Trade Commission in a November 14 letter to sanction resellers like Amazon and AliExpress following allegations that mobile devices and Android TV boxes purchased from their stores contain malware.

The letter explained that once the devices were turned on and connected to the internet,  they would begin “communicating with botnet command and control (C2) servers. From there, these devices connect to a vast click-fraud network which a report by HUMAN Security recently dubbed BADBOX.”

The EFF added that this malware is often operating unbeknownst to the consumer, and without advanced technical knowledge, there is nothing they can do to remedy it themselves.

“These devices put buyers at risk not only by the click-fraud they routinely take part in, but also the fact that they facilitate using the buyers’ internet connections as proxies for the malware manufacturers or those they sell access to,” explained the letter. 

EFF said that the devices containing malware included ones manufactured by Chinese companies AllWinner and RockChip, which the EFF has previously reported on for shipping products with malware.
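As a rough, defense-only illustration of the behavior the letter describes, a researcher or technically inclined user might compare a device’s DNS lookups against published indicators of compromise. The domains and log entries below are made up; real BADBOX indicators come from security vendors’ reports.

```python
# Toy sketch: flag devices whose DNS queries hit suspected command-and-control domains.
SUSPECT_DOMAINS = {"c2.example-badbox.net", "ads.click-fraud.example"}  # hypothetical indicators

dns_log = [  # (device, domain queried) - made-up capture from a home router
    ("tv-box", "time.android.com"),
    ("tv-box", "c2.example-badbox.net"),
    ("phone", "www.wikipedia.org"),
]

for device, domain in dns_log:
    if domain in SUSPECT_DOMAINS:
        print(f"ALERT: {device} contacted suspected C2 domain {domain}")
```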

Sam Altman to Join Microsoft, New FCC Broadband Map, Providers Form 4.9 GHz Coalition
https://broadbandbreakfast.com/2023/11/sam-altman-to-join-microsoft-new-fcc-broadband-map-providers-form-4-9-ghz-coalition/

November 20, 2023 – Microsoft CEO Satya Nadella announced in an X post Monday that former OpenAI CEO Sam Altman will be joining Microsoft after being fired from the machine learning company.

Over the course of the last four days, OpenAI has undergone several shifts in leadership, including OpenAI investor Microsoft hiring OpenAI president and chairman Greg Brockman to lead an AI research team alongside Altman.

Brockman, who had been concurrently relieved of his role as chairman of the OpenAI board, announced his resignation Friday via X upon learning that the board had decided to fire Altman.

OpenAI said in a blog post Friday that Altman “was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.”

OpenAI then told The Information on Saturday that Emmett Shear, co-founder of streaming site Twitch, would serve as CEO, after CTO Mira Murati had filled the role in the interim.

Following Nadella’s announcement Monday morning, nearly 500 of OpenAI’s roughly 700 employees signed a letter threatening to leave their roles and work under Altman and Brockman at Microsoft unless all of the current board members resigned.

On Monday, OpenAI board member Ilya Sutskever posted a message of regret on X regarding the board’s decision to remove Altman and Brockman. The phrase “OpenAI is nothing without its people” is now emerging from employees’ X accounts.

FCC announces new national broadband map

The head of the Federal Communications Commission announced Friday the third iteration of its national broadband map, showing that just over 7.2 million locations lack access to high-speed internet.

That is less than the 8.3 million identified in May.   

FCC Chairwoman Jessica Rosenworcel noted that map data continue to fluctuate less between iterations, showing improvements in map accuracy. 

Previous iterations of the national broadband map had been criticized for not accurately depicting areas with and without service, with widespread concern that that would impact the allocation of Broadband Equity, Access and Deployment funding. 

The map outlines where adequate broadband service is and is not available throughout the nation and provides viewers with information on the providers who service those areas and the technology used to do so. 

Providers form spectrum advocacy coalition 

A group of telecom industry players including Verizon and T-Mobile announced Thursday the formation of the Coalition for Emergency Response and Critical Infrastructure to advocate for select use of the 4.9 gigahertz (GHz) spectrum band.

The coalition supports prioritizing state and local public safety agencies as the main users of the 4.9 GHz band, while ensuring that non-public-safety licensees operating on the band avoid causing interference.

“Public Safety agencies have vastly different needs from jurisdiction to jurisdiction, and they should decide what compatible non-public-safety use means within their jurisdictions,” read the coalition’s letter.  

In January of this year, the FCC adopted a report to manage the use of the 4.9 GHz band, while seeking comment on the role a band manager would play in facilitating license allocation between public safety and non-public safety entities. 

It had proposed two methods of operation for the band manager, in which it would either lease access rights from public-safety entities and then sublease them to non-public-safety entities, or facilitate direct subleasing between public safety operators and external parties.

In its letter to the FCC, the coalition announced support for the second of those methods, stressing that it would allow public safety license holders to retain authority over whom they sublease their spectrum to.

Industry Observers See MNO Opportunities in Leasing Network Space to MVNOs
https://broadbandbreakfast.com/2023/11/industry-observers-see-mno-opportunities-in-leasing-network-space-to-mvnos/

NEW YORK, November 15, 2023 – Wireless industry observers are seeing more providers opening up their networks to mobile virtual network operators as an opportunity to diversify revenue sources.

Large mobile network operators with spare network capacity can lease it to MVNOs that lack such infrastructure, allowing more service-based providers to emerge.

“Twenty years ago, [it was] don’t touch the network, no one’s getting on the network, it’s not open, it’s completely closed,” Kelly Green, CTO at telecom venture capital firm TelcoDR, said at Jeff Pulver’s Fall 2023 VON Evolution conference in New York earlier this month.

“But small pockets of these organizations…are saying if we don’t do this we’re not going to survive and there’s a huge opportunity in doing so,” she continued. “So I think these days, we talked about monetization and sharing capacity – anyone can look like a virtual network operator and you know it’s up to the service providers to be able to service that innovation.”

Suzanne Hellwig, assistant vice president for 5G Ecosystem and Alliances at AT&T, speaking with Green at the VON Evolution conference, added that MVNO agreements are becoming so prominent these days because it is increasingly easy for entities outside the telecom space to become operators with minimal startup costs.

To underscore Hellwig’s point, Green pointed to the MVNO Mint Mobile, an online-only virtual service provider founded in 2016 by telecom entrepreneur David Glickman, who previously developed Ultra Mobile. Hollywood actor Ryan Reynolds later bought into the company, becoming an owner in 2019.

The operator offers a variety of phone plans, which range in price from $15 to $30 per month. The company’s marketing philosophy is built on the idea that phone plans and data should be affordable.

Mint Mobile was purchased in a larger acquisition by T-Mobile in March of this year, when T-Mobile set out to pay up to $1.35 billion to take on Ka’ena Corporation, Mint Mobile’s parent company.

Observers noted the marketing-driven focus of the Mint brand, as Reynolds features as the main character in the company’s humorous advertisements.

Hellwig added that the MVNO space is easy for people with great influence or strong marketing ability to enter because, as younger generations sign up for phone plans, brand loyalty matters.

She used her kids as an example. She said while they may not be passionate about AT&T, they could be passionate about familiar influential people offering phone plans.

To allay regulators’ competition concerns in the wake of T-Mobile’s quest to acquire Sprint, T-Mobile COO Mike Sievert said the telecom would continue all MVNO agreements following the acquisition, noting on a 2019 earnings call that the capacity created by its network gives T-Mobile an incentive to take on MVNOs.

Fortune Business Insights, a market research consulting firm, released a report in March that valued the global MVNO market at $78.15 billion in 2022 and projected it to reach $149.13 billion by 2030.

The report cited factors like 5G ecosystems and rapidly developing wireless technology driving that growth.

FCC Cybersecurity Pilot Program, YouTube AI Regulations, Infrastructure Act Anniversary
https://broadbandbreakfast.com/2023/11/fcc-cybersecurity-pilot-program-youtube-ai-regulations-infrastructure-act-anniversary/

November 15, 2023 – The Federal Communications Commission proposed Monday a cybersecurity pilot program for schools and libraries, which would require a three-year $200 million investment in ways to best protect K-12 students from cyberattacks.

In addition to going in and assessing what kind of cybersecurity services are best suited for students and school needs, the program would also subsidize the cost of those services used in schools.  

The program would stand as its own Universal Service Fund program, separate from the existing school internet subsidy program known as E-Rate.

“This pilot program is an important pathway for hardening our defenses against sophisticated cyberattacks on schools and ransomware attacks that harm our students and get in the way of their learning,” said FCC Chairwoman Jessica Rosenworcel.

The proposal would be a part of the larger Learn Without Limits initiative, which supports internet connectivity in schools to help reduce the homework gap by enabling kids’ access to digital learning.

YouTube rolling out AI content regulations 

Alphabet’s video sharing platform YouTube announced in a blog post Tuesday it will be rolling out AI guidelines over the next few months, which will inform viewers about when they are interacting with “synthetic” or AI-generated content. 

The rules will require creators to disclose whether a video contains AI-generated content. Creators who don’t disclose that information could see their work flagged and removed, and they may be suspended from the platform or subject to other penalties.

For the viewer, tags will appear in the description panel on videos indicating whether the video is synthetic or AI-generated. YouTube noted that for videos dealing with more sensitive topics, it may use more prominent labels.

YouTube’s AI guidelines come at a time when members of Congress and industry leaders are calling for increased effort toward AI regulatory reform, and after President Joe Biden signed an executive order on AI guidelines in October.

Two-year anniversary of the Infrastructure Investment and Jobs Act

Thursday marked the second anniversary of the Infrastructure Investment and Jobs Act, which prompted a $400-billion investment into the US economy.

The IIJA pushed for a variety of programs and initiatives, with over 40,000 sector-specific projects having received funding – several of those working to improve the broadband sector. 

The IIJA invested $65 billion in improving connectivity, which helped establish the $14-billion Affordable Connectivity Program. That program has so far helped more than 20 million US households get affordable internet through a $30 monthly subsidy, or $75 per month on tribal lands.

Outside of ACP, the IIJA called on the National Telecommunications and Information Administration to develop the Broadband Equity, Access and Deployment program, a $42.5-billion investment into high-speed broadband deployment across all 50 states.

Currently, states are in the process of submitting their BEAD draft proposals, which outline how each state will administer the funding it receives and any funding it already has, as well as how it will use broadband mapping data.

Will Rinehart: Unpacking the Executive Order on Artificial Intelligence
https://broadbandbreakfast.com/2023/11/will-rinehart-unpacking-the-executive-order-on-artificial-intelligence/

If police are working on an investigation and want to tap your phone lines, they’ll effectively need to get a warrant. They will also need to get a warrant to search your home, your business, and your mail.

But if they want to access your email, all they need to do is wait 180 days.

Because of a 1986 law called the Electronic Communications Privacy Act, people using third-party email providers, like Gmail, only get 180 days of warrant protection. It’s an odd quirk of the law that only exists because no one in 1986 could imagine holding onto emails longer than 180 days. There simply wasn’t space for it back then!¹

ECPA is a stark illustration of a consistent phenomenon in government: policy choices, especially technical requirements, have durable and long-lasting effects. There are more mundane examples as well. GPS could be dramatically more accurate, but when the optical system was recently upgraded, it was held back by a technical requirement in the Federal Enterprise Architecture Framework (FEAF) of 1999. More accurate headlights have been shown to be better at reducing night crashes, yet adaptive headlights were only approved last year, nearly 16 years after Europe, because of technical requirements in FMVSS 108. All it takes is one law or regulation to crystallize an idea into an enduring framework that fails to keep up with developments.

I fear the approach pushed by the White House in their recent executive order on AI might represent another crystallization moment. ChatGPT has been public for a year, the models on which they are based are only five years old, and yet the administration is already working to set the terms for regulation.

The “Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence” is sprawling. It spans 13 sections, extends over 100 pages, and lays out nearly 100 deliverables for every major agency. While there are praiseworthy elements to the document, there is also a lot of cause for concern.

Among the biggest changes is the new authority the White House has claimed over newly designated “dual use foundation models.” As the EO defines it, a dual-use foundation model is

  • an AI model that is trained on broad data; generally uses self-supervision; contains at least tens of billions of parameters; is applicable across a wide range of contexts; and that exhibits, or could be easily modified to exhibit, high levels of performance at tasks that pose a serious risk to security, national economic security, national public health or safety, or any combination of those matters.

While the designation seems to be common sense, it is new and without provenance. Until last week, no one had talked about dual-use foundation models. The designation does, however, comport with the power the president has over the export of military tech.

As the EO explains it, the administration is especially interested in those models with the potential to

  • lower the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear weapons;
  • enable powerful offensive cyber operations through automated vulnerability discovery and exploitation against a wide range of potential targets of cyber attacks; or
  • permit the evasion of human control or oversight through means of deception or obfuscation

The White House is justifying its regulation of these models under the Defense Production Act, a federal law first enacted in 1950 to respond to the Korean War. Modeled after World War II’s War Powers Acts, the DPA was part of a broad civil defense and war mobilization effort that gave the President the power to requisition materials and property, expand government and private defense production capacity, ration consumer goods, and fix wage and price ceilings, among other powers.

The DPA is reauthorized every five years, which has allowed Congress to expand the set of presidential powers in the DPA. Today, the allowable use of DPA extends far beyond U.S. military preparedness and includes domestic preparedness, response, and recovery from hazards, terrorist attacks, and other national emergencies. The DPA has long been intended to address market failures and slow procurement processes in times of crisis. Now the Biden Administration is using DPA to force companies to open up their AI models.

The administration’s invocation of the Defense Production Act is clearly a strategic maneuver to utilize the maximum extent of its DPA power in service of Biden’s AI policy agenda. The difficult part of this process now sits with the Department of Commerce, which has 90 days to issue regulations.

In turn, the Department will likely use the DPA’s industrial base assessment power to force companies to disclose various aspects of their AI models. Soon enough, developers of dual-use foundation models will have to report to the government the results of tests based on guidance developed by the National Institute of Standards and Technology (NIST). But that guidance won’t be available for another 270 days. In other words, Commerce will regulate companies without knowing what they will be beholden to.

Recent news from the United Kingdom suggests that all of the major players in AI are going to be included in the new regulation. In closing out a two-day summit on AI, British Prime Minister Rishi Sunak announced that eight companies were going to give deeper access to their models in an agreement that had been signed by Australia, Canada, the European Union, France, Germany, Italy, Japan, Korea, Singapore, the U.S. and the U.K. Those eight companies included Amazon Web Services, Anthropic, Google (as well as its subsidiary DeepMind), Inflection AI, Meta, Microsoft, Mistral AI, and OpenAI.

Thankfully, the administration isn’t pushing for a pause on AI development, they aren’t denouncing more advanced models, nor are they suggesting that AI needs to be licensed. But this is probably because doing so would face a tough legal challenge. Indeed, it seems little appreciated by the AI community that the demand to report on models is a kind of compelled speech, which has typically triggered First Amendment scrutiny. But the courts have occasionally recognized that compelled commercial speech may actually advance First Amendment interests more than undermine them.

The EO clearly marks a shift in AI regulation because of what will come next. In addition to the countless deliverables, the EO encourages agencies to use their full power to advance rulemaking.

For example, the EO explains that,

  • the Federal Trade Commission is encouraged to consider, as it deems appropriate, whether to exercise the Commission’s existing authorities, including its rulemaking authority under the Federal Trade Commission Act, 15 U.S.C. 41 et seq., to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.

Innocuous as it may seem, the Federal Trade Commission, as well as all of the other agencies that have been encouraged to use their power by the administration, could come under court scrutiny. In West Virginia v. EPA, the Supreme Court made it more difficult for agencies to expand their power when the court established the major questions doctrine. This new line of legal reasoning takes an ax to agency delegation. Unless there’s explicit, clear-cut authority granted by Congress, an agency cannot regulate a major economic or political issue. Agency efforts to push rules on AI could get caught up by the courts.

To be fair, there are a lot of positive actions that this EO advances.² But details matter, and it will take time for the critical details to emerge.

Meanwhile, we need to be attentive to the creep of power. As Adam Thierer described this catch-22,

  • While there is nothing wrong with federal agencies being encouraged through the EO to use NIST’s AI Risk Management Framework to help guide sensible AI governance standards, it is crucial to recall that the framework is voluntary and meant to be highly flexible and iterative—not an open-ended mandate for widespread algorithmic regulation. The Biden EO appears to empower agencies to gradually convert that voluntary guidance and other amorphous guidelines into a sort of back-door regulatory regime (a process made easier by the lack of congressional action on AI issues).

In all, the EO is a mixed bag that will take time to shake out. On this, my colleague Neil Chilson is right: some of it is good, some is bad, and some is downright ugly.

Still, the path we are currently navigating with the Executive Order on AI parallels similar paths in ECPA, GPS, and adaptive lights. It underscores a fundamental truth about legal decisions: even the technical rules we set today will shape the landscape for years, perhaps decades, to come. As we move forward, we must tread carefully, ensuring that our legal frameworks are adaptable and resilient, capable of evolving alongside the very technologies they seek to regulate.

Will Rinehart is a senior research fellow at the Center for Growth and Opportunity, where he specializes in telecommunication, internet and data policy, with a focus on emerging technologies and innovation. He was formerly the Director of Technology and Innovation Policy at the American Action Forum and before that a research fellow at TechFreedom and the director of operations at the International Center for Law & Economics. This piece originally appeared in the Exformation Newsletter on November 9, 2023, and is reprinted with permission.

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

Senators Pitch New Agency for Tech Regulation to Address FTC Shortcomings
https://broadbandbreakfast.com/2023/11/senators-pitch-new-agency-for-tech-regulation-to-address-ftc-shortcomings/

WASHINGTON, November 2, 2023 – Sen. Michael Bennet, D-Colorado, and Sen. Peter Welch, D-Vermont, reiterated at a Brookings event Tuesday the need for the United States to form a new agency to oversee tech regulation.

The senators, alongside former Federal Communications Commission Chairman Tom Wheeler, argued that the government’s approach to regulating AI, social media and big tech does not match the speed at which those industries are changing.

Bennet and Welch both outlined how the Federal Trade Commission and the Department of Justice, two entities that are heavily involved in regulating large tech companies, govern so broadly that they are unable to properly deal with specific cases.

The two added that those respective agencies lack the specific expertise in tech fields to be able to address key issues.

“Despite their work to enforce existing antitrust and consumer protection laws, they lack the expert staff and resources necessary for robust oversight,” Bennet said in an earlier press release. “Moreover, both bodies are limited by existing statutes to react to case-specific challenges raised by digital platforms, when proactive, long-term rules for the sector are required.”

The conversation comes after the two senators introduced a digital technology regulatory bill in May of 2023 outlining how a new proposed agency would regulate the tech industry in consultation with the FTC and the DOJ.

Their proposed bill would require the establishment of a five-person agency to address tech regulation and antitrust cases, as well as establish protections against harmful algorithms.

“For far too long, these companies have largely escaped regulatory scrutiny, but that can’t continue. It’s time to establish an independent agency to provide comprehensive oversight of social media companies,” said Welch in the same press release.

Wheeler, who moderated the event, echoed their concerns, having written the book Techlash, which argues that innovators drive tech development and that the government follows their lead in regulation.

Federal Agencies Need to do More on Robocalls, Senate Hears
https://broadbandbreakfast.com/2023/10/federal-agencies-need-to-do-more-on-robocalls-senate-hears/

WASHINGTON, October 24, 2023 – Federal agencies need to do more to tackle robocalls, experts told lawmakers on Tuesday.

For its part, the Federal Communications Commission has been taking more aggressive action on fraudulent calls and texts in recent months. The commission moved last week to block call traffic from 20 companies for lax robocall policies, and the agency has issued more than $500 million in fines for scam calls in the last year.

But that has not been enough to curb the longstanding issue, Sen. Ben Ray Luján, D-N.M., said at a Senate subcommittee hearing.

“Scammers used our telecom networks to defraud Americans out of an estimated $39 billion in 2022 alone,” he said. “That’s enough money to provide affordable broadband to the 21 million households enrolled in the Affordable Connectivity Program for eight years.”

Very few of the fines issued by the FCC have been collected. For Megan Brown, a lawyer representing the U.S. Chamber of Commerce, that comes down to lax DOJ enforcement. 

Josh Bercu, the head of USTelecom’s Industry Traceback Group, agreed, telling the Subcommittee on Communications, Media, and Broadband that Congress should push the DOJ to prioritize robocall enforcement.

“The FCC’s efforts really run out of steam if the [Justice] Department is not there to get them across the finish line and actually collect on some of those forfeitures,” Brown said.

She said Congress could push the Department to prioritize money for robocall investigations and enforcement, or set up a dedicated robocall office. 

Margot Saunders, a senior attorney at the National Consumer Law Center, said the FCC should move faster to block call traffic from offending voice providers in the future. 

“If the FCC were to adopt a system under which it quickly suspends the ability of a voice service provider to participate in the network once that provider is determined to be a repeat offender,” Saunders said, “we think that would be a magic bullet.”

The commission announced yesterday a proposed notice of inquiry seeking comment on using artificial intelligence to root out robocall fraud. Commissioners will vote on the proposal at the FCC’s November 15 open meeting.

FCC Chair Pitches Proposal for Combatting Robocalls Using Artificial Intelligence
https://broadbandbreakfast.com/2023/10/fcc-chair-pitches-proposal-for-combatting-robocalls-using-artificial-intelligence/

WASHINGTON, October 23, 2023 – The head of the Federal Communications Commission is set to introduce a proposal this week about using artificial intelligence to combat robocalls and robotexts, she said.

Jessica Rosenworcel will be circulating a proposal to get comments on using AI and machine learning to detect fraud, she said on Monday at a fireside chat with AARP policy heads.

AI could be used to detect patterns that indicate potential fraud and “cut those bad actors responsible for robocalls and robotexts off before they ever reach you,” she said.

The proposed inquiry would also seek comment on ways of combating AI-assisted fraud.

Older Americans are especially at risk of losing money to scam phone calls because they are more likely to be isolated, said AARP Texas State Director Tina Tran.

“They want to answer the phone because they want to talk to someone,” she said. “Scammers know this and they really take advantage of it.”

The proposal will be voted on at the commission’s November 15 open meeting. If approved, it will be part of a broader commission effort to combat scam calls and texts.

“This year alone, we’ve issued more than $500 million in fines” for scam calls and texts, Rosenworcel said. 

Most recently, the FCC moved last week to block calls from 20 companies that did not submit adequate robocall policies, in some cases filing blank pages and miscellaneous images instead of fraud prevention plans. If those voice providers do not submit updated plans, they will be removed from the FCC’s Robocall Mitigation Database, meaning other providers must deny their traffic.

The commission also extended in August its STIR/SHAKEN requirements – measures to confirm the identity of callers – to all providers who handle voice traffic.
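STIR/SHAKEN works by having the originating provider digitally sign the caller-ID information so the terminating provider can verify that signature before completing the call. The sketch below, which uses a self-generated key and invented claim values, illustrates only that sign-then-verify idea; real deployments rely on certificates issued under the STIR/SHAKEN governance framework and carry additional claims.

```python
# Illustrative sign/verify round trip in the spirit of a STIR/SHAKEN PASSporT.
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())   # stand-in for a provider's certificate key
public_key = private_key.public_key()

claims = {"orig": {"tn": "12025550123"}, "dest": {"tn": ["12025550199"]}, "attest": "A"}
token = jwt.encode(claims, private_key, algorithm="ES256")       # originating provider signs

verified = jwt.decode(token, public_key, algorithms=["ES256"])   # terminating provider verifies
print(verified["attest"])  # "A" indicates full attestation of the caller's right to the number
```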

U.S. and Singapore to Strengthen AI and Tech Partnership
https://broadbandbreakfast.com/2023/10/u-s-and-singapore-to-strengthen-ai-and-tech-partnership/

WASHINGTON, October 13, 2023 – The United States and Singapore announced on Thursday a new partnership to strengthen ties on artificial intelligence and other technological research. The nations launched the initiative, called the Critical Emerging Technology Dialogue, in D.C. on the same day.

Building on a 2022 meeting between U.S. President Joe Biden and Singaporean Prime Minister Lee Hsien Loong, senior officials from both governments – including Deputy Prime Minister Lawrence Wong from Singapore and National Security Advisor Jake Sullivan from the U.S. – met in Washington for discussions on six areas of focus.

Artificial intelligence

The countries intend to launch a joint AI governance group, according to a White House statement. The group would focus on ensuring “safe, trustworthy, and responsible AI innovation,” the statement said.

The Commerce Department’s National Institute of Standards and Technology recently completed an exercise with the Singapore Infocomm Media Development Authority on AI risk management. Both nations are looking to expand on that and collaborate on research into AI security, the statement said.

AI regulation has been a subject of discussion in Washington. Biden announced in September he plans to issue an executive order on the issue by the end of the year, and a group of Congressional Democrats pushed him on Thursday to use their proposed AI Bill of Rights to inform that policy.

Quantum computing

American and Singaporean agencies are planning to collaborate on post-quantum cryptography methods and standards. While current quantum computers are rudimentary, the technology is in theory capable of cracking current encryption methods. 
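The reason factoring-based schemes are at risk is easy to see with deliberately tiny numbers: once an attacker can factor the public modulus, the private key follows immediately. Shor’s algorithm would let a large quantum computer do that factoring efficiently, which is what post-quantum cryptography is designed to head off. The toy RSA example below uses textbook-sized primes purely for illustration.

```python
# Toy RSA with tiny primes: factoring n is all it takes to recover the private key.
p, q = 61, 53                 # secret primes (real keys use primes over 1,000 bits long)
n, e = p * q, 17              # public key: modulus n = 3233 and encryption exponent e

phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)           # private exponent, trivial to compute once p and q are known

message = 42
ciphertext = pow(message, e, n)
print(pow(ciphertext, d, n))  # prints 42: whoever factors n can decrypt everything
```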

Biotechnology

The countries plan to convene universities, private and public research institutions, and government agencies on advancing research into gene therapies and delivery systems for those therapies. The nations also expressed an intent to connect their biotechnology startup communities to exchange best practices on scaling, as well as research and development.

Officials also discussed defense technology, data governance, and climate resilience. The next CET Dialogue is planned for 2024 in Singapore.

Still Learning About Artificial Intelligence, Legislators Say Congress Must Act
https://broadbandbreakfast.com/2023/09/still-learning-about-artificial-intelligence-legislators-say-congress-must-act/

WASHINGTON, September 30, 2023 – Although Congress is still learning about key aspects of artificial intelligence, senators and representatives speaking at an AI summit on Wednesday said they believed the urgency of the moment required the passage of “some narrow pieces” of legislation.

On the same day that Sen. Ed Markey, D-Mass., sent a letter to Meta CEO Mark Zuckerberg urging him to halt the release of AI-powered chatbots that the social media giant plans to integrate within its platforms, Markey also urged the Federal Trade Commission to protect minors from AI-powered software.

Markey, speaking at Politico’s AI and Tech Summit, cited suicide rates among minors using social media and a recent warning from the Surgeon General about social media and adolescent mental health.

“We’re not going to be able to handle devices talking to young people in our society without understanding what the safeguards are going to be,” Markey said.

His message to Big Tech was: “Don’t deploy it until we get the answers to what the safeguards are going to be for the young people in our society.”

Similarly, Sen. Todd Young, R-Indiana, said he believed it was “very likely” that Congress would pass “some narrow pieces” of a regime regulating AI.

“I hope we go wider and consider a host of different legislative proposals because our innovators, our entrepreneurs, our researchers, our national security committee, they all say that we need to act in this space and we continue to lead the way of the world and manage the many risks that are out there around the financial markets,” Young said.

Other legislators proposed other specific facets of AI regulation.

Congressman Ted Lieu, D-Calif., proposed a law to prevent AI from autonomously using nuclear weapons. He also suggested a national AI commission.

Such a commission would help create a public record of how and why AI should be regulated, Lieu said, and would be preferable to the closed-door briefings with tech giants that Senate Majority Leader Chuck Schumer, D-N.Y., has been hosting on the topic.

“AI is innovating so quickly that I think it’s important that we have the national AI commission experts,” Lieu said. “There’s quite a lot of legislation to work on that, that can make recommendations from Congress asking what kind of AI we might want to regulate, how we might want to do about doing so and also provide some time for AI to be developed.”

Rep. Jay Obernolte, R-Calif., vice chair of the Congressional Artificial Intelligence Caucus, said that Congress is doing a “great job” educating itself on AI, but that a human-centric framework for legislation still needs to be properly defined.

“By framework, I don’t mean a bunch of buzzwords flying in close formation, right?” Obernolte said. “What does it mean for AI to be human centered? What role does government have in making sure that they are human centered?”

]]>
https://broadbandbreakfast.com/2023/09/still-learning-about-artificial-intelligence-legislators-say-congress-must-act/feed/ 0 54471
Companies Must Be Transparent About Their Use of Artificial Intelligence https://broadbandbreakfast.com/2023/09/companies-must-be-transparent-about-their-use-of-artificial-intelligence/?utm_source=rss&utm_medium=rss&utm_campaign=companies-must-be-transparent-about-their-use-of-artificial-intelligence https://broadbandbreakfast.com/2023/09/companies-must-be-transparent-about-their-use-of-artificial-intelligence/#respond Wed, 20 Sep 2023 21:34:04 +0000 https://broadbandbreakfast.com/?p=54027 WASHINGTON, September 20, 2023 – Researchers at an artificial intelligence workshop Tuesday said companies should be transparent about their use of algorithmic AI in things like hiring processes and content writing. 

Andrew Bell, a fellow at the New York University Center for Responsible AI, said that making the use of AI known is key to addressing any pitfalls AI might have. 

Algorithmic AI is behind systems like chatbots, which can generate text and answer questions. It is used in hiring processes to quickly screen resumes and in journalism to write articles.

According to Bell, ‘algorithmic transparency’ is the idea that “information about decisions made by algorithms should be visible to those who use, regulate, and are affected by the systems that employ those algorithms.”

The need for this kind of transparency has been underscored by events like Amazon’s old AI recruiting tool showing bias against women in the hiring process, or OpenAI, the company that created ChatGPT, being probed by the FTC for generating misinformation.

Incidents like these have brought the topic of regulating AI and making sure it is transparent to the forefront of Senate conversations.

Senate committee hears need for AI regulation

The Senate’s subcommittee on consumer protection heard on September 12 about proposals to make AI use more transparent, including adding disclaimers when AI is being used and developing tools to predict and understand the risks associated with different AI models.

Similar transparency methods were mentioned by Bell and his supervisor Julia Stoyanovich, the Director of the Center for Responsible AI at New York University, a research center that explores how AI can be made safe and accessible as the technology evolves. 

According to Bell, a transparency label on algorithmic AI would “[provide] insight into ingredients of an algorithm.” Similar to a nutrition label, a transparency label would identify all the factors that go into algorithmic decision making.  
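
To make the “nutrition label” analogy concrete, here is a minimal sketch of what such a transparency label might contain for a hypothetical resume-screening system; every field name and value is an illustrative assumption, not a format Bell or Stoyanovich have published.

```python
# Illustrative sketch of an algorithm "nutrition label" for a hypothetical
# resume-screening model. Field names and values are assumptions for
# demonstration only, not a published standard.
from dataclasses import dataclass

@dataclass
class TransparencyLabel:
    system_name: str
    purpose: str                      # the decision the algorithm informs
    input_factors: list[str]          # the "ingredients" that go into each decision
    excluded_factors: list[str]       # attributes deliberately left out
    training_data_sources: list[str]
    known_limitations: list[str]
    human_review: bool                # whether a person can override the output

# Hypothetical label for an imaginary resume-screening model.
label = TransparencyLabel(
    system_name="ResumeScreener-v2 (hypothetical)",
    purpose="Rank job applicants for recruiter review",
    input_factors=["years of experience", "listed skills", "education level"],
    excluded_factors=["name", "age", "gender", "zip code"],
    training_data_sources=["past hiring decisions, 2015-2022 (internal)"],
    known_limitations=["may undervalue non-traditional career paths"],
    human_review=True,
)

print(label)
```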

Data visualization was another option Bell suggested, which would require a company to publish a public-facing document explaining how its AI works and how it arrives at the decisions it produces.

Adding those disclaimers creates a better ecosystem between AI and AI users, increasing trust among all the stakeholders involved, Bell explained.

Bell and his supervisor built their workshop around an Algorithm Transparency Playbook, a document they published that has straightforward guidelines on why transparency is important and ways companies can go about it. 

However, tech lobbying groups like the Computer and Communications Industry Association, which represents Big Tech companies, have spoken out in the past against the Senate regulating AI, claiming that it could stifle innovation.

]]>
https://broadbandbreakfast.com/2023/09/companies-must-be-transparent-about-their-use-of-artificial-intelligence/feed/ 0 54027
Congress Should Mandate AI Guidelines for Transparency and Labeling, Say Witnesses https://broadbandbreakfast.com/2023/09/congress-should-mandate-ai-guidelines-for-transparency-and-labeling-say-witnesses/?utm_source=rss&utm_medium=rss&utm_campaign=congress-should-mandate-ai-guidelines-for-transparency-and-labeling-say-witnesses https://broadbandbreakfast.com/2023/09/congress-should-mandate-ai-guidelines-for-transparency-and-labeling-say-witnesses/#respond Wed, 13 Sep 2023 00:20:12 +0000 https://broadbandbreakfast.com/?p=53830 WASHINGTON, September 12, 2023 – The United States should enact legislation mandating transparency from companies making and using artificial intelligence models, experts told the Senate Commerce Subcommittee on Consumer Protection, Product Safety, and Data Security on Tuesday.

It was one of two AI policy hearings on the Hill Tuesday, alongside a Senate Judiciary Committee hearing, as well as a meeting of the National AI Advisory Committee, an executive branch advisory body.

The Senate Commerce subcommittee asked witnesses how AI-specific regulations should be implemented and what lawmakers should keep in mind when drafting potential legislation. 

“The unwillingness of leading vendors to disclose the attributes and provenance of the data they’ve used to train models needs to be urgently addressed,” said Ramayya Krishnan, dean of Carnegie Mellon University’s college of information systems and public policy.

Addressing problems with transparency of AI systems

Addressing the lack of transparency might look like standardized documentation outlining data sources and bias assessments, Krishnan said. That documentation could be verified by auditors and function “like a nutrition label” for users.

Witnesses from both private industry and human rights advocacy agreed legally binding guidelines – both for transparency and risk management – will be necessary. 

Victoria Espinel, CEO of the Business Software Alliance, a trade group representing software companies, said the AI risk management framework developed in March by the National Institute of Standards and Technology was important, “but we do not think it is sufficient.”

“We think it would be best if legislation required companies in high-risk situations to be doing impact assessments and have internal risk management programs,” she said.

Those mandates – along with other transparency requirements discussed by the panel – should look different for companies that develop AI models and those that use them, and should only apply in the most high-risk applications, panelists said.

That last suggestion is in line with legislation being discussed in the European Union, which would apply differently depending on the assessed risk of a model’s use.

“High-risk” uses of AI, according to the witnesses, are situations in which an AI model is making consequential decisions, like in healthcare, hiring processes, and driving. Less consequential machine-learning models like those powering voice assistants and autocorrect would be subject to less government scrutiny under this framework.

Labeling AI-generated content

The panel also discussed the need to label AI-generated content.

“It is unreasonable to expect consumers to spot deceptive yet realistic imagery and voices,” said Sam Gregory, director of human rights advocacy group WITNESS. “Guidance to look for a six-fingered hand or spot virtual errors in a puffer jacket do not help in the long run.”

With elections in the U.S. approaching, panelists agreed mandating labels on AI-generated images and videos will be essential. They said those labels will have to be more comprehensive than visual watermarks, which can be easily removed, and might take the form of cryptographically bound metadata.
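
As an illustration of what “cryptographically bound metadata” could look like in practice, the sketch below signs a content hash together with its provenance metadata, so editing the content or stripping the label breaks verification. It is a simplified assumption built on the Python cryptography package’s Ed25519 signatures, not the C2PA specification or any panelist’s actual system, and it deliberately includes no personal information about the creator.

```python
# Minimal sketch: binding provenance metadata to a piece of content with a
# digital signature, so stripping or editing the label breaks verification.
# Illustrative simplification only; not the C2PA standard or any vendor's
# actual implementation. Requires the third-party 'cryptography' package.
import json
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def label_content(content: bytes, metadata: dict, private_key: Ed25519PrivateKey) -> dict:
    """Return a provenance record whose signature covers the content hash plus metadata."""
    payload = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "metadata": metadata,  # e.g. {"generator": "example-model", "ai_generated": True}
    }
    payload_bytes = json.dumps(payload, sort_keys=True).encode()
    return {"payload": payload, "signature": private_key.sign(payload_bytes).hex()}

def verify_label(content: bytes, record: dict, public_key) -> bool:
    """Check that the signature is valid and the content hash is unchanged."""
    payload_bytes = json.dumps(record["payload"], sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload_bytes)
    except InvalidSignature:
        return False
    return record["payload"]["content_sha256"] == hashlib.sha256(content).hexdigest()

key = Ed25519PrivateKey.generate()
image = b"...bytes of an AI-generated image..."
record = label_content(image, {"generator": "example-model", "ai_generated": True}, key)
print(verify_label(image, record, key.public_key()))               # True
print(verify_label(image + b"edited", record, key.public_key()))   # False: content changed
```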

Labeling content as being AI-generated will also be important for developers, Krishnan noted, as generative AI models become much less effective when trained on writing or images made by other AIs.

Privacy around these content labels was a concern for panelists. Some protocols for verifying the origins of a piece of content with metadata require the personal information of human creators.

“This is absolutely critical,” said Gregory. “We have to start from the principle that these approaches do not oblige personal information or identity to be a part of them.”

Separately, the executive branch committee that met Tuesday was established under the National AI Initiative Act of 2020 and is tasked with advising the president on AI-related matters. The NAIAC gathers representatives from the Departments of State, Defense, Energy and Commerce, together with the Attorney General, Director of National Intelligence, and Director of Science and Technology Policy.

]]>
https://broadbandbreakfast.com/2023/09/congress-should-mandate-ai-guidelines-for-transparency-and-labeling-say-witnesses/feed/ 0 53830
Tech Policy Group CCIA Speaks Out Against AI Regulation https://broadbandbreakfast.com/2023/09/tech-policy-group-ccia-speaks-out-against-ai-regulation/?utm_source=rss&utm_medium=rss&utm_campaign=tech-policy-group-ccia-speaks-out-against-ai-regulation https://broadbandbreakfast.com/2023/09/tech-policy-group-ccia-speaks-out-against-ai-regulation/#respond Tue, 12 Sep 2023 19:16:06 +0000 https://broadbandbreakfast.com/?p=53820 WASHINGTON, September 12, 2023 – A policy director at the Computer and Communications Industry Association spoke out on Tuesday against impending artificial intelligence regulations in the European Union and United States.

The CCIA represents some of the biggest tech companies in the world, with members including Amazon, Google, Meta, and Apple.

“The E.U. approach will focus very much on the technology itself, rather than the use of it, which is highly problematic,” said Boniface de Champris, CCIA’s Europe policy manager, at a panel hosted by the Cato Institute. “The requirements would basically inhibit the development and use of cutting edge technology in the E.U.”

This echoes arguments from de Champris’s American counterparts, who have told Congress that AI-specific laws would stifle innovation.

The European Parliament is aiming to reach an agreement by the end of the year on the AI Act, which would put regulations on all AI systems based on their assessed risk level. 

The E.U.’s Digital Services Act, legislation that tightens privacy rules and expands transparency requirements, also began applying to the largest online platforms in August. Under the law, users can opt to turn off artificial intelligence-enabled content recommendation.

U.S. President Joe Biden announced in July that seven major AI and tech companies – including CCIA members Amazon, Meta, and Google – made voluntary commitments to various AI safeguards, including information sharing and security testing.

Multiple U.S. agencies are exploring more binding AI regulation. Both the Senate Judiciary Committee and the Senate consumer protection subcommittee were scheduled to hold hearings on potential AI policy later on Tuesday. The Judiciary hearing was set to include testimony from Microsoft President Brad Smith and William Dally, chief scientist at AI and graphics company NVIDIA.

In July, the House Energy and Commerce Committee passed the Artificial Intelligence Accountability Act, which would give the National Telecommunications and Information Administration a mandate to study accountability measures for artificial intelligence systems used by telecom companies.

]]>
https://broadbandbreakfast.com/2023/09/tech-policy-group-ccia-speaks-out-against-ai-regulation/feed/ 0 53820
Rep. Suzan DelBene: Want Protection From AI? The First Step Is a National Privacy Law https://broadbandbreakfast.com/2023/08/rep-suzan-delbene-want-protection-from-ai-the-first-step-is-a-national-privacy-law/?utm_source=rss&utm_medium=rss&utm_campaign=rep-suzan-delbene-want-protection-from-ai-the-first-step-is-a-national-privacy-law https://broadbandbreakfast.com/2023/08/rep-suzan-delbene-want-protection-from-ai-the-first-step-is-a-national-privacy-law/#respond Wed, 30 Aug 2023 11:00:38 +0000 https://broadbandbreakfast.com/?p=53540 In the six months since a new chatbot confessed its love for a reporter before taking a darker turn, the world has woken up to how artificial intelligence can dramatically change our lives and how it can go awry. AI is quickly being integrated into nearly every aspect of our economy and daily lives. However, in our nation’s capital, laws aren’t keeping up with the rapid evolution of technology.

Policymakers have many decisions to make around artificial intelligence, such as how it can be used in sensitive areas such as financial markets, health care, and national security. They will need to decide intellectual property rights around AI-created content. There will also need to be guardrails to prevent the dissemination of mis- and disinformation. But before we build the second and third story of this regulatory house, we need to lay a strong foundation and that must center around a national data privacy standard.

To understand this bedrock need, it’s important to look at how artificial intelligence was developed. AI needs an immense quantity of data. The generative language tool ChatGPT was trained on 45 terabytes of data, or the equivalent of over 200 days’ worth of HD video. That information may have included our posts on social media and online forums that have likely taught ChatGPT how we write and communicate with each other. That’s because this data is largely unprotected and widely available to third-party companies willing to pay for it. AI developers do not need to disclose where they get their input data from because the U.S. has no national privacy law.

While data studies have existed for centuries and can have major benefits, they are often centered around consent to use that information. Medical studies often use patient health data and outcomes, but that information needs the approval of the study participants in most cases. That’s because in the 1990s Congress gave health information a basic level of protection but that law only protects data shared between patients and their health care providers. The same is not true for other health platforms, like fitness apps, or most other data we generate today, including our conversations online and geolocation information.

Currently, the companies that collect our data are in control of it. Google for years scanned Gmail inboxes to sell users targeted ads, before abandoning the practice. Zoom recently had to update its data collection policy after it was accused of using customers’ audio and video to train its AI products. We’ve all downloaded an app on our phone and immediately accepted the terms and conditions window without actually reading it. Companies can and often do change the terms regarding how much of our information they collect and how they use it. A national privacy standard would ensure a baseline set of protections no matter where someone lives in the U.S. and restrict companies from storing and selling our personal data.

Ensuring there’s transparency and accountability in what data goes into AI is also important for a quality and responsible product. If input data is biased, we’re going to get a biased outcome, or better put ‘garbage in, garbage out.’ Facial recognition is one application of artificial intelligence. Largely these systems have been trained by and with data from white people. That’s led to clear biases when communities of color interact with this technology.

The United States must be a global leader on artificial intelligence policy but other countries are not waiting as we sit still. The European Union has moved faster on AI regulations because it passed its privacy law in 2018. The Chinese government has also moved quickly on AI but in an alarmingly anti-democratic way. If we want a seat at the international table to set the long-term direction for AI that reflects our core American values, we must have our own national data privacy law to start.

The Biden administration has taken some encouraging steps to begin putting guardrails around AI but it is constrained by Congress’ inaction. The White House recently announced voluntary artificial intelligence standards, which include a section on data privacy. Voluntary guidelines don’t come with accountability and the federal government can only enforce the rules on the books, which are woefully outdated.

That’s why Congress needs to step up and set the rules of the road. Strong national standards like privacy must be uniform throughout the country, rather than the state-by-state approach we have now. It has to put people back in control of their information instead of companies. It must also be enforceable so that the government can hold bad actors accountable. These are the components of the legislation I have introduced over the past few Congresses and the bipartisan proposal the Energy & Commerce Committee advanced last year.

As with all things in Congress, it comes down to a matter of priorities. With artificial intelligence expanding so fast, we can no longer wait to take up this issue. We were behind on technology policy already, but we fall further behind as other countries take the lead. We must act quickly and set a robust foundation. That has to include a strong, enforceable national privacy standard.

Congresswoman Suzan K. DelBene represents Washington’s 1st District in the United States House of Representatives. This piece was originally published in Newsweek, and is reprinted with permission. 

Broadband Breakfast accepts commentary from informed observers of the broadband scene. Please send pieces to commentary@breakfast.media. The views expressed in Expert Opinion pieces do not necessarily reflect the views of Broadband Breakfast and Breakfast Media LLC.

 

]]>
https://broadbandbreakfast.com/2023/08/rep-suzan-delbene-want-protection-from-ai-the-first-step-is-a-national-privacy-law/feed/ 0 53540
Newsrooms Should Engage Responsibly with Artificial Intelligence, Say Journalists https://broadbandbreakfast.com/2023/08/newsrooms-should-engage-responsibly-with-artificial-intelligence-say-journalists/?utm_source=rss&utm_medium=rss&utm_campaign=newsrooms-should-engage-responsibly-with-artificial-intelligence-say-journalists https://broadbandbreakfast.com/2023/08/newsrooms-should-engage-responsibly-with-artificial-intelligence-say-journalists/#respond Tue, 29 Aug 2023 11:22:43 +0000 https://broadbandbreakfast.com/?p=53486 WASHINGTON, August 28, 2023 – Newsrooms should take an active role in crafting artificial intelligence practices and policies, experts said on August 17 at a webinar hosted by the Knight Center for Journalism in the Americas.

Waiting too long to institute policies around the application of AI in the news gathering process and the use of newsroom data and content for AI research could allow tech companies to dictate these on their terms, said Amy Rinehart, a senior program manager for local news and AI at the Associated Press.

“Big tech came in and told us how the internet was going to work, and we have abided by the rules they’ve set up,” she said. “If we don’t get in there and experiment, they’re going to write the rules.”

Seven tech companies met with the White House in July to work out terms of a voluntary commitment to public safety measures in their AI research and products.

Increased AI literacy will improve future coverage of the technology, according to Rinehart. She said coverage has largely been sensational because of the news industry’s discomfort with the potential automation of some of their work.

Sil Hamilton, an artificial intelligence researcher at McGill University, said this scenario is still far from what the technology is truly capable of.

The current trajectory of large language models – the systems behind chatbots like ChatGPT – “is to simply be coworking with us,” he said. “It won’t entirely automate jobs away.”

Rinehart emphasized the importance of staying informed about the technology and how it might affect the news industry from both inside and outside the newsroom.

“This is pushing us in a direction that some of us don’t like,” she said. “But if we don’t experiment together we’re going to end up on the other side of something that is unrecognizable.”

]]>
https://broadbandbreakfast.com/2023/08/newsrooms-should-engage-responsibly-with-artificial-intelligence-say-journalists/feed/ 0 53486
U.S. Government is Eyeing AI to Improve Emergency Alerts, Outreach https://broadbandbreakfast.com/2023/08/u-s-government-is-eyeing-ai-to-improve-emergency-alerts-outreach/?utm_source=rss&utm_medium=rss&utm_campaign=u-s-government-is-eyeing-ai-to-improve-emergency-alerts-outreach https://broadbandbreakfast.com/2023/08/u-s-government-is-eyeing-ai-to-improve-emergency-alerts-outreach/#respond Mon, 28 Aug 2023 16:20:05 +0000 https://broadbandbreakfast.com/?p=53417 WASHINGTON, August 25, 2023 – United States government agencies are eyeing artificial intelligence to aid emergency alerts and other outreach services, experts said on Thursday.

The National Oceanic and Atmospheric Administration is looking to use AI to do new kinds of analysis on storm and wildfire data, improving alert system accuracy as climate change makes natural disasters more common, said NOAA Chief Technology Officer Frank Indiviglo.

“Things you see on your local weather channel are good,” Indiviglo said at a Technology Spotlight event hosted by NextGov, “but really understanding ahead of these events: Am I at risk? Is my family at risk? That’s what we’re working toward.”

Emergency weather alerts from NOAA have been broadcast since the 1970s over the agency’s radio network, which otherwise continuously transmits forecasts. Cable TV stations broadcast the audio of their local NOAA radio station in emergencies.

The alerts warn listeners of severe weather events in their area. Coverage can be hindered by mountains, but the agency says that more than 95% of Americans live in areas covered by the system as of July 2023.

The agency’s forecasts, and thus emergency alerts, are based on data collected by physical sensors and the outputs of several mathematical models designed to give the agency a picture of what’s happening on the ground, according to NOAA technical procedures.

People have complained about other alerts overseen by the FCC warning them of severe weather and other emergencies that are not in their area. More computationally intensive analysis aided by AI would help the agency issue these warnings with more precision, Indiviglo said.

Patty Delafuente, a data scientist at AI hardware and software company NVIDIA, said fielding help desk calls and handling other customer service requests is another common use case for the company’s government clients.

Language models that have ingested huge amounts of information can help government employees serve people asking what programs they qualify for, especially as more experienced workers retire, she said.

U.S. government spending on AI has exceeded $7 billion in the last three fiscal years.

]]>
https://broadbandbreakfast.com/2023/08/u-s-government-is-eyeing-ai-to-improve-emergency-alerts-outreach/feed/ 0 53417
U.S. Chip Export Restrictions Will be ‘Huge Roadblock’ for Chinese AI Competitiveness: Expert https://broadbandbreakfast.com/2023/08/u-s-chip-export-restrictions-will-be-huge-roadblock-for-chinese-ai-competitiveness-expert/?utm_source=rss&utm_medium=rss&utm_campaign=u-s-chip-export-restrictions-will-be-huge-roadblock-for-chinese-ai-competitiveness-expert https://broadbandbreakfast.com/2023/08/u-s-chip-export-restrictions-will-be-huge-roadblock-for-chinese-ai-competitiveness-expert/#respond Thu, 24 Aug 2023 22:21:31 +0000 https://broadbandbreakfast.com/?p=53410 WASHINGTON, August 24, 2023 – China’s ability to remain competitive in the global artificial intelligence race will depend on its ability to produce its own chips, as U.S. restrictions on the export of that product to the adversarial nation will hobble its ability to move forward, experts said Thursday.

“U.S. chip export sanctions are a huge roadblock” for AI development in China, said Qiheng Chen, a senior analyst at consulting firm Compass Lexecon.

The ability to manufacture advanced chips domestically will be essential for the country to continue researching and implementing AI, Chen added at the AI event hosted by the Asia Society Policy Institute.

The Commerce Department imposed restrictions in October 2022 on exports of advanced semiconductors and chip manufacturing equipment to China and required U.S. citizens to get a permit before working with Chinese chip manufacturers.

The move was designed to limit China’s ability to compete with the U.S. by curbing its access to hardware required for cutting-edge military technology. It also makes AI research and development, a highly chip-dependent process, more difficult.

Other panelists Thursday emphasized chip making as a top priority of the Chinese government.

The country has already moved toward independence from the U.S. in other areas, like satellites and fiber optics, as a response to Trump administration policies.

This has continued under President Joe Biden, with a 2021 executive order restricting investment in Chinese firms drawing criticism from Huawei, the Chinese telecom company.

Experts have previously said the threat of restricting access to global trade even further could make China hesitant to retaliate for the sanctions. This is because advanced chip manufacturing requires materials, components, and processes that would be difficult for a single nation to source entirely within its borders.

“It’s too complex, too global, too interdependent for one country to be able to produce all these technologies on their own,” said Jimmy Goodrich, vice president of Global Policy at the Semiconductor Industry Association, at a conference earlier this year.

A Huawei spokesperson estimated at a conference following the investment ban that it would take three to five years for Chinese chip manufacturing to become self-sufficient and rely less on American components and investments.

Biden signed the CHIPS and Science Act into law last year, two months before the export restrictions went into effect. It allocates $52 billion for American semiconductor manufacturing and gives tax credits for investments in the industry.

]]>
https://broadbandbreakfast.com/2023/08/u-s-chip-export-restrictions-will-be-huge-roadblock-for-chinese-ai-competitiveness-expert/feed/ 0 53410
Open Access to Training Data Vital for AI Safety and Innovation: Expert https://broadbandbreakfast.com/2023/08/open-access-to-training-data-vital-for-ai-safety-and-innovation-expert/?utm_source=rss&utm_medium=rss&utm_campaign=open-access-to-training-data-vital-for-ai-safety-and-innovation-expert https://broadbandbreakfast.com/2023/08/open-access-to-training-data-vital-for-ai-safety-and-innovation-expert/#respond Wed, 23 Aug 2023 20:05:21 +0000 https://broadbandbreakfast.com/?p=53317 WASHINGTON, August 23, 2023 – An open ecosystem providing public access to artificial intelligence data is vital for the development of a safe and innovative AI system, an expert said at a forum on Monday.

Instead of the current “black box” approach to AI training, developers should adopt a transparent “glass box” approach, where they provide not only the data but also the models and step-by-step guidance for model replication, said Ali Farhadi, CEO of Allen Institute for Artificial Intelligence. This approach would enable developers to learn from each other’s mistakes, thus reducing the occurrence of repeated errors and associated costs, he explained.
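
For illustration, a “glass box” release might ship a manifest like the hypothetical sketch below, pointing to the data, weights, and step-by-step recipe needed to replicate a model. All names, URLs, and values are placeholder assumptions, not anything the Allen Institute has published.

```python
# Illustrative sketch of a "glass box" release manifest: alongside a model,
# a developer publishes pointers to the data, weights, and replication recipe.
# Every name, URL, and number below is a hypothetical placeholder.
import json

release_manifest = {
    "model": "example-open-lm-7b (hypothetical)",
    "weights": "https://example.org/checkpoints/example-open-lm-7b",  # placeholder URL
    "training_data": [
        {"name": "example-web-corpus", "url": "https://example.org/data/web", "license": "ODC-BY"},
        {"name": "example-code-corpus", "url": "https://example.org/data/code", "license": "MIT"},
    ],
    "training_recipe": {
        "tokenizer": "example-bpe-50k",
        "steps": 250_000,
        "batch_size_tokens": 4_000_000,
        "learning_rate": 3e-4,
        "hardware": "512 x example accelerator",
    },
    "evaluation": ["held-out perplexity", "bias and toxicity audits"],
    "replication_guide": "https://example.org/docs/replicate-example-open-lm-7b",
}

# Write the manifest so it can be published next to the model weights.
with open("release_manifest.json", "w") as f:
    json.dump(release_manifest, f, indent=2)
```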

The accessible dataset also serves as a critical “traceability” factor to assist lawmakers in crafting legal frameworks and safeguards against a multitude of risks posed by AI, ranging from misinformation and deepfakes to child safety concerns and workforce-related challenges.

“Looking back at the history of how software has been developed, whenever we actually opened up a piece of technology, the progress outpaced the malicious acts,” he added.

His argument found support among other speakers, including Senate Commerce Committee Chairwoman Maria Cantwell, D-Washington, who agreed that an “open architecture” has the potential to encourage a “public-private partnership” that could facilitate further advancements in AI development.

“We’ve been really working since the 2020 bill on understanding ways that we can accelerate our process to come to faster resolution of some of the issues that come to the table,” said Cantwell, who spearheaded the “The Future of AI” Act to convene leaders across academia, federal, and the private sectors to examine the opportunities and consequences of AI technology.

“I believe the government must continue to partner with industry and academia,” she added. “And public private partnership is the right direction for us to keep going.”

Hosted by Sen. Cantwell, the forum joined other lawmakers’ efforts to gain a deeper understanding of AI. The White House announced in August a competition with prizes of up to $20 million as an incentive for developers to bolster the capabilities of AI systems. In late July, the administration also secured commitments from leading AI companies to oversee the safe and transparent development of the technology.

These initiatives are part of Washington’s effort to take the lead in the development of AI and maintain its technological competitiveness, especially as counterparts in Brussels and Beijing have been racing ahead in terms of regulations.

]]>
https://broadbandbreakfast.com/2023/08/open-access-to-training-data-vital-for-ai-safety-and-innovation-expert/feed/ 0 53317
Fiber Helps Co-ops to Save on Electric Grid Usage, Saving Money https://broadbandbreakfast.com/2023/08/fiber-helps-co-ops-to-save-on-electric-grid-usage-saving-money/?utm_source=rss&utm_medium=rss&utm_campaign=fiber-helps-co-ops-to-save-on-electric-grid-usage-saving-money https://broadbandbreakfast.com/2023/08/fiber-helps-co-ops-to-save-on-electric-grid-usage-saving-money/#respond Tue, 22 Aug 2023 00:44:37 +0000 https://broadbandbreakfast.com/?p=53274 ORLANDO, August 21, 2023 – Fiber networks can reduce operating costs for electric cooperatives as well as connect residents to the internet, said representatives of electric co-ops on a Fiber Connect panel Monday, claiming it is a good investment. 

Broadband networks allow co-ops to share data that keeps them more efficient on the electric grid, said William Graves, fiber optic network manager at MidSouth Electric Cooperative in Texas. 

High-speed broadband connectivity enables the smart grid, a network that allows for two-way communication between the utility and its customers, to ensure that electricity is being managed in the most efficient way, said Graves.  

Pete Hoffswell, superintendent of broadband services at the Holland Board of Public Works in Michigan, added that fiber can connect city systems – such as parking meters – to avoid the backlog that occasionally occurs on less efficient networks.

Smart infrastructure will be critical as demand for power increases and use cases continue to grow for electric vehicle charging, smart home technologies, and more, said Hoffswell. He added that connectivity is about more than just connecting renewable energy systems; it is now about building a smart city.

“Smart cities are full of smart people, smart people want their cities to be smart,” he continued. Consumers will make more demands on network providers and this demand will change the way that the networks operate, he said.  

Hoffswell added that investor-owned utilities can play a large role in the co-op broadband space. Co-ops have the necessary capital for large broadband projects and are a good match for fiber, he said.

William Davidson, director of strategic initiatives at NextEra Infrastructure Solutions in Florida, said that providing fiber services to customers provides incremental value to the cooperative. He added that cooperatives have the unique ability to be patient with long-term projects that take years to break even.

Some experts have touted electric co-ops as ideal grantees for the $42.5 billion BEAD program – whose funds are expected in 2024 – because they are well suited to build publicly owned networks that can then either be operated by the co-op or leased to private providers.

]]>
https://broadbandbreakfast.com/2023/08/fiber-helps-co-ops-to-save-on-electric-grid-usage-saving-money/feed/ 0 53274
Office of National Intelligence Adopting AI for Data Processing https://broadbandbreakfast.com/2023/08/office-of-national-intelligence-adopting-ai-for-data-processing/?utm_source=rss&utm_medium=rss&utm_campaign=office-of-national-intelligence-adopting-ai-for-data-processing https://broadbandbreakfast.com/2023/08/office-of-national-intelligence-adopting-ai-for-data-processing/#respond Mon, 07 Aug 2023 21:24:30 +0000 https://broadbandbreakfast.com/?p=52885 WASHINGTON, August 7, 2023 – The Office of the Director of National Intelligence is adopting artificial intelligence for data processing, said the Principal Deputy Director of National Intelligence Stacy Dixon at an Intelligence and National Security Alliance discussion Thursday.  

“We are excited for the technology and where it can take us,” she said, but warned that because the technology is so widespread, the barriers to entry are lower, and adversaries have better access to more harmful technologies. 

Non-state actors and terrorists have no business with AI, claimed Dixon. But unfortunately, the threat is out there, and the U.S. has to protect its democratic ideals, she said. For this reason, the ODNI is implementing AI to stay ahead of bad actors.

Dixon said the agency will work to implement AI “incrementally” and in a “smart way” to improve cooperation and trust between the private and public sectors. For the ODNI, the first step in AI implementation is making sure its data is ready for AI and establishing a workforce that understands the data and how to write the necessary algorithms, said Dixon.

The ODNI is an independent agency established by Congress in 2004 to assist the director of national intelligence, a cabinet-level government official. The ODNI’s goal is to integrate foreign, military and domestic intelligence in defense of the United States and its interests abroad.  

According to Dixon, the agency is already using AI in some automation use cases, but it is not as widespread as it needs to be to enable better efficiency in the agency and stay ahead of adversaries. It is important to think of the agency as a data organization rather than simply an intelligence organization, she said.

The agency is building civil liberty protections into the AI models while simultaneously increasing AI use internally, Dixon added. 

Other federal agencies are evaluating how artificial intelligence can be implemented to improve internal processes. In July, the Federal Communications Commission joined with the National Science Foundation to discuss how AI can be used to improve dynamic spectrum sharing, protect against harmful robocalls and improve the national broadband map.

In July, the House Energy and Commerce Committee passed a bill to the House floor that directs the National Telecommunications and Information Administration to conduct a study on accountability measures for artificial intelligence. 

]]>
https://broadbandbreakfast.com/2023/08/office-of-national-intelligence-adopting-ai-for-data-processing/feed/ 0 52885
Raimondo Calls for U.S. Investment in Semiconductor Manufacturing in Allied Countries https://broadbandbreakfast.com/2023/07/raimando-calls-for-u-s-investment-in-semiconductor-manufacturing-in-allied-countries/?utm_source=rss&utm_medium=rss&utm_campaign=raimando-calls-for-u-s-investment-in-semiconductor-manufacturing-in-allied-countries https://broadbandbreakfast.com/2023/07/raimando-calls-for-u-s-investment-in-semiconductor-manufacturing-in-allied-countries/#respond Mon, 31 Jul 2023 20:46:34 +0000 https://broadbandbreakfast.com/?p=52771 WASHINGTON, July 31, 2023 – Commerce Secretary Gina Raimondo said Wednesday that the U.S. should invest in the semiconductor manufacturing facilities of allied countries and impose restrictions on the export of those chips to China to combat the Communist nation’s influence.

Under the CHIPS and Science Act, Congress appropriated funding for the domestic manufacturing of microchips, setting aside about $39 billion for grants and subsidies for chip makers and their suppliers, plus another $11 billion to set up research centers on chip design. To handle the task, the Commerce Department last year launched the new CHIPS office, which also would provide loan guarantees for as much as $75 billion.

The CHIPS office already has received more than 400 statements of interest from semiconductor manufacturers keen to get a share of the federal dollars. Preliminary applications for grants and subsidies will be accepted beginning in September, with final applications starting Oct. 23, according to the Commerce Department.

Speaking on a panel hosted by the American Enterprise Institute alongside Sen. Todd Young, R-Indiana, Raimondo added that, on top of domestic manufacturing incentives, there is a need for future export controls against China and investing in allied semiconductor facilities to bolster national security. 

Proposals to work more on technology with allies

Raimondo proposed that the government lean into the resources offered by its allies, such as R&D in Japan and raw materials in Ukraine, to create its own supply chain. By investing in industries linked to the supply chain in allied countries, she said, America would benefit overall.

Conditions would not be so broad “that you deny American companies revenue and China can get the product elsewhere, or China can get the product from other countries,” Raimondo said. Incoming rules “will deny some revenue to American companies, but we think it’s worth it.”

Raimondo said the administration is meeting with companies “to get to the right place so we don’t damage American business but quite frankly protect American national security.” 

The United States needs to invest in its capacity to produce high-end chips, Raimondo said, while preventing the most advanced technology from reaching China. She highlighted concerns about China’s substantial subsidies in the semiconductor sector, which could lead to an excess of mature and legacy chips. 

“The amount of money that China is pouring into subsidizing what will be an excess capacity of mature chips and legacy chips, that’s a problem that we need to be thinking about and working with our allies to get ahead of,” Raimondo said.

This comes after U.S. chip company executives met with top Biden administration officials, including Raimondo, to discuss China policy at the Allen and Co. conference earlier in July.

With semiconductor companies poised to benefit from $52 billion in direct subsidies, these funds are meant to address the dual challenge of manufacturing both low-end and high-end chips, according to Young.

Young brought up how supply chain issues for less advanced chips can cause delays in car manufacturing, referencing the industry in his state of Indiana, while noting that the U.S. needs to enhance its ability to produce advanced chips for specialized applications, such as those used in nuclear-armed submarines.

Furthermore, Young talked about how the proposed subsidies will coincide with incentives provided by the Biden administration to promote the clean energy, electric vehicle, and battery industries.

These sectors are considered critical for the economy and environment, and the government’s initiatives represent the largest industrial policy effort since World War II, said Young, with significant implications for the manufacturing sector.

 

]]>
https://broadbandbreakfast.com/2023/07/raimando-calls-for-u-s-investment-in-semiconductor-manufacturing-in-allied-countries/feed/ 0 52771
Congress Should Not Create AI-Specific Regulation, Say Techies https://broadbandbreakfast.com/2023/07/congress-should-not-create-ai-specific-regulation-say-techies/?utm_source=rss&utm_medium=rss&utm_campaign=congress-should-not-create-ai-specific-regulation-say-techies https://broadbandbreakfast.com/2023/07/congress-should-not-create-ai-specific-regulation-say-techies/#respond Fri, 28 Jul 2023 18:23:58 +0000 https://broadbandbreakfast.com/?p=52742 WASHINGTON, July 28, 2023 – Artificial Intelligence experts said that Congress should not make AI-specific legislation to protect against potential harms at a Congressional Internet Caucus Academy panel Friday. 

AI harms and risks are already addressed by existing laws, said Joshua Landau, senior counsel of innovation policy at nonprofit advocacy organization the Computer and Communications Industry Association. 

Landau urged Congress to write laws that address harms rather than create laws that specifically regulate AI usage. He warned that differentiating between crimes committed with AI and those committed by humans would only create legal loopholes that incentivize unlawful behavior, which in turn would affect where research and development in the industry goes. The exception is laws that delineate liability for harmful actions when AI is involved, he said.

His comments follow an opinion expressed on Tuesday by former Federal Communications Commission Chairman Richard Wiley, who said that now is not the right time to regulate AI and urged lawmakers to slow their efforts to regulate the technology.

The desire for perfect policy has held Congress back from developing AI regulation, added Evi Fuelle, global policy director at Credo AI. She urged Congress to implement transparency mandates for both large and small AI companies.

Voluntary commitments will fail to show results if Congress does not mandate them, said Fuelle, referring to the seven AI companies that committed to the White House’s AI goals last week. The commitments included steps to ensure safety, transparency and trustworthiness of the technology. 

Nick Garcia, policy counsel at Public Knowledge, cautioned against policies that will call for a pause or halt in AI research and development, saying that it is not a sustainable solution. He also urged Congress to address AI issues without neglecting equally important concerns surrounding social media regulation. 

In October, the Biden Administration announced a blueprint for a first-ever AI Bill of Rights that identifies five principles that should guide the design, use and deployment of AI systems in order to protect American citizens. According to the White House, federal agencies have “ramped up their efforts” to protect American citizens from risks posed by AI technology.   

In May, Biden signed an executive order directing federal agencies to root out bias in the design of AI technology and protect the public from algorithmic discrimination. On Thursday, a House committee passed legislation that would direct the National Telecommunications and Information Administration to conduct research on accountability measures for AI.

]]>
https://broadbandbreakfast.com/2023/07/congress-should-not-create-ai-specific-regulation-say-techies/feed/ 0 52742
Lawmakers and Industry Groups Urge Congress Action on Autonomous Vehicles https://broadbandbreakfast.com/2023/07/lawmakers-and-industry-groups-urge-congress-action-on-autonomous-vehicles/?utm_source=rss&utm_medium=rss&utm_campaign=lawmakers-and-industry-groups-urge-congress-action-on-autonomous-vehicles https://broadbandbreakfast.com/2023/07/lawmakers-and-industry-groups-urge-congress-action-on-autonomous-vehicles/#respond Thu, 27 Jul 2023 13:23:19 +0000 https://broadbandbreakfast.com/?p=52669 WASHINGTON, July 27, 2023 – Witnesses at an Energy and Commerce subcommittee hearing on Wednesday joined lawmakers in pushing for congressional action on establishing a comprehensive federal framework for self-driving vehicles, after several years of regulatory stagnation.

In her opening remarks, Chair Cathy McMorris Rodgers, R-WA, highlighted the importance of advancing U.S. leadership in the field of autonomous vehicles, which she said can help drive down traffic fatalities, support people with disabilities, and strengthen U.S. technological competitiveness, particularly over China.

Despite these possibilities, the federal regulatory landscape has not been able to catch up with innovation, said John Bozzella, president of trade group Alliance for Automotive Innovation.

The absence of a national standard has led to “a labyrinth of state laws and regulations” which spurs uncertainty among companies and hampers deployment and innovation, warned Bozzella. To that end, he urged Congress to swiftly pass a bipartisan, “balanced federal AV framework” that includes “safeguards, oversight, rules and regulations” to govern the future of autonomous vehicle technology.

“It’s rare that somebody from the private sector comes to plead for their businesses to be regulated by the federal government, but this is exactly what we are seeking,” he said.

Lawmakers took a shot at regulating autonomous vehicles in 2017 with the SELF DRIVE Act, introduced by Rep. Robert Latta, R-OH, which would have established a national regulatory framework for automated vehicles and encouraged the testing and deployment of the technology. The bill passed both the committee and the House but stalled in the Senate.

That legislation now makes up the bulk of the bills considered during Wednesday’s hearing, along with another measure drafted by Rep. Debbie Dingell, D-MI, to strengthen safety rules regarding automated vehicles and hold manufacturers accountable for adhering to those standards.

“I don’t believe anyone thought we would be back to square one today in 2023, re-examining similar legislation that had previously passed the House unanimously, and that many members of this Committee on both sides cosponsored,” said Innovation, Data and Commerce Subcommittee Chair Gus Bilirakis, R-FL.

Gary Shapiro, president of trade group Consumer Technology Association, said a large number of exemptions should be granted so that companies can start testing new vehicle designs and safety features. Currently, manufacturers are only allowed to deploy up to 2,500 vehicles for testing on a temporary basis, a constraint he said would limit the scalability of the technology in the future.

However, Philip Koopman, associate professor at Carnegie Mellon University, sounded a cautionary note regarding “overly permissive” regulations that would allow vehicle manufacturers to “cut corners on safety.” He argued that automated vehicles are not “a silver bullet for safety” because computer drivers do not necessarily make fewer mistakes than human drivers, but rather make different kinds of mistakes.

“If we want to still have an automated vehicle industry in the future, Congress needs to act to require transparency, accountability, and adoption of the industry’s own safety standards,” he said.

The hearing took place against a backdrop of growing dissatisfaction among industry groups and AV advocates regarding the slow-paced regulatory process for driverless transportation technology. Government officials explained that taking time for regulation is necessary to ensure public safety.

]]>
https://broadbandbreakfast.com/2023/07/lawmakers-and-industry-groups-urge-congress-action-on-autonomous-vehicles/feed/ 0 52669
Former FCC Commissioners Disagree on Future of AI Regulation https://broadbandbreakfast.com/2023/07/former-fcc-commissioners-disagree-on-future-of-ai-regulation/?utm_source=rss&utm_medium=rss&utm_campaign=former-fcc-commissioners-disagree-on-future-of-ai-regulation https://broadbandbreakfast.com/2023/07/former-fcc-commissioners-disagree-on-future-of-ai-regulation/#respond Thu, 27 Jul 2023 00:51:24 +0000 https://broadbandbreakfast.com/?p=52660 WASHINGTON, July 26, 2023 – Former chairs of the Federal Communications Commission urged lawmakers to slow down in regulating artificial intelligence at a Multicultural Media, Telecom and Internet Council event Tuesday.

Richard Wiley, chair of the agency under Presidents Nixon, Ford and Carter, said that now is not the right time to regulate AI, and neither is the FCC the right agency to do the job. He urged lawmakers to wait until the technology is better developed to write long lasting regulations. 

“AI is the future of technology in many respects,” said Wiley. “It will provide a great amount of innovation for our country.” He believes that it should not be regulated to allow for innovation. 

Former Acting FCC Chairwoman Mignon Clyburn disagreed, warning that Congress should not work too slowly on AI regulation. AI evolution will not slow down, she said: “we can’t sleep on this.” She did not specify how the technology should be regulated.

Clyburn served as acting chairwoman under President Obama, until the confirmation of Tom Wheeler.

There are 17 states where AI legislation has already been introduced, said Clyburn. “Things will happen whether we [federal agencies] move or not,” she said, warning against a patchwork of laws across states that could increase complications for tech companies.  

Clyburn added that artificial intelligence will make potentially dangerous material more accessible to vulnerable populations, including children and at-risk adults. It is a balance of encouraging good innovation and protecting those who could be further harmed by AI, she said, adding that “we cannot stall” on these conversations.

Wiley argued that children’s protection should be in the hands of parents. He suggested that tech developers could provide parents with a set of best practices to help them understand the threats revolving around AI. 

Jonathan Adelstein, former commissioner at the FCC from 2002 to 2009, expressed hope that AI will provide a revenue stream for 5G networks. He said that laws should encourage tech development of AI while ensuring that citizens are protected against potential dangers. “It’s a delicate balance, and I’m not sure the FCC is the right place to do it,” he said. 

The FCC is currently considering how AI can be used to make congestion control decisions on dynamic spectrum sharing applications. AI has been flagged as a major opportunity for the United States to improve its competitiveness with China. Last week, seven AI companies pledged to uphold key principles that the White House believes are fundamental to the safe future of AI.  

]]>
https://broadbandbreakfast.com/2023/07/former-fcc-commissioners-disagree-on-future-of-ai-regulation/feed/ 0 52660
Seven Tech Companies at White House Commit to Prevent AI Risks https://broadbandbreakfast.com/2023/07/seven-tech-companies-at-white-house-commit-to-prevent-ai-risks/?utm_source=rss&utm_medium=rss&utm_campaign=seven-tech-companies-at-white-house-commit-to-prevent-ai-risks https://broadbandbreakfast.com/2023/07/seven-tech-companies-at-white-house-commit-to-prevent-ai-risks/#respond Fri, 21 Jul 2023 19:42:25 +0000 https://broadbandbreakfast.com/?p=52526 WASHINGTON, July 21, 2023 – President Joe Biden announced at the White House Friday that his administration has secured voluntary commitments from leading artificial intelligence companies to manage the risks posed by the technology.

“Artificial intelligence promises an enormous promise of both risk to our society and our economy and national security but also incredible opportunities,” began Biden in his remarks. Attending the event were President of Microsoft Brad Smith, President of Google Kent Walker, President of Meta Nick Clegg and President of OpenAI Greg Brockman, among other tech leaders. 

Biden and Vice President Kamala Harris met with tech leaders two months ago to “underscore the responsibility of making sure that products they are producing are safe.” Seven companies – Amazon, AI safety and research company Anthropic, Google, AI startup Inflection, Meta, Microsoft, and OpenAI – agreed to commitments that will be implemented immediately to “help move toward safe, secure, and transparent development of AI technology.” 

The commitments seek to uphold key principles that the White House believes are “fundamental to the future of AI,” namely safety, security and trust.  

The companies commit to ensuring products are safe before introducing them to the public by running AI systems through internal and external security testing before their release. The testing will be carried out in part by independent experts and will guard against the most significant AI risks, including those to biosecurity and cybersecurity. Included in this commitment is an assurance that the companies will share information across the industry and with government and academia on best practices for AI safety, attempts to circumvent safeguards, and technical collaboration.

Furthermore, the companies commit to putting security first by investing in cybersecurity safeguards and facilitating third-party discovery and reporting of vulnerabilities in AI systems. 

Finally, the companies commit to earning the public’s trust by developing robust technical mechanisms to ensure that users know when content is AI generated to reduce dangers of fraud and deception. The companies will also publicly report their AI systems’ capabilities, limitations, and appropriate uses to address bias and fairness. They will also prioritize research on the societal risks that the AI systems can pose and develop and deploy advanced AI systems to address society’s greatest challenges. 

“From cancer prevention to mitigating climate change to so much in between, AI – if properly managed – can contribute enormously to the prosperity, equality and security of all,” read the announcement. 

“These commitments are real and they are concrete,” said Biden. “They are going to help fulfill industry fundamental obligation to Americans to develop safe, secure and trustworthy technologies that benefit society and uphold our values and shared values.” He expressed his hope that AI will transform and improve the lives of Americans, claiming that he will work with federal agencies to make necessary steps to ensure AI will make a positive impact.  

The White House has consulted with 21 different governments around the world about the voluntary commitments. 

In October, the Biden Administration announced a blueprint for a first-ever AI Bill of Rights that identifies five principles that should guide the design, use and deployment of AI systems in order to protect American citizens. According to the White House, federal agencies have “ramped up their efforts” to protect American citizens from risks posed by AI technology.  

In May, Biden signed an executive order directing federal agencies to root out bias in the design of AI technology and protect the public from algorithmic discrimination. 

The White House also announced that it is currently developing an executive order and will pursue bipartisan legislation to “help America lead the way in responsible innovation.” 

]]>
https://broadbandbreakfast.com/2023/07/seven-tech-companies-at-white-house-commit-to-prevent-ai-risks/feed/ 0 52526
Increase US Competitiveness with China Through AI and Spectrum, Experts Urge https://broadbandbreakfast.com/2023/07/increase-us-competitiveness-with-china-through-ai-and-spectrum/?utm_source=rss&utm_medium=rss&utm_campaign=increase-us-competitiveness-with-china-through-ai-and-spectrum https://broadbandbreakfast.com/2023/07/increase-us-competitiveness-with-china-through-ai-and-spectrum/#respond Thu, 20 Jul 2023 18:47:13 +0000 https://broadbandbreakfast.com/?p=52487 WASHINGTON, July 20, 2023 – Maintaining U.S. competitiveness with China requires leveraging artificial intelligence for supply chain monitoring and allocating mid-band spectrum for commercial use, said experts Thursday. 

It is critical that the United States reduces its dependency on China in key areas including microelectronics, electric vehicles, solar panels, pharmaceutical ingredients, rare earth minerals processing, and more, said Rep. Mike Gallagher, R-Wisconsin, at a Punchbowl News event. He added that it is essential that American companies and governments are aware of their own supply chain risks and vulnerable areas.  

Artificial intelligence can be deployed to understand vulnerabilities in the supply chain, said Carrie Wibben, president of government solutions at supply chain management software company Exiger. 

American adversaries have long been using AI to understand where to penetrate the American supply chain ecosystem and obtain a strategic advantage over the country, said Wibben. She reported that the Department of Defense is moving quickly to increase visibility into its supply chain and implement new technology. 

AI and supply chains are the two fronts on which the U.S. competes to maintain global dominance, said Wibben. She encouraged coordinating the two to develop a strategy that preserves U.S. global competitiveness and increases national security. 
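
To make the idea concrete, here is a minimal sketch of the kind of rule-based triage an AI-assisted supply chain monitoring tool might start from. The supplier fields, weights and thresholds below are invented for illustration and do not describe Exiger's or any vendor's actual product.

```python
# Illustrative sketch only: a toy supplier risk score, not any vendor's actual model.
# Field names, weights, and thresholds are hypothetical.
from dataclasses import dataclass


@dataclass
class Supplier:
    name: str
    country: str
    single_sourced: bool   # no qualified alternative supplier
    sanctions_hits: int    # matches against a screening list
    tier: int              # 1 = direct supplier, 2+ = sub-tier


def risk_score(s: Supplier, watch_countries: set[str]) -> float:
    """Combine simple signals into a 0-1 score used to prioritize human review."""
    score = 0.0
    if s.country in watch_countries:
        score += 0.4
    if s.single_sourced:
        score += 0.3
    score += min(s.sanctions_hits, 3) * 0.1
    # Sub-tier suppliers are harder to see into, so weight them slightly higher.
    score += 0.05 * max(s.tier - 1, 0)
    return min(score, 1.0)


if __name__ == "__main__":
    watch = {"CN", "RU"}   # hypothetical watch list for the example
    suppliers = [
        Supplier("Acme Fabrication", "US", False, 0, 1),
        Supplier("Example Microelectronics", "CN", True, 1, 3),
    ]
    for s in suppliers:
        score = risk_score(s, watch)
        if score >= 0.5:
            print(f"Review {s.name}: risk score {score:.2f}")
```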

A major concern in Congress is the nation’s reliance on China for its supply chain, added Rep. Raja Krishnamoorthi, D-Illinois. He said that the best solution is diversifying in the private sector, meaning that companies have redundant suppliers.  

In many cases, this can be done without government intervention, but where the private sector doesn’t have the knowledge base to replicate these systems, it is essential that the government step in and provide incentives, Krishnamoorthi said. Congress has passed several laws, including the Inflation Reduction Act and the CHIPS and Science Act, that invest billions of dollars into American-made clean energy and semiconductors. 

Krishnamoorthi said that the White House is doing what it can to prevent aggression from the People’s Republic of China from materializing into conflict.  

Need more spectrum 

Allocating more licensed spectrum for commercial use to support 5G is essential to maintaining US competitiveness with China, said panelists at a separate American Enterprise Institute event Thursday.  

5G, the next-generation wireless mobile network, enables higher speeds, lower latency and greater reliability. For a democratic state, 5G will enable more expression, innovation, human freedom, and opportunities to solve global challenges in health and climate, said Clete Johnson, senior fellow at the Center for Strategic and International Studies. For an authoritarian state, the same technology will enable policing of citizens, social control, and an overarching understanding of what people are doing, he said.  

If the U.S. is behind China in allocating the spectrum that 5G rides on, then China will dominate cyber and information operations, including force projections and more capable weaponry, warned Johnson. “If we don’t lead, China will.” 

“Commercial strength is national security,” said Johnson, referring to the need to allocate spectrum for commercial use.  

China recognizes the value of 5G and how this kind of foundation will enable industrial and commercial activity, said Peter Rysavy, president of wireless consultancy Rysavy Research. The country has allocated three times as much mid-band spectrum for commercial use as the U.S. has, he said.  

No amount of spectrum efficiency and sharing mechanisms will replace having more spectrum available, added Paroma Sanyal, principal at economic consultancy Brattle Group. The U.S. government needs to get more spectrum into the pipeline, she said. 

A former administrator of the National Telecommunications and Information Administration said on a panel last week that national security depends on commercial access to spectrum. “If you take economic security out of the national security equation, you damage national security and vice versa,” John Kneuer said. 

Kneuer suggested that giving the commercial sector access to more spectrum serves this goal: it spurs innovation as a byproduct of increased economic activity, and that innovation can spill back into federal agencies in the form of new capabilities they would not otherwise have had.   

The Federal Communications Commission is evaluating how artificial intelligence can be used in dynamic spectrum sharing to optimize traffic and prevent harmful interference. AI can be used to make congestion control decisions and sense when federal agencies are using the bands to allow commercial use on federally owned spectrum without disrupting high-priority use. 

This comes as the FCC is facing spectrum availability concerns. In its June open meeting, the FCC issued proposed rulemaking that explores how the 42–42.5 GHz spectrum band might be made available on a shared basis. The agency’s spectrum auction authority, however, expired earlier this year. 

The head of the NTIA announced this week that the national spectrum strategy is set to be complete by the end of the year. It will represent a government-wide approach to maximizing the potential of the nation’s spectrum resources and will take into account input from government agencies and the private sector. 

Rep. Doris Matsui, D-Calif., is sponsoring two bills, the Spectrum Relocation Enhancement Act and the Spectrum Coexistence Act, which would update the spectrum relocation fund that compensates federal agencies for clearing spectrum for commercial use and would require the NTIA to review federal receiver technology to support more intensive use of limited spectrum.    

]]>
https://broadbandbreakfast.com/2023/07/increase-us-competitiveness-with-china-through-ai-and-spectrum/feed/ 0 52487
Artificial Intelligence for Spectrum Sharing ‘Not Far Off,’ Says FCC Chair Rosenworcel https://broadbandbreakfast.com/2023/07/artificial-intelligence-for-spectrum-sharing-not-far-off-says-fcc-chair-rosenworcel/?utm_source=rss&utm_medium=rss&utm_campaign=artificial-intelligence-for-spectrum-sharing-not-far-off-says-fcc-chair-rosenworcel https://broadbandbreakfast.com/2023/07/artificial-intelligence-for-spectrum-sharing-not-far-off-says-fcc-chair-rosenworcel/#respond Thu, 13 Jul 2023 19:20:36 +0000 https://broadbandbreakfast.com/?p=52320 WASHINGTON, July 13, 2023 – The Federal Communications Commission is evaluating how artificial intelligence can be used to improve dynamic spectrum sharing, protect against harmful robocalls and improve the national broadband map.  

The FCC joined with the National Science Foundation in a forum Thursday to discuss how AI can be used to improve agency operations. Chairwoman Jessica Rosenworcel said that the points and solutions discussed during the event will help shape the FCC’s August open meeting. 

She pointed to spectrum sharing optimization as a major improvement possible through AI optimization. “Smarter radios using AI can work with each other without a central authority dictating the best use of spectrum in every environment,” she said, claiming that the technology is “not far off.”  

AI can be used to make congestion control decisions, which is a major opportunity for dynamic spectrum sharing, said Ness Shroff, director of an NSF AI institute, in a panel discussion. It can also be used to sense when federal agencies are using spectrum bands, allowing commercial use of federally owned spectrum without disrupting high-priority use.  
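
As a rough illustration of the sensing idea Shroff described, the toy simulation below has a commercial user measure energy on a band and transmit only when the incumbent appears idle. The noise floor, detection threshold and traffic pattern are assumptions made for the example, not any agency's actual parameters.

```python
# Illustrative sketch of sensing-based spectrum sharing, not any agency's actual system.
# A secondary (commercial) user measures energy on a federal band and transmits only
# when the incumbent appears idle; all numbers here are made up for the example.
import random

NOISE_FLOOR_DBM = -100.0
DETECTION_THRESHOLD_DBM = -90.0   # assumed margin above the noise floor


def sense_band(incumbent_active: bool) -> float:
    """Return a simulated received-power measurement in dBm."""
    noise = NOISE_FLOOR_DBM + random.uniform(-2.0, 2.0)
    return noise + (30.0 if incumbent_active else 0.0)


def may_transmit(measured_dbm: float) -> bool:
    """Allow secondary use only when measured energy stays below the threshold."""
    return measured_dbm < DETECTION_THRESHOLD_DBM


if __name__ == "__main__":
    for slot in range(5):
        incumbent = random.random() < 0.4   # incumbent occupies roughly 40% of slots
        power = sense_band(incumbent)
        decision = "transmit" if may_transmit(power) else "stay silent"
        print(f"slot {slot}: measured {power:.1f} dBm -> {decision}")
```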

This comes as the FCC is facing spectrum availability concerns. In its June open meeting, the FCC issued proposed rulemaking that explores how the 42–42.5 GHz spectrum band might be made available on a shared basis. 

As research progresses, we will see more uses of AI for the FCC and in the telecom field in general, Shroff concluded. 

Lisa Guess, senior vice president of solutions engineering at telecommunications equipment maker Ericsson, said that AI can be an important tool for producing a more granular national broadband map by flagging areas that are likely to be overreported and checking submitted data for accuracy and consistency.  
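
A minimal sketch of what such a consistency check could look like appears below; the per-technology speed caps are assumptions chosen for illustration, not Ericsson's or the FCC's actual methodology.

```python
# Illustrative sketch of a consistency check on coverage filings.
# The technology speed caps are hypothetical values for the example only.
ASSUMED_MAX_DOWN_MBPS = {"dsl": 100, "cable": 1200, "fiber": 10000, "fixed_wireless": 400}


def flag_inconsistent(records: list[dict]) -> list[dict]:
    """Return filings whose claimed speed looks implausible for the stated technology."""
    flagged = []
    for rec in records:
        cap = ASSUMED_MAX_DOWN_MBPS.get(rec["technology"])
        if cap is not None and rec["max_down_mbps"] > cap:
            flagged.append(rec)
    return flagged


if __name__ == "__main__":
    filings = [
        {"block_id": "060750201001", "technology": "dsl", "max_down_mbps": 940},
        {"block_id": "060750201002", "technology": "fiber", "max_down_mbps": 1000},
    ]
    for rec in flag_inconsistent(filings):
        print("Likely overreported:", rec["block_id"], rec["technology"], rec["max_down_mbps"])
```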

Shroff added that AI could analyze federal grant programs to determine how successful they are and find solutions for problem areas. 

Illegal robocalls can also be addressed through AI, which can flag calling patterns deemed suspicious and analyze voice biometrics for synthesized voices, said Alisa Valentin, senior director of technology and telecommunications policy at the National Urban League. Unfortunately, AI also makes it easier for bad actors to appear legitimate, she said, which is why the FCC needs to address new concerns as they appear.  
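
For illustration, a very simple pattern-based flag of the kind Valentin described might look like the sketch below; the call-volume and duration thresholds are assumptions for the example, not any carrier's or the FCC's actual rules.

```python
# Illustrative sketch of pattern-based robocall flagging; thresholds are hypothetical.
from collections import defaultdict


def flag_suspicious_callers(call_records, min_calls=100, max_avg_secs=10.0):
    """Flag numbers that place many calls with very short average durations."""
    stats = defaultdict(lambda: {"calls": 0, "secs": 0.0})
    for caller, duration_secs in call_records:
        stats[caller]["calls"] += 1
        stats[caller]["secs"] += duration_secs

    suspicious = []
    for caller, s in stats.items():
        if s["calls"] >= min_calls and s["secs"] / s["calls"] <= max_avg_secs:
            suspicious.append(caller)
    return suspicious


if __name__ == "__main__":
    # 250 four-second calls from one number, 3 normal calls from another.
    records = [("+15551234567", 4.0)] * 250 + [("+15557654321", 180.0)] * 3
    print(flag_suspicious_callers(records))
```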

Harold Feld, senior vice president at consumer advocacy group Public Knowledge, added that the FCC needs to recognize that AI is both a tool to be utilized and a source of potential concerns the agency must anticipate. He urged the FCC to develop regulations now that will prohibit its misuse in the future. 

Rosenworcel expressed her optimism about the future of AI in opening remarks. “Every day I see how communications networks power our world. I know how their expansion and evolution can change commercial and civic life. I also know the power of those communications networks can grow exponentially when we can use AI to understand how to increase the efficiency and effectiveness of our networks,” she said. 

Commissioner Nathan Simington added his support, emphasizing the need to maintain American headway as the technology leader of the world. “Most visions for a shared spectral future depend on one or another implementation of machine learning in automated frequency coordination,” he said. 

Simington also cautioned against crafting regulatory solutions to problems that do not yet exist, warning the cure may “be worse than the disease.” 

AI in telecommunications 

Not only is AI a game changer for the FCC, but it can also transform the way telecommunications companies run their businesses, said Jorge Amar, senior partner at global management consulting firm McKinsey and Company. AI can give companies hyper-personalized customer experiences, improve labor productivity, and improve internal network operations. 

Generative AI “has potential to continue to disrupt how AI transforms telecom companies,” added Amar. Almost every telecom company is starting to work with AI, which is increasing the value of the industry, he said — “it is here to stay.” 

AI also has a unique customer experience application for people with disabilities, said Amar: by predicting the likelihood that a particular customer will call customer service, a company can preemptively reach out to that customer and help address their pain points.  
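
A minimal sketch of the prediction step is shown below, using a generic logistic regression on made-up features rather than McKinsey's or any operator's real model.

```python
# Illustrative sketch: predict which customers are likely to call support so an operator
# could reach out first. Features, data and the 0.5 threshold are invented for the example.
from sklearn.linear_model import LogisticRegression

# Features per customer: [dropped_connections_last_week, days_since_last_bill_increase]
X = [[0, 120], [1, 90], [5, 3], [7, 2], [2, 30], [6, 1]]
y = [0, 0, 1, 1, 0, 1]   # 1 = customer called support within a week

model = LogisticRegression().fit(X, y)

new_customers = [[4, 5], [0, 60]]
for features, prob in zip(new_customers, model.predict_proba(new_customers)[:, 1]):
    if prob > 0.5:
        print(f"{features}: {prob:.0%} likely to call -> reach out proactively")
    else:
        print(f"{features}: {prob:.0%} likely to call")
```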

An easy application of AI that is already being deployed is chatbots, which can respond to consumers’ concerns in real time and limit the time spent waiting on hold or conversing with an employee, he added.  

Rosenworcel highlighted network resiliency in her remarks, saying that AI “can help proactively diagnose difficulties, orchestrate solutions, and heal networks on its own,” especially in response to weather events that create unforeseen technical problems. “That means operators can fix problems before they reach customers, and design them with radically improved intelligence and efficiency.” 

The House Subcommittee on Communications and Technology passed a bill Wednesday that would require the NTIA to examine accountability standards for AI systems used in communications networks, part of a broader push to enhance the transparency of the government’s use of AI to communicate with the public. 

]]>
https://broadbandbreakfast.com/2023/07/artificial-intelligence-for-spectrum-sharing-not-far-off-says-fcc-chair-rosenworcel/feed/ 0 52320
U.S. Needs Robust Semiconductor Workforce Training to Make Progress on More Chip Independence https://broadbandbreakfast.com/2023/07/u-s-needs-robust-semiconductor-workforce-training-to-make-progress-on-more-chip-independence/?utm_source=rss&utm_medium=rss&utm_campaign=u-s-needs-robust-semiconductor-workforce-training-to-make-progress-on-more-chip-independence https://broadbandbreakfast.com/2023/07/u-s-needs-robust-semiconductor-workforce-training-to-make-progress-on-more-chip-independence/#respond Mon, 10 Jul 2023 18:43:49 +0000 https://broadbandbreakfast.com/?p=52187 WASHINGTON, July 10, 2023 – Panelists at a Broadband Breakfast event raised concerns about workforce shortages in the country’s pursuit to become more independent in the sourcing of semiconductor chips. 

The U.S. semiconductor industry could face a shortage of about 70,000 to 90,000 workers over the next few years, according to a report from audit and consulting firm Deloitte. Consulting firm McKinsey also projected a shortfall of about 300,000 engineers and 90,000 skilled technicians in the United States by 2030.

Sign up for the Broadband Breakfast Club to access the complete videos from the Made in America Summit.

Maryam Rofougaran, cofounder and CEO of 5G chip manufacturer Movandi Corporation, pointed at the Broadband Breakfast Made in America Summit last Tuesday to a decline in interest among high school and college students in the field, which she said is leading to a shortage of skilled American workers to develop semiconductors. 

Rofougaran called for immigration policies to be more friendly as America continues to look for highly skilled people in the semiconductor field, citing her own personal journey of immigration from Iran. 

“I think immigration has been one of the greatest things for the US,” she said. 

Gene Irisari, head of semiconductor policy at Samsung, asked, “Where are all these workers going to come from? They can’t just come from the clusters where the semiconductor fabs are being created.

“It’s got to be really a national effort and encompassing whether it be universities, community colleges, there’s got to be standardized curriculum and there’s got to be a big push to get more students interested in engineering and then into the semiconductor industry or one of the areas that semiconductor feeds into,” he said. 

The CHIPS and Science Act, signed into law last summer, pours at least $52 billion in incentives into domestic manufacturing of the chips that are the brains of many important technologies, including future broadband builds. 

The comments came a day after the federal government allocated money from the Broadband Equity, Access and Deployment program to the states. The $42.5 billion is expected to help build out networks across the country.  

While there have been economic tensions between China and the U.S. amid America’s attempt to make itself less reliant on Asia for the technology, panelists were concerned about demands to completely decouple the country from China. 

“China is a large supplier of raw materials needed for manufacturing and a large consumer of microchips,” said Shawn Muma, director of supply chain innovation and emerging technologies, at the Digital Supply Chain Institute.

The panelists discussed America’s unwillingness to mine raw materials for itself. With no change to that on the horizon, they said, the country will need to keep relying on countries like China, Indonesia and Ukraine to do the “dirty” work of extracting and supplying the raw materials required to manufacture microchips.

“It doesn’t behoove anyone to completely decouple with China,” Irisari said. “And to add to that, no one country can assume that they can control all of the semi-market.” 

This comes after President Joe Biden and Indian Prime Minister Narendra Modi issued a joint statement that included a commitment by American semiconductor company Lam Research to train 60,000 Indian engineers through its Semiverse Solution virtual fabrication platform, thus increasing the labor pool. 

More recently on July 6, Treasury Secretary Janet Yellen went to China to meet with a new group of top economic policymakers led by Vice Premier He Lifeng.

Just days before she arrived in China, China’s Commerce Ministry announced forthcoming export controls on two metals used in the manufacturing of semiconductors.

The U.S. Treasury Department said in a statement that during the meetings, the Chinese side asked the U.S. to remove tariffs on Chinese goods and stop “pressuring” Chinese companies, among other items.

Sign up for the Broadband Breakfast Club to access the complete videos from the Made in America Summit.

]]>
https://broadbandbreakfast.com/2023/07/u-s-needs-robust-semiconductor-workforce-training-to-make-progress-on-more-chip-independence/feed/ 0 52187
Domestic Manufacturing and the CHIPS Race: Excerpts from Made in America Summit https://broadbandbreakfast.com/2023/07/domestic-manufacturing-and-the-chips-race-excerpts-from-made-in-america-summit/?utm_source=rss&utm_medium=rss&utm_campaign=domestic-manufacturing-and-the-chips-race-excerpts-from-made-in-america-summit https://broadbandbreakfast.com/2023/07/domestic-manufacturing-and-the-chips-race-excerpts-from-made-in-america-summit/#respond Wed, 05 Jul 2023 14:06:04 +0000 https://broadbandbreakfast.com/?p=52145 WASHINGTON, July 5, 2023 – Coming off the Made in America Summit on June 27, 2023, Broadband Breakfast has released the videos from the five panel sessions.

Below are a few excerpts from Panel 3, “Domestic Manufacturing and the CHIPS Race.” Visit the Made in America Summit page and join the Broadband Breakfast Club to access the complete Summit videos.

The CHIPS and Science Act provides $280 billion in funding to spur semiconductor research and manufacturing in the United States. Semiconductors are key components of consumer electronics, military systems and countless other applications, making a domestic supply chain critically important — particularly amid an increasingly hostile technological race with China. How successful will efforts be to bring semiconductor manufacturing to America?

  • Gene Irisari, Head of Semiconductor Policy, Samsung
  • Shawn Muma, Director of Supply Chain Innovation & Emerging Technologies, Digital Supply Chain Institute
  • Maryam Rofougaran, CEO and Co-Founder, Movandi Corporation
  • Rishi Iyengar (moderator), Global Technology Reporter, Foreign Policy

Will chip-making always be entangled with China?

Why Samsung is so supportive of the CHIPS and Science Act:

Why semiconductor production is the most important thing America can be doing:

Sign up for the Broadband Breakfast Club to access the complete videos from the Made in America Summit.

]]>
https://broadbandbreakfast.com/2023/07/domestic-manufacturing-and-the-chips-race-excerpts-from-made-in-america-summit/feed/ 0 52145
Senator Calls for Global Cooperation on Artificial Intelligence Regulation to Compete with China https://broadbandbreakfast.com/2023/06/senator-calls-for-global-cooperation-on-artificial-intelligence-regulation-to-compete-with-china/?utm_source=rss&utm_medium=rss&utm_campaign=senator-calls-for-global-cooperation-on-artificial-intelligence-regulation-to-compete-with-china https://broadbandbreakfast.com/2023/06/senator-calls-for-global-cooperation-on-artificial-intelligence-regulation-to-compete-with-china/#respond Tue, 20 Jun 2023 13:15:16 +0000 https://broadbandbreakfast.com/?p=51721 WASHINGTON, June 20, 2023 – Sen. Mark Warner, D-VA, called on western allies to collaborate on regulating artificial intelligence, warning China has gained a significant head start on that front.

China is “very much ahead of the game,” even surpassing Europe in implementing AI regulations, Warner said Thursday in a video interview for Politico’s Global Tech Day. China reportedly began its AI development plan in 2017.

“Many of us believe that we are in an enormous technology competition, particularly with China, and that national security means who wins the battle around AI” and other emerging technologies, he said, adding China might employ “inappropriate means to use AI on an offensive basis or on a misinformation or deceptive basis against the balance of the world.”

He proposed that the United States collaborate with its global allies, particularly the European Union, the United Kingdom and Japan, to establish a universal framework for regulating artificial intelligence. The EU recently passed a draft law known as the A.I. Act, while Senate witnesses have called on lawmakers to act on AI transparency.

Earlier in June, Warner joined Sens. Michael Bennet, D-Colo., and Todd Young, R-Ind., in introducing legislation to form an agency charged with increasing American competitiveness in the global tech arena, including the field of artificial intelligence.

Jonathan Berry, UK minister for AI and intellectual property, later reiterated the call for a unified approach to AI regulation during the summit, emphasizing the need to “arrive at the same landing zone.”

“From a UK’s perspective, we are very keen to offer thought leadership in this space,” he said.

The capacity of generative AI to quickly produce responses by accessing information from unregulated online datasets has raised concerns regarding data privacy, content bias and ethical applications. Legislators, tech leaders, and academics have all called on Congress to adopt guidelines for the safe and responsible development of AI.

]]>
https://broadbandbreakfast.com/2023/06/senator-calls-for-global-cooperation-on-artificial-intelligence-regulation-to-compete-with-china/feed/ 0 51721
Academics Call for Dedicated Agency for AI Regulation https://broadbandbreakfast.com/2023/06/academics-call-for-dedicated-agency-for-ai-regulation/?utm_source=rss&utm_medium=rss&utm_campaign=academics-call-for-dedicated-agency-for-ai-regulation https://broadbandbreakfast.com/2023/06/academics-call-for-dedicated-agency-for-ai-regulation/#respond Mon, 12 Jun 2023 19:04:57 +0000 https://broadbandbreakfast.com/?p=51643 WASHINGTON, June 12, 2023 – Panelists at an event last week recommended a dedicated government agency to oversee the regulation of artificial intelligence.

Ben Shneiderman, professor at the University of Maryland’s department of computer science, said he sees government agencies as the primary entities to take the lead in internet and AI regulation. He encouraged the involvement of accounting firms and insurance companies in auditing and regulating AI systems, emphasizing the need for collaboration among different players to address the complex challenges associated with AI.  

“The history of regulation shows that it can be very positive and a great trigger of innovation,” Shneiderman said at an event hosted by the Center for Data Innovation and R Street Institute. “It’s a big job. It’s going to take our attention for the next 50 years. And we need lots of players to participate.” 

Participants at the event discussed how agencies like the FAA, FTC and SEC are capable and well placed to understand the domains in which AI regulations would apply, though they agreed that a dedicated agency could ensure the safety and effectiveness of AI systems through stringent regulation before deployment. 

“There’s a lot of expertise with the current agencies,” said Lee Tiedrich, faculty fellow in ethical technology at Duke University. She said she would like the government to optimize current agencies and administrative structures before creating a new one. 

Generative AI creates original content using deep learning algorithms, mimicking human creativity by learning from data provided by humans.

Since the launch of OpenAI’s ChatGPT in November 2022, AI technology has advanced with more sophisticated language models and has been implemented across industries.

Experts are concerned about the machine’s impact on ethics, privacy, bias, and accountability as AI becomes more integrated into society.

]]>
https://broadbandbreakfast.com/2023/06/academics-call-for-dedicated-agency-for-ai-regulation/feed/ 0 51643
Bennet, Young, and Warner Propose Legislation to Enhance U.S. Technology Competitiveness https://broadbandbreakfast.com/2023/06/bennet-young-and-warner-propose-legislation-to-enhance-u-s-technology-competitiveness/?utm_source=rss&utm_medium=rss&utm_campaign=bennet-young-and-warner-propose-legislation-to-enhance-u-s-technology-competitiveness https://broadbandbreakfast.com/2023/06/bennet-young-and-warner-propose-legislation-to-enhance-u-s-technology-competitiveness/#respond Fri, 09 Jun 2023 15:05:36 +0000 https://broadbandbreakfast.com/?p=51627 WASHINGTON, June 9, 2023 – Citing threats from China, two Democratic and one Republican senator have introduced the Global Technology Leadership Act that would create an Office of Global Competition Analysis.

The new office would be tasked with assessing U.S. leadership in science, technology and innovation in advanced manufacturing, workforce development, supply chain resilience and research and development initiatives.

“We cannot afford to lose our competitive edge in strategic technologies like semiconductors, quantum computing, and artificial intelligence to competitors like China,” said Sen. Michael Bennet, D-Colo., one of the three sponsors, together with Mark Warner, D-Virginia, and Todd Young, R-Indiana.

The office’s priorities would be determined on a periodic basis by the director of the Office of Science and Technology Policy, the president’s assistants for economic policy and national security, and the heads of such other agencies as OSTP and the White House deem appropriate.

Bennet said that the office’s assessments would inform policymakers and help enhance American leadership in strategic innovation.

]]>
https://broadbandbreakfast.com/2023/06/bennet-young-and-warner-propose-legislation-to-enhance-u-s-technology-competitiveness/feed/ 0 51627
Greater Private Investments Will Supplement Federal Dollars Expended in Build America Initiative https://broadbandbreakfast.com/2023/06/greater-private-investments-will-supplement-federal-dollars-expended-in-build-america-initiative/?utm_source=rss&utm_medium=rss&utm_campaign=greater-private-investments-will-supplement-federal-dollars-expended-in-build-america-initiative https://broadbandbreakfast.com/2023/06/greater-private-investments-will-supplement-federal-dollars-expended-in-build-america-initiative/#respond Thu, 08 Jun 2023 22:26:19 +0000 https://broadbandbreakfast.com/?p=51608 WASHINGTON, June 8, 2023 – American investments in domestic manufacturing must be accompanied by private investment and ambition, said Jigar Shah, director of the Energy Department’s Loan Programs Office, at a Thursday event hosted by nonprofit newsroom Canary Media. 

Currently, private companies are not interested in financing manufacturing loans in the U.S., said Shah. He urged the private industry to show more ambition by investing in infrastructure programs as federal investments come down the pipeline. 

Don’t miss the discussion of the connection between green energy, semiconductor manufacturing and infrastructure investment at Broadband Breakfast’s Made in America Summit on June 27.

The Build America Buy America Act, strengthened as part of the Infrastructure Investment and Jobs Act of 2021, requires that all iron, steel, manufactured products and construction materials used in federally funded projects be produced in the U.S.

Additionally, Congress passed the Inflation Reduction Act of 2022, which invests $400 billion in federal funding in clean energy, and the CHIPS and Science Act, which invests $280 billion in U.S. domestic semiconductor manufacturing. Semiconductors are the microprocessors that power all electronic applications. 

These investments, paired with the $1.2 trillion Infrastructure Investment and Jobs Act, which funds various American infrastructure projects, play a central role in the administration’s strategy to revitalize American industry. They invest in a more sustainable, consistent, and dependable supply chain for the U.S. economy, said Shah. 

Investing in American manufacturing will increase investor confidence that the U.S. is capable of large manufacturing projects, he added. 

By passing these acts, Congress has moved forward to improve American manufacturing, said Shah. It is now up to private industry to make the most of these investments and reinvent itself to improve American global competitiveness. 

]]>
https://broadbandbreakfast.com/2023/06/greater-private-investments-will-supplement-federal-dollars-expended-in-build-america-initiative/feed/ 0 51608
Advocates for Connected Vehicle Technology Urge the FCC to Act https://broadbandbreakfast.com/2023/06/advocates-for-connected-vehicle-technology-urge-the-fcc-to-act/?utm_source=rss&utm_medium=rss&utm_campaign=advocates-for-connected-vehicle-technology-urge-the-fcc-to-act https://broadbandbreakfast.com/2023/06/advocates-for-connected-vehicle-technology-urge-the-fcc-to-act/#respond Thu, 08 Jun 2023 20:37:00 +0000 https://broadbandbreakfast.com/?p=51601 WASHINGTON, June 8, 2023 – Experts in automated vehicles are urging regulators to approve the implementation of cellular vehicle-to-everything technology, warning a lengthy regulatory process could stifle innovation.

In April, the Federal Communications Commission approved a joint waiver by 14 automakers and equipment manufacturers to use CV2X technology in the 5.9 GHz transportation safety band after nearly two years of review. Since then, numerous similar applications have been submitted and are awaiting review.

“The point of filing was to say, we don’t have time to wait until you finish with the rule making, FCC,” said Suzanne Tetreault, partner at the law firm Wilkinson Barker Knauer, a counsel to the 5G Automotive Association.

The industry’s shift from dedicated short-range communication to CV2X has prompted authorities to work out clear guidelines on the use of this emerging technology. While both allow vehicles to broadcast signals, CV2X enables a more robust connection between vehicles and infrastructure through high-speed cellular networks such as the 5G wireless standard.

These signals can be used to avoid collisions, reduce traffic congestion and support the development of driverless vehicles.
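
As a toy illustration of how such broadcast messages could feed a collision warning, the sketch below computes time-to-collision from a received position-and-speed message. Real C-V2X deployments use standardized over-the-air safety messages; the JSON message format and the four-second warning threshold here are invented for the example.

```python
# Toy sketch of how broadcast safety messages could feed a forward-collision warning.
# The message format and thresholds are hypothetical, not a C-V2X standard.
import json


def time_to_collision(own: dict, remote: dict) -> float | None:
    """Seconds until the gap closes, or None if the lead vehicle is pulling away."""
    gap_m = remote["position_m"] - own["position_m"]
    closing_speed = own["speed_mps"] - remote["speed_mps"]
    if gap_m <= 0 or closing_speed <= 0:
        return None
    return gap_m / closing_speed


if __name__ == "__main__":
    own_vehicle = {"id": "veh-1", "position_m": 0.0, "speed_mps": 30.0}
    # A message as it might arrive over the air from the vehicle ahead.
    received = json.loads('{"id": "veh-2", "position_m": 45.0, "speed_mps": 12.0}')
    ttc = time_to_collision(own_vehicle, received)
    if ttc is not None and ttc < 4.0:
        print(f"Collision warning: {ttc:.1f} s to impact at current speeds")
```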

The FCC is currently working with the National Telecommunications and Information Administration and the Department of Transportation to come up with final rules for the widespread use of CV2X technology in addition to spectrum allocation.

Charles Cooper, associate administrator from the NTIA, explained that regulators need to find “a common basis for technical evaluation,” saying “it may take time and effort, but the payoff is tremendous.”

Karen Van Dyke, a spectrum management official at the Department of Transportation, added taking time for regulation is necessary to ensure “zero fatalities.”

Experts in the field, however, pointed out that it is unrealistic to guarantee total safety before moving to the implementation phase. Instead, regulators should aim for more attainable, short-term goals or “low-hanging fruits.”

“You don’t have to solve the problems 100 percent,” said Bryan Mulligan, president of Applied Information Inc. “Let’s focus on vision 50 – how can we get 50 percent of the fatalities saved in the next five years.”

Trial and error are the only way to generate the innovation and data necessary to guarantee safety, according to experts.

“The key thing is moving quick to get deployed and taking those advantages to feed information to the other vehicles,” said John Kuzin, vice president of spectrum policy and regulatory counsel at chipmaker Qualcomm.

Meanwhile, the FCC is still waiting to regain its spectrum licensing authority, which has expired for the first time in the agency’s history.

]]>
https://broadbandbreakfast.com/2023/06/advocates-for-connected-vehicle-technology-urge-the-fcc-to-act/feed/ 0 51601
Debt Ceiling Law Doesn’t Change Administration Priorities on Semiconductors, Advanced Energy and Broadband https://broadbandbreakfast.com/2023/06/debt-ceiling-law-doesnt-change-administration-priorities-on-semiconductors-advanced-energy-and-broadband/?utm_source=rss&utm_medium=rss&utm_campaign=debt-ceiling-law-doesnt-change-administration-priorities-on-semiconductors-advanced-energy-and-broadband https://broadbandbreakfast.com/2023/06/debt-ceiling-law-doesnt-change-administration-priorities-on-semiconductors-advanced-energy-and-broadband/#respond Fri, 02 Jun 2023 14:19:45 +0000 https://broadbandbreakfast.com/?p=51374 WASHINGTON, June 2, 2023 — Perhaps the greatest surprise of the debt ceiling deal passed Thursday night by the Senate (and on Wednesday by the House) is that it leaves unscathed the Biden administration’s three top domestic priorities: the Inflation Reduction Act (August 2022), semiconductor promotion in the CHIPS and Science Act (July 2022), and the Infrastructure Investment and Jobs Act (November 2021).

Together, these measures will invest more than $2 trillion of federal funds into American manufacturing, infrastructure (including broadband) and advanced energy.

REGISTER FOR THE MADE IN AMERICA SUMMIT

As Broadband Breakfast’s Made in America Summit takes shape, we encourage you to register now to attend this important event on Tuesday, June 27, in Washington. The summit’s four sessions will explore the intersection of these vital big-picture topics:

  • (R)e-building Energy and Internet Infrastructure
  • Semiconductor Manufacturing and U.S.-Chinese Tech Race
  • Challenges to Reorienting America’s Supply Chain
  • Making Cleaner Energy and Enhancing Green Industry

The Inflation Reduction Act invests billions of dollars in clean energy projects that work to limit carbon emissions and other pollutants, including solar, wind, nuclear, clean hydrogen and more. But will its investments in clean energy founder on the lack of infrastructure deployment, or by delays in federal, state and local permitting? This session will also consider the intersection of “smart grid” infrastructure, long-haul and local, and the synchronicities between the broadband and energy economies.

• Lori Bird, U.S. Energy Program Director and Polsky Chair for Renewable Energy, World Resources Institute
• Xan Fishman, Director of Energy Policy and Carbon Management, Bipartisan Policy Center
• Quindi Franco, Assistant Director, Government Accountability Office
• Robert Glicksman, Professor of Environmental Law, George Washington University Law School
Other panelists have been invited

The CHIPS and Science Act provides $280 billion in funding to spur semiconductor research and manufacturing in the United States. Semiconductors are key components of consumer electronics, military systems and countless other applications, making a domestic supply chain critically important — particularly amid an increasingly hostile technological race with China. How successful will efforts be to bring semiconductor manufacturing to America?

• Gene Irisari, Head of Semiconductor Policy, Samsung
• Shawn Muma, Director of Supply Chain Innovation & Emerging Technologies, Digital Supply Chain Institute
• Maryam Rofougaran, CEO and Co-Founder, Movandi Corporation
• Rishi Iyengar (moderator), Global Technology Reporter, Foreign Policy
Other panelists have been invited

The Build America Buy America Act, part of the Infrastructure Investment and Jobs Act, established a domestic content procurement preference for all federally subsidized infrastructure projects. Although waivers of Buy America requirements have been proposed for certain projects — such as Middle Mile Grant Program recipients — it appears unlikely that these will be extended to initiatives such as the Broadband Equity, Access and Deployment program, despite requests and warnings from industry leaders. Although fiber-optic cable production is on the rise, significant issues remain in America’s semiconductor and electronic equipment supply. How will these issues be addressed in broadband and other infrastructure projects?

 Panelists to be announced

The Inflation Reduction Act establishes requirements for the use of American-made equipment in clean energy production. How will those requirements impact green energy development? How will the resulting projects interact with other ongoing infrastructure initiatives? What will it take for America to establish itself as a clean energy superpower?

 Panelists to be announced

Early-bird registration is $199 until Friday, June 9; government and Broadband Breakfast Club rates are also available.

Check back frequently to see updates on the Made in America Summit event page.

REGISTER FOR THE MADE IN AMERICA SUMMIT

]]>
https://broadbandbreakfast.com/2023/06/debt-ceiling-law-doesnt-change-administration-priorities-on-semiconductors-advanced-energy-and-broadband/feed/ 0 51374
U.S. Must Take Lead on Global AI Regulations: State Department Official https://broadbandbreakfast.com/2023/05/u-s-must-take-lead-on-global-ai-regulations-state-department-official/?utm_source=rss&utm_medium=rss&utm_campaign=u-s-must-take-lead-on-global-ai-regulations-state-department-official https://broadbandbreakfast.com/2023/05/u-s-must-take-lead-on-global-ai-regulations-state-department-official/#respond Wed, 31 May 2023 19:30:02 +0000 https://broadbandbreakfast.com/?p=51286 WASHINGTON, May 31, 2023 – A State Department official is calling for a United States-led global coalition to set artificial intelligence regulations.

“This is the exact moment where the US needs to show leadership,” Jennifer Bachus, assistant secretary of state for Cyberspace and Digital Policy, said last week on a panel discussing international principles on responsible AI. “This is a shared problem and we need a shared solution.”

She opposed pitting the U.S. and China against one another in the AI race, saying it would “ultimately always lead to a problem.” Instead, Bachus called for an alliance of the United States, the European Union, and Japan to take the lead in creating a legal framework to govern artificial intelligence.

The introduction of OpenAI’s ChatGPT late last year sent tech companies rushing to create their own generative AI chatbot systems. Competition between tech giants has heated up with the recent release of Google’s Bard and Microsoft’s Bing chatbot. Similar to ChatGPT in using a large language model, these chatbots can also access data from the internet to answer queries or carry out tasks.

Experts are concerned about the dangers posed by this unprecedented technology. On Tuesday, hundreds of tech experts and industry leaders, including OpenAI’s CEO Sam Altman, signed a one-sentence statement calling the existential threats presented by A.I. a “global priority” on par with “pandemics and nuclear conflicts.” Earlier in March, Elon Musk joined several AI experts signing another open letter urging for a pause on “giant AI experiments.”

Despite the pressing concerns about generative AI, there is rising criticism that policymakers are slow to put forth adequate legislation for this nascent technology. Panelists argued this is partly because legislators have difficulty understanding technological innovations. Michelle Giuda, director of the Krach Institute for Tech Diplomacy, argued for a more proactive contribution from the academic community and tech firms.

“There is a risk of relying too much on the government to regulate ahead of where innovation is going and providing the clarity that’s needed,” said Giuda. “We all know that the government isn’t going to stay ahead of the innovation curve, but this is an ongoing dialogue between tech companies, governments and civil society.”

Microsoft’s Chief Responsible AI Officer, Natasha Crampton, agreed that developers and experts in the field must play a central role in crafting and implementing legislation pertaining to artificial intelligence. She did, however, mention that businesses using AI technology should also share part of the responsibility.

“It is our job to make sure that safety and responsibility is baked into these systems from the very beginning,” said Crampton. “Making sure that you are really holding developers to very high standards but also deployers of technology in some aspects as well.”

Earlier in May, Sens. Michael Bennet, D-Colo., and Peter Welch, D-Vt., introduced a bill to establish a government agency to oversee artificial intelligence. The Biden administration also announced $140 million in funding to establish seven new National AI Research Institutes, increasing the total number of institutions in the nation to 25.

]]>
https://broadbandbreakfast.com/2023/05/u-s-must-take-lead-on-global-ai-regulations-state-department-official/feed/ 0 51286