June 17, 2021

Volume XI, Number 168

The CPSC Digs In on Artificial Intelligence

American households are increasingly connected internally through the use of artificially intelligent appliances.1 But who regulates the safety of those dishwashers, microwaves, refrigerators, and vacuums powered by artificial intelligence (AI)? On March 2, 2021, at a virtual forum attended by stakeholders from across the industry, the Consumer Product Safety Commission (CPSC) reminded us all that it has the final say on regulating the safety of AI and machine learning consumer products.

Evolving Regulatory Landscape

The CPSC is an independent agency composed of five commissioners who are nominated by the president and confirmed by the Senate to serve staggered seven-year terms. With the Biden administration’s shift away from the deregulation agenda of the prior administration and three potential opportunities to staff the commission, consumer product manufacturers, distributors, and retailers should expect increased scrutiny and enforcement.2

The CPSC held the March 2, 2021 forum to gather information on voluntary consensus standards, certification, and product-specification efforts associated with products that use AI, machine learning, and related technologies. Consumer product technology is advancing faster than the regulations that govern it, even with a new administration moving towards greater regulation. As a consequence, many believe that the safety landscape for AI, machine learning, and related technology is lacking. The CPSC, looking to fill the void, is gathering information through events like this forum with a focus on its next steps for AI-related safety regulation.

To influence this developing regulatory framework, manufacturers and importers of consumer products using these technologies must understand and participate in the ongoing dialogue about future regulation and enforcement. While guidance in these evolving areas is likely to be adaptive, the CPSC’s developing regulatory framework may surprise unwary manufacturers and importers who have not participated in the discussion.

The CPSC defines AI as “any method for programming computers or products to enable them to carry out tasks or behaviors that would require intelligence if performed by humans” and machine learning as “an iterative process of applying models or algorithms to data sets to learn and detect patterns and/or perform tasks, such as prediction or decision making that can approximate some aspects of intelligence.”3 To inform the ongoing discussion on how to regulate AI, machine learning, and related technologies, the CPSC provides the following list of considerations:

  • Identification: Determine presence of AI and machine learning in consumer products. Does the product have AI and machine learning components?

  • Implications: Differentiate what AI and machine learning functionality exists. What are the AI and machine learning capabilities?

  • Impact: Discern how AI and machine learning dependencies affect consumers. Do AI and machine learning affect consumer product safety?

  • Iteration: Distinguish when AI and machine learning evolve and how this transformation changes outcomes. When do products evolve/transform, and do the evolutions/transformations affect product safety?4

These factors and corresponding questions will guide the CPSC’s efforts to establish policies and regulations that address current and potential safety concerns.

Potential Regulatory Models

As indicated at the March 2, 2021 forum, the CPSC is taking some of its cues for its fledgling initiative from organizations that have promulgated voluntary safety standards for AI, including Underwriters Laboratories (UL) and the International Organization for Standardization (ISO). UL 4600, the Standard for Safety for the Evaluation of Autonomous Products, covers “fully autonomous systems that move such as self-driving cars along with applications in mining, agriculture, maintenance, and other vehicles including lightweight unmanned aerial vehicles.”5 Using a claim-based approach, UL 4600 aims to acknowledge the deviations from traditional safety practices that autonomy requires by assessing the reliability of the hardware and software necessary for machine learning, the ability to sense the operating environment, and other safety considerations of autonomy. The standard covers topics like “safety case construction, risk analysis, safety relevant aspects of the design process, testing, tool qualification, autonomy validation, data integrity, human-machine interaction (for non-drivers), life cycle concerns, metrics and conformance assessment.”6 While UL 4600 mentions the need for a security plan, it does not define what should be in that plan.

Since 2017, ISO has had an AI working group of 30 participating members and 17 observing members.7 This group, known as SC 42, is a subcommittee of JTC 1, the joint technical committee of ISO and the International Electrotechnical Commission (IEC), and develops international standards in the area of AI and for AI applications, providing guidance to JTC 1 and other ISO and IEC committees. As a result of its work, ISO has published seven standards that address AI-related topics and sub-topics, including AI trustworthiness and big data reference architecture.8 Twenty-two standards remain in development.9

The CPSC might also look to the European Union’s (EU) recent activity on AI, including a twenty-six-page white paper published in February 2020 that includes plans to propose new regulations this year.10 On the heels of the General Data Protection Regulation, the EU’s regulatory proposal is likely to emphasize privacy and data governance in its efforts to “build[ ] trust in AI.”11 Other areas of emphasis include human agency and oversight, technical robustness and safety, transparency, diversity, non-discrimination and fairness, societal and environmental wellbeing, and accountability.12

***

Focused on AI and machine learning, the CPSC is contemplating potential new consumer product safety regulations. Manufacturers and importers of consumer products that use these technologies would be well served to pay attention to—and participate in—future CPSC-initiated policymaking conversations, or risk being left behind or disadvantaged by what is to come.

-------------------------------------------------------

1 See Craig S. Smith, A.I. Here, There, Everywhere, N.Y. Times (Feb. 23, 2021), https://www.nytimes.com/2021/02/23/technology/ai-innovation-privacy-seniors-education.html.

2 Erik K. Swanholt & Kristin M. McGaver, Consumer Product Companies Beware! CPSC Expected to Ramp up Enforcement of Product Safety Regulations (Feb. 24, 2021), https://www.foley.com/en/insights/publications/2021/02/cpsc-enforcement-of-product-safety-regulations.

3 85 Fed. Reg. 77183-84.

4 Id.

5 Underwriters Laboratories, Presenting the Standard for Safety for the Evaluation of Autonomous Vehicles and Other Products, https://ul.org/UL4600 (last visited Mar. 30, 2021). It is important to note that autonomous vehicles fall under the regulatory purview of the National Highway Traffic Safety Administration. See NHTSA, Automated Driving Systems, https://www.nhtsa.gov/vehicle-manufacturers/automated-driving-systems.

6 Underwriters Laboratories, Presenting the Standard for Safety for the Evaluation of Autonomous Vehicles and Other Products, https://ul.org/UL4600 (last visited Mar. 30, 2021).

7 ISO, ISO/IEC JTC 1/SC 42, Artificial Intelligence, https://www.iso.org/committee/6794475.html (last visited Mar. 30, 2021).

8 ISO, Standards by ISO/IEC JTC 1/SC 42, Artificial Intelligence, https://www.iso.org/committee/6794475/x/catalogue/p/1/u/0/w/0/d/0 (last visited Mar. 30, 2021).

9 Id.

10 See Commission White Paper on Artificial Intelligence, COM (2020) 65 final (Feb. 19, 2020), https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

11 European Commission, Policies, A European approach to Artificial Intelligence, https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence (last updated Mar. 9, 2021).

12 Commission White Paper on Artificial Intelligence, at 9, COM (2020) 65 final (Feb. 19, 2020), https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf.

© 2021 Foley & Lardner LLP
National Law Review, Volume XI, Number 90

About this Author

Kristin M. McGaver, Litigation Attorney, Foley & Lardner, Milwaukee, WI
Associate

Kristin McGaver is an associate and litigation lawyer with Foley & Lardner LLP. She is a member of the firm’s Business Litigation & Dispute Resolution Practice.

Kristin was a law clerk for the Honorable Harry D. Leinenweber, U.S. District Court for the Northern District of Illinois.

During law school, Kristin was a summer associate at Foley and a student attorney with the Hennepin County Attorney’s Office in its mental health division in Minneapolis, Minnesota. She was also a judicial extern for the Honorable John R. Tunheim, U.S. District Court for the District of...

414-297-5022
Erik K. Swanholt, Litigation Attorney, Foley & Lardner
Partner

Erik Swanholt is a partner and litigation attorney with Foley & Lardner LLP. Mr. Swanholt has substantial experience in a broad range of litigation matters, with an emphasis on product liability, pharmaceutical defects, complex commercial and consumer class action litigation, toxic torts, as well as cybersecurity, privacy, and data protection. He has defended individual and class action product liability and toxic tort claims in a variety of industries, including consumer products, fashion, pharmaceuticals, off-road vehicles, industrial safety equipment, asbestos,...

213.972.4614