While consensus on the precise approach to artificial intelligence (AI) regulation in the U.S. remains elusive, early warning voices are calling for more comprehensive federal regulation and legislation, mainly to protect privacy. Some states have taken the lead by enacting their own laws, such as California's deepfakes legislation, albeit for a limited duration; that legislation is intended to assess the risks and uses of deepfakes in the state. Questions persist about enforcement mechanisms and potential fines, and whether they will mirror the strict measures seen in the EU AI Act.
In 2024, AI will continue to stand out as a dominant and pervasive topic, particularly the impact of generative AI on the enforcement of privacy, securities, and antitrust laws. The legal landscape is further complicated by the number of copyright disputes making their way through the judicial system.
AI regulation is poised to take center stage next year, demanding accountability from companies of all sizes, as seen in President Biden's Executive Order issued on October 30, 2023. For example, the EO suggests implementing red teams for generative AI solutions, particularly in high-risk areas such as consumer health and safety, privacy of sensitive information (health, financial, identity authentication, biometrics, etc.), and employment application screening and hiring. The upcoming year marks an important juncture in AI regulation, with major players already committing to responsible AI practices. Nevertheless, some critics view these efforts as falling short of the steps taken by the European Union.
In a groundbreaking move, the EU recently adopted the AI Act, ushering in new restrictions on AI use cases and mandating transparency from companies, including OpenAI, regarding data usage. The EU AI Act is expected to be finalized by the end of the year and, pending final EU procedures, will likely enter into law sometime in early 2024. In contrast, the United States has put forth plans and statements, such as the AI Bill of Rights released in October 2022 and the Biden Administration's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Despite these initiatives, concerns linger about enforceability and binding measures.
Another cause for concern is the disproportionate attention given to the largest corporations in AI regulation discussions, which sidelines the startups and smaller companies making great strides in AI. Achieving a fair and comprehensive regulatory landscape requires including these smaller entities in the dialogue and subjecting them to the same scrutiny as their larger counterparts.
As we approach the year's end, there is both caution and excitement about regulating AI. Despite concerns about potential risks and challenges associated with AI, numerous examples highlight its transformative power and positive impact across various sectors. In healthcare, AI has played a pivotal role in diagnosing diseases more accurately and expediting drug discovery. In the financial industry, AI-driven algorithms have improved fraud detection and enhanced personalized financial services. Additionally, the field of autonomous vehicles showcases AI's potential to revolutionize transportation, making it safer and more efficient. These examples emphasize the need to shift the narrative from apprehension to enthusiastic exploration of AI's vast opportunities, encouraging a collaborative effort to shape its future impact globally.
Some are calling for 2024 to be designated the Year of AI Regulation in the U.S., emphasizing the need for comprehensive accountability by governing bodies, irrespective of a company’s size. Advocates stress the importance of upholding collective responsibility and moving beyond incremental progress.
While there have been huge strides in generative artificial intelligence, artificial general intelligence remains a distant prospect, and its capacity to displace human logic or governance systems is even further off. Beyond speculation about robots taking over the world or computers turning the earth into a prison for other life forms, we do not have much hard evidence that unregulated AI poses a significant risk to society.
AI is a technology still in its early stages of development. There is much we do not understand about how AI works, so attempts to regulate it could easily prove counterproductive, stifling innovation and slowing progress in this rapidly developing field. Any regulations issued are likely to be tailored to existing practices and players, which makes little sense when it is not yet obvious which AI technologies will prove most successful or which AI players will become dominant in the industry. Nonetheless, with the potential impact of AI on society coming into focus, expect 2024 to bring continued incremental regulatory steps as new AI applications come to market.