Singapore Invites International Feedback on Model Governance Framework for Generative AI
Friday, January 19, 2024

On 16 January 2024, Singapore published a consultation paper[1] to seek feedback, both domestically and internationally, on a proposed Model AI Governance Framework for Generative AI.

The paper addresses nine “dimensions” pertaining to generative AI, namely:

  • Accountability

This involves allocating responsibility, including towards end users, across all layers of the AI development chain.

  • Data

Because data is a core element of AI model development, issues pertaining to its quality, including copyright infringement and privacy, are relevant and important.

  • Trusted Development and Deployment

From model development to application deployment, standards should be put in place for safe and trustworthy development, evaluation and “food label”-type transparency and disclosure.

  • Incident Reporting

Establishing regulatory notification practices can help facilitate timely remediation of any incidents.

  • Testing and Assurance

Third-party testing and assurance can serve to develop common and consistent standards around AI, and ultimately build trust with end users.

  • Security

Existing frameworks for information security need to be adapted, and new testing tools developed, to address risks posed by generative AI.

  • Content Provenance

Transparency about where and how content is generated is necessary to counter misinformation and fraud. Technical solutions such as digital watermarking and cryptographic provenance must be considered in the right context.

  • Safety and Alignment Research & Development (R&D)

Accelerated R&D investment is required to improve model alignment with human intention and values. Singapore hopes to achieve this alongside global cooperation among AI safety R&D institutes.

  • AI for Public Good

Democratising AI access, improving public sector adoption, upskilling workers, and developing systems sustainably will steer AI towards outcomes for the public good.

Earlier versions of the Model AI Governance Framework, released by Singapore in 2019 and updated in 2020[2], addressed certain risks associated with AI, such as bias, misuse and lack of explainability. The more recent surge of interest in generative AI, however, has warranted closer examination of other risks, such as hallucination, copyright infringement, and value alignment. These concerns were flagged in a white paper titled Discussion Paper on Generative AI: Implications for Trust and Governance[3] issued in June 2023.

The consultation closes on 15 March 2024.

Disclaimer: While every effort has been made to ensure that the information contained in this article is accurate, neither its authors nor Squire Patton Boggs accepts responsibility for any errors or omissions. The content of this article is for general information only, and is not intended to constitute or be relied upon as legal advice.
 


[1] Proposed Model AI Governance Framework for Generative AI – Fostering a Trusted Ecosystem, AI Verify Foundation

[2] Model Artificial Intelligence Governance Framework, Second Edition, Personal Data Protection Commission, Singapore

[3] Generative AI: Implications for Trust and Governance, Infocomm Media Development Authority
