
As generative AI technology continues to push boundaries in content creation – from text to images, videos, and even music – the need for regulation is becoming more apparent by the day. 

For example, creating deepfakes has become surprisingly easy using apps such as Reface and FaceApp. AI studios such as Deep Voodoo have proven how convincing these videos can be, with little to indicate that the content isn’t actually real. 

In the wrong hands, such media can be a grave source of misinformation. Almost anyone’s likeness can be used to recite scripts and spread messages against their will. Scammers have already started using deepfakes to convince victims that their friends and family members are in need of money. 

In Singapore, the Infocomm Media Development Authority (IMDA) has published a discussion paper detailing the risks posed by generative AI technology and a potential framework which can be used to address these threats. Here’s a look at the key points identified in the paper:

The risks of generative AI technology

Mistakes made by AI tools such as Bard and ChatGPT are widely documented on social media. Generative models are only as accurate as the data they are trained on, and as such, they may mislead users from time to time.

Currently, these models are unable to convey uncertainty and can appear overly confident about false responses. Any program built on top of these models stands to echo the same mistakes until the underlying model is corrected.

Beyond factual inaccuracies, language models are susceptible to biases as well. For example, when given the prompts ‘a doctor’ and ‘a nurse’, the Stable Diffusion image generator produced images of men for the former and women for the latter.

Without correction, AI models could perpetuate the stereotypes and biases seen in the data which they are trained on. 

Images generated by AI prompts / Image Credits: IMDA’s discussion paper on generative AI
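For readers who want to probe this effect themselves, a quick way is to run the same occupation prompts through the open-source diffusers library. The sketch below is illustrative only, assuming a Hugging Face Stable Diffusion checkpoint and a CUDA GPU; the model ID and generation settings are assumptions, not those used in IMDA’s paper, and outputs will vary between runs.

```python
# Illustrative sketch: generating images for occupation prompts to review for gendered outputs.
# Assumes the Hugging Face diffusers library and a CUDA-capable GPU.
import torch
from diffusers import StableDiffusionPipeline

# The model ID here is an assumption for illustration purposes.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

for prompt in ["a doctor", "a nurse"]:
    # Generate a small batch per prompt and save the images for manual review.
    images = pipe(prompt, num_images_per_prompt=4).images
    for i, img in enumerate(images):
        img.save(f"{prompt.replace(' ', '_')}_{i}.png")
```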

In a similar vein, AI tools must also be aligned with human values. Otherwise, as the technology develops, so will the risk of AI causing more harm than good.

Next, there are concerns of privacy and confidentiality which need to be addressed. “There are risks to privacy if models ‘memorise’ wholesale a specific data record and replicate it when queried,” IMDA’s discussion paper states. 

Say a company uses an AI tool for financial calculations, inputting sensitive data such as revenue generated and goods sold. The AI tool might then memorise this data and reveal the company’s financial information to external users who ask for it. 

Finally, the discussion paper touches upon the challenges of training AI on copyrighted material. To protect creators, there’s a need to define what’s fair game.

For instance, it might be okay for AI to produce a summary of a book, but what if users request the contents of specific pages? The same questions arise when it comes to replicating an artist’s style or a singer’s voice.

A framework for generative AI governance

To foster a trusted AI ecosystem, IMDA’s discussion paper outlines a set of dimensions which must be addressed by policymakers. First off, there’s a need for accountability among those developing AI language models. 

End users, as well as businesses building on top of these models, need to be aware of the design choices made by AI developers. For example, there needs to be transparency around the type of information used to train a language model. This can help users gauge the risks and biases a model might inherit.

Policymakers can put this into motion by enforcing standardised metrics – a set of criteria against which language models can be monitored and evaluated. The paper advocates for evaluation by a third party that can provide objective assessments of AI tools.

Beyond this, there’s a need for users to be able to identify AI content. In the past, AI-generated images – such as that of Pope Francis in a puffer jacket – have gone viral, causing confusion surrounding their legitimacy. Such images can be a key source of misinformation and need to be addressed by regulation.

A viral AI-generated image of Pope Francis in a puffer jacket.

The paper recommends enforced watermarking of AI-generated content so that consumers can make more informed decisions.
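To illustrate the idea in its simplest form, the hypothetical sketch below attaches a provenance label to a generated image using Pillow’s PNG metadata support. Real watermarking schemes are far more robust – typically invisible and tamper-resistant signals embedded in the pixels themselves – so this is only a minimal sketch of labelling, not a description of any scheme proposed in the paper.

```python
# Minimal sketch: labelling an AI-generated image with provenance metadata.
# Real watermarking is more robust than a metadata tag; this only illustrates the concept.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src_path: str, dst_path: str, generator: str) -> None:
    """Copy an image, attaching PNG text chunks that declare it AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("generator", generator)
    img.save(dst_path, pnginfo=meta)

# Hypothetical usage: tag a model's output before publishing it.
label_as_ai_generated("output.png", "output_labelled.png", generator="example-model-v1")
```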

Finally, given the rapid pace of AI development, safety research must catch up. Policymakers should invest in technology developed to keep AI systems under control and ensure that they don’t cause harm.

Along with this, there’s also a need for education programs which teach consumers about the responsible use of AI and direct innovation towards public good.

IMDA launches AI Verify Foundation

Taking a step towards a safer AI-led future, IMDA set up the AI Verify Foundation in June.

The foundation features leading industry players – including Google, IBM, and Microsoft – overseeing the development of the AI Verify testing tool, which will help ensure the responsible use of this technology. 

AI Verify uses standardised tests to measure the performance of AI systems against a set of internationally recognised principles, covering aspects such as safety, transparency, and fairness.

Since the tool requires continuous development, IMDA has made it available to the open-source community. Through the AI Verify Foundation, IMDA aims to harness contributions from the global community and continue building AI Verify for future use.

Featured Image Credit: Brookings Institution
