
S’pore pilots world’s first AI governance testing framework to reinforce transparency among companies

Singapore is constantly knee-deep in digitalisation and technology development. Pioneering advancements are launched or piloted so frequently that they almost rival the pace at which Apple releases new phone models.

Market players and businesses began their journey of adopting machine learning and AI for their products and services long ago. As consumers, however, we are none the wiser, content with the end product sold to us in the market.

While we settle for that, government agencies are seeing the need for, and importance of, consumers understanding the implications of AI systems and their overall transparency.

The growing number of products and services embedded with AI has further cemented the importance of driving transparency in AI deployments through various technical and process checks.

In line with this growing concern, Singapore recently launched AI Verify, the world’s first AI governance testing pilot framework and toolkit.

Developed by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), the toolkit is considered a step towards creating a global standard for AI governance.

This recent launch followed the 2020 launch of the Model AI Governance Framework (second edition) in Davos, and the National AI Strategy in 2019.

How does AI Verify work?

Image Credit: Adobe Stock

The initial toolkit sounds promising. It packages a set of open-source testing solutions, inclusive of process checks, into a single toolkit for efficient self-testing.

AI Verify delivers technical testing against three principles: fairness, explainability and robustness.
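To give a sense of what such technical testing involves, here is a minimal sketch of one common fairness metric, demographic parity. This is a generic illustration in Python; the function, toy data, and group labels are hypothetical and do not reflect AI Verify's actual interface.

import numpy as np

def demographic_parity_difference(y_pred, sensitive_attr):
    # Absolute gap in positive-prediction rates between groups.
    rates = [y_pred[sensitive_attr == g].mean() for g in np.unique(sensitive_attr)]
    return max(rates) - min(rates)

# Toy predictions for eight applicants across two hypothetical groups.
y_pred = np.array([1, 0, 1, 1, 0, 0, 1, 0])
group = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

gap = demographic_parity_difference(y_pred, group)
print(f"Demographic parity gap: {gap:.2f}")  # group A: 0.75, group B: 0.25, gap 0.50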

Essentially a one-stop shop, the toolkit offers a common platform for AI system developers to showcase test results and conduct self-assessments against their products’ commercial requirements. The process is designed to be hassle-free, with the end result being a complete report for developers and business partners, detailing the areas which could affect their AI systems’ performance.
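Robustness testing can be sketched just as simply: perturb a model’s inputs with small amounts of noise and measure how often its predictions flip. Again, the stand-in model and noise scale below are assumptions for illustration, not AI Verify’s actual test procedure.

import numpy as np

rng = np.random.default_rng(0)

def predict(X):
    # Stand-in for a trained model: a fixed linear decision rule.
    return (X @ np.array([0.8, -0.5]) > 0).astype(int)

X = rng.normal(size=(100, 2))  # toy feature matrix
baseline = predict(X)

# Run 20 noisy trials and record the fraction of predictions that change.
flip_rates = [
    (predict(X + rng.normal(scale=0.05, size=X.shape)) != baseline).mean()
    for _ in range(20)
]
print(f"Mean prediction flip rate under noise: {np.mean(flip_rates):.1%}")

A low flip rate suggests the model is stable under small input perturbations; a high one flags fragility worth investigating.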

The toolkit is currently available as a Minimum Viable Product (MVP), offering just enough features for early adopters to test and provide feedback for further product development.

Ultimately, AI Verify aims to determine the transparency of AI deployments, assist organisations in their AI-related ventures and in assessing the products or services they present to the public, and guide interested AI investors through the technology’s benefits, risks, and limitations.

Finding the technology loophole

The functions and end goal of AI Verify seem pretty straightforward. However, as with every new technological advancement, there is usually a loophole.

Potentially, AI Verify can facilitate the interoperability of AI governance frameworks and help organisations plug gaps between those frameworks and regulations. It all sounds promising: transparency at your fingertips, responsible self-assessment, and a step towards a global standard for AI governance.

However, the MVP is not able to define ethical standards; it can only validate claims made by AI system developers or owners about the approach, use, and verified performance of their AI systems.

It also does not guarantee that any AI system tested under its pilot framework will be completely safe, and free from risks or biases.

With these limitations, it is hard to tell how AI Verify will benefit stakeholders and industry players in the long run. How will developers ensure that the data entered into the toolkit for self-assessment is accurate, and not based on hearsay? Every proper experiment deserves a fixed control, and I think AI Verify has quite a technological journey ahead of it.

Perhaps this all-in-one development fits better as a supplemental control alongside our existing voluntary AI governance frameworks and guidelines. One can utilise the toolkit, yet still depend on a checklist to further assure the assessment’s credibility.

As Bert Lance said, “If it ain’t broke, don’t fix it.” In AI Verify’s case, one might add: work on it.
Google and Meta are among the companies that have tested AI Verify / Image Credit: Reuters

Since its launch, the toolkit has been tested by companies across different sectors: Google, Meta, Singapore Airlines, and Microsoft, to name a few.

The 10 companies that gained early access to the MVP provided feedback that will help shape an internationally applicable toolkit, reflecting industry needs and contributing to international standards development.

Developers are continually enhancing and improving the framework. At present, they are working with regulators and standards bodies, involving tech leaders and policymakers, to map AI Verify to established AI frameworks. This would allow businesses to offer AI-powered products and services in global markets.

Featured Image Credit: Avanade
