Safeguarding Indonesia's financial landscape against AI-assisted threats
25 June 2024


In June 2023, an audio clip was posted on X of Indonesian President Joko Widodo singing 'Asmalibrasi', a popular song by the Indonesian band Soegi Bornean. Even though the President is a known music fan, particularly of heavy metal, the clip still surprised the public, drawing over 5 million views on X and 188,000 likes on TikTok.

As it turned out, the clip was the work of an artificial intelligence (AI) model that mimicked his voice. While it was created for entertainment, it stoked fears that AI-generated content (AIGC) could be used to spread misinformation and even perpetrate scams.

Indeed, fuelled by a burgeoning digital economy and a large online population, the number of deepfakes, a form of AIGC, increased by 1,530% across the Asia-Pacific region between 2022 and 2023.

Financial fraud remains a clear and present concern in Indonesia, Southeast Asia's biggest economy. According to a FICO study, 64% of Indonesians have experienced scam attempts, and 36% fear identity theft. Otoritas Jasa Keuangan, Indonesia's Financial Services Authority, noted that incidents of financial fraud rose by 25% in 2023 alone. While the country has, at least for the time being, avoided high-profile AI- and deepfake-assisted financial scams, it is merely a matter of time.

Businesses have a key role to play in assuaging these well-founded concerns. In this article by Anggraini Rahayu, Director of Strategic Accounts at ADVANCE.AI Indonesia, we explore the opportunities and pitfalls of AIGC, the government's response to these threats, and how businesses can beat back the tide of AIGC-related fraud by taking a multi-layered approach to identity verification and authentication.

 


Anggraini (Anggi) Rahayu is the Director of Strategic Accounts at ADVANCE.AI Indonesia.

Ibu Anggi brings three decades of extensive leadership experience across AI, machine learning, analytics, financial services, IT, and retail industries. Prior to joining ADVANCE.AI, Ibu Anggi held senior management positions at PT SAS Institute, Diebold Nixdorf, and IBM Indonesia. 

Navigating the Double-Edged Sword of AI

AIGC has quickly become one of the biggest forces in technology. In the banking and financial services industry alone, AI is used for everything from fraud detection to generating personalised, real-time responses to customer inquiries. AI systems can even analyse customer data to provide tailored financial advice.

Unfortunately, though unsurprisingly, such opportunities have piqued the interest of fraudsters. Today, these fraudsters are crafting AIGC to create convincing digital impersonations of real individuals, allowing them to circumvent identity verification protocols. Vulnerable areas include digital bank account opening, loan and credit card applications, and e-commerce transactions. Such breaches pose considerable financial and reputational risks to individuals and organisations alike.

 

Bridging the Gap in Indonesia

The public sector plays a critical role in AI development and safety. Governments can help set regulatory standards, foster ethical guidelines, and ensure equitable access to technology benefits across society.

Indonesia's 2020-2045 National Artificial Intelligence Strategy is a national framework designed to streamline the country's existing technology-focused plans and projects, including those involving AI. The framework shows promise in many respects. For example, it outlines clear priorities for structured, focused AI development, as well as the need for a data ethics board and national standards for safer, more ethical AI. These are strong, confident steps in the right direction.

However, gaps persist in the scheme. For instance, details about comprehensive regulations that address all facets of AI development, such as data privacy, intellectual property, and cross-border data flows, are lacking. Also, while the framework does mention the use of AI to bridge the digital divide, there is far less emphasis on educating the public on AI and AIGC-related risks.

 

Business Stepping Up to the Challenge

Yet, all of this takes time, and while the Government of Indonesia plots its next steps, the private sector must step up as well.

Many companies have implemented electronic Know-Your-Customer (eKYC) measures to digitally authenticate customers, but fraudsters have refined their methods too. For instance, they might use physical or digital artefacts, such as photographs, highly detailed masks, digitally generated images, or videos, to trick the camera of a smartphone or laptop. There are even entire websites, such as OnlyFake, that let users generate fake ID documents, including passports and driver's licences.

Fortunately, there is an arsenal of ready-to-deploy tools to keep fraudsters at bay, especially on the digital onboarding front. The key is for businesses to take a multi-layered approach to identity verification and authentication.

Tool #1: Forgery Detection

The first countermeasure is forgery detection. While facial recognition technology has come a long way, it still cannot independently verify the authenticity of an identification card. A solution, then, is to combine it with forgery detection to significantly enhance identity verification processes.

For instance, ID forgery detection solutions use machine learning models to identify common types of forged or altered ID documents, which not only facilitates remote completion of the eKYC process but also strengthens the security posture of the entire organisation.
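To make the idea concrete, below is a minimal sketch of one classic forensic signal, error-level analysis (ELA), which can hint that parts of a JPEG scan of an ID document were digitally edited. The file name, quality setting, and threshold are illustrative assumptions, and this is only one of many signals a production forgery detection model would combine (fonts, holograms, MRZ checksums, print patterns, and so on), not a description of any vendor's actual pipeline.

```python
# Illustrative error-level analysis (ELA): one simple forensic signal for
# spotting digitally altered regions in a JPEG scan of an ID document.
# Recompressing the image and diffing it against the original highlights
# regions that respond unusually to recompression, e.g. pasted-in text.
from PIL import Image, ImageChops
import io

def ela_score(path: str, quality: int = 90) -> float:
    """Return the maximum per-channel recompression difference (0-255)."""
    original = Image.open(path).convert("RGB")

    # Re-save the image at a known JPEG quality into memory.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise absolute difference between original and recompressed copy.
    diff = ImageChops.difference(original, resaved)
    extrema = diff.getextrema()  # one (min, max) pair per colour channel
    return max(channel_max for _, channel_max in extrema)

if __name__ == "__main__":
    score = ela_score("id_card_scan.jpg")  # hypothetical input file
    # The threshold below is purely illustrative; a real system would feed
    # many such signals into a trained machine learning model instead.
    print("possible tampering" if score > 60 else "no obvious tampering", score)
```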

Tool #2: Liveness Detection

Liveness detection can also counteract the rise of AIGC fraud. By integrating facial recognition with liveness detection, we can ensure that the individual logging into the account is a live person as opposed to a 2D printout, video, or AI-generated face.
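As an illustration, here is a minimal sketch of one common active liveness signal, blink detection based on the eye aspect ratio (EAR), assuming a webcam and the openly available dlib 68-point facial landmark model. The threshold and frame count are illustrative assumptions; commercial liveness solutions layer many more passive and active checks (texture analysis, depth, challenge-response) on top of anything this simple.

```python
# A minimal sketch of blink-based (active) liveness detection.
# A flat photo or 2D printout held up to the camera never blinks,
# so observing a blink is one weak signal that a live person is present.
import cv2
import dlib
from scipy.spatial import distance

EAR_THRESHOLD = 0.21  # illustrative: below this, the eye is treated as closed

detector = dlib.get_frontal_face_detector()
# Pretrained 68-point landmark model from dlib.net (downloaded separately).
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def eye_aspect_ratio(points):
    """Eye aspect ratio (EAR): drops sharply when the eye closes."""
    a = distance.euclidean(points[1], points[5])
    b = distance.euclidean(points[2], points[4])
    c = distance.euclidean(points[0], points[3])
    return (a + b) / (2.0 * c)

def detect_blink(frames_to_check: int = 150) -> bool:
    """Return True if at least one blink is seen within the given frames."""
    capture = cv2.VideoCapture(0)
    blinked = False
    for _ in range(frames_to_check):
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            landmarks = predictor(gray, face)
            # Points 36-41 and 42-47 are the two eyes in the 68-point scheme.
            left = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(36, 42)]
            right = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(42, 48)]
            ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
            if ear < EAR_THRESHOLD:
                blinked = True
    capture.release()
    return blinked

if __name__ == "__main__":
    print("liveness check passed" if detect_blink() else "liveness check failed")
```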

Tool #3: Real-time Monitoring System

Finally, continuous monitoring and anomaly detection are also key. Deploy robust real-time monitoring systems that identify suspicious bot- or human-driven interactions, detect abnormal spikes in activity, and flag potential attacks.
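For example, a real-time monitor can flag a sudden burst of failed verification attempts by comparing each interval's count against a rolling baseline. The sketch below uses a simple z-score rule; the window size, threshold, and event type are illustrative assumptions rather than a prescribed configuration, and production systems typically run such logic inside dedicated streaming monitoring pipelines.

```python
# A minimal sketch of real-time spike detection on onboarding traffic,
# using a rolling mean and standard deviation (z-score) over recent counts.
from collections import deque
from statistics import mean, stdev

class SpikeDetector:
    def __init__(self, window: int = 60, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent per-interval event counts
        self.z_threshold = z_threshold

    def observe(self, count: int) -> bool:
        """Record one interval's event count; return True if it is a spike."""
        is_spike = False
        if len(self.history) >= 10:          # need some baseline first
            mu = mean(self.history)
            sigma = stdev(self.history) or 1e-9
            is_spike = (count - mu) / sigma > self.z_threshold
        self.history.append(count)
        return is_spike

if __name__ == "__main__":
    detector = SpikeDetector()
    # Simulated per-minute counts of failed liveness checks from one IP range.
    for count in [4, 5, 6, 5, 4, 6, 5, 5, 4, 6, 5, 4]:
        detector.observe(count)
    print(detector.observe(5))   # False: within the normal range
    print(detector.observe(80))  # True: a burst worth flagging for review
```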

 

Every Contribution Counts

The Pandora's box of AI is now open. Yesterday, it was a fun, relatively harmless clip of a singing President. In May 2024, it was reported that a Hong Kong-based employee of Arup, a multinational design and engineering firm, had fallen prey to a deepfake scam and transferred US$25 million to the perpetrators. What will tomorrow bring for AIGC-assisted threats?

Even as fraudsters attempt to harness the power of AI for nefarious purposes, the public and private sectors must work together, learn, and adapt to the ever-evolving threat landscape. By embracing deeper and more meaningful collaborations, we can harness the best of both worlds: the private sector's rapid technological innovations and the public sector's commitment to safety and ethics.

Ultimately, everybody has a stake, and every contribution counts. Together, we are not just reacting to threats; we are proactively building a safer, smarter, more resilient business landscape for all.

Speak to our experts today to learn how to enhance your digital onboarding solutions, and let us help you mitigate risks to your business and grow in a scalable, secure manner.
