Malaysia has a lot of catching up to do on how to tackle deepfakes and the potential threats that could arise with the use of AI, Khoo Ying Hooi writes.
Recently, a Forbes magazine article “Deepfakes Are Going To Wreak Havoc On Society. We Are Not Prepared” was widely shared in Malaysia.
The article briefly mentioned Malaysia as a case example: “… politicians in Malaysia and in Brazil have sought to evade the consequences of compromising video footage by claiming that the videos were deepfakes. In both cases, no one has been able to definitively establish otherwise – and public opinion has remained divided.”
As the author of the column, Rob Toews, rightly pointed out, the danger of deepfakes is that people find it extremely difficult to distinguish between what is real and what is fake.
This allows different actors to exploit and manipulate a situation, with potentially devastating consequences for the parties implicated or involved. In the end, whether the videos are real no longer matters; the damage may already have been done.
On the flip side, because authenticity is so difficult to identify and prove, some people may conveniently invoke the existence of deepfake technology to discredit genuine video evidence.
Deepfakes are audiovisual material generated with artificial intelligence and designed to look and sound just like the real thing. The technology allows anyone’s face to be superimposed onto a video to create a realistic effect.
Deepfakes have generated much worry across the globe, but such discourse remains limited within Malaysia.
In politics, deepfakes can easily be used to compromise or damage the reputation of political candidates and disrupt an election.
Beyond national politics, deepfakes might also be exploited to cause misunderstanding and distrust between countries, thereby posing a threat to diplomacy and national security.
What Toews pointed out is not new in Malaysia. Deepfakes became a buzzword for a short period when a sex video allegedly involving a cabinet minister surfaced in 2019. Haziq Abdullah Abdul Aziz’s confession that he was one of the men in the video sparked a debate over its authenticity.
The poor quality of the footage led some to conclude that it had been computer-generated or tampered with, fuelling further debate over whether the video was genuine. Official experts concluded that the images were not clear enough to positively identify the individuals featured.
In another instance, also in 2019, popular actor Zul Ariffin denied being in a pornographic video clip that went viral on social media. He lamented about being a victim of irresponsible individuals in this age of modern technology: “…since people have already been able to deepfake Mark Zuckerberg’s face, what more me?… It’s so easy to find visuals of me to do a deepfake like this. I never imagined that I’d be a victim of this sort of thing… I thought this only happens to politicians.”
Deepfakes are dangerous because they enable deliberate falsehoods to be created and spread under the guise of truth.
Most people have little knowledge about deepfakes. To make matters worse, the untrained eye finds it difficult to ascertain if a video is genuine – or fake.
The challenge is just as difficult, if not more so, when it comes to determining whether a news item is genuine or fake.
At times, the sharing of ‘fake news’ may be innocuous, as it may be motivated by a desire to share information the sender thinks is important. Nonetheless, the content shared is still fake – and the impact can be negative and damaging. Hence, it is vital for us to be responsible enough to check the authenticity of any material before sharing it, especially if it involves sensitive matters.
Where the intention is malicious, the deliberate spreading of ‘fake news’ to damage the reputation of some individual or group is much more appalling. This must be curbed and the perpetrators punished accordingly.
To tackle the problems arising from deepfakes, some countries are drawing up legislation to manage them. However, implementation remains a big challenge, as the subject matter is still new and such laws could be riddled with ambiguity.
Malaysia has a lot of catching up to do on how to tackle deepfakes and the potential threats that could arise with the use of AI.
While we have a national policy on the fourth industrial revolution, the discourse focuses on the economic sector, with little discussion about its social and political impacts. Moreover, the discourse seems to be largely confined to experts and practitioners.
While some human rights NGOs have explored the linkage between technology and its impact on human rights, the debates and discussions have been limited.
This leads to the question of what can be done to prevent deepfakes from being used and abused. Generally, the two solutions most commonly suggested thus far are:
- coming up with technological solutions that can detect deepfakes
- drawing up appropriate legislation to punish the producers and those who intentionally spread deepfakes
But these two proposed solutions could generate their own difficulties. To ensure legislation is not exploited, for instance, a rights-based approach must be adopted in the drafting process, and that is often challenging. Such legislation should not be used to prevent the truth from being revealed.
Over time, awareness of the challenges posed by deepfakes has gradually spread, but the pace has been too slow.
Low-quality deepfakes are easy to spot, but with technological advancement, it has become more difficult to determine authenticity. As deepfakes become more common and dangerously convincing, tackling the problem has grown more challenging.
The formulation of effective policies could help protect citizens from suffering the negative consequences of the abuse of new technologies such as AI.
In the meantime, we should not ignore the question of how best to protect the individual right to free expression while safeguarding society’s right to accurate information.
The authorities should be mindful that truth and justice should prevail and that technology is meant to improve the lives of the rakyat and should not be exploited to sabotage innocent victims.
Khoo Ying Hooi
Co-editor, Aliran newsletter
10 June 2020