Only one in four people in Singapore can accurately distinguish between deepfake and legitimate videos, despite the majority claiming confidence in their ability to do so. This gap between perception and reality was highlighted in the Cyber Security Agency of Singapore’s (CSA) latest Cybersecurity Public Awareness Survey, released on 2 July.
The 2024 edition of the survey included questions on deepfakes for the first time, reflecting growing concerns over the rise of generative artificial intelligence tools that can produce highly convincing fake content. The findings revealed a worrying discrepancy: nearly 80 per cent of respondents aged 15 and above said they were confident in identifying deepfakes, often citing clues such as unnatural lip movements and suspicious content. However, when tested, only 25 per cent could correctly tell apart real videos from manipulated ones.
David Koh, chief executive of CSA, emphasised the importance of caution in an era where cybercriminals continually develop new scam tactics. “Always stop and check with trusted sources before taking any action, so that we can protect what is precious to us,” he said.
Conducted in October 2024, the survey polled 1,050 individuals on their attitudes towards cybersecurity, including awareness of online threats and cyber hygiene practices. Knowledge of phishing attacks has improved—66 per cent identified all phishing content correctly, up from 38 per cent in 2022. However, only 13 per cent could correctly sort all phishing and legitimate content, a notable drop from 24 per cent in the previous survey, suggesting that greater wariness came at the cost of misjudging genuine content as suspicious.
The declining ability to differentiate between real and fake online content is unsurprising, according to Vladimir Kalugin, operational director at cybersecurity firm Group-IB. He pointed to the growing sophistication of cyber scams, where AI is used to spoof well-known brands convincingly, adopt flawless grammar, and even mimic multi-factor authentication prompts. “As the fake looks more like the real, even a more aware public faces greater difficulty making that final call,” Kalugin noted.
Scam tactics now also involve fake phone numbers, hijacked accounts of real people, and deepfake videos featuring public figures, which further enhance the credibility of malicious content. Kalugin warned that this erosion of digital trust is already impacting routine digital decisions such as clicking links or paying bills, threatening the broader efficiency of digital services and economic aspirations.
Despite these concerns, the survey also showed encouraging trends in cyber hygiene. The number of respondents who had installed at least one cybersecurity app rose to 63 per cent in 2024, up from 50 per cent in 2022. The adoption of two-factor authentication across online platforms also increased, reaching 41 per cent compared with 35 per cent in the previous survey.
In terms of device maintenance, 36 per cent of respondents said they installed updates immediately when prompted, while 32 per cent delayed them. Only a small minority—3 per cent—chose not to update their devices at all, a slight improvement from 4 per cent in 2022.
The findings underscore the need for continued public education and technological safeguards, as digital threats grow more nuanced and widespread.

