4 Ways Generative AI May Lead To More Cyber Frauds in Malaysia
Of the many AI tools available, generative AI will likely stand out as a potent tool for attackers, enabling impersonation and identity theft tactics.
In 2024, organisations in Malaysia will continue to face a challenging cybersecurity landscape amidst technological advancement, as enterprises lean on Artificial Intelligence (AI) and Machine Learning (ML), the cloud, and Web3 technologies for strategic advantage.
As these innovations become key battlefronts, both defenders and cybercriminals will exploit their transformative potential, leading to a turbulent cyber landscape.
Business leaders in Malaysia must demonstrate adaptability, reassess workflows, and leverage technological investments to navigate the evolving threat landscape detailed in Trend Micro Malaysia's cybersecurity report.
1. Generative AI will allow fraudsters to level up their social engineering lures in targeted attacks
Like organisations worldwide, many in Malaysia are eager to harness the advantages of this year's most thrilling technology — AI. As with many emerging technologies, AI's dual nature as both a tool and a potential threat will become more pronounced in the social dimension of cyberattacks, expected to unfold in 2024.
Of the many AI tools available, generative AI will likely stand out as a potent tool for attackers, enabling impersonation and identity theft tactics. For example, attackers are already cleverly combining AI tools like chatbots and voice clones to create multifaceted threats like virtual kidnapping.
Virtual kidnapping leverages AI technologies and misinformation to deceive victims into paying a ransom for a loved one who was supposedly kidnapped. In a disturbing trend that has ensnared many Malaysians, scammers are defrauding victims under the guise of job offers and investment plans.
Similarly, AI-enabled deepfakes (a portmanteau of "deep learning" and "fake") cannot yet be both cheap and convincing for attackers. It is therefore more likely that voice cloning will appear in near-future scams in a targeted way, rather than in volume-based attacks.
In 2024, we'll also see continued debate around AI regulations and safety. Global regulatory bodies are proactively responding to the advancements in technology, with recent legislation emerging in China and anticipated regulations from the European Union.
There are growing calls for an international summit focused on AI regulations as well. In the meantime, policymakers are exploring ways to leverage existing data protection laws to address AI misuse.
The Malaysian government is actively working towards introducing the Cybersecurity Bill and reinforcing its defences through the Malaysian Cyber Security Strategy to combat evolving cyber threats.
However, as these regulations are still in the pipeline, adopting a zero-trust approach becomes crucial for defenders to safeguard against social engineering scams, particularly those exploiting convincing deepfakes.
2. Security gaps in cloud environments will see successful cloud-based worm attacks
In recent years, we've seen cloud adoption in Malaysia expanding rapidly, driven by government initiatives and the proactive efforts of the Malaysia Digital Economy Corporation.
As dependence on the cloud increases, organisations in Malaysia must remain watchful for emerging vulnerabilities and threats linked to it. In 2024, we anticipate an increase in worms tailored to exploit cloud-native platforms and technologies.
This risk extends to all organisations, irrespective of their maturity in cloud adoption, as "living off the cloud" attacks become more prevalent. More specifically, attacks in Kubernetes environments will become more automated and specialised.
Why will organisations need to brace themselves for worms in 2024?
Firstly, worms can propagate rapidly in cloud environments, especially through misconfigured APIs. Overly complicated infrastructures, configuration drift, and improperly configured development environments make such misconfigurations easy entry points for attackers.
Secondly, attackers can use compromised environments as a stepping stone to infect others.
Thirdly, cybercriminals will be able to scale up and automate their attacks, which is ideal for large-scale attacks on cloud environments.
Thus, organisations must proactively scan their cloud environments for signs of potential worm activity and practise zero trust where possible.
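As an illustration of that kind of proactive scanning, the sketch below flags firewall-style ingress rules that expose sensitive ports to the entire internet — the sort of misconfiguration that gives cloud-based worms an easy entry point. The rule format, rule names, and port list are illustrative assumptions for this sketch, not any specific cloud provider's API.

```python
# Hypothetical misconfiguration scanner: flags ingress rules that open
# a sensitive port to any source address (0.0.0.0/0). The rule schema
# here is invented for illustration, not a real cloud provider's format.

# Ports commonly abused when exposed: SSH, Docker API, Kubernetes API, kubelet.
SENSITIVE_PORTS = {22, 2375, 6443, 10250}


def find_risky_rules(rules):
    """Return the rules that expose a sensitive port to the whole internet."""
    return [
        rule
        for rule in rules
        if rule.get("source") == "0.0.0.0/0" and rule.get("port") in SENSITIVE_PORTS
    ]


# Example inventory: one acceptable public rule (HTTPS), one risky rule
# (SSH open to the world), and one sensitive port restricted to a private range.
example_rules = [
    {"name": "allow-web", "port": 443, "source": "0.0.0.0/0"},
    {"name": "allow-ssh-anywhere", "port": 22, "source": "0.0.0.0/0"},
    {"name": "allow-k8s-internal", "port": 6443, "source": "10.0.0.0/8"},
]

for rule in find_risky_rules(example_rules):
    print(f"RISKY: {rule['name']} exposes port {rule['port']} to the internet")
```

Real environments would feed this kind of check from the cloud provider's configuration APIs and run it continuously; dedicated Cloud Security Posture Management tools automate exactly this class of audit.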
3. Data will be weaponised against fledgling cloud-based ML models
Whereas security concerns about the broader cloud environment stem from threats like misconfigurations and cloud-native attacks that give attackers access to stored data, defending ML models introduces security teams to a distinct set of challenges that threaten the integrity of the data itself.
In 2024, cloud-based ML faces a growing threat known as data poisoning, where attackers manipulate the data used to train models.
Defenders face challenges as these models draw data from diverse sources like third-party data lakes and federated learning systems. Threat actors can orchestrate attacks during data collection or compromise a model's storage and pipeline infrastructure.
Specialised models trained on narrow, focused datasets are particularly vulnerable compared with large language models and generative AI models trained on extensive data.
To guard against this, organisations must validate all training data, secure cloud-stored data with encryption, employ secure transfer methods, implement role-based access control, utilise Cloud Security Posture Management tools, and regularly audit cloud infrastructure for anomalies. This will help to detect and thwart data tampering, misconfigurations, and suspicious activity.
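One simple form of the training-data validation mentioned above is integrity checking: fingerprinting each record with a cryptographic hash at collection time, then comparing against that trusted manifest before training, so tampering in a shared data lake is detected. This is a minimal sketch under assumed record and manifest formats, not a complete defence against data poisoning.

```python
import hashlib


def fingerprint(record: str) -> str:
    """SHA-256 hash of a training record, used as a tamper-evident fingerprint."""
    return hashlib.sha256(record.encode("utf-8")).hexdigest()


def find_tampered(records, manifest):
    """Return indices of records whose hash no longer matches the trusted manifest."""
    return [
        i
        for i, (rec, expected) in enumerate(zip(records, manifest))
        if fingerprint(rec) != expected
    ]


# Build a manifest when the data is collected and verified...
clean = ["cat,animal", "tulip,plant", "salmon,animal"]
manifest = [fingerprint(r) for r in clean]

# ...later, a poisoned copy of the dataset has one label flipped.
poisoned = ["cat,animal", "tulip,animal", "salmon,animal"]
print(find_tampered(poisoned, manifest))  # -> [1]
```

In practice the manifest itself must be stored separately from the data lake and protected with the access controls described above; otherwise an attacker who can alter the records can alter the hashes too.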
4. Attacks will become more sophisticated as ransomware groups forge alliances
Over the past few years, ransomware groups have adapted by changing their targets, upgrading tools and techniques, and devising innovative methods to enhance profits and efficiency. Recent observations, with a forecast into the next year, indicate a significant shift in the cybercriminal landscape, as groups increasingly collaborate and form alliances.
Notably, connections between entities like Royal and the emerging player Mimic with the larger Conti ransomware group have emerged. Such collaborations are becoming increasingly popular as they lead to lower costs and expanded market presence while maintaining criminal effectiveness.
Another noteworthy trend involves the strategic targeting of vulnerabilities in Internet-facing servers or those hosting critical business information.
An illustrative example is the Clop ransomware group's exploitation of the MOVEit file transfer service, particularly against government agencies, which indicates an advanced understanding of the software running on targeted services and a preference for vulnerabilities that require less lateral movement.
To stay ahead of these evolving threats, organisations must adopt proactive strategies, including a risk-based approach, to identify and disrupt potential intrusions upstream in the kill chain.
This proactive stance not only enhances threat visibility but also empowers organisations to detect malicious behaviour before encryption, improving the likelihood of successful prevention.
Advanced unified cybersecurity platforms equipped with sophisticated Extended Detection and Response (XDR) capabilities play a crucial role, providing businesses with comprehensive visibility to understand, communicate, and mitigate risks across the entire attack surface.
This story is a personal opinion of the writer and does not necessarily reflect the position of SAYS.
You too can submit a story as a SAYS reader by emailing us at [email protected].
Goh Chee Hoh is the Managing Director of Trend Micro Malaysia.