In today’s innovative landscape, alongside the continuous advancement of artificial intelligence (AI), generative AI has opened new horizons with its ability to create diverse and creative content. However, generative AI is also a “double-edged sword,” posing risks related to cybersecurity, privacy violations, and increased advertising fraud.
Innovation Driven by Generative AI
By generating text, images, audio, and other creative forms, generative AI has pushed past traditional boundaries and expanded human creative potential. It does this by “learning” patterns from input data in order to produce new, original outputs. As a result, generative AI has rapidly become one of the most important trends in AI today.
Generative AI has the ability to create diverse and creative content. (Illustrative image).
Generative AI works by identifying patterns and trends in input data, then using this information to create new content that may or may not resemble the original data. One influential architecture behind generative AI is the GAN (Generative Adversarial Network), which consists of two competing models: a generative model and a discriminative model. Drawing on the available training data, the generative model produces content intended to be as similar as possible to the examples in that dataset.
Next, the discriminative model evaluates each generated sample by estimating the probability that it came from the original dataset rather than from the generative model. Based on this feedback, the two models continuously adjust: the discriminator becomes better at spotting generated samples, and the generator becomes better at producing content that closely matches its training data.
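The adversarial loop described above can be illustrated with a minimal sketch in Python using NumPy. This is a toy example, not a production GAN: the “real” data is a 1-D Gaussian, the generator is a simple affine map from noise, the discriminator is logistic regression, and all hyperparameters (learning rate, step count, batch size) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a 1-D Gaussian the generator should imitate.
def real_batch(n):
    return rng.normal(loc=4.0, scale=1.0, size=(n, 1))

# Generator: affine map from noise z to a sample, parameters (a, b).
a, b = 1.0, 0.0
# Discriminator: logistic regression D(x) = sigmoid(w*x + c).
w, c = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.01
for step in range(2000):
    n = 64
    z = rng.normal(size=(n, 1))
    fake = a * z + b          # generated samples
    real = real_batch(n)

    # --- Discriminator update: push D(real) -> 1 and D(fake) -> 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    # Gradients of the binary cross-entropy loss w.r.t. w and c.
    grad_w = np.mean((d_real - 1) * real) + np.mean(d_fake * fake)
    grad_c = np.mean(d_real - 1) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator update: push D(fake) -> 1 (try to fool the discriminator).
    d_fake = sigmoid(w * fake + c)
    g = (d_fake - 1) * w      # gradient of -log D(fake) w.r.t. each fake sample
    a -= lr * np.mean(g * z)  # chain rule through fake = a*z + b
    b -= lr * np.mean(g)

# After training, generated samples should drift toward the real data (mean 4.0).
samples = a * rng.normal(size=(5000, 1)) + b
```

Each iteration mirrors the article’s description: the discriminator scores samples as real or generated, and both models adjust in opposite directions until the generated distribution approaches the training distribution.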
With this operational method, in the past year, generative AI has become a popular tool on the Internet. Specifically, the launch of ChatGPT, a powerful generative AI tool introduced by OpenAI in November 2022, has caused a stir in the AI and machine learning community. Additionally, rapid advancements in AI technologies such as natural language processing have made generative AI more accessible to users and large-scale content creators.
Consequently, major tech corporations have quickly joined this race: Google, Microsoft, Amazon, Meta, and many others have launched their own generative AI tools within a short period. A recent report from the cloud software company Salesforce (USA) revealed that 67% of senior IT managers are prioritizing generative AI for their business development in the next 18 months, with one-third (33%) viewing it as a top priority.
It can be said that the recent popularity of generative AI has opened up new opportunities for many fields, such as artistic creation, marketing, and education. The ability to automatically, quickly, and creatively generate content has helped industries leverage the development of this technology more effectively. Even in creating engaging advertising content or generating new design images, generative AI has contributed to changing how people interact with various aspects of daily life.
The Threats of Generative AI
While generative AI brings outstanding potential for creativity and content generation, it also opens up opportunities that hackers and malicious actors can exploit to conduct harmful cyberattacks. The risks arising from generative AI include the creation of fake content, impacting authenticity, trustworthiness, and user privacy. This is becoming a major challenge for cybersecurity and privacy.
Generative AI creates fake videos, images, and sounds with significant realism. (Illustrative image).
One of the main exploitations of generative AI in cyberattacks is deepfake technology, in which generative AI produces fake videos, images, and audio with a high degree of realism. This can spread misleading information, distort messages, and confuse the public; it can even damage reputations, sow confusion in society, and lead to serious consequences.
Generative AI can create messages, emails, and even automated conversations that mimic the tone and language of the target individual. This makes phishing attacks harder to recognize, causing recipients to be easily deceived and disclose important personal information.
Generative AI can also be used to produce various types of malware, source code, and malicious programs. Attackers can create sophisticated new malware capable of covering its tracks and evading detection by security tools. Additionally, fake emails and websites generated with AI make phishing campaigns more convincing and harder to distinguish from legitimate communications.
For the advertising industry, generative AI is contributing to a growing problem of advertising fraud. A recent report from DoubleVerify (DV, a software platform for digital media measurement, data, and analytics) analyzed data from more than 1 trillion video views and ad impressions across social media and connected television (CTV) worldwide. It found that fraud cases and new fraud variants increased by 23% over the past year, and that advertisers faced fraud and invalid-traffic rates of up to 17%.
Notably, 54% of advertising buyers in the APAC region say that the rise of low-quality content and Made-For-Advertising (MFA) sites is seriously threatening the digital ecosystem. MFA sites are built to inflate specific metrics such as clicks and view counts, which can make ads appear highly effective. However, DV found that MFA sites reduce overall engagement by 7% for display ads and by 28% for video ads compared with other advertising channels.
It is clear that while generative AI offers tremendous potential, it is also likened to a “double-edged sword,” opening up opportunities for more sophisticated and complex cyberattacks. The exploitation of generative AI to create fake information, execute phishing attacks, invade privacy, and increase advertising fraud has created a significant threat to information security and the privacy of individuals and businesses.
Therefore, cybersecurity experts advise users and businesses to stay vigilant about protecting their data: data should be shared carefully and deliberately, with security and management measures in place. Coordination among individuals, businesses, and governments will play a crucial role in shaping the future of generative AI, ensuring strong and sustainable development in the current era of digital transformation.