
Post University Blog

By Franklin Orellana, Chair of the Data Science Program at Post University:

What directs our lives is not the algorithms; it is the people behind the algorithms. Artificial Intelligence (AI) is a powerful tool that can bring great benefits to society, but it can also bring risks. The same AI that detects computer fraud is used by cybercriminals to steal our identities or our online banking information. The same facial recognition systems used to prevent crimes can infringe on our right to privacy. This dual nature of AI underscores the importance of educating students not only about its potential advantages but also about the associated risks.

Perhaps the most sophisticated threat, however, consists of new, personalized manipulation strategies. Neural networks are very efficient at predicting human behavior and exploiting our wants and needs. AI can know what we want before we know it ourselves and, as a result, can pose a real threat to our capacity for autonomous decision-making, especially as it becomes more mainstream and accessible to everyday individuals.

This growing accessibility of AI tools and technologies underscores the need for education that goes beyond surface-level benefits. It’s critical for universities to equip students with a comprehensive understanding of AI’s inner workings, its ethical implications, and its potential threats in order to foster a generation of digital natives who can harness its power while safeguarding against its risks.

This educational foundation should begin with the fundamentals. With that grounding, students will be empowered to use this technology both responsibly and ethically, starting with detecting scammers, preventing cyber-attacks, and protecting data servers.

How scammers use AI

Scammers utilize face-swapping and voice-synthesis technology. The threat is serious enough that in China, using this technology to create fake videos, audio, or text that disseminates false information is considered an illegal act.

Another very common scam is carried out using ChatGPT, an AI tool that has gained widespread popularity among students. In fact, 30% of college students reported using ChatGPT for schoolwork this past academic year. Unfortunately, scammers have been known to use the OpenAI bot to impersonate legitimate platforms and convince students to hand over their login credentials.

Thanks to advances in artificial intelligence, it is now possible to create fake audio and video messages that are incredibly difficult to distinguish from the real thing. These “deepfakes” could be a boon for hackers crafting AI-generated phishing emails, pairing highly realistic fake video and audio with messages designed to trick people into handing over passwords.

Poisoning the AI defenses

Fortunately, security companies have been quick to adopt artificial intelligence models to help anticipate and detect cyber-attacks. However, sophisticated hackers could attempt to corrupt these defenses.

Artificial intelligence can help defenders separate genuine attack signals from background noise, but in the wrong hands, AI can also generate highly sophisticated attacks. Generative adversarial networks, or GANs, in which two neural networks train against each other, can be used to attempt to deduce the algorithms that defenders use in their AI models.

Another risk is that hackers may identify the datasets used to train models and tamper with them, such as altering the labels of malicious code samples to make them appear safe and non-suspicious.
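To make the label-tampering idea concrete, here is a minimal, self-contained sketch in Python. Everything in it is an illustrative assumption: the data is synthetic (random numeric clusters standing in for features of benign and malicious code), and the 1-nearest-neighbor "detector" is a deliberately simple stand-in for a real security model. It shows how relabeling malicious training samples as benign blinds the resulting detector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training data: benign samples cluster near 0, malicious near 4.
# These stand in for feature vectors extracted from code samples.
benign = rng.normal(0.0, 1.0, size=(100, 5))
malicious = rng.normal(4.0, 1.0, size=(100, 5))
X_train = np.vstack([benign, malicious])
y_train = np.array([0] * 100 + [1] * 100)  # 0 = benign, 1 = malicious

# Fresh test samples drawn from the same two distributions.
X_test = np.vstack([rng.normal(0.0, 1.0, (50, 5)),
                    rng.normal(4.0, 1.0, (50, 5))])
y_test = np.array([0] * 50 + [1] * 50)

def knn_predict(X_train, y_train, X):
    """1-nearest-neighbor: predict the label of the closest training sample."""
    dists = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=2)
    return y_train[dists.argmin(axis=1)]

# Trained on clean labels, the detector separates the clusters easily.
clean_acc = (knn_predict(X_train, y_train, X_test) == y_test).mean()

# Poisoning: the attacker relabels the malicious training samples as benign,
# without touching the feature vectors themselves.
y_poisoned = y_train.copy()
y_poisoned[y_train == 1] = 0

# Fraction of truly malicious test samples the poisoned detector still flags.
detected = (knn_predict(X_train, y_poisoned, X_test[y_test == 1]) == 1).mean()

print(f"clean detector accuracy: {clean_acc:.2f}")
print(f"malicious samples detected after poisoning: {detected:.0%}")
```

The feature vectors never change; only the labels do, which is what makes this kind of tampering hard to spot by inspecting the samples themselves.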

Attacks from the computing cloud

Companies that host other companies’ data on their servers or manage clients’ IT systems remotely become tempting targets for hackers. By breaching these companies’ systems, hackers can also gain access to customers’ systems.

Large cloud companies like Amazon and Google can afford to invest heavily in cybersecurity defenses and offer competitive salaries to attract some of the best talent in the industry. That doesn’t make them immune to breaches, but it does mean hackers are more likely to target smaller companies.

In this evolving, digital landscape, it’s clear that the synergy between innovation, education, and vigilance will be the key to ensuring that AI continues to benefit society while safeguarding against its potential risks. By fostering a generation of responsible and informed individuals, we can collectively shape a future where AI enriches our lives, rather than compromising our security and privacy.