How AI is Reshaping Our Security and Privacy

Artificial Intelligence is one of the most important fields of technology today, and its effects on many parts of our society are revolutionary. But alongside its many benefits, AI also enables a new level of cybercrime and fraud, as criminals turn its tools to nefarious purposes. Around the world, criminals are using AI-generated images, videos, and audio to trick people into handing over money. In February 2024, for example, it was reported that fraudsters had faked a video conference call with a finance worker at the engineering firm Arup, using AI to impersonate the company's CFO and other colleagues. Convinced by the deepfake, the worker transferred roughly 25 million dollars to the criminals.

AI and the Spread of Misinformation

AI has also had an impact on politics worldwide. In 2022, a fake video appeared to show Ukrainian president Volodymyr Zelensky telling Ukrainian soldiers to surrender and stop fighting the war against Russia. The video was quickly debunked, and few are likely to have believed it, but future AI systems could produce far more convincing fakes and spread misinformation at an unprecedented scale. Governments, political parties, and other powerful actors could use such tools to gain an advantage and spread false information.

AI, Surveillance, and Corporate Privacy Concerns

Privacy concerns around AI are prevalent as well. Large corporations such as Microsoft have been criticized for using AI in ways that invade users' privacy. One controversial example was Microsoft's AI-powered "Recall" feature, a program that takes screenshots every five seconds to build a searchable "visual timeline," using AI to extract useful information from those screenshots. The backlash was almost immediate, and Microsoft changed Recall from a default setting to an opt-in feature. The program also raises a broader issue: if AI can automatically detect sensitive information, such as passwords or financial details, in screenshots, that capability could be used against people. Although Microsoft said it does not access or view the screenshots, Recall still represents an unprecedented level of digital surveillance for the everyday person.

Global Cases and the Need for AI Regulation

Microsoft is not the only company criticized for harvesting users' data for its AI models. In September 2024, Meta admitted to scraping the public posts and photos of Australian adult Facebook users to train its Llama AI models, without offering Australian users an opt-out. The admission came soon after the country proposed a ban on social media for children, over concerns about its negative effects. The revelation also prompted watchdog groups to scrutinize other social media and AI companies: Google, for example, has been investigated by the Irish Data Protection Commission over similar data protection concerns, and in 2023 ChatGPT was temporarily banned by Italy's data protection regulator over data protection and privacy issues. These actions show how important it is to regulate what data corporations collect and how they use it, especially when training AI models.

Toward a Safer AI Future

Each of these events is a growing reminder of the threats posed by Artificial Intelligence and of how this technology, if left unchecked, can cause harm across the globe. Most concerning of all, AI is still advancing rapidly. In the near future, it may pose even greater risks, from breaches in computer security to threats to individual privacy.

As AI systems become more advanced, they may blur the line between what is real and what is artificially generated, making it easier to deceive, manipulate, or even commit crimes. That’s why strong regulations are essential. Without them, AI could be used in increasingly malicious ways, spreading misinformation, violating privacy, or causing harm. To protect our future, we must act now, ensuring that AI is developed ethically, used responsibly, and guided by values that put people first.

Closing Note: 

This blog is part of #OwnTheAlgorithm, iFp’s Emerging Innovators campaign to rethink how AI is built—and who it’s built for. We invite young people and the communities they’re part of to question AI systems, claim their role in its development, and build a future where AI reflects our values, not just profit.

Question it. Own it. Build it.

Sources

https://edition.cnn.com/2024/05/16/tech/arup-deepfake-scam-loss-hong-kong-intl-hnk/index.html
