Image Credit: Pixabay
Artificial intelligence, or AI, marks a revolutionary turning point for our society. This dynamic, fast-moving field of technology has enabled us to live and interact in ways that were once unimaginable. Look around and you will see just how often our routines intertwine with machine learning algorithms, cloud computing platforms, virtual reality, and image processing. From healthcare and environmental sustainability to education and transportation, AI holds the promise of a more efficient and technologically advanced lifestyle.
Take Google Translate, the multilingual machine translation service developed by Google. Whenever we need a quick translation, we pull out our phones, record a phrase or point the camera at an image for live translation, and watch the magic happen. This is the work of deep neural networks paired with natural language processing (NLP), the method by which computers are programmed to analyze human languages. With the many prospective benefits of such advanced machinery, however, comes a darker side. A frightening dilemma has emerged, one that regularly grabs news headlines: the "deepfake."
Deepfakes, which superimpose existing images and videos onto source images or videos using a machine learning technique known as the generative adversarial network (GAN), introduce us to the dangers and threats of media manipulation. A GAN takes in photos and videos of a person, typically in extremely large quantities, and is "trained" on these inputs. It can then generate new images and videos that are nearly indistinguishable from authentic content. This new form of image alteration makes us question whether what we see is indeed real, and it undermines the credibility of anything in the media, from news sources to online platforms to politics. Since anyone can find the tools to produce a manipulated video, deepfake technology undoubtedly opens doorways for malicious intent, public shaming, identity theft, and fraud.
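The adversarial "training" described above can be sketched in miniature. The toy below is purely an illustration, not a real deepfake system: instead of faces, the "real data" is a one-dimensional Gaussian, and both the generator and the discriminator are single-parameter models (all names and numbers here are invented). The structure, though, is the same as in a real GAN: the discriminator learns to score real versus generated samples, while the generator learns to fool it.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy "real" data: draws from N(4, 1). A face-swapping GAN trains on
# thousands of photos; a 1-D Gaussian keeps the adversarial loop readable.
def real_sample():
    return random.gauss(4.0, 1.0)

# Generator G(z) = gw*z + gb turns noise into samples (it must learn the
# data's mean and spread); discriminator D(x) = sigmoid(da*x + dc) scores
# how "real" a sample looks.
gw, gb = 1.0, 0.0
da, dc = 0.0, 0.0
lr, n = 0.01, 32

for step in range(3000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    grad_a = grad_c = 0.0
    for _ in range(n):
        r = real_sample()
        f = gw * random.gauss(0, 1) + gb
        pr, pf = sigmoid(da * r + dc), sigmoid(da * f + dc)
        grad_a += (1 - pr) * r - pf * f
        grad_c += (1 - pr) - pf
    da += lr * grad_a / n
    dc += lr * grad_c / n

    # Generator step: adjust gw, gb so the discriminator is fooled
    # into scoring generated samples as real.
    grad_w = grad_b = 0.0
    for _ in range(n):
        z = random.gauss(0, 1)
        pf = sigmoid(da * (gw * z + gb) + dc)
        grad_w += (1 - pf) * da * z
        grad_b += (1 - pf) * da
    gw += lr * grad_w / n
    gb += lr * grad_b / n

print(f"generated mean ~= {gb:.2f} (real data mean is 4.0)")
```

As training alternates, the generator's output drifts toward the real distribution, which is exactly why mature GAN output becomes "nearly indistinguishable" from authentic content: the generator is optimized against a critic whose whole job is to spot fakes.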
The rapid pace at which deepfake production is growing is concerning, given its capability to influence politics by fabricating a person's words and actions. According to CNN, there are currently at least 14,678 deepfakes on the internet, and counting. It has also been found that individuals and businesses have begun making custom deepfakes to order and selling them for profit. So, what is being done to combat the rising exploitation of deepfake technology?
The Pentagon, through the Defense Advanced Research Projects Agency (DARPA), is working with researchers and universities to hinder the spread of deepfakes by finding ways to train computers to identify them. Organizations such as Deeptrace likewise aim to re-establish trust in visual media by detecting and monitoring deepfakes using deep learning.
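At its core, "training computers to identify them" means supervised classification: feed a model labeled real and fake examples and let it learn the telltale cues. The sketch below is a deliberately simplified stand-in for such a detector. The two features and their distributions are entirely made up for illustration; real systems like Deeptrace's extract far richer cues with deep networks rather than a two-weight logistic model.

```python
import math
import random

random.seed(1)

# Hypothetical training data: each sample is a 2-feature vector, e.g. a
# blink-rate score and a compression-artifact score. These features and
# their distributions are invented; real detectors learn cues from pixels.
def make_sample(fake):
    if fake:
        return [random.gauss(0.2, 0.3), random.gauss(1.5, 0.4)], 1
    return [random.gauss(1.0, 0.3), random.gauss(0.5, 0.4)], 0

data = [make_sample(i % 2 == 0) for i in range(400)]

# A logistic-regression detector trained with plain gradient descent.
w, b = [0.0, 0.0], 0.0
lr = 0.1
for epoch in range(200):
    for x, y in data:
        p = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))
        err = p - y  # prediction minus label drives the update
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

def score(x):
    """Probability (under this toy model) that a clip is a deepfake."""
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

print(score([0.2, 1.5]) > 0.5, score([1.0, 0.5]) < 0.5)  # True True
```

The catch, as the next paragraph explains, is that this is an arms race: the same adversarial training that builds deepfakes also teaches generators to erase precisely the cues a detector relies on.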
Sadly, deepfakes may be a problem that is growing too fast for companies and organizations to handle. In an article for Digital Trends, Luke Dormehl explains why tech companies are ill-equipped to tackle it: deepfake technology is becoming steadily better, and the telltale inconsistencies present in earlier deepfakes have now largely been fixed. At the rate these visual reproductions are being created, it is nearly impossible for researchers to keep up. As Hany Farid, a computer science professor and digital forensics expert at the University of California, Berkeley, told The Washington Post, "The number of people working on the video-synthesis side, as opposed to the detector side, is 100 to 1."
Larger companies are stepping in as well: Facebook has announced a $10 million "deepfakes detection challenge," according to VICE. The challenge is expected to launch in December, when Facebook will release a data set of faces and videos to drive the development of methods and technologies that can detect algorithmically generated video.
As of now, there is little we as individuals can do to know whether the next audio clip we hear or the next video we see is unaltered. But educating ourselves and spreading awareness of the threats and dangers that accompany such a rapid and promising period of technological growth is something we can all practice.
Sources:
Moore, Jake. “Deepfakes: When seeing isn’t believing.” WeLiveSecurity. 31 Oct 2019. https://www.welivesecurity.com/2019/10/31/deepfakes-seeing-isnt-believing/
Zegeye, Adey. “Natural Language Processing + Google Translate.” CCPT-607: “Big Ideas”: AI to the Cloud, Spring 2019 Blog. 27 Feb 2019. https://blogs.commons.georgetown.edu/cctp-607-spring2019/2019/02/27/natural-language-processing-google-translate/
Leslie, David. “Understanding artificial intelligence ethics and safety.” The Public Policy Programme at The Alan Turing Institute. 2019. https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf
Deeptrace. “About.” https://deeptracelabs.com/about/
O’Sullivan, Donie et al. “When seeing is no longer believing.” CNN Business. Date unknown. https://www.cnn.com/interactive/2019/01/business/pentagons-race-against-deepfakes/
Cole, Samantha. “Facebook Just Announced $10 Million ‘Deepfakes Detection Challenge.’” VICE. 5 Sept 2019. https://www.vice.com/en_us/article/8xwqp3/facebook-deepfake-detection-challenge-dataset
Dormehl, Luke. “Why tech companies are ill-equipped to combat the internet’s deepfake problem.” Digital Trends. 10 Aug 2019. https://www.digitaltrends.com/cool-tech/how-do-tech-companies-solve-deepfakes/
Harwell, Drew. “Top AI researchers race to detect ‘deepfake’ videos: ‘We are outgunned.’” The Washington Post. 12 June 2019. https://www.washingtonpost.com/technology/2019/06/12/top-ai-researchers-race-detect-deepfake-videos-we-are-outgunned/