By Nouhaila Morjani

We have all seen them before: those clips on the internet in which celebrities or politicians make outrageous statements or do something completely unexpected. They often look too good to be true, as if something is slightly off, and most of the time they are indeed too good to be true. These clips are deepfakes, made with a form of artificial intelligence called deep learning. In a deepfake, existing images and videos are combined and overlaid to synthesize a new video of someone appearing to say or do things they never did. Three years ago it was still fairly easy to spot a deepfake, but as the technology improves, it becomes harder and harder to distinguish a real video from a fake one. What does this new development entail?
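For readers curious about what this looks like under the hood: a classic face-swap approach trains a single shared encoder together with one decoder per person, and the swap then amounts to decoding person A's frames with person B's decoder. The PyTorch sketch below is a deliberately simplified illustration of that idea; all layer sizes and names are assumptions for the example, not any specific tool's implementation.

```python
# Conceptual sketch of the classic face-swap deepfake setup: a shared encoder
# with one decoder per identity. Sizes and names are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses a 64x64 RGB face crop into a small latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs a face crop from the latent vector; one decoder per person."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, two decoders: decoder_a learns person A's face, decoder_b person B's.
encoder, decoder_a, decoder_b = Encoder(), Decoder(), Decoder()

# Training (sketch): each decoder learns to reconstruct its own person's face crops.
faces_a = torch.rand(8, 3, 64, 64)  # placeholder batch standing in for real face crops
loss = nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)

# The "swap": encode frames of person A, then decode them with person B's decoder,
# so person B's face appears with person A's pose and expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
```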

The Dangerous Aspect of It All

Andrea Hickerson, Director of the School of Journalism and Mass Communications at the University of South Carolina, describes deepfakes as ‘lies disguised to look like the truth.’ This is exactly what makes them dangerous: the truth becomes hard to distinguish. The fear of deepfakes comes mainly from politicians, not only because they are among the most frequently targeted, but also because they fear the impact on the political climate. A deepfake can easily be mistaken for the truth, and that can have seriously dangerous consequences. A fabricated video could, for example, show a politician inciting violence; if viewers believe it is real, it can agitate them and even spark riots.

A New Form of Revenge and Scamming

This technology is also frequently used as a form of revenge porn. To create deepfake pornography, all you need is a handful of pictures of someone and a video that is a few minutes long; with those two ingredients, you can produce such a video of virtually anyone. This is troubling because, as discussed earlier, it is getting harder to tell what is real and what is not. It also shows that deepfakes are not only about deceiving the public in politics; they are used to harass and bully people as well. That can leave a lasting mark, because it is deeply hurtful and can ruin someone’s reputation.

Another use of deepfakes is scamming. Most of us are familiar with the WhatsApp scam in which someone pretends to be you and tries to borrow money from your mum or dad. Deepfakes are being used to make this kind of scam easier and more believable. Studies have shown that a deepfake does not even have to be particularly good: as long as the person is recognizable and the graphics are realistic enough, the deception can work.

Deepfake Therapy

But not all deepfakes are malicious; some can be entertaining and even helpful. Deepfakes can, for instance, be used as a means to cope with grief. The documentary ‘Deepfake Therapy’ shows how grieving family members can have an artificial video conversation with a deceased loved one through deepfake technology. This reveals a side of deepfakes we have not discussed before: the same realistic technology can also be put to positive use.

Adjusting and Adapting is Key

So, deepfakes are on the rise as the technology behind them becomes better and more realistic. That can lead to bad outcomes, but also to good ones. It is a new development in the tech world, so the initial response will always be anxious, but I believe we will slowly accept deepfakes as part of our daily lives. It will also become easier to detect them, simply because we will be exposed to them so often in the near future. In conclusion, it is something we will have to adapt to, and in fact, we will.

Cover: Alex Iby
