Deepfake (Generative adversarial network)

Just think an extra couple of seconds before assuming something is real.

Recent advances in deep learning have brought significant progress in image recognition, data processing, and broader analysis.
AI has surpassed human performance on some cognitive tasks but, until recently, lacked anything resembling imagination. The demand for generating image datasets, realistic photographs, resolution enhancement, video prediction, and similar capabilities called for the next wave of technology.

Generative adversarial networks (GANs) answer these needs. Built from competing neural networks, GANs can make a significant impact on any industry dealing with data and images. The technology demonstrated that it is possible to generate realistic fake photos or swap one person's face for another's, a phenomenon later dubbed deepfake (deep learning + fake).

 

How a Generative Adversarial Network Works and Its Benefits

The objective of the generative adversarial network is to create something new based on previous data. For example, it can come up with a human face after studying hundreds of pictures. Or it can generate a painting resembling a particular artist's style using their work as reference material.

GANs set two neural networks in direct competition with one another – a generator and a discriminator. The generator produces a new image based on what it has learned from training data. The discriminator determines whether a given image is real or fake.

Both components stay in constant interaction. The generator learns to create images that deceive the discriminator into classifying them as real. The discriminator, on the other hand, learns not to be deceived. The better the discriminator gets, the harder the generator has to work, and the more realistic its images ultimately become.

To illustrate this in simple terms, consider an analogy: the better a teacher (the discriminator) is at spotting mistakes, the more the student (the generator) learns and the better the student's work becomes. The student submits their work, the teacher marks the mistakes, and only then can the student see what went wrong and correct it.
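To see how this adversarial loop looks in practice, below is a minimal, hedged sketch using PyTorch (an assumed choice; any deep learning framework would do). Instead of images, the generator learns to imitate a simple two-dimensional Gaussian distribution, which keeps the example self-contained and quick to run, but the training loop follows the same generator-versus-discriminator dynamic described above.

```python
# Minimal GAN sketch (illustrative only): a generator learns to imitate a
# simple 2-D Gaussian "real" distribution while a discriminator learns to
# tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

LATENT_DIM = 8                              # size of the noise fed to the generator
REAL_MEAN = torch.tensor([2.0, -1.0])       # centre of the "real data" distribution

def sample_real(batch_size):
    # Real data: points drawn from a fixed 2-D Gaussian.
    return REAL_MEAN + 0.5 * torch.randn(batch_size, 2)

generator = nn.Sequential(                  # noise in, fake 2-D "sample" out
    nn.Linear(LATENT_DIM, 32), nn.ReLU(),
    nn.Linear(32, 2),
)
discriminator = nn.Sequential(              # 2-D sample in, real/fake logit out
    nn.Linear(2, 32), nn.ReLU(),
    nn.Linear(32, 1),
)

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    batch_size = 64

    # Discriminator step: label real samples 1 and generated samples 0.
    real = sample_real(batch_size)
    fake = generator(torch.randn(batch_size, LATENT_DIM)).detach()
    d_loss = bce(discriminator(real), torch.ones(batch_size, 1)) + \
             bce(discriminator(fake), torch.zeros(batch_size, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: try to make the discriminator call the fakes "real".
    fake = generator(torch.randn(batch_size, LATENT_DIM))
    g_loss = bce(discriminator(fake), torch.ones(batch_size, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

with torch.no_grad():
    print("mean of generated samples:", generator(torch.randn(1000, LATENT_DIM)).mean(0))
    print("mean of real distribution:", REAL_MEAN)
```

After training, the mean of the generated samples should drift towards the mean of the real distribution: the same push-and-pull that, at a much larger scale and with convolutional networks, produces photorealistic faces.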

 

Strategic Assessment and Key Problems Emerging from Deepfakes

GANs can generate almost entirely new data. Their applications therefore extend to practically any sector of business and government. Like any new technology that penetrates every type of industry, there are few limits on where they can make a contribution.

One of the most influential GAN uses, and one still gaining momentum, is image creation. Technologies for creating and altering imagery are not a complete novelty: programs like Adobe Photoshop have long given ordinary consumers the ability to build images from scratch. But even with adequate skills, there were limits to what users could achieve.

Algorithmic "imagination" is revolutionizing the field of doctored imagery. This is all the more impressive given how recent the technology is: GANs were first introduced in 2014 and gained wider recognition only in 2016-2017.

The main problem with GANs is their ability to spread misinformation through high-quality fake content (text, images, video, speech). This issue is not unique to generative models, but the technology is far more powerful than the primitive tools of the past. History shows that technology itself may be neutral, yet tools this influential can be abused.

Even though manipulated data can be misused, there is a mass of useful and, at times, even silly applications. The rule of thumb here is to "do no harm". Asimov's three laws of robotics, for instance, are concerned with not injuring humans, following orders, and preserving the robot's own existence; deepfake technology could adopt some equivalent of that. Regulators can also focus on specific use cases and frame them around protecting human rights.

Pretending to be someone else is something you shouldn't do, not only from a legal standpoint but from a social one as well. People identify one another through their senses: hearing, sight, speech. The technology is already at the point of generating fake content capable of deceiving all of those senses entirely, and safeguards should be put in place.


Ethics of the Technology and Privacy Protection

A GAN is a powerful tool because it can, in principle, learn to mirror any data distribution through unsupervised learning. It can generate content across media – images, audio, and video – with unprecedented fidelity.

GANs captured the media's attention with their eerily realistic deepfakes of celebrities. Seemingly out of nowhere, the internet was swept by videos of Steve Buscemi's face molded onto Jennifer Lawrence's and movie clips with Nicolas Cage's face inserted into them. While the technology can indeed embed other people's faces and voices into pre-existing content, we shouldn't treat it lightly or reduce it to memes and gags.

The ability to imitate a human voice has already been used with malicious intent. Cases range from defrauding people to substituting people's faces into videos with explicit content. Major tech and research companies are hunting for solutions to this problem and hold competitions to find them. Some government bodies are already contemplating regulations that would require people to tag deepfake content.

Producing deepfakes is only one aspect of the tidal shift the technology entails. As the next step in GAN development, we can expect not only content-generating applications but also stand-alone tools for identifying deepfakes. We are conducting research and tests in both directions. Since there is always a risk of criminals taking advantage of deepfakes, solutions that accurately pinpoint them will provide a decent level of protection. Also, as mentioned before, generated material can be marked as such to distinguish it from real content.
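At its simplest, such a detection tool is a binary classifier trained on examples of real and generated imagery. The sketch below is a generic, hedged illustration of that idea in PyTorch; the architecture, the 128x128 input size, and the training data it would need are all assumptions for illustration, not a description of any actual detector.

```python
# Hedged sketch of a deepfake detector as a plain binary image classifier.
# A real system would be trained on labelled real/generated face crops;
# here we only show the model shape and a single inference call.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),            # single logit: "how likely is this crop generated?"
)

# Dummy 128x128 RGB face crop; with untrained weights the score is meaningless,
# it only demonstrates how such a detector would be called.
face_crop = torch.rand(1, 3, 128, 128)
fake_probability = torch.sigmoid(detector(face_crop)).item()
print(f"estimated probability that the crop is generated: {fake_probability:.2f}")
```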

We realize that this technology has the potential to be misused, and we are making efforts to address these issues; the goal is to ensure its legal and useful operation. From a technical point of view, GANs can accomplish some pretty amazing things. People can enjoy new music from musicians who are no longer alive or see new artwork in the style of artists from centuries ago. Overall, the technology has a myriad of valuable and entertaining applications.
