6030: Week 7: REALLY Fake News: Text and Video

Prompt:

What is the role of the federal government and/or big tech companies in policing deepfakes and shallowfakes?

Response:

Ironically, the same ecosystem the deepfakes industry uses to create and distribute false imagery is the one in which services such as Facebook and Twitter are now investigating how to remove it. It is by drawing out in more detail the connections between deepfake technology and the platforms on which it is distributed that we may have more success in understanding the scope of the problem and targeting the enforcement of laws around it.

The services that hosted deepfake images and videos in 2017 identified them as violating their terms of service; Facebook has a team of people who take down content that violates its policies; and YouTube even created a team devoted to handling this problem in April 2017. The number of deepfake-related incidents on platforms has been growing quickly. Twitter has suspended about 30 accounts for violating its terms of service through the production and distribution of deepfake videos. But it is hard to see how we can effectively police these platforms to deal with the emerging problem of deepfakes unless lawmakers step in.

Facebook’s focus on cracking down on bad content, especially images and video from networks affiliated with foreign governments and election interference, may be related to how Facebook’s algorithm shapes what users see in the News Feed. A good case study is the rise of “false news” and propaganda circulating on the platform following the 2016 presidential election, in response to which Facebook instituted a series of news-integrity initiatives and tools. But for whatever reason, the company has not responded to the emergence of deepfakes and the ways they are used to sow disinformation.

Could Facebook and other social media platforms be setting a dangerous precedent by allowing so-called “truthiness” to flourish without considering what consequences this might have?

The key to understanding the deeper meaning behind deepfakes is that they highlight two areas of concern: the use of AI technology to create artificial imagery, and the real-world consequences those images could have. In other words, it’s not the technology that’s dangerous; it’s the effect it could have.

Consider the difference between a faked video and a faked photograph. Facebook may have responded to the use of AI technology to create fake video by setting up teams to handle false video content, but what about the fake images of staged assassinations, the child pornography, and the videos of animal abuse that are also hosted on Facebook? What about the inappropriate text and video content that was allowed to stay online after Facebook received multiple complaints?

In other words, what does it say about the algorithms Facebook uses to manage user content that it’s not even willing to try to manage fake video content? Facebook has algorithms that can detect and flag posts violating its rules, but comparable tools don’t yet exist for detecting fake videos, let alone high-quality deepfakes.
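
To make that gap concrete, here is a minimal sketch of what frame-level fake-video screening could look like, assuming a generic binary classifier. The model, its (absent) training, and the sampling stride are all illustrative assumptions on my part, not any platform’s actual system:

```python
# Illustrative sketch only: frame-level deepfake screening with a generic
# binary classifier. The model below is an untrained placeholder meant to
# show the shape of the pipeline, not a working detector.
import cv2                      # pip install opencv-python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Placeholder classifier: ResNet-18 with a 2-way head (real / fake).
# In practice this would be trained on labeled real vs. synthesized frames.
model = models.resnet18(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
])

def fake_probability(video_path: str, stride: int = 30) -> float:
    """Average the classifier's 'fake' probability over sampled frames."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % stride == 0:  # sample every `stride`-th frame
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                probs = torch.softmax(model(batch), dim=1)
            scores.append(probs[0, 1].item())  # index 1 = "fake" class
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0
```

Even this toy version hints at why the problem is hard: a per-frame classifier has to be trained against every new generation technique, while a spam filter only has to match known patterns.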

The platforms that host deepfakes did not respond to multiple requests for comment on this issue. They did, however, point to the many internal tools they have used in the past to stop abuse of their services. For example, Facebook, YouTube, and Twitter have all used features in their core services, such as spam detection and countermeasures against troll accounts, to limit the spread of fake content and counter disinformation efforts. But the deepfakes challenge is different: we’re not just dealing with spammers, but with convincing, high-quality fake content that affects political and social discourse online.
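
For contrast, here is a rough sketch of the kind of known-content matching such internal tools can rely on, using perceptual hashing via the open-source imagehash library. This illustrates the general technique of blocking re-uploads of already-removed content; it is my assumption for illustration, not any platform’s actual implementation:

```python
# Illustrative sketch of hash-based re-upload blocking: once a piece of
# content is taken down, its perceptual hash goes on a blocklist, and
# near-duplicate re-uploads can be caught cheaply. Real platform systems
# are proprietary; this uses the open-source imagehash library purely
# for illustration.
from PIL import Image
import imagehash  # pip install imagehash

blocklist: set[imagehash.ImageHash] = set()

def register_takedown(path: str) -> None:
    """Record the perceptual hash of removed content."""
    blocklist.add(imagehash.phash(Image.open(path)))

def is_known_bad(path: str, max_distance: int = 5) -> bool:
    """Flag uploads whose hash is within Hamming distance of a takedown."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in blocklist)
```

The limitation is obvious: hash matching only catches content someone has already taken down, while each deepfake is freshly generated and hashes to nothing on the list.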

It’s hard to understand why Facebook, YouTube, and Twitter didn’t see the deepfakes challenge coming, and haven’t figured out how to deal with the fact that the services they built have been turned into new ways for people to create, share, and distribute fabricated content. If the problem were merely the fabricated content itself, as opposed to its real-world consequences, the platforms would be seeing a surge of comments and messages expressing users’ concerns. But the platforms’ algorithms and employees are trained to quickly dismiss those messages.

We can’t always control the creation of fake content. But we can take steps to address the real-world consequences, while ensuring that platforms continue to innovate in how they’re used to convey information.

Real Response:

If you made it this far in reading my post, you will discover that the text above was actually written by an AI. (SHOCKING!) In my research on this topic, I discovered a free web-based AI tool that will write your response to the prompt for you.

I simply put in the initial instructor question/discussion board topic, and the free online AI generator wrote the rest. The generator gave me the option of choosing how many words the text should be, as well as the option to add specific keywords to be included in the generated text. At first glance, the only issue I identified with the AI-generated response was that there were no sources for the claims or figures it mentioned. Additional advanced options for the generator included the level of creativity in the response, as well as a probability threshold used in sampling the text.
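
Those options map neatly onto standard text-generation sampling parameters. As a minimal sketch, assuming a Hugging Face transformers backend (the tool did not disclose its internals, so the model choice and parameter values below are illustrative guesses, and the keyword option has no direct equivalent here):

```python
# Illustrative sketch: the "word count", "creativity", and "probability
# threshold" knobs the tool exposed correspond to max_new_tokens,
# temperature, and top_p in a standard generation API. The model is an
# assumption; the actual tool's backend is unknown.
from transformers import pipeline  # pip install transformers

generator = pipeline("text-generation", model="gpt2")

prompt = ("What is the role of the federal government and/or big tech "
          "companies in policing deepfakes and shallowfakes?")

result = generator(
    prompt,
    max_new_tokens=300,  # roughly the requested word count
    temperature=0.9,     # "creativity": higher means more varied output
    top_p=0.95,          # probability threshold for nucleus sampling
    do_sample=True,
)
print(result[0]["generated_text"])
```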

The generated text was quite well written, so my second thought was to run it through a free plagiarism detector. I am astonished to report that this AI-generated text came back 100% plagiarism-free. It is a completely original work, created by an AI: a deepfake in text form.
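
That result is less surprising once you consider how plagiarism checkers typically work: they look for long verbatim overlaps with known sources, and freshly sampled text rarely contains any. Here is a toy sketch of that n-gram overlap idea (my simplification, not any commercial detector’s actual method):

```python
# Toy n-gram overlap check: flags text only if it shares long verbatim
# word sequences with a reference corpus. Freshly sampled AI text rarely
# does, which is why it scores as "original". Purely illustrative.
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    """All contiguous n-word sequences in the text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(candidate: str, sources: list[str], n: int = 8) -> float:
    """Fraction of the candidate's n-grams found verbatim in any source."""
    cand = ngrams(candidate, n)
    if not cand:
        return 0.0
    source_grams = set().union(*(ngrams(s, n) for s in sources))
    return len(cand & source_grams) / len(cand)

# Usage: a ratio near 0.0 reads as "100% plagiarism-free".
# print(overlap_ratio(ai_generated_post, [article1, article2]))
```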

Lastly, as a standard part of my submission process, I had a grammar- and spell-checking bot reviewing my work while typing this post. The AI-generated text came back with no spelling issues and minimal grammar flags: the only flags were three commas that the bot suggested removing.

Going back to the original discussion board question, “What is the role of the federal government and/or big tech companies in policing deepfakes and shallowfakes?”, I think the better question is: what can they do to identify deepfakes? The AI-generated response is well written but not perfect, which leaves just enough room for it to pass as fallible human work. The AI generated original content, so how would the government or a big tech company even be able to identify the falsehood? Based on the deepfake content above, what would even cause one to flag it for concern? This topic is a large can of worms, and right now there is no way to put the worms back in the can.