
Microsoft Announces New Technologies to Combat Disinformation


In a recent blog post, Microsoft announced two new technologies to combat disinformation: a Video Authenticator tool, and technology that can both detect manipulated content and assure people that the media they're viewing is authentic.

In today's world, disinformation is widespread. The blog post, citing Microsoft-endorsed research by Princeton University, indicates that various countries around the world were targeted between 2013 and 2019 by influence campaigns seeking to defame notable people, persuade the public, or polarize debates. The public cloud vendor has joined the fight against disinformation with its Defending Democracy Program; through the program, offerings such as AccountGuard have helped secure political campaigns, and protections for journalism have been strengthened.

As disinformation comes in many forms, Microsoft has been working on two different technologies to address different aspects of the problem. The first tackles "deepfakes": photos, videos, or audio files available online that have been manipulated by artificial intelligence (AI) in hard-to-detect ways, making people, for instance, appear in places they have never been or say things they never did. To counter these, Microsoft's Video Authenticator can analyze photos and videos and provide a percentage chance, or confidence score, that the media has been artificially manipulated.


Source: https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/
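
Microsoft has not published the internals of Video Authenticator, but the general idea of frame-level manipulation scoring can be illustrated with a short, hypothetical Python sketch. The detector object and its predict_proba method below are placeholders, not part of any Microsoft API.

    # Hypothetical sketch of frame-by-frame manipulation scoring.
    # "detector" is a placeholder for any model that maps a frame to P(manipulated);
    # it is not Microsoft's Video Authenticator, whose API is not public.
    import cv2  # OpenCV, used here only to read video frames


    def score_video(path: str, detector) -> float:
        """Return the average probability that the video's frames are manipulated."""
        capture = cv2.VideoCapture(path)
        scores = []
        while True:
            ok, frame = capture.read()
            if not ok:
                break
            # Assumed contract: detector.predict_proba(frame) returns a float in [0, 1].
            scores.append(detector.predict_proba(frame))
        capture.release()
        return sum(scores) / len(scores) if scores else 0.0

Microsoft states that, for video, the tool can provide this percentage in real time on each frame as the video plays, so a production system would report per-frame scores rather than a single average as this sketch does.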

The other technology Microsoft announced is intended to help people determine whether the content they are viewing is authentic or has been manipulated. The technology consists of two components:

  • A tool built into Microsoft Azure that enables a content producer to add digital hashes and certificates to a piece of content.
  • A reader, which can exist as a browser extension or in other forms, that checks the certificates and matches the hashes, informing people with a high degree of accuracy that the content is authentic and has not been changed, while also providing details about who produced it (a minimal sketch of this flow follows the list).
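
Microsoft has not released implementation details of the Azure tooling, but the hash-and-sign workflow described above can be illustrated with a minimal Python sketch. It uses a plain Ed25519 key pair from the cryptography package in place of X.509 certificates, and the function names are illustrative assumptions, not part of Microsoft's or the BBC's tooling.

    # Minimal sketch of the producer (hash and sign) and reader (verify) roles.
    import hashlib

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey


    def publish(content: bytes, private_key: Ed25519PrivateKey):
        """Producer side: hash the content and sign the hash."""
        digest = hashlib.sha256(content).digest()
        signature = private_key.sign(digest)
        return digest, signature


    def verify(content: bytes, digest: bytes, signature: bytes, public_key) -> bool:
        """Reader side: re-hash the content, compare digests, and check the signature."""
        if hashlib.sha256(content).digest() != digest:
            return False  # the content was altered after publication
        try:
            public_key.verify(signature, digest)
            return True   # hashes match and the digest was signed by the claimed producer
        except InvalidSignature:
            return False  # the metadata was not signed by the claimed producer


    key = Ed25519PrivateKey.generate()
    media = b"original media bytes"
    digest, signature = publish(media, key)
    print(verify(media, digest, signature, key.public_key()))              # True
    print(verify(b"tampered bytes", digest, signature, key.public_key()))  # False

In the announced system, the signing key would be bound to the producer's identity through a certificate, which is what allows the reader to report who produced the content and not just that it is unchanged.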

Authenticators are essential in the battle against disinformation. However, a potential arms race could occur in which the authenticators do not come out on top, as a commenter in a Reddit thread states:

The deepfaking AI can improve its model with the fake-detecting AI though. Imagine in addition to how the deepfaking AI trains already; it would also send its result to the fake-detecting AI, which will either say "not a fake" and allow the deepfaking AI to be ok with the result, or say "a fake" in which case the deepfaking AI just has to train more.

Other reasons why the authenticators may not win the race (a conceptual sketch of this dynamic follows the list):

  • The deepfaking AI can train in secrecy, while the service of the fake-detecting AI is publicly available.
  • The deepfaking AI has far more material to train with: any photo or video featuring people can be used for its training. Meanwhile, the fake-detecting AI needs a good mix of confirmed fake and confirmed non-fake imagery in order to improve its detection model.
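
The feedback loop the commenter describes mirrors the adversarial setup already used to train generative models: the deepfaking model can treat a public detector's verdicts as a training signal. The following conceptual Python sketch illustrates that dynamic; the Generator and Detector classes are stand-ins, not real products or published models.

    # Conceptual sketch of the arms race: the generator keeps training whenever
    # the publicly available detector catches its output.
    import random


    class Detector:
        """Stand-in public fake-detecting service returning P(fake) for a sample."""

        def predict_fake(self, sample) -> float:
            return random.random()  # placeholder score, not a real model


    class Generator:
        """Stand-in deepfaking model that improves whenever it gets caught."""

        def __init__(self):
            self.quality = 0.0

        def make_fake(self):
            return {"quality": self.quality}

        def train_step(self):
            self.quality += 0.01  # stand-in for another round of training


    def arms_race(generator: Generator, detector: Detector, rounds: int = 1000) -> float:
        for _ in range(rounds):
            fake = generator.make_fake()
            # If the public detector flags the sample as fake, the generator trains
            # more; if the sample passes, the detector has effectively been beaten.
            if detector.predict_fake(fake) > 0.5:
                generator.train_step()
        return generator.quality

Because the detector's responses are freely available to query while the generator's training happens in private, the asymmetry described in the list above tends to favor the generator over time.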

In addition, Holger Mueller, principal analyst and vice president at Constellation Research Inc., told InfoQ:

New technology can do good things, unfortunately also bad things. When it comes to cheap compute from the cloud and AI, creating deep fakes has gotten much easier than ever before. So it is good to see a tech giant like Microsoft tackling the problem, with better detection tools. Hopefully, other cloud providers will follow soon... but it will remain a cat and mouse game, as one level of detection creates the next level of criminal creative energy to bypass it.

Currently, Microsoft's Video Authenticator tool is only available through the AI Foundation as part of its Reality Defender 2020 program, and the other authentication technology for content producers and consumers will be part of an initiative announced by the BBC called Project Origin.
