In the previous article, we discussed deepfake and cheapfake detection. Such forgery verification technology detects tampering only after the media has already been edited. Academia and industry are concerned that even future detection technologies will not prevent media forgery completely. So you might wonder: is there any technology that can stop media forgery before it happens? Read on to learn about proactive media forgery prevention technology.
First, let's take a look at how media forgery prevention technology is classified internationally. As shown below, image forgery verification is divided into active and passive verification.
Deepfake and cheapfake detection fall under the passive verification category. Numerous techniques have been introduced to detect image manipulation passively, but perfect detection has proven difficult because forgery technology advances at a much faster rate. Active verification, on the other hand, can verify and prevent forgery 100%.
So how does the active approach detect media forgery with 100% certainty? Read on to learn which technologies help prevent forgery.
Images taken with smartphone cameras are created according to the international JPEG standard. Images produced by such digital devices are distributed across multiple platforms, and in this process they can be forged, distorting the facts.
The JPEG committee of the International Organization for Standardization (ISO) has established and applied privacy & security standards to prevent digital image forgery. For digital signatures in the JPEG privacy & security area, a hash value of the image and certain metadata* is extracted and cryptographically signed. The signed data is embedded in a dedicated metadata area called the JPEG Universal Metadata Box Format (JUMBF).
* Metadata refers to information stored in the JPEG file, such as the device used and the time and place the image was generated, captured, and stored.
Every signed image therefore carries the hash value of the original. If the image or its metadata is modified, the hash value changes, so you can easily check whether the image was altered by comparing hash values. Team9 applied this technology for the first time in the world and uses it to prevent digital forgery.
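The hash-comparison idea above can be sketched in a few lines. This is a minimal illustration, not Team9's actual implementation: the function names are hypothetical, and a real system signs the hash with a private key and stores it in the JUMBF box rather than alongside the file.

```python
import hashlib

def compute_fingerprint(image_bytes: bytes, metadata: dict) -> str:
    """Hash the image content together with selected metadata fields."""
    h = hashlib.sha256()
    h.update(image_bytes)
    for key in sorted(metadata):  # sort keys so the hash is deterministic
        h.update(f"{key}={metadata[key]}".encode())
    return h.hexdigest()

def is_untampered(image_bytes: bytes, metadata: dict, stored_hash: str) -> bool:
    """Recompute the fingerprint and compare it with the stored one."""
    return compute_fingerprint(image_bytes, metadata) == stored_hash

original = b"...jpeg scan data..."  # stand-in for real JPEG bytes
meta = {"device": "Phone-X", "time": "2023-01-01T12:00:00"}
stored = compute_fingerprint(original, meta)

assert is_untampered(original, meta, stored)             # untouched image passes
assert not is_untampered(original + b"\x00", meta, stored)  # any edit changes the hash
```

Because any change to the pixels or the metadata changes the digest, verification reduces to a single string comparison.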
Developed by Team9, Multi-point Focus (MpF) distinguishes three-dimensional objects from two-dimensional ones. First, the image is split into horizontal and vertical segments. Focusing the camera on one segment produces different levels of sharpness across the segments, and moving the focus to another segment changes the sharpness according to the distance in focus. The team drew on these changes in sharpness: they photographed numerous three-dimensional and two-dimensional objects and trained an AI to tell them apart.
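A per-segment sharpness measurement like the one MpF relies on can be sketched with a standard focus metric, the variance of the Laplacian. This is an illustrative sketch only; Team9's actual segmentation and AI model are not public, and the grid size here is arbitrary.

```python
import numpy as np

def sharpness(tile: np.ndarray) -> float:
    """Variance of a discrete Laplacian: higher means sharper (more in focus)."""
    lap = (np.roll(tile, 1, 0) + np.roll(tile, -1, 0)
           + np.roll(tile, 1, 1) + np.roll(tile, -1, 1) - 4 * tile)
    return float(lap.var())

def sharpness_map(image: np.ndarray, grid: int = 3) -> list:
    """Split a grayscale image into grid x grid segments and score each one."""
    h, w = image.shape
    th, tw = h // grid, w // grid
    return [[sharpness(image[r * th:(r + 1) * th, c * tw:(c + 1) * tw])
             for c in range(grid)] for r in range(grid)]

rng = np.random.default_rng(0)
scene = rng.random((90, 90))   # textured, "in focus" content
scene[:30, :30] = 0.5          # one flat, detail-free segment (as if defocused)
m = sharpness_map(scene, grid=3)
# m[0][0] is near zero; the textured segments score clearly higher
```

For a flat subject such as a printed photo, refocusing barely changes this map; for a real 3D scene, sharpness shifts between segments as the focal plane moves.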
Time of Flight (ToF) uses one of the light detection and ranging (LiDAR) sensors built into the device. The sensor measures the time a signal takes to travel from the camera to the subject and back. Team9 assumed that the ToF of a three-dimensional object differs from that of a two-dimensional one: a three-dimensional object returns different ToF values depending on its surface curvature, whereas a flat, two-dimensional object shows little variation.
Based on these measurements, Team9 has secured the technology to determine whether a photographed object is two- or three-dimensional.
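The depth-variation test described above can be expressed as a simple statistic over the ToF depth map. The threshold and depth values below are made-up illustrations, not Team9's calibrated parameters.

```python
import numpy as np

def is_three_dimensional(depth_mm: np.ndarray, threshold_mm: float = 5.0) -> bool:
    """A flat print or screen returns near-uniform depth; a real 3D surface varies.
    threshold_mm is an illustrative cutoff, not a calibrated value."""
    return float(np.std(depth_mm)) > threshold_mm

# A photo-of-a-photo held ~300 mm from the camera: every pixel at the same depth.
flat = np.full((8, 8), 300.0)

# A real subject with surface relief: depth varies by tens of millimetres.
relief = 300.0 + np.linspace(0.0, 80.0, 64).reshape(8, 8)

assert not is_three_dimensional(flat)
assert is_three_dimensional(relief)
```

In practice the decision would also account for sensor noise and viewing angle, but the core signal is exactly this spread in measured depth.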
Moiré patterns appear differently depending on whether the camera captures a real object, a printout on paper, or an image on a monitor. Artifact analysis examines these characteristics and learns whether a photograph shows a real object or a forged image displayed on a monitor or printed on paper. Refer to the previous article for more information: What Are Cheapfakes (Shallowfakes)?
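One common way to surface moiré artifacts is in the frequency domain: recapturing a monitor superimposes the screen's pixel grid on the photo, which shows up as sharp off-center peaks in the 2D Fourier spectrum. The sketch below is a simplified stand-in for whatever learned model Team9 actually uses; the score and synthetic images are illustrative.

```python
import numpy as np

def moire_score(gray: np.ndarray) -> float:
    """Ratio of the strongest off-centre spectral peak to the mean spectrum.
    Periodic interference (e.g. a recaptured screen) produces sharp FFT peaks."""
    spec = np.abs(np.fft.fftshift(np.fft.fft2(gray)))
    h, w = spec.shape
    spec[h // 2 - 2:h // 2 + 3, w // 2 - 2:w // 2 + 3] = 0.0  # mask DC / low-freq centre
    return float(spec.max() / (spec.mean() + 1e-9))

rng = np.random.default_rng(1)
natural = rng.random((64, 64))                    # stand-in for a normal photo
x = np.arange(64)
screen_grid = 0.5 * np.sin(2 * np.pi * x / 4.0)   # pixel-grid-like interference
recaptured = natural + screen_grid[None, :]

assert moire_score(recaptured) > moire_score(natural)
```

A classifier would use features like this score (or the raw spectrum) as input rather than a fixed threshold, but the physical signature is the same.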
Adobe Photoshop is a representative image editing program. Adobe created the Content Authenticity Initiative (CAI) to validate all types of digital content, including photos and videos. In particular, the initiative developed open-source tools that add a layer of verifiable trust, leading a safer system (C2PA: https://c2pa.org/) that verifies the sources of digital content through collaboration among companies.
With CAI technology, one can trace how an image was shot, edited, posted, or shared on social media. As shown below, clicking the ⓘ icon in the upper right corner of an image reveals details about its shooting, editing, and posting.
The Adobe CAI community includes major IT companies such as Microsoft and Twitter, manufacturers such as ARM and Nikon, and media companies including the BBC. The CAI is expected to gain greater influence as the technology manages content history from the digital camera sensor onward and lets users retrieve that information from the cloud. Team9 is also reviewing whether to include such features in its solution later.
If you take a picture with Team9's mobile forgery prevention application, it will be sent to the server immediately.
Photos taken in real time with this dedicated application go through the following steps: server transfer > blockchain linkage > storage. The application verifies the authenticity of photos via real-time analysis, and the results can be viewed in a PDF report.
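The "blockchain linkage" step can be illustrated with a minimal hash chain: each photo's fingerprint is bound to the hash of the previous record, so altering any stored photo after the fact breaks the chain. This is a conceptual sketch only; Team9's actual blockchain integration, record format, and field names are assumptions here.

```python
import hashlib
import json

def link_block(prev_hash: str, photo_hash: str, ts: str) -> dict:
    """Append one photo fingerprint to a simple hash chain (blockchain-style)."""
    body = {"prev": prev_hash, "photo": photo_hash, "ts": ts}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def chain_valid(chain: list) -> bool:
    """Recompute every record's hash and check each link to its predecessor."""
    prev = "0" * 64  # genesis value
    for block in chain:
        body = {k: block[k] for k in ("prev", "photo", "ts")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected or block["prev"] != prev:
            return False
        prev = block["hash"]
    return True

chain = [link_block("0" * 64, hashlib.sha256(b"photo-1").hexdigest(), "t1")]
chain.append(link_block(chain[-1]["hash"], hashlib.sha256(b"photo-2").hexdigest(), "t2"))
assert chain_valid(chain)

chain[0]["photo"] = hashlib.sha256(b"forged").hexdigest()  # tamper with history
assert not chain_valid(chain)
```

Because each record commits to its predecessor, a verifier only needs the latest trusted hash to detect retroactive tampering anywhere in the stored history.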
Forged media, alongside original and copied photos and videos, still circulates on the Internet, causing harm to individuals. Team9 developed forgery prevention technology to block media forgery and verify media authenticity, thereby preventing the distortion of facts.
Next time, we'll look at cases of blocking deepfakes and cheapfakes and the related business areas.
※ This article was written based on objective research outcomes and facts, which were available on the date of writing this article, but the article may not represent the views of the company.