Gonzalez bill to combat “deepfake” technology passes Committee on Science, Space and Technology
U.S. Congressman Anthony Gonzalez’s (R-OH) new bill to combat manipulated media content known as “deepfakes” passed the U.S. House of Representatives Committee on Science, Space and Technology on Wednesday. The bill, H.R. 4355, the Identifying Outputs of Generative Adversarial Networks Act (IOGAN Act), was introduced last week alongside Reps. Jim Baird (R-IN), Haley Stevens (D-MI) and Katie Hill (D-CA) and supports critical research to accelerate the development of technology to identify deepfakes that could sow public discord, scam the American public and endanger national security.
CONGRESSMAN GONZALEZ: “Deepfakes are not a new phenomenon… you will remember the famous scene where Forrest Gump was filmed shaking hands with historic presidents. At that time, the technique was revolutionary and very expensive and difficult to reproduce—only big Hollywood studios could afford to produce such images. Fast forward a few decades, and we now live in a world where advancements in technology and computing power have increased exponentially… Given the national security and societal implications that indistinguishable deepfakes can pose for our country, my legislation directs the NSF, in consultation with other Federal agencies, to conduct research on the science and ethics of deepfakes.”
Deepfake technology has developed rapidly over the past several years with no clear method of identifying it or stopping it from becoming a major national security threat. The IOGAN Act directs the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) to support research to accelerate the development of technologies that could help improve the detection of such content. Advancements in computing power and the widespread use of technologies like artificial intelligence over the past several years have made it easier and cheaper than ever before to manipulate and reproduce photographs, video and audio clips in ways that are potentially harmful or deceptive to the American public. The ability to identify and label this content is critical to preventing foreign actors from using manipulated images and videos to shift U.S. public opinion.