The United States Federal Trade Commission (FTC) is moving to update a regulation aimed at the growing threat of deepfakes, in which artificial intelligence (AI) is used to impersonate businesses or government agencies, with the goal of protecting consumers from harm caused by these deceptive practices. A proposed expansion of the rule could make it illegal for generative AI (GenAI) platforms to offer products or services that they know could be used to harm consumers through impersonation. FTC Chair Lina Khan said the updated rule would allow the agency to take legal action against scammers who obtain money through impersonation. The final rule takes effect 30 days after publication in the Federal Register, while the proposed expansion is subject to a 60-day public comment period.

Federal law does not yet specifically address deepfake images, though some lawmakers are taking steps to change that. Victims of deepfakes, including celebrities, can seek legal recourse through copyright law, rights covering their likeness, and various torts, but pursuing cases under these theories can be slow and demanding.

The Federal Communications Commission has also banned AI-generated voices in robocalls, following a January incident in New Hampshire in which a deepfake of President Joe Biden's voice was used to discourage voting. Several states have passed laws of their own making certain deepfakes illegal.