YouTube Takes Measures Against the Misuse of Content Generated with Artificial Intelligence
YouTube has explained the steps it is taking to combat the misuse of AI-generated content, including its efforts to keep videos and AI-created material on the platform safe.
YouTube continues to roll out measures to prevent the misuse of AI-generated content. The video-sharing platform already requires creators to label realistic content made with AI tools.
In addition to that requirement, you can now report AI-generated content that imitates your appearance or voice without your knowledge or consent. According to TechCrunch, YouTube's updated support page outlines the factors it will consider when evaluating such a complaint:
- Whether the content is altered or synthetic.
- Whether the altered or synthetic nature of the content is disclosed to viewers.
- Whether the individual can be uniquely identified.
- Whether the content is realistic.
- Whether the content is parody or satire, or otherwise has public-interest value.
- Whether the content shows a well-known individual engaging in sensitive behavior, such as criminal activity or violence, or endorsing a product or political candidate.
If you believe AI-generated content mimicking your visual or vocal likeness has been posted on the platform, you can file a complaint through YouTube's Privacy Complaint Process. For the content to be removed, however, YouTube specifies that it must depict a ‘realistic, altered, or synthetic version of you.’ Once notified, the uploader has 48 hours to act on the complaint; if the private information is not edited out or the content is not taken down in that time, YouTube begins its review process. YouTube requires complaints to be submitted by the affected person and generally does not accept claims filed on someone else's behalf, with limited exceptions.