How should AI companies handle labeling or disclosing the content they produce? originally appeared on Quora: the place to gain and share knowledge, empowering people to learn from others and better understand the world.
Answer by Joshua Xu, CEO and Co-founder of HeyGen, on Quora:
As generative AI technology advances and produces increasingly realistic outputs, it will change how we interact with technology and media. The companies developing generative AI must therefore align on standard processes for disclosing, labeling, and managing concerns around AI-generated content. In tandem, businesses adopting generative AI tools into their workflows must establish policies and processes that govern the technology's internal use and how that use is disclosed to customers.

Establishing trust and safety practices is not a one-and-done deal; adequate practices take time and resources to develop, but they increase an organization's credibility, enhance the value of its content for readers, and build stronger trust between businesses and their customers. By disclosing and labeling all AI-generated or altered content, businesses show customers that they have nothing to hide, empowering them with the knowledge of where and how a piece of content was developed so they can draw their own conclusions about its value. Organizations that disclose and properly label AI-generated content give their customers the ability to distinguish honest actors from those concealing their intent and methods.
Companies should address trust and safety concerns around AI-generated content by incorporating Responsible AI principles and disclosure standards into the product's design. This ensures that safety is considered from the product's inception and throughout the development lifecycle. Developing consistent trust and safety practices is an ongoing process that requires thorough monitoring (by both machines and humans), consideration of customer feedback, and product adjustments to ensure that businesses and their customers stay safe and use AI as intended.
While key trust and safety concerns will differ between the businesses developing AI tools and those leveraging them, at HeyGen we combine automation with a human-in-the-loop approach to moderation. Our main concerns are authenticating identity and ensuring that content generated on our platform is appropriate and in line with our content guidelines. Machine moderators first review HeyGen's AI outputs for flags across text, image, video, and consent; a team of human moderators then conducts final approval of the most critical review points, and their feedback continually improves the automated process.
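A two-stage pipeline like the one described above can be sketched in a few lines of code. This is a minimal illustration, not HeyGen's actual system: the rule set, field names, and flag labels are all hypothetical, and real moderation would rely on trained classifiers rather than keyword lists.

```python
# Illustrative two-stage, human-in-the-loop moderation pipeline.
# All rules and names here are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Submission:
    content_id: str
    kind: str                      # e.g. "text", "image", "video"
    has_consent_record: bool       # did the depicted person consent?
    flags: list = field(default_factory=list)

# Hypothetical stand-in for a real policy classifier.
BLOCKED_TERMS = {"impersonation", "scam"}

def machine_review(sub: Submission, transcript: str) -> Submission:
    """Stage 1: automated checks attach flags for anything suspicious."""
    if not sub.has_consent_record:
        sub.flags.append("missing-consent")
    if any(term in transcript.lower() for term in BLOCKED_TERMS):
        sub.flags.append("policy-term")
    return sub

def needs_human_review(sub: Submission) -> bool:
    """Stage 2 gate: anything flagged goes to a human moderator."""
    return bool(sub.flags)

sub = machine_review(Submission("vid-001", "video", False), "intro clip")
print(needs_human_review(sub), sub.flags)   # True ['missing-consent']
```

The design point is that automation handles volume and consistency, while humans make the final call only on the flagged, highest-stakes cases; routing everything flagged to people keeps the reviewer workload proportional to risk rather than to total output.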