OpenAI, the artificial intelligence company backed by Microsoft, is facing an investigation by US regulators over the risks to consumers posed by its ChatGPT model generating false information. The Federal Trade Commission (FTC) sent a letter to OpenAI requesting information on how the company addresses risks to individuals’ reputations. The inquiry underscores the growing regulatory scrutiny of AI technology.
ChatGPT is known for generating human-like responses to user queries, revolutionizing the way people obtain information online. Competitors are rushing to develop their own AI products, leading to debates concerning data usage, response accuracy, and potential copyright violations during the training process.
The FTC’s letter specifically focuses on OpenAI’s efforts to address the possibility of the AI model generating false, misleading, disparaging, or harmful statements about real individuals. The commission is also examining OpenAI’s data privacy practices and data acquisition methods used to train and inform the AI.
OpenAI’s CEO, Sam Altman, said the company would cooperate with the FTC. Altman said OpenAI had spent years on safety research and months making ChatGPT safer and more aligned before its release. He emphasized that the company protects user privacy and designs its systems to learn about the world rather than about private individuals.
Altman had previously called for regulation and for a new agency to oversee AI safety during his appearance before Congress. He acknowledged that the technology can make mistakes and that safeguards are needed to prevent harmful outcomes, while stressing the significant impact AI will have across sectors, including on jobs.
The FTC’s investigation, which is still in its preliminary stage, was reported by The Washington Post, which also published a copy of the letter. OpenAI and the FTC declined to provide additional comments.
Lina Khan, the current chair of the FTC, has been active in policing large tech companies and has faced criticism for pushing the agency beyond its traditional boundaries. During a recent congressional hearing, Khan raised concerns about ChatGPT, citing reports that sensitive information had surfaced in response to other users’ queries, as well as instances of libel and defamatory statements.
OpenAI has already encountered challenges on these issues: Italy banned ChatGPT in April over privacy concerns. The service was reinstated after OpenAI added age verification tools and provided additional information about its privacy policy.