OpenAI, the Microsoft-backed company behind the wildly popular ChatGPT, announced on Monday the creation of an independent Safety and Security Committee to oversee the development and deployment of its artificial intelligence models. The move follows recommendations the committee itself made to OpenAI’s board, which have now been made public for the first time.
Formed in May 2024, the Safety and Security Committee was established to assess and strengthen OpenAI’s existing safety protocols. The move comes in response to growing concerns over the ethical use of AI and its potential biases, spurred by the global attention ChatGPT has garnered since its launch in late 2022.
The committee will be chaired by Zico Kolter, professor and director of the machine learning department at Carnegie Mellon University, who also serves on OpenAI’s board. Kolter and his colleagues aim to ensure that the company’s AI models adhere to strict safety standards.
OpenAI stated, “We are pursuing expanded internal information segmentation, and additional staffing to deepen our around-the-clock security operations teams.” The company also highlighted plans to improve transparency about the capabilities and risks associated with its AI models.
One of the key recommendations from the committee includes the potential creation of an Information Sharing and Analysis Center (ISAC) for the AI industry. This center would facilitate the exchange of threat intelligence and cybersecurity information among AI companies, contributing to industry-wide efforts to manage risks and challenges.
In a further demonstration of its commitment to safety, OpenAI recently signed an agreement with the U.S. government for the research, testing, and evaluation of its AI models. This partnership aims to ensure that AI technologies are developed responsibly, aligning with the broader push for secure and ethical AI deployment.
The establishment of the Safety and Security Committee marks an important step in addressing the complexities and risks associated with AI, as OpenAI continues to lead in the development of innovative technologies while prioritizing safety and transparency.

