From the course: Next Generation AI: An Intro to GPT-3

The OpenAI charter

- The OpenAI organization is distinguished not just by its track record of releasing groundbreaking AI solutions, but by its mission to support the friendly use of AI technology. With many concerned about the possible nefarious implications of future AI, OpenAI's mission is considered a refreshing and enlightened organizational approach to an important topic and technology domain. The question of how we approach AI is not unique to OpenAI. Machine ethics, which includes the ethics of AI, is a field dedicated to the subject, with participants that include universities, philosophers, corporations, and others. Governments are involved, too. The topic of AI regulation and legislation is now part of the discourse. In the United Arab Emirates, for example, there's a Minister of AI whose purview includes the ethics of AI. Machine ethics covers complex areas such as bias in AI, weaponization, morality, robot autonomy, and liability. Each of these is beyond the scope of this course but, depending on your interest, may warrant further research, including delving into the excellent AI courses in the LinkedIn Learning Library. Now, let's turn our attention to OpenAI and machine ethics. In 2019, in an attempt to capture the principles by which they execute their mission of enabling AI for the benefit of humanity, OpenAI published a charter. The document is a codified expression of their strategy and was developed with feedback from inside and outside the organization. It's an important artifact that goes beyond communicating their values. It can provide inspiration and guidance to others who are looking to adopt similar approaches to AI. While I encourage everyone to read the charter, and I include a link on the screen here for you, here is a brief overview of the four main principles. First, the benefits of artificial general intelligence, or AGI, should be enjoyed by everyone. The focus should remain on the advantages to all of humanity. Second, AGI research and development should be safe. In addition, and this is particularly notable, if a competing AGI project emerges that could incentivize speed to market at the cost of safety, OpenAI will not compete and will instead partner with and assist the competitor. Third, OpenAI will strive to be a leader in both AI and AGI, as knowledge of the cutting-edge capabilities of these technologies is necessary in order to mitigate the risks and amplify the benefits. Finally, they are committed to broad collaboration with research and policy institutions across the world. Central to this is sharing as much of their research as is reasonable. They want to participate in building a strong AGI community across the planet to ensure that AGI challenges are addressed. I'm sure you'll agree that these are admirable and lofty values. However, the degree to which they can be met over the long term is unknown at this time. In my view, there are two key ideas to take away from OpenAI's charter. Number one, we should recognize the novel and unique nature of this organization and its mission. It was born from, and lives by, the conviction that AGI must be created safely and for the good of humanity. And two, these principles can be leveraged to develop an AI or AGI charter for any organization that believes in protecting humanity and amplifying the benefits of this remarkable technology for all. I hope it inspires you and your organization as well.