My worst fears are that we cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening, but we try to be very clear-eyed about what the downside case is and the work that we have to do to mitigate that. For several months now, the public has been fascinated with GPT, DALL-E, and other AI tools. These examples, like the homework done by ChatGPT or the articles and op-eds that it can write, feel like novelties, but the underlying advancements of this era are more than just research experiments. They are no longer fantasies of science fiction.
They are real and present. The promises of curing cancer, or developing new understandings of physics and biology, or modeling climate and weather are all very encouraging and hopeful, but we also know the potential harms, and we've seen them already: weaponized disinformation, housing discrimination, harassment of women, impersonation fraud, voice cloning, and deepfakes. These are the potential risks, despite the other rewards. For me, perhaps the biggest nightmare is the looming new industrial revolution: the displacement of millions of workers, the loss of huge numbers of jobs, and the need to prepare for this new industrial revolution with skill training. Mr. Altman, we're going to begin with you, if that's okay.
Thank you. Thank you, Chairman Blumenthal, Ranking Member Hawley, and members of the Judiciary Committee. Thank you for the opportunity to speak to you today about large neural networks. It's really an honor to be here, even more so in the moment than I expected. My name is Sam Altman. I'm the Chief Executive Officer of OpenAI.
OpenAI was founded on the belief that artificial intelligence has the potential to improve nearly every aspect of our lives, but also that it creates serious risks we have to work together to manage. We're here because people love this technology. We think it can be a printing press moment. We have to work together to make it so. OpenAI is an unusual company, and we set it up that way because AI is an unusual technology. We are governed by a nonprofit, and our activities are driven by our mission and our charter, which commit us to working to ensure the broad distribution of the benefits of AI and to maximizing the safety of AI systems.
We are working to build tools that one day could help us make new discoveries and address some of humanity's biggest challenges, like climate change and curing cancer. Our current systems aren't yet capable of doing these things, but it has been immensely gratifying to watch many people around the world get so much value from what these systems can already do today. We love seeing people use our tools to create, to learn, to be more productive. We're very optimistic that there are going to be fantastic jobs in the future, and that current jobs can get much better. We also love seeing what developers are doing to improve lives. For example, Be My Eyes used our new multimodal technology in GPT-4 to help visually impaired individuals navigate their environment.
We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work, and we make significant efforts to ensure that safety is built into our systems at all levels. Before releasing any new system, OpenAI conducts extensive testing, engages external experts for detailed reviews and independent audits, improves the model's behavior, and implements robust safety and monitoring systems. Before we released GPT-4, our latest model, we spent over six months conducting extensive evaluations, external red teaming, and dangerous capability testing. We are proud of the progress that we made. GPT-4 is more likely to respond helpfully and truthfully, and to refuse harmful requests, than any other widely deployed model of similar capability. However, we think that regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models.
For example, the US government might consider a combination of licensing and testing requirements for development and release of AI models above a threshold of capabilities. There are several other areas I mentioned in my written testimony where I believe that companies like ours can partner with governments, including ensuring that the most powerful AI models adhere to a set of safety requirements, facilitating processes to develop and update safety measures, and examining opportunities for global coordination. And as you mentioned, I think it's important that companies have their own responsibility here, no matter what Congress does. This is a remarkable time to be working on artificial intelligence. But as this technology advances, we understand that people are anxious about how it could change the way we live. We are too.
But we believe that we can and must work together to identify and manage the potential downsides so that we can all enjoy the tremendous upsides. It is essential that powerful AI is developed with democratic values in mind, and this means that US leadership is critical. I believe that we will be able to mitigate the risks in front of us and really capitalize on this technology's potential to grow the US economy and the world's. And I look forward to working with you all to meet this moment. I look forward to answering your questions.
Thank you. Should we consider independent testing labs to provide scorecards and nutrition labels, or the equivalent of nutrition labels: packaging that indicates to people whether or not the content can be trusted, what the ingredients are, and what the garbage going in may be, because it could result in garbage going out?