We live in extremely exciting times, in case you haven't noticed. And why is that? Well, if you look back for thousands, hundreds of thousands of years, humans have been evolving very, very slowly. Every now and then, there is a good mutation. But essentially, we're pretty much the same. Machines, on the other hand, are following an exponential curve.
And we haven't really noticed it for a long time, but now we're starting to see it. So what's happening with machines, and in particular machine intelligence, is that it's coming to a point where, for many tasks, if you have enough resources and a well enough defined job to do, machines will outperform humans. So we're getting closer and closer to this magic point where they will actually supersede us, in many cases. And essentially, what we've done is we've built an amplifier of ourselves. Anything we do, we can now get machine assistants to do even better. I've been excited about this for a very long time.
In fact, 10 years ago, I gave a talk, a TEDx talk, about the opportunities that were lying ahead: using the internet as a knowledge source, and AI and machine learning as a mechanism to change how humanity perceives, understands, and plans for the future. I'm still an optimist, but in all seriousness, if you look at the world right now, it's definitely taking a turn for the worse. So the new world order is one where we, on a daily basis, get news about war, terror, climate-based natural disasters, pandemics. On the cyber side, we're seeing attacks against infrastructure, we're seeing industries being shut down, financial crises caused by that. And maybe worst of all, we're seeing attacks on our brains.
Influence operations, where malicious actors are attacking us, changing the storytelling, trying to attack our democracies by injecting false news into our systems. And if you look at it, the war in Ukraine highlighted that in a whole different way. All of a sudden, we saw things like cyberattacks hitting entire nations' infrastructure. We saw new kinds of influence operations, even suggesting that Astrid Lindgren was a Nazi. And of course, we saw all the horrors of kinetic war. So this seems bound to keep going on. Or does it?
Can we do anything about it? Or is it even getting worse? Well, unfortunately, AI, as I just mentioned, is a great amplifier. But it amplifies not only the good things we do, it amplifies the bad things malicious actors can do as well. So I think we've come to a point now where we have to start thinking about how we can use AI as a tool against evil. We're entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things.
So, for instance, they could have me say things like, I don't know, "Killmonger was right," or "Ben Carson is in the sunken place," or how about this one, simply: "President Trump is a total and complete dipshit." So, of course, this was not the real Barack Obama. This was a combination of AI-generated deepfake video and a good voice actor, delivering a message which many could perceive as true, but it's not. This is one example of how AI is being used by the bad guys. Another one, as you've probably all heard about, is ransomware and all kinds of cyberattacks. And these are becoming not only commonly used, they're being commoditized.
You can now go onto a dark website and buy a license to something called WormGPT, which is sort of ChatGPT's evil cousin: a tool built with the same technology, but used to produce fake business emails luring people into sending money to the criminals, and things like that.