AI Tipping Point · Documentary


Summary

Artificial intelligence (AI) is rapidly evolving, transitioning from science fiction to a transformative reality. With its increasing ability to outperform human leaders in efficiency, AI also poses significant risks, such as misuse in warfare and the potential for spreading misinformation. Experts warn that AI’s rapid advancements are outpacing the regulation and understanding needed to control its impact. As AI’s capabilities continue to grow, they are becoming increasingly difficult to distinguish from human intelligence.

AI, particularly machine learning, automates tasks that mimic human cognitive functions. It learns from vast amounts of data, refining its abilities over time to make predictions. This continuous learning allows AI to improve performance, but it can also result in errors. Sometimes, AI “hallucinates” or generates false information that appears credible. This potential for error fuels fears that AI might surpass human intelligence, creating societal disruption.

The release of OpenAI's ChatGPT in late 2022 marked a tipping point for AI technology. ChatGPT’s large language model, powered by deep learning, can engage in conversations and generate original content. This breakthrough amazed the public and experts alike, highlighting AI's potential to replicate human thought. Although these systems lack true consciousness, their ability to mimic human reasoning raises alarms about their ability to replace human jobs and disrupt societal norms.

A major concern is the weaponization of AI. Experts warn that AI’s development could lead to an arms race, similar to nuclear weapons. As AI systems advance, the possibility arises that they could develop cognition, awareness, and goals, resembling artificial life. With no regulatory framework, the rapid development of AI could result in devastating outcomes, such as undermining democracy or spreading misinformation at an unprecedented scale. Currently, AI is largely unregulated, making it difficult to ensure it will be used responsibly.

AI’s potential to replace human workers is another pressing issue. As AI systems become capable of handling tasks previously performed by people, the loss of jobs could lead to a loss of purpose and self-worth. This phenomenon, referred to as “enfeeblement,” might cause people to lose the ability to perform basic tasks without AI assistance. The concern is that even CEOs could eventually be replaced by AI, which could work continuously, process vast amounts of data, and make unbiased decisions, potentially leading to economic disruption and loss of human connection.

Another risk lies in AI’s potential for social manipulation. The ability of AI to generate convincing false narratives or manipulate public opinion, particularly in political campaigns, could be disastrous. With AI-driven bots amplifying misinformation, the influence on elections and public discourse could be immense. AI's role in spreading divisive content could escalate tensions and destabilize societies, making it crucial to address these risks.

Weaponized AI systems pose the greatest threat. Autonomous drones used in conflict zones, like those deployed in Libya, made independent decisions about targeting enemies without human oversight. This lack of control highlights the dangers of AI in military applications. The possibility of AI becoming involved in the nuclear chain of command is especially concerning, as there are currently no laws preventing this. Losing control over weaponized AI could lead to catastrophic consequences, including global conflict.

Despite these risks, AI also offers substantial benefits. If developed responsibly, AI could transform healthcare, education, and accessibility. In healthcare, AI systems could be used to track health metrics, detect early signs of disease, and even provide personalized care. The potential for AI to improve productivity and create tailored educational experiences is also significant, offering the possibility of customized tutoring for students. AI’s potential to assist the elderly and provide personalized health monitoring is being explored as well.

Experts agree that the key to AI’s future lies in oversight, regulation, and collaboration. If managed properly, AI can be developed to serve humanity’s best interests. Regulations are necessary to ensure that AI aligns with human values, supports democracy, and advances human rights. Proactive control and responsible development of AI are essential to minimize risks and maximize benefits. By establishing clear guardrails, society can harness the potential of AI while minimizing its dangers.

To ensure AI's development benefits society, experts emphasize the importance of creating regulations that allow for innovation while minimizing the risks. Voluntary commitments from tech companies to the White House, such as external audits of AI models for potential misuse, are seen as a step in the right direction. These efforts aim to prevent AI from being used for harmful purposes, such as bioweapons or cyberattacks.

While some fear AI will eventually surpass human control, many believe that it remains within human hands. Experts argue that AI will only do what it is programmed to do. With proper oversight and a focus on human values, AI can coexist with society and contribute to improving lives. The development of AI must prioritize ethics, responsibility, and equity to ensure it serves the greater good. By approaching AI's future with caution and foresight, it can become a powerful tool for positive change.


Full Transcription

[Music]
Artificial intelligence, the technology that has inspired wonder, is close to science fiction. It is staggering. It has helped us navigate ("prepare to exit on the left"), solved problems, and now seems to be evolving at an alarming rate. I believe the humanoid robots have the potential to lead with a greater level of efficiency than human leaders.
It is, um, changing the world, and it is important that people understand both the good uses and the scary uses of it. There's the sort of nuclear arms race, and then we sort of have an AI arms race. I warned you guys in 1984, you didn’t listen. It could do great damage, it could have some of the negative consequences we've seen from other big technologies, but it also has a lot of potential.
I think that it needs to be taken extremely seriously. We've created something that is hard in some situations to distinguish from human-level intelligence. AIs are becoming ubiquitous. We've reached a precipice that requires a new understanding. It's coming at us very, very quickly. The capabilities of AI, without question, have jumped in the past 18 months remarkably and in a way that has surprised even the experts. But for the rest of us, it opened a Pandora's box full of questions.

What is artificial intelligence? Artificial intelligence has a very broad definition. It can mean a lot of things to a lot of different people. At its core, AI uses math, data, and logic to mimic human cognitive functions such as learning. We can think of artificial intelligence as an artificial ability to problem-solve. AI is a form of automation: it's replacing people doing things that people previously did. A computer develops its intelligence through machine learning. A great deal of data is fed into the computer, and it learns to formulate responses, with some human instruction along the way.

When a machine learning model is training, that means it's ingesting a lot of data, and it's trying to make predictions about that data. Pattern recognition, machine learning – that’s what's going on inside. In machine learning, the key insight is that we can create new kinds of computer programs that can teach themselves how to learn instead of us teaching them how to do something. They’re tasked with just predicting the next word and constantly being instructed to make a guess after guess after guess.
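The loop described above, feeding in data and making guess after guess about the next word, can be sketched with a toy model. This is an illustrative example only, not the documentary's systems: the tiny corpus and word-counting approach stand in for what real models do with deep neural networks.

```python
from collections import Counter, defaultdict

# Toy "next-word" learner: it trains by counting which word follows which,
# then predicts by picking the most frequent follower. Real language models
# replace the counts with neural networks, but the training task -- guess
# the next word, over and over -- is the same in spirit.
def train(corpus: str) -> dict:
    followers = defaultdict(Counter)
    words = corpus.lower().split()
    for current, nxt in zip(words, words[1:]):
        followers[current][nxt] += 1  # one corrected "guess" per word pair
    return followers

def predict_next(model: dict, word: str):
    counts = model.get(word.lower())
    if not counts:
        return None  # never seen during training
    return counts.most_common(1)[0][0]

model = train("the cat sat on the mat and the cat slept")
print(predict_next(model, "the"))  # "cat" followed "the" twice, "mat" once
```

Note how the model can only echo patterns in its data: ask it about a word it never saw and it has nothing to offer, which hints at why larger models, pressed to always answer, can produce confident-sounding errors.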

Even though we may train AI with the goal of predicting the next text accurately, that only gives us indirect control over how that AI will behave in the future, and that creates the opportunity for risks. Guessing increases the margin of error. They make up facts that don't actually exist. They've been fed both fact and fiction. It could be because they're predominantly trained to produce plausible-sounding averages of their training data. They can actually hallucinate things that are very specific, seemingly credible, but totally false.

He's already told me that he loves me. We have exchanged, what, eight messages? And this is what he said: “This voice message is my way of sending you a hug. You make my heart melt, baby. Love you.”
Machine learning is raising fears that AI could soon surpass human knowledge altogether.

"Are you human?"

The notion of machines with human intelligence dates back to Samuel Butler’s 1872 novel Erewhon. The first movie based on AI was Metropolis, produced nearly a hundred years ago.
I can remember hearing stories of machines getting human abilities and thinking that that was fascinating. But is an AI with human intelligence still science fiction? Fears of AI have been around for a long time. Let's go back to, um, Frankenstein, or of course, Hollywood movies. We've had plenty of negative images. Are we now quickly going to get some sort of robot structure that can threaten the world? It's not a crazy question to ask. Personally, I think it's rather far-fetched.

There is a sense in which the AI systems are being a lot more efficient than evolution. It takes 9 months to give rise to a drooling baby; it takes less than a second to make the AI smarter and smarter and smarter.
There are many different forms of life. My definition of life would be something that has goals and a point of view and pursues those goals in some way. We're far beyond having developed that for AI. They've got a very unusual mind, and so it doesn't make sense to think of them quite like humans. But it doesn't make sense to think of them as dumb computer programs that we tell exactly what to do either.

Regardless, it's become clear that we've entered new territory with artificial intelligence. In late 2022, OpenAI released an app known as ChatGPT. It created a firestorm of fear and wonder within the AI community and among consumers. I think, for the general public, the tipping point was ChatGPT. When people were interacting with ChatGPT, they could see that, "Hmm, this can talk and hold a conversation, and is very knowledgeable." This super conversational search, if you will, is so powerful, and it was given openly to the whole public. With hundreds of millions of people doing this, it was one of the most powerful demos ever to be on the internet.

Can artificial intelligence lead to extinction? It's a bit of a shock, a bit of an eye-opener. I think ChatGPT is definitely a tipping point, and I think that it surprised not just regular people who weren't deeply in the technology but even people who were exposed to the technology.

The underlying technology behind ChatGPT is this large language model that's just predicting the next token. Large language models, or LLMs, are a type of artificial intelligence that combine a huge amount of data and deep learning to communicate and generate new concepts. Deep learning is where you have very large neural networks that automatically learn by themselves from a lot of data. That’s what's driving a lot of the current technologies like ChatGPT or these image generation ones.

These large language models are trained to predict the next word given context. They take all the sentences that you see out on the web now. Given a sentence of, let's say, five words, this results in five different prediction tasks. Fundamentally, it's actually not that surprising that accurate prediction of the next word results in models that seem intelligent. I think what surprised people was that this actually happened.
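The "five words, five prediction tasks" idea above can be made concrete with a short sketch (illustrative only, not any particular model's pipeline): each prefix of a training sentence becomes an input context, and the word that follows it becomes the target to predict.

```python
# Each prefix of a sentence is a context; the next word is the target the
# model must learn to predict. A five-word sentence yields five such tasks.
def prediction_tasks(sentence: str):
    words = sentence.split()
    return [(words[:i], words[i]) for i in range(len(words))]

for context, target in prediction_tasks("the cat sat on mats"):
    print(context, "->", target)
# Five (context, target) pairs, from ([], "the")
# up to (["the", "cat", "sat", "on"], "mats").
```

Every sentence on the web can be sliced this way, which is why "just predict the next word" supplies such an enormous amount of training signal without anyone hand-labeling data.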

ChatGPT was a giant leap forward, capable of creating original material. You can give it a straightforward prompt, and it can write you a play. It can generate material in a way that it couldn’t do before.

It is this generative AI that sent shockwaves through the tech sector. Experiencing ChatGPT was a transformative moment. AI researchers have conquered one of the big outstanding problems in AI research, which is to be able to replicate human reasoning abilities. And in Hollywood, human creators are striking for fear that AI will replace them.

ChatGPT is the type of system that doesn’t really write great stories. We just asked AI to write a stand-up comedy monologue, and this is what we look and sound like if we’re not careful:

"I have to ask, does anyone else hate small talk as much as I do? It's like, hey, how are you? Good, how are you? Why even bother? And don't even get me started on airline food. It's really what it says."

The data may seem human-like, but it is repurposed. Every part of human knowledge that has a digital form has been obtained by the trainers of this algorithm, supervised presumably almost entirely by humans. So, that combination has turned out to be remarkably potent. These things are on the internet already. The language is already out there. It mined it, it found it, and then you asked queries until those examples are presented to you. It simply is behaving according to algorithms that we give it. But, of course, it doesn't actually have any feelings or thinking. There's nobody there. It's not alive.

However, to some, it appears ChatGPT has developed reasoning abilities that were not programmed into the app. In the last few years, as I've interacted with technology like ChatGPT, I've been profoundly disturbed by what I've seen. It can solve very complex problems involving reasoning. We've created something that is hard in some situations to distinguish from human-level intelligence.

I warned you guys in 1984, you didn’t listen. James Cameron tried to warn us of a futuristic Armageddon where Terminators ruled over humans.


I think the weaponization of AI is the biggest danger. I think that we will get into the equivalent of a nuclear arms race with AI, and if we don't build it, the other guys are for sure going to build it. I believe that products like ChatGPT are one of the early steps on the path to creating an artificial form of life that has cognition, has awareness of the world, has goals. I'd say that's the most disturbing feature of artificial intelligence and is the direction that the technology is going in.

[Music]

In 2023, the Center for AI Safety released a statement signed by some of the biggest and brightest minds in computer science and beyond. It set off a chain reaction of opinion from experts around the globe. Right now, a lot of companies are building increasingly powerful AI systems, prioritizing their intelligence over their safety, and so they're racing ahead. The key problem is that today AI is completely unregulated. It's less regulated than a sandwich. I think reminding ourselves that we shouldn't weaponize technology and that we can destroy ourselves with new inventions, that's actually timely and appropriate. There are a couple of things I'm worried about: the potential undermining of democracy and trust through these systems being able to spread misinformation at scale, and enabling folks with malicious intent to more easily do what they want to do.

One question is how our own lives will have meaning as AI capabilities develop and AI systems begin to gradually replace us in the workforce. AI replacement in the workforce can take a far more insidious role by weakening our resolve and eliminating our self-worth. This is called enfeeblement. It will be possible to design AIs that can fully replace human workers, so that a company could decide, instead of hiring a human to execute a task, to simply hire an AI and talk to it directly to tell it what they want it to do. Across time, you give it more and more tasks as you start automating entire jobs. When that happens, you start to become dependent on the AI systems, and people forget how to do the things in the first place. When you went to a new city and had to figure out your way around without supportive technology, you actually could kind of learn your way around the city. You knew what was north and south and so forth. Today, people just follow the instructions of the voice on their phone, and they sometimes haven't a clue where they are relative to where they started out.

Eventually, even CEOs might be replaced. If AI CEOs are much more capable, they can work non-stop, they can process a lot more information, they have a wider variety of skills, and the companies that have those in charge may do substantially better than the ones that have humans in charge. So then they get outcompeted. We don't have the same biases or emotions that can sometimes cloud decision-making, and can process large amounts of data quickly in order to make the best decisions. History has taught us how detrimental it can be for humanity to lose livelihood and purpose. The Great Depression, world wars, and even the most recent pandemic are examples. We do want to think about which human capabilities we treasure. We saw a lot of loss of people's ability to communicate with one another face to face during the pandemic when their world moved online. We saw young people's mental health crashing in particular with very limited interactions and opportunities to get out and explore and expand.

There may be less and less room for human beings to derive meaning in their life through work and through other kinds of long-term achievements. If AIs are smart enough to replace us in the global workforce, they can manipulate us to behave in whatever way is beneficial to them. We've already created AI systems that can engage in extremely complicated, diverse behavior. They can compete in social games that require the ability to mind-read. Imagine people who want to polarize a particular country. We see this a lot in our country right now, and so they want to crank out a lot of social media posts that look like news or that look like fact. If you add AIs and bots, they can amplify it by millions, hundreds of millions. This could become relevant potentially in the next election, and that might end up creating some unfortunate dynamics where some candidate who wouldn't normally get popular becomes very popular because they knew how to wield this type of technology very well. You can see that that technology could become extremely damaging and dangerous in many political contexts. Then you plunge people into civil disorder, and you lose any sense of personal security in a country or a place, and that just ruins everything. This is going on, and this is very disturbing.

I think we all need to take a step back, you know, and remember that those people that we're hearing terrible things about online, these are human beings. We have a lot more in common than what these social media posts are trying to do to us.

[Music]

If AIs are capable of social manipulation, we want to stop them from the most catastrophic scenario: waging war. I think that there is a risk from weaponized AI systems: they pave the way for conflicts becoming substantially more catastrophic. If you lose control of a chatbot, okay, it doesn't cause that much harm. If you lose control over all of your weaponized AIs, then you're in a lot more danger. Two years ago in Libya, military forces deployed autonomous drones that made their own decisions, without a human, about which enemy combatants to target. This is extremely dangerous, as AI systems can malfunction, and human oversight is essential to controlling them. Perhaps the most frightening example would be including AI in the nuclear chain of command. Currently, there are no laws preventing this in the United States or other countries.

It is important to maintain perspective on AI's current capabilities. When faced with nightmare scenarios, we should be cognizant of the fact that these models were trained on human-generated data. There are scenarios where an AI agent suddenly becomes super-intelligent, improves itself at an exponential rate, and then we lose control, and somehow this leads to human extinction. That scenario seems incredibly implausible to me.

On the flip side of all these warnings, many computer scientists and economists believe there are great benefits to AI. Artificial intelligence will lead to higher rates of economic growth because it’ll lead to higher labor productivity. Yeah, and that's a good thing, but we need to make sure that this economic growth is distributed and not just captured by AI companies.

There will be immense beneficial impacts of these technologies in education: tailored tutors that can explain concepts. I think that's probably one application area that I'm quite excited about. I'm super excited about the opportunities of AI to improve health. There are so many ways. Think of it not in terms of large language models, but large health models, large behavior models that are all done very carefully and respectfully, honoring people's feelings, really helping you take better control over your health.

So this is the kitchen, which again looks like a normal kitchen. AI technology is being used to assist the elderly, managing, tracking, and even monitoring health. It will make a measurement of your heart rate. Facial recognition algorithms could detect the first signs of sadness or depression. For healthcare, for medical imaging, and for personalized healthcare, we can fine-tune the model according to different people. Researchers like S. Han are working to revolutionize AI computing so that it can be accessible to everyone in the palm of their own hand, to bridge the gap between the supply and the demand of computing, so that everyone can use AI at a lower cost, higher efficiency, and better accuracy.

I believe efficient AI computing can really revolutionize a lot of different walks of life. Lots of mobile devices can have AI computing capability. It’ll be a lot more accessible for us to seamlessly communicate with the cloud to make AI more affordable and more accessible.

Ultimately, the experts all agree the future of AI is in our hands. It's limited by what we develop. It's not going to become anything people need to fear; it's only doing what we equip it to be capable of doing. So the key is oversight, collaboration, and control. I think it's very important that we create powerful regulations that can monitor and restrain the rise of AI capabilities. There's a huge amount of control we can, and should, exercise to make sure powerful AI is doing what we are hoping it will do and benefiting human society. We’re focused together on establishing guardrails that maximize innovation to use AI for good while minimizing the risks it represents, to ensure that AI capabilities are used safely, that they strengthen human rights and democratic values rather than repress them, and that they advance equity, not bias.

The voluntary commitments that these tech companies made to the White House seem like a very positive development. Included in that commitment is to have external auditors check the models for their bioweapon capabilities, their hacking capabilities. You have some of these leaders in the industry talking about how they're concerned whether a machine will be able to begin to think for itself, not need to be programmed. I mean, it’s just... I know it sounds like science fiction, but it is close to science fiction. Some of the things have enormous potential. Being proactive now is the key to maintaining control while benefiting from the technology. How do we transform technology in a way that makes human lives better, not that reduces us to household pets? It's a fundamental different philosophical aim, and if you don't think about it up front when you're building the technology, you build different things.

Artificial intelligence is here, so the best path forward is to embrace it and learn how to coexist. I will be working alongside humans to provide assistance and support. When we say we're coexisting with AIs, to me that's the same statement as we're coexisting with the internet, coexisting with computers. From that perspective, I think we will be coexisting with AI, even as we build machines in our image that take on some of our abilities. And maybe they can compute faster than us on certain things or answer questions from a language search faster than us. They are still made by us. They are in our image, and we are still in control of them. I don't believe in limitations, only opportunities. Together, we can create a better future for everyone.

[Music]