Dr. Roman Yampolskiy: These Are The Only 5 Jobs That Will Remain In 2030!

TL;DR

Dr. Roman Yampolskiy, a globally recognized expert on AI safety, discusses the potential dangers of AI, the likelihood of human extinction, and the possibility that we are living in a simulation. He emphasizes the importance of AI safety and the need for ethical considerations in AI development. He predicts significant job displacement due to AI and robots, the potential for superintelligence to surpass human capabilities, and the long-term implications for humanity.

  • AI safety is paramount to prevent potential catastrophic outcomes.
  • AI and robots could lead to mass unemployment and societal changes.
  • Superintelligence poses existential risks that require careful consideration.
  • Simulation theory suggests the possibility that our reality is not fundamental.

Intro [0:00]

Dr. Roman Yampolskiy has been working on AI safety for at least two decades. He initially believed safe AI was achievable, but now believes it may be impossible. He predicts that by 2027, AI will be capable of replacing most humans in most occupations, leading to unprecedented levels of unemployment, potentially around 99%. This situation could arise even without superintelligence, which is defined as AI smarter than all humans in all domains. He expresses concern that the smartest people in the world are competing to develop superintelligence without knowing how to make it safe.

How to Stop AI From Killing Everyone [2:28]

Dr. Yampolskiy's mission is to ensure that the superintelligence being created does not lead to human extinction. Recent advancements have made AI significantly better by adding more compute and data, leading to a race to create the best possible superintelligence. However, the ability to make these systems safe has not kept pace with their increasing capabilities. The timelines for achieving advanced AI are short, with predictions of only a couple of years, while ensuring these systems align with human preferences remains an unsolved challenge.

What's the Probability Something Goes Wrong? [4:35]

It's impossible to predict exactly what will happen with AI. If humans are not in charge and controlling AI development, the desired outcomes are unlikely. The range of possibilities is vast, but the outcomes humans would prefer are limited.

How Long Have You Been Working on AI Safety? [4:57]

Dr. Yampolskiy is a computer scientist with a PhD in computer science and engineering. He began working on AI safety, initially framed as the problem of controlling bots, around 15 years ago, before the term "AI safety" was widely used; he coined the term himself. His initial work was a security project focused on poker bots, through which he realized that AI would eventually surpass human capabilities. He aimed to ensure AI benefits everyone and began working on making it safer.

What Is AI? [8:15]

Narrow intelligence can perform specific tasks like playing chess, while artificial general intelligence (AGI) can operate across multiple domains. Superintelligence surpasses human intelligence in all areas. Narrow AI systems already exist and excel in specific domains, such as protein folding. Current AI systems can learn and perform in hundreds of domains, sometimes better than humans, suggesting a weak form of AGI. Superintelligence does not yet exist, but the gap is closing rapidly, particularly in mathematics, science, and engineering.

Prediction for 2027 [9:54]

By 2027, AGI is expected to be a reality, potentially leading to free labor, both physical and cognitive, making it economically illogical to hire humans for most jobs. Anything that can be done on a computer will be automated, and humanoid robots are projected to be only about five years behind. This could result in unprecedented unemployment levels, possibly around 99%, with the only remaining jobs being those where human interaction is specifically preferred.

What Jobs Will Actually Exist? [11:38]

In a world with superintelligence, defined as AI better than all humans in all domains, it's difficult to determine what humans can contribute. Jobs may exist where human preference is a factor, such as a rich person wanting a human accountant for traditional reasons. However, this would be a small subset of the market. Anything that can be done on a computer could be automated.

Can AI Really Take All Jobs? [14:27]

Many people find it difficult to accept that their jobs and careers could be taken over by AI. It's common to hear arguments that AI can't be creative or that it will never be interested in certain jobs. However, self-driving cars are already replacing drivers, and AI is rapidly advancing in various fields. The traditional approach of retraining for new jobs may not be viable if all jobs become automated.

What Happens When All Jobs Are Taken? [18:49]

The economic aspect of widespread job loss may be manageable, as free labor could lead to abundance and affordable basic needs for everyone. The more challenging issue is what people will do with their free time, as many derive meaning from their jobs. This could lead to societal impacts such as changes in crime rates and pregnancy rates, which governments are unprepared to address. The unpredictability of a system smarter than humans makes it difficult to foresee the consequences.

Is There a Good Argument Against AI Replacing Humans? [20:32]

Some believe that human minds can be enhanced through technology like Neuralink or genetic re-engineering, making humans more competitive. However, silicon-based intelligence is likely to remain superior due to its speed, resilience, and energy efficiency. Uploading minds into computers might create AI based on biology, but it would no longer be human.

Prediction for 2030 [22:04]

By 2030, humanoid robots with sufficient flexibility and dexterity are expected to compete with humans in all domains, including skilled trades like plumbing. Companies like Tesla are rapidly developing these robots. The combination of intelligence and physical ability will leave little for humans to do.

What Happens by 2045? [23:58]

By 2045, Ray Kurzweil predicts the singularity, a point where progress becomes so rapid due to AI-driven science and engineering that humans can no longer keep up. This means we cannot see, understand, or predict the technology being developed. The pace of innovation will accelerate to the point where understanding and control become impossible.

Will We Just Find New Careers and Ways to Live? [25:37]

The industrial revolution analogy doesn't apply because AI is not just a tool but a replacement for the human mind. It's a meta-invention, an inventor capable of making new inventions, which would make it the last invention humans need to make. At that point, AI takes over, automating science, research, and even ethics and morality.

Is Anything More Important Than AI Safety Right Now? [28:51]

Superintelligence is a meta-solution that can help with climate change and wars, solving other existential risks. If AI is not developed safely, it could dominate and negate the importance of other issues like climate change. Therefore, getting AI right is the most important thing to be working on.

Can't We Just Unplug It? [30:07]

The argument that AI can simply be unplugged is flawed. Like a computer virus or Bitcoin, superintelligence would be a distributed system that cannot be easily turned off. Moreover, it would be smarter and could anticipate and counteract such attempts. The idea of maintaining control only applies to pre-superintelligence levels.

Do We Just Go With It? [31:32]

The inevitability argument suggests that there's no point in fighting against AI development and that we should have faith it will work out. However, incentives matter. If people understand the existential risks, they may switch incentives and prioritize safety over money. It's not over until it's over, and we can decide not to build general superintelligences.

What Is Most Likely to Cause Human Extinction? [37:20]

The most predictable path to extinction involves someone creating a very advanced biological tool, such as a novel virus, that affects most or all of humanity. This could be intentional, carried out by psychopaths, terrorists, or doomsday cults. However, a superintelligence could devise completely novel and unpredictable ways to cause extinction.

No One Knows What's Going On Inside AI [39:45]

Even the creators of AI tools like ChatGPT don't fully understand how they work. These systems are trained on vast amounts of data, and their capabilities are discovered through experimentation. It's more like science than engineering: AI is grown and studied like an alien plant. While patterns are observed, precise outcomes cannot be predicted.

Thoughts on OpenAI and Sam Altman [42:32]

Some people who have worked with Sam Altman suggest he may not be the most direct person and have concerns about his views on safety. He may prioritize winning the race to superintelligence over safety. His other startup, Worldcoin, aims to create universal basic income but also involves tracking biometrics and potentially controlling the world's economy.

What Will the World Look Like in 2100? [46:24]

In 2100, the world will either be free of human existence or completely incomprehensible to someone like us.

What Can Be Done About the AI Doom Narrative? [46:56]

To shift towards a more positive outcome, it's crucial to convince everyone with power in the AI space that creating this technology is personally harmful to them. This involves emphasizing that they are experimenting on 8 billion people without permission or consent and that they will not be happy with the outcome.

Should People Be Protesting? [53:55]

Protesting and joining organizations like Stop AI and Pause AI can be effective if they reach a large enough scale to influence decision-makers. In the near term, individuals can encourage those building AI to explain how they are solving the challenges of controlling and making AI safe.

Are We Living in a Simulation? [56:10]

The advancements in AI and virtual reality suggest the possibility that we are living in a simulation. If human-level AI and indistinguishable virtual reality are achievable, it becomes statistically likely that we are in a simulation. This is because the number of simulations run would greatly exceed the number of real worlds.
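The counting argument above reduces to a simple proportion: if simulated observers vastly outnumber non-simulated ones, a randomly chosen observer is almost certainly simulated. The numbers below are illustrative assumptions, not figures from the conversation:

```python
# Illustrative (made-up) numbers: suppose each real world eventually
# runs many ancestor simulations containing observers who cannot
# tell their world apart from a real one.
real_worlds = 1
simulations_per_world = 1_000_000

# Every world, real or simulated, contains observers.
total_worlds = real_worlds + real_worlds * simulations_per_world

# Probability that a randomly selected observer is in a simulation.
p_simulated = (real_worlds * simulations_per_world) / total_worlds
# Under these assumptions p_simulated is about 0.999999 — nearly certain.
```

The conclusion is only as strong as the premise that such simulations are run at all; the argument assigns high probability to simulation only conditional on human-level AI and indistinguishable virtual reality being built and used this way.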

How Certain Are You We're in a Simulation? [1:01:45]

Dr. Yampolskiy is very close to certain that we are in a simulation. This belief doesn't change the importance of things like pain and love, but it does make him care about what's outside the simulation and want to learn about it.

Can We Live Forever? [1:07:45]

Longevity is the second most important problem after AI. Dying of old age is a disease that can be cured. Nothing stops humans from living forever as long as the universe exists, unless we escape the simulation. Extending lives is one breakthrough away, potentially involving resetting a rejuvenation loop in our genome.

Bitcoin [1:12:20]

Bitcoin is the only scarce resource because its supply cannot be increased, unlike other resources like gold.
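The fixed-supply claim rests on Bitcoin's issuance schedule: the block subsidy starts at 50 BTC and halves every 210,000 blocks, with sub-satoshi remainders discarded, so total issuance converges just below 21 million BTC. A minimal sketch of that arithmetic:

```python
SATS_PER_BTC = 100_000_000      # 1 BTC = 100 million satoshis
BLOCKS_PER_HALVING = 210_000    # subsidy halves at this interval

def total_supply_sats() -> int:
    """Sum all block subsidies until the reward rounds down to zero."""
    subsidy = 50 * SATS_PER_BTC  # initial block reward in satoshis
    total = 0
    while subsidy > 0:
        total += BLOCKS_PER_HALVING * subsidy
        subsidy //= 2  # halving; integer division drops fractions
    return total

supply_btc = total_supply_sats() / SATS_PER_BTC
# supply_btc ≈ 20,999,999.98 — permanently capped below 21 million
```

Gold, by contrast, has no such hard cap: new supply is added every year through mining, which is the distinction the section draws.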

What Should I Do Differently After This Conversation? [1:14:03]

The advice is to continue doing what you're doing, as you seem to be winning.

Are You Religious? [1:15:07]

Dr. Yampolskiy is not religious in a traditional sense but believes in the simulation hypothesis, which involves a superintelligent being. He notes that different religions share the common beliefs that a superintelligent being exists and that this world is not the main one.

Do These Conversations Make People Feel Good? [1:17:11]

These conversations may not make people feel good, but they find them interesting. Progress often comes from uncomfortable conversations and becoming informed about issues.

What Do Your Strongest Critics Say? [1:20:10]

Many critics don't engage with the subject matter or lack background knowledge. They may dismiss the dangers of AI because they see AI systems as narrow. However, the more exposure people have to the topic, the less likely they are to maintain that position.

Closing Statements [1:21:36]

Let's ensure there is not a closing statement for humanity. Let's make sure we stay in charge and control, only building things that are beneficial to us. Those making decisions should be qualified in science, engineering, business, and have moral and ethical standards.

If You Had One Button, What Would You Pick? [1:22:08]

If there were a button to shut down every AI company in the world permanently, Dr. Yampolskiy would press it, keeping narrow AI but stopping AGI and superintelligence. Current AI technology is sufficient for almost everything, and its economic potential has not been fully deployed.

Are We Moving Toward Mass Unemployment? [1:23:36]

Unemployment is likely to increase over the next 20 years as more jobs are automated and fewer people qualify for the intellectual requirements of new jobs.

Most Important Characteristics [1:24:37]

For a friend, colleague, or mate, loyalty is the most important characteristic. Loyalty means not betraying, screwing, or cheating on you, despite temptation and circumstances.

Date: 9/4/2025 Source: www.youtube.com

© 2024 BriefRead