80,000 Hours
  • 224 videos
  • 2,781,526 total views
Christian Ruhl on why we're entering a new nuclear age — and how to reduce the risks
_Originally released March 2024._ In this episode, Luisa Rodriguez and Christian Ruhl discuss underrated best bets to avert civilisational collapse from global catastrophic risks - things like great power war, frontier military technologies, and nuclear winter.
They cover:
• How the geopolitical situation has changed in recent years into a “three-body problem” between the US, Russia, and China.
• How adding AI-enabled technologies into the mix makes things even more unstable and unpredictable.
• Why Christian recommends many philanthropists focus on “right-of-boom” interventions - those that mitigate the damage after a catastrophe - over traditional preventative measures.
• Concrete things policymakers should be considering to reduce the devastating effects of unthinkable tragedies.
• And on a more personal note, Christian’s experience of having a stutter.
In this episode:
• Luisa's intro [00:00:00]
• The three-body problem [00:04:11]
• Effect of AI [00:07:58]
• What we have going for us, and not [00:13:32]
• Right-of-boom interventions [00:17:50]
• De-escalating after accidental nuclear use [00:24:23]
• Civil defence and war termination [00:30:40]
• Mitigating nuclear winter [00:37:07]
• Planning for a postwar political environment [00:40:19]
• Experience of having a stutter [00:53:52]
• Christian’s archaeological excavation in Guatemala [01:09:51]
Learn more and find the full transcript on the 80,000 Hours website:
80000hours.org/after-hours-podcast/episodes/christian-ruhl-nuclear-catastrophic-risks-philanthropy/
----
_80k After Hours_ is a podcast by the team that brings you _The 80,000 Hours Podcast._ It features resources on how to do good with your career - and anything else we feel like releasing.
80 views

Videos

Highlights: Kevin Esvelt on cults that want to kill everyone and stealth vs wildfire pandemics
86 views · 4 hours ago
This is a selection of highlights from episode #164 of _The 80,000 Hours Podcast._ These aren't necessarily the most important, or even most entertaining parts of the interview - so if you enjoy this, we strongly recommend checking out the full episode: ua-cam.com/video/u9r3XviC6Jo/v-deo.html In this episode, host Luisa Rodriguez interviews Kevin Esvelt - a biologist at the MIT Media Lab and th...
DeepMind and trying to fairly hear out both AI doomers and doubters | Rohin Shah (2023)
855 views · 9 hours ago
_Originally released June 2023._ Can there be a more exciting and strange place to work today than a leading AI lab? Your CEO has said they’re worried your research could cause human extinction. The government is setting up meetings to discuss how this outcome can be avoided. Some of your colleagues think this is all overblown; others are more anxious still. Today’s guest - machine learning res...
Hannah Boettcher on the mental health challenges that come with trying to have a big impact (2023)
127 views · 12 hours ago
_Originally released July 2023._ Host Luisa Rodriguez and therapist Hannah Boettcher discuss various approaches to therapy, and how to use them in practice - focusing specifically on people trying to have a big social impact with their careers. They cover: • The effectiveness of therapy, and tips for finding a therapist • Moral demandingness • Motivation and burnout • Grappling with world probl...
The precipice and humanity's potential futures | Toby Ord (2020)
494 views · 16 hours ago
_Originally released March 2020._ This week, Oxford academic Toby Ord released his new book: _The Precipice: Existential Risk and the Future of Humanity_. It’s about how our long-term future could be better than almost anyone believes, but also how humanity’s recklessness is putting that future at grave risk - in Toby’s reckoning, a 1-in-6 chance of being extinguished this century. Toby is a fa...
Highlights: Rachel Glennerster on “market shaping” to help solve climate change, pandemics, and more
60 views · 19 hours ago
This is a selection of highlights from episode #189 of _The 80,000 Hours Podcast._ These aren't necessarily the most important, or even most entertaining parts of the interview - so if you enjoy this, we strongly recommend checking out the full episode: ua-cam.com/video/e1gwXnKcbsM/v-deo.html In this episode, host Luisa Rodriguez speaks to Rachel Glennerster - associate professor of economics a...
Why information security is critical to the safe development of AI systems | Nova DasSarma (2022)
468 views · 1 day ago
_Originally released June 2022._ Today’s guest, the computer scientist and polymath Nova DasSarma, works on computer and information security for the AI company Anthropic with the security team. One of her jobs is to stop hackers exfiltrating Anthropic’s incredibly expensive intellectual property, as recently happened to Nvidia. As she explains, given models’ small size, the need to store such ...
Climate change, societal collapse, & nuclear energy | Mark Lynas (2020)
177 views · 1 day ago
_Originally released August 2020._ A golf-ball sized lump of uranium can deliver more than enough power to cover all your lifetime energy use. To get the same energy from coal, you’d need 3,200 tonnes of the stuff - a mass equivalent to 800 adult elephants - which would go on to produce more than 11,000 tonnes of CO2. That’s about 11,000 tonnes more than the uranium. Many people aren’t comforta...
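The description's energy figures can be sanity-checked with a quick calculation. This is a minimal sketch; the per-elephant mass and the CO2-per-tonne-of-coal factor below are assumed values implied by the quoted numbers, not figures from the episode:

```python
# Sanity check of the coal-vs-uranium figures quoted above.
# Assumed (not from the episode): ~4 t per adult elephant,
# ~3.44 t of CO2 released per tonne of coal burned.
coal_tonnes = 3_200
elephant_mass_t = 4
co2_per_tonne_coal = 3.44

elephants = coal_tonnes / elephant_mass_t      # coal mass expressed in elephants
co2_tonnes = coal_tonnes * co2_per_tonne_coal  # lifetime CO2 from the coal route

print(f"{elephants:.0f} elephants")     # 800 elephants
print(f"{co2_tonnes:,.0f} t of CO2")    # 11,008 t of CO2, i.e. "more than 11,000 tonnes"
```

The numbers are internally consistent: 3,200 tonnes of coal is 800 four-tonne elephants, and a ~3.4 emission factor reproduces the "more than 11,000 tonnes of CO2" claim.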
The world is weird, our intuitions are dubious, and the US might be conscious | Eric Schwitzgebel
434 views · 1 day ago
In today’s episode, host Luisa Rodriguez speaks to Eric Schwitzgebel - professor of philosophy at UC Riverside - about some of the most bizarre and unintuitive claims from his recent book, _The Weirdness of the World._ They cover: • Why our intuitions seem so unreliable for answering fundamental questions about reality. • What the materialist view of consciousness is, and how it might imply som...
Highlights: Matt Clancy on whether science is good
90 views · 14 days ago
This is a selection of highlights from episode #188 of _The 80,000 Hours Podcast._ These aren't necessarily the most important, or even most entertaining parts of the interview - so if you enjoy this, we strongly recommend checking out the full episode: ua-cam.com/video/iS5E6Rn6F5I/v-deo.html In this episode, host Luisa Rodriguez speaks to Matt Clancy - who oversees Open Philanthropy’s Innovati...
What to do - if anything - about wild animal suffering | Persis Eskander (2019)
136 views · 14 days ago
_Originally released April 2019._ We tend to have a romanticised view of nature, but life in the wild includes a range of extremely negative experiences. Most animals are hunted by predators, constantly have to remain vigilant lest they be killed, and perhaps experience the terror of being eaten alive. Resource competition often leads to chronic hunger or starvation. Diseases and injuries are n...
Julian Hazell - Off the Clock with 80k #4
248 views · 14 days ago
Matt, Bella, and Cody sit down with Julian Hazell to discuss the UK recession, religion, higher education, and whether being an amateur swordfighter should give you the right to vote.
Accidentally teaching AI models to deceive us | Ajeya Cotra
1K views · 14 days ago
_Originally released May 2023._ Imagine you are an orphaned eight-year-old whose parents left you a $1 trillion company, and no trusted adult to serve as your guide to the world. You have to hire a smart adult to run that company, guide your life the way that a parent would, and administer your vast wealth. You have to hire that adult based on a work trial or interview you come up with. You don...
Andrés Jiménez Zorrilla on the Shrimp Welfare Project (2022)
69 views · 14 days ago
_Originally released September 2022._ Rob Wiblin interviews Andrés Jiménez Zorrilla about the Shrimp Welfare Project, which he cofounded in 2021. It’s the first project in the world focused on shrimp welfare specifically and now has six full-time staff. In this episode: • Rob's intro [00:00:00] • Andrés’ background [00:01:31] • History of shrimp welfare work [00:07:59] • What shrimp farming loo...
How “market shaping” could help solve pandemics, climate change, and more | Rachel Glennerster
203 views · 21 days ago
In today’s episode, host Luisa Rodriguez speaks to Rachel Glennerster - associate professor of economics at the University of Chicago and a pioneer in the field of development economics - about how her team’s new Market Shaping Accelerator aims to leverage market forces to drive innovations that can solve pressing world problems. They cover: • How market failures and misaligned incentives stifl...
Highlights: Zach Weinersmith on whether we can and should settle space
148 views · 21 days ago
The alignment problem | Brian Christian (2021)
1.7K views · 21 days ago
Highlights: Bob Fischer on comparing the welfare of humans, chickens, pigs, octopuses, bees, & more
123 views · 21 days ago
Is science net positive for humanity? | Matt Clancy
268 views · 28 days ago
Large language models, OpenAI, and striving to make the future go well | Richard Ngo
1K views · 28 days ago
Causation without correlation, money & happiness, hype vs value, and more | Spencer Greenberg
332 views · 1 month ago
Whether we can and should settle space | Zach Weinersmith
583 views · 1 month ago
OpenAI's leadership drama, red-teaming frontier models, and recent AI breakthroughs | Nathan Labenz
887 views · 1 month ago
Why the abolition of slavery wasn't inevitable | Christopher Brown
523 views · 1 month ago
Rendering bioweapons obsolete and ending the new nuclear arms race | Andy Weber
491 views · 1 month ago
How quickly could AI transform the world? | Tom Davidson
19K views · 1 month ago
Why evolution left us so vulnerable to depression and anxiety | Randy Nesse
676 views · 1 month ago
Why babies are born small in Uttar Pradesh, and how to save their lives | Dean Spears
164 views · 1 month ago
The future of mental privacy in the neurotechnology age | Nita Farahany
56K views · 1 month ago
Worldview diversification and how big the future could be | Ajeya Cotra
313 views · 1 month ago

COMMENTS

  • @bobtarmac1828
    @bobtarmac1828 3 days ago

    Laid off by AI, then human extinction? An AI new world order? With swell robotics everywhere, AI job loss is the only thing I worry about anymore. Anyone else feel the same? Should we cease AI?

  • @michelleelsom6827
    @michelleelsom6827 3 days ago

    So my worry is that AGI & ASI will simply use the energy they need. They won't worry about leaving some energy for us unless they are aligned to do so 😢

  • @aisle_of_view
    @aisle_of_view 3 days ago

    We're already seeing accounts of people losing their jobs to AI, and upon the trend you can depend.

  • @nicholascurran1734
    @nicholascurran1734 4 days ago

    I have liked the discussions I've listened to on this channel, and am wondering where to find them that isn't a year later than the original release date?

    • @eightythousandhours
      @eightythousandhours 3 days ago

      We've only recently started uploading new podcast episodes regularly to our YouTube channel, so you'll find new episodes uploaded here, but we're still catching up on uploading the full archive. You can get new audio episodes the moment they're released by subscribing to The 80,000 Hours Podcast on either Apple Podcasts or Spotify.

  • @MegaNatsirt
      @MegaNatsirt 4 days ago

    Very well done video 👍

  • @rainynight02
    @rainynight02 5 days ago

    I listened to the book on audible and honestly, I was hoping it would help me but I got nothing out of it myself.

  • @voodoochild420ai
    @voodoochild420ai 6 days ago

    awesome podcast, ai is nutty

  • @vayalinda4762
    @vayalinda4762 6 days ago

    Great content, but you talk so fast! I recommend playing this one at 0.75x.

  • @roermy
    @roermy 7 days ago

    Good!

  • @neilcreamer8207
    @neilcreamer8207 7 days ago

    There are huge problems with the arguments here. I notice that when people talk about AI, they ignore the difficult questions and answer easy ones instead. There also seems to be a lot of ignorance about the humans that AI is supposedly designed to emulate. Here are a few issues I noticed with what Tom said.

    What does it mean to reward an AI? In order to reward a person you give them something they want. What would an AI want? How could an AI "want" anything except the goals you give it?

    The idea that giving an AI all the data we have would enable it to solve problems is just a replay of the last failed technology project: Big Data. Manipulating data has never produced ideas. The belief that it can is based on an ideology that the human brain is like a computer and that it processes data. At best, this might describe only a fraction of what humans do when they think.

    Where has scanning people's brains ever told you anything about what they're thinking, except in a sci-fi film? This is another piece of ideology that has never been seen. The closest thing to it we have seen is that certain brain areas "light up" in connection with certain cognitive processes, but nothing in the images we see in various scans tells us anything about the contents of the associated thinking.

    "There's nothing magical about [the human brain]." Given how little we understand of what the brain does and how it does it, it might as well be magic at this point. We can see correlations with various cognitive processes, but that hasn't actually shown us what the brain is doing. We might be able to develop machines to do what the human brain does one day, but as far as understanding the brain and its connection to thinking goes, we aren't even at square one yet.

    Tom said that if you'd told hunter-gatherers what the future held they wouldn't have believed it, and that "It's the norm for things to go in a completely surprising direction". Yet here we are predicting the future. Has anyone ever looked into our previous record on predicting how technology would advance? It's laughable.

    A lot of the argument for AI being made here is based on assuming that it will be able to do what proponents claim it will. These are theoretical claims, especially when it is claimed that AI will solve "social and political problems". This is the sort of thing you'd see in a research proposal or some other sales pitch aimed at getting funding.

    Tom said, "Evolution wasn't trying …" Evolution isn't goal-oriented. It's just what happens. His whole take on evolution is incorrect. Evolution doesn't direct what happens or "make tweaks". It doesn't DO anything. Evolution is not a project manager.

    The sense in which Tom is saying that AI is smarter than most of the people he talks to is very limited. He means "knowledgeable". It's like calling an encyclopaedia smart. The fact that an AI can outperform humans on an exam is hardly a predictor of its ability to come up with new ideas or to analyse and solve problems.

    In summary, there is a vast amount of ignorance about how humans think. This isn't Tom's fault but a reflection of the whole field of AI. We've managed to make remarkable programs that can beat humans at chess and Go. We've developed large language models that can predict text. There are programs that can produce mediocre art and music by processing human-made originals. However, no one knows how humans think or how a child learns to navigate the world without being given explicit rules. The AI industry knows a lot about computers and little about the human mind, which it still conflates with the brain. It's laughable that we're claiming we will produce AGI within years when we don't even know what it's supposed to do.

  • @tweber2546
    @tweber2546 7 days ago

    I love the previous book! Thank you! A quick shoutout for Rational Animations here on YouTube, which also covers the topic well.

  • @BadWithNames123
    @BadWithNames123 9 days ago

    Do you have any plans to record new podcasts?

  • @RowanSheridan
    @RowanSheridan 11 days ago

    What a great speaker

  • @user-id6xm1qw9l
    @user-id6xm1qw9l 11 days ago

    Evolution fine-tuned us for consciousness. I don't see it 'just happening' in this way.

  • @Dan-dy8zp
    @Dan-dy8zp 11 days ago

    Consciousness in fetuses arrives suddenly. The brains of animal fetuses start out with random individual brain-cell firing, not in waves such as even adult lobsters have. These fetuses do not move. No reaction to pain or light. Then, suddenly, late in pregnancy, the brain cells start to fire together in waves, and then movement starts, reactions to stimuli start, and a sleep-wake cycle starts (not a 24-hour one, but much shorter, the same as newborns). I think most scientific study has been in other mammals like lambs for ethical reasons, but there's no evidence humans are different.

  • @user-id6xm1qw9l
    @user-id6xm1qw9l 12 days ago

    I never had any resistance to consciousness being made by an organ other than human heads, or to neurons being replaced by (conscious) bugs, but with America, all the stuff 'it' does is stuff humans consciously decided to do. The bug-neurons would not know what the alien knows, or consciously make decisions for the alien.

  • @Dan-dy8zp
    @Dan-dy8zp 12 days ago

    Zvi says alignment isn't enough because . . . and then describes AI that are *not aligned*. It's alignment with our *values*, not obedience, that makes them *aligned*.

  • @kimayadelu8099
    @kimayadelu8099 12 days ago

    This is an AMAZINGLY good interview - super informative, informed, insightful & thoughtful. Thank you both so much.

  • @club213542
    @club213542 12 days ago

    This didn't age well... looks like it's an all-out race to AGI with no safety at all, and I doubt it's just OpenAI. Seems to me it's practically here; I mean, LLMs have beaten the Turing test already. Things are just plain smart in ways we don't even understand.

  • @bobtarmac1828
    @bobtarmac1828 13 days ago

    Bad algorithms? Maybe. But with swell robotics everywhere, AI job loss is the only thing I worry about anymore. Anyone else feel the same? Should we cease AI?

  • @leroyessel2010
    @leroyessel2010 14 days ago

    To slow down the runaway crazy AI/AGI train, it should be mandatory for consumer protection that there is transparency, human DAO voting guidelines, and no centralized computer cloud - with DFINITY, Internet Computer Protocol, and UTOPIA.

  • @askingwhy123
    @askingwhy123 15 days ago

    I think Zvi is barking up the wrong tree when he intuits that "one party" would be more open to eliminating the Jones Act. Let's look at coal for comparison. Costly, horrible externalities, no future just due to levelized cost of generation. Republicans continue to pledge that "Coal's back!" etc.

    There are about 43,000 coal miners in the US, with an average age of 40 years. Paying every miner $100K for the next 25 years (until the average age is 65) would cost $107 billion, just over $4 billion per year - two ten-thousandths of gross domestic product - the kind of money you find in the Pentagon's couch cushions.

    The fact that the United States allows itself to be held hostage by such a tiny but politically expedient sliver of the working population demonstrates a catastrophic failure of both imagination and political will. Some problems are complex, but this one is both simple and inexpensive to solve - unlike climate change. And Republicans are dead-set against risking the ire of this tiny minority.
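The comment's buyout arithmetic checks out. A minimal sketch; the ~$25 trillion US GDP figure below is an assumption used to verify the "two ten-thousandths" claim, not a number from the comment:

```python
# Back-of-envelope check of the coal-miner buyout figures in the comment above.
miners = 43_000
payment_per_year = 100_000   # $100K per miner per year
years = 25                   # until the average miner (age 40) reaches 65

total_cost = miners * payment_per_year * years  # "would cost $107 billion"
per_year = total_cost / years                   # "just over $4 billion per year"

us_gdp = 25e12               # assumed: ~$25 trillion US GDP
gdp_share = per_year / us_gdp                   # "two ten-thousandths of GDP"

print(f"${total_cost / 1e9:.1f}B total, ${per_year / 1e9:.1f}B/year, {gdp_share:.1e} of GDP")
```

With these inputs: $107.5B total, $4.3B per year, and a GDP share of about 1.7e-4, matching all three of the comment's claims.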

  • @radnaut
    @radnaut 16 days ago

    AI for first graders

  • @jaomello
    @jaomello 16 days ago

    So glad I found this ep, currently studying Williams in college and the reflections of Dr Brown are quite enlightening to our subject. This has to be the most underrated podcast on YT!

  • @gerftrztr
    @gerftrztr 16 days ago

    Interesting that Julian seems to want to reinvent the Landsgemeinde (see wiki) _Historically, the only proof of citizenship necessary for men to enter the voting area was to show their ceremonial sword or Swiss military sidearm (bayonet); this gave proof that they were a freeman allowed to bear arms and to vote._

  • @aisle_of_view
    @aisle_of_view 16 days ago

    This is probably why the Taiwan situation is getting so hot. They're the chip manufacturing center of the world. If China really moves to annex them, it'll be a huge advantage for our rival. But at least it will let us hold onto our jobs a bit longer.

  • @askingwhy123
    @askingwhy123 16 days ago

    "Working on AI to change the culture from within" focused on factory farming but never touched on policing and the military. Right-wing extremists gravitate to empowered professions with the opportunity to exercise violence. Many police departments & SOFOR have struggled with recruiting different personalities, but some have been successful, so I think it's wrong to completely discount this approach in AI.

  • @jmosf
    @jmosf 17 days ago

    These "shooting the breeze" podcasts are a lot of fun

  • @c.guinevere
    @c.guinevere 17 days ago

    All this tech is a mimic of our latent organic abilities.

  • @UD-Blackknight
    @UD-Blackknight 17 days ago

    Already happening

  • @flickwtchr
    @flickwtchr 17 days ago

    I should have added, that I appreciated very much the range of interesting perspectives conveyed by Ajeya Cotra, and the dialogue of the interview.

  • @flickwtchr
    @flickwtchr 18 days ago

    "Yeah, I think that might have to explain why a lot of people who maybe haven't been paying so much attention to this do feel unnerved on some level, that it doesn't feel like things are fully handled, things are fully understood, even if you're an optimist."

    Uh, actually, the people who are the most concerned are those that HAVE been paying attention, who understand that the industry as a whole - and particularly the handful of large corporations training and deploying these LLMs, multimodal and otherwise - has not had a concomitant commitment of research and investment dedicated to safety, alignment, and responsible deployment.

    Those of us paying the most attention are those of us who are aware of the arguments of Geoffrey Hinton, Max Tegmark, Yoshua Bengio, Connor Leahy, and many others vs the arguments of Sam Altman, Yann LeCun, Melanie Mitchell, Joscha Bach, etc. Those of us paying attention have also heard and contemplated the arguments for rushing forward as fast as possible, such as those of effective accelerationists, effective altruists, and of course those invested in the tech, who are also mostly of the libertarian/neoliberal economics mindset that abhors government regulation. It is noteworthy, however, that two camps have emerged in that regard: one group, the advocates of open sourcing even the most powerful models, point their collective finger at the leading large corporations in AI tech, accusing them of pursuing "regulatory capture".

    Those of us paying attention know that none of the people above, across the entire spectrum of AI tech, have currently developed the safety/alignment technology to reliably control (against relatively easy "jailbreaking") the current crop of models that fall somewhere in capability between ANI and AGI, even as they rush forward to develop AGI/ASI. So yeah, the people MOST concerned have been aware of the scope of the debate and understand just how messy the entire situation is.

    A great example of how irresponsible AI tech has been is to simply point to how deepfake AI tech has been platformed to the public without any regulation in place, other than disclaimers that essentially say "don't do mean stuff with our tech, but if you do, we are indemnified against any consequence of users' behavior, so have fun!".

  • @epapanak
    @epapanak 19 days ago

    If you look at panel prices (less than 0.15 USD/W), battery prices (less than 80 USD/kWh; total equipment for a 10 kW installation less than 5,000 USD - receipts available), and the amount of subsidies, tax breaks, and grid upgrades governments give to big companies to provide energy to homes at ever-increasing prices, and compare it with how much it would cost governments to provide 100% financing for all required renewable energy equipment - so that families would have all the energy they require at zero cost forever - then you will find that the second option (financing the families) is much cheaper, and it gives the individual person not only abundance but also freedom.

    This option requires no talk of UBI, which is promoted, because every family will be able to have everything it requires, including manufacturing capabilities. This distributed energy enhances the country's security (see drone hits in Russia) and doesn't require grid upgrades. There are forecasts that AI will take all jobs away from humans. The antidote to this is to provide homes with energy (as described above) and allow families to live in abundance and freedom. No UBI necessary.
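The comment's per-unit prices can be totalled up quickly. This is a rough sketch; the 10 kWh storage size and the balance-of-system cost are assumptions, not figures from the comment:

```python
# Rough cost check of the solar figures in the comment above.
panel_cost_per_w = 0.15       # USD/W, from the comment
battery_cost_per_kwh = 80     # USD/kWh, from the comment
system_kw = 10                # installation size, from the comment

storage_kwh = 10              # assumed household battery size
balance_of_system = 2_000     # assumed: inverter, wiring, mounting, labor

panels = system_kw * 1_000 * panel_cost_per_w  # $1,500 of panels
battery = storage_kwh * battery_cost_per_kwh   # $800 of storage
total = panels + battery + balance_of_system

print(total)  # 4300.0 - consistent with the quoted "less than 5,000 USD"
```

Even with a generous balance-of-system allowance, the panel and battery prices quoted leave the total under the comment's $5,000 figure.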

  • @raghavendrakaushik4871
    @raghavendrakaushik4871 19 days ago

    This is great! This should have a lot more views!

  • @cogent211814
    @cogent211814 19 days ago

    Not until the electoral college is eliminated. This guy is a dangerous idiot.

  • @epapanak
    @epapanak 20 days ago

    You are wrong in looking only for jobs in the future. You should look for intenueship in the individual. People in the future will not have to work as we know work today. They could use the one thousand workers future robots will offer to produce an abundance of goods. 🎉

  • @KeithDraws
    @KeithDraws 20 days ago

    All of these improvements will only benefit the billionaire class. They have never wanted to pass the money around and I seriously doubt they will change. I fear for the future of ordinary people. Perhaps this is why the billionaires are building bunkers now?

  • @adriansskapars746
    @adriansskapars746 20 days ago

    These vids are very watchable :)) Does 80,000 Hours have a TikTok account? I know it does ads on social media already.

  • @beilkster
    @beilkster 21 days ago

    This is a discussion about changing grant or government funding, but they call it "markets". They also use the term "innovation" in place of the phrase "the change we want in the world", and "push funding" instead of "profit". I feel like the whole issue they are addressing could be solved MUCH faster by openly publishing business cases for solutions you want to see in the world. I'm only 30 minutes in as I post this. Correct me if I got this wrong.

  • @user-jh2yn6zo3c
    @user-jh2yn6zo3c 22 days ago

    How could a reward be better than simple novelty -- so that reinforcement learning doesn't get stuck watching TV in the maze?

  • @vl4394
    @vl4394 22 days ago

    Ah yes, 3333.3333, 55th prime, etc.

  • @kinngrimm
    @kinngrimm 22 days ago

    31:00 There are think tanks, if not even faculties within universities, that try to predict; some call themselves futurologists, others have described themselves as Cassandras. Overall, maybe this needs some more funding and more focus on specialisation in the dangers coming with new technologies, with a subcategory being AI safety. Having heard Eliezer Yudkowsky, Tegmark, and others, they have produced a variety of scenarios, some more or less viable, more or less likely. Listening then to Yann LeCun is like putting your head in the sand, ignoring anything that might make your job of finding AGI (G standing for general) more difficult, up to the point where he denies theories without proof, as he thinks the engineer will always be able to avoid the worst-case scenario.

  • @jsivonenVR
    @jsivonenVR 22 days ago

    This didn’t age well…

  • @lj1653
    @lj1653 23 days ago

    Imagine GLaDOS from Portal, except it is constantly doing all the scientific testing and research imaginable, using robot workers.

  • @enlightenment5d
    @enlightenment5d 23 days ago

    Still relevant :) Amazing!

  • @BarbaraBrasileiro
    @BarbaraBrasileiro 26 days ago

    It's funny that what he said about new models being much more cost-effective and yet using more and more compute is exactly what OpenAI and Microsoft have announced recently with GPT-4o and the next models.

  • @BarbaraBrasileiro
    @BarbaraBrasileiro 26 days ago

    Did he mention the potential energy and chip limitation? There's a possible physical limitation for things going all the way he's predicting. I'd like to see his take on that.

    • @spazneria
      @spazneria 17 days ago

      Take what I'm about to say with a grain of salt; I'm just a dude replying to a YouTube comment. That being said, however, there is no physical limitation for things going 'all the way' - the proof that the limitation doesn't exist is that you are reading this right now. They're currently discussing developing gigawatt (what the hell is that?!) datacenters for AI training, which will be on the order of hundreds of ExaFLOPS in compute. The human brain operates on 20 watts of power (50,000,000x less energy) and performs right around an ExaFLOP of compute - obviously it's not a 1:1 correlation, but the point stands. As computing has increased in power exponentially, our energy efficiency has also increased exponentially. We are in wild, wild times right now. Even if you were to accept that human-level reasoning is the limit, with enough compute and energy there will be billions of Einstein-level thinkers tackling literally every single scientific research question that exists. And all this in our lifetime. Sorry for the long response.
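The reply's headline 50,000,000x figure follows directly from its own numbers; a quick sketch (the FLOPS figures are the reply's rough estimates, not measured values):

```python
# Check of the energy and compute comparison in the reply above.
datacenter_power_w = 1e9   # a "gigawatt datacenter"
brain_power_w = 20         # commonly cited human-brain power draw

energy_ratio = datacenter_power_w / brain_power_w
print(f"{energy_ratio:,.0f}x")  # 50,000,000x, matching the reply

# Compute side of the comparison (both are rough estimates from the reply):
datacenter_flops = 300e18  # "hundreds of ExaFLOPS"
brain_flops = 1e18         # ~1 ExaFLOP estimate for the brain

print(datacenter_flops / brain_flops)  # 300.0
```

So on these estimates the datacenter uses ~50 million times the brain's power for only a few hundred times its nominal compute, which is the efficiency gap the reply is gesturing at.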

  • @Loroths
    @Loroths 26 днів тому

    I do find it interesting when people freak out about the potential of AI being deceptive, having ulterior goals, harming humans. It is a valid concern sure but it seems to me we have millions of humans who pose such threats. Every day. For thousands of years. We haven't figured that problem out yet. If that's the case, given a future of human rulers or AI being potentially millions of times smarter than the smartest human minds - I'm tempted to prefer AI. "But it may harm us!" Yes, but humans harm humans all the time. I don't feel safe in today's world so bring on the future.

    • @lpslancelot05
      @lpslancelot05 23 days ago

      Yes. But humans who want to hurt others are generally limited in their power; also, they're afraid of the consequences, so they generally act in somewhat reasonable ways. ASI would be almost infinitely more intelligent and nearly impossible to understand or reason with if it were able to "get out." Its motives, moves, and power would likely be alien to anything we could comprehend.

    • @FlaviusAspra
      @FlaviusAspra 18 days ago

      You're right, except the deceptive humans:
      - don't increase in number exponentially
      - don't increase in smartness exponentially

      It's the exponentiality which freaks people out, not any one individual AI, no matter how strong it is.

    • @getsjokes24
      @getsjokes24 15 days ago

      It's also that we don't know what kind of intelligence AGI/ASI will be. We assume it will be similar to ours and will operate on a similar time frame.