Discussion about this post

Donald:

It's possible to think AI is very dangerous and that the best way to mitigate that danger involves studying it, not ignoring it.

If everyone who thought AI was dangerous went off and became artists, the only people left working on it would be the ones who didn't think AI was dangerous. That would probably be even worse.

If you wanted to minimize the risk of nuclear meltdown, you would probably get quite involved in the design of nuclear reactors.

Yes, there is a large amount of doublethink, motivated reasoning, and the like going on.

People study AI, become AI experts, make AI their personality and career, and then start to understand the risks of AI.

It's better to admit that you've got yourself into that sort of mess than to pretend the risk is small.

Also, your "vast number of future humans" point is about x-risk in general, not AI in particular.

Donald:

The "argument from godel's theorem" is nonsense.

Yes, there is a big pile of these "fundamental limit" theorems.

But none of them stops an AI from being smarter than us. None stops an AI from destroying the world.

The "no bottlenecks" was about the practicality of training giant neural networks, which does seem likely to be practical.

There are all sorts of limits, but not very limiting ones.

