It's possible to think AI is very dangerous, and that the best way to mitigate that danger involves studying it, not ignoring it.
If everyone who thought AI was dangerous went and became artists, all the people left working on it would be the ones who didn't think AI was dangerous. That would probably be even worse.
If you wanted to minimize the risk of nuclear meltdown, you would probably get quite involved in the design of nuclear reactors.
Yes, there is a large amount of doublethink, motivated reasoning, and the like going on.
People study AI, become AI experts, make AI their personality and career, then start to understand the risks of AI.
It's better to admit that you have got yourself into that sort of mess than to pretend the risk is small.
Also, you "vast number of future humans" is about x-risk in general, not AI in particular.
The "argument from godel's theorem" is nonsense.
Yes there are a big pile of these "fundamental limit" theorems.
But none of these stop an AI being smarter than us. None stop an AI destroying the world.
The "no bottlenecks" was about the practicality of training giant neural networks, which does seem likely to be practical.
There are all sorts of limits, but not very limiting ones.
The argument from Gödel/Turing is an explicit refutation of the doomer claims about perfectly intelligent and non-halting machines.
I never said it prevents machines from being more intelligent than us, just that it constrains the Yudkowsky et al. predictions around Instrumental Convergence and Orthogonality as they bear on the problem of alignment.
Gödel's theorem tells us that there exist maths problems that the AI can't solve.
This doesn't stop the AI killing all humans. It doesn't stop instrumental convergence and orthogonality.
I don't see why the existence of maths problems so hard that even the AI can't solve them is evidence against the AI being dangerous for the reasons Yudkowsky describes.
I don't see why Gödel is at all relevant.
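For what it's worth, here is a minimal sketch of the Turing-style diagonalization behind these limit theorems (Python, with illustrative names that aren't from this thread). It shows exactly what such a theorem buys you: one well-defined question that no machine, however capable, answers correctly. It says nothing about how capable the machine is at anything else.

```python
# Minimal sketch of the halting-problem diagonalization (illustrative only).
# Assume, for contradiction, that some AI exposes a perfect halting oracle:
#   halts(program, arg) -> True if program(arg) halts, False otherwise.

def halts(program, arg):
    """Hypothetical perfect halting oracle -- no such total, correct function
    can exist; it is assumed here only to derive a contradiction."""
    raise NotImplementedError

def diag(program):
    # Loop forever exactly when the oracle claims this call would halt.
    if halts(program, program):
        while True:
            pass
    return "halted"

# halts(diag, diag) can be neither True nor False without contradicting
# diag's behaviour, so the oracle cannot exist. That is the entire content
# of the limit: one specific question no machine answers correctly. It says
# nothing about whether a machine can out-plan or out-maneuver humans.
```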