You seem to think of AI much like current AI: automated cars and the like, useful tools for humans. I'm thinking more of genuine intelligence, fully self-aware electronic persons, which do not exist now but might in a hundred years' time. I can't see any barrier to their being created.

I do think the human race is facing unprecedented dangers in the next 100 or so years. Many scientists and philosophers believe that the current epoch is the hinge of history, a period that could destroy us or ensure our survival for millennia. (I'd recommend Toby Ord's excellent book The Precipice.)

Your remarks here illustrate why I find it difficult to get to grips with what you write. To just label the concerns of these scientists and philosophers "bourgeois" is very odd to me, but never mind. I don't know what the argument is supposed to be, so I shan't worry about it.

It's true that humans can seemingly ignore threats to their own existence, such as nuclear weapons. In this, I believe, humans display their irrationality. Thinking people find it harder to ignore these threats, and I assume that ASI will be more mindful than humans of the conditions of its own survival.