If your primary concern is just job loss, my reply does not address it. Also, I don't really share your concern, or at least the strength that you seem to hold it with.
My concern is with superintelligence. A hostile superintelligence would probably end humanity, or at least the part of it that I care about.
Basically I share Elon Musk's view on this. I think superintelligence is quite likely within a few decades, or at least within a century or two, and if/when it happens, our only hope is that it is imbued with values we share.
Ideally, the first organizations to discover how to do this should be open or under the control of democratic governments, or secondarily, under the control of corporations that can be regulated by such governments.
Don’t you see the logic of what I’m saying? How can you read my comment history and still think that automation is not a problem? But the answer is the same for both automation and superintelligence: you have to prevent it from existing in the first place. People who advocate for containment of superintelligence, including Elon Musk, usually prefer prevention but think it’s too difficult, and fall back to containment as a last resort. Fight as hard as you can for prevention.