I have what I call "the flying car theory": at its core, it states that when new tech comes out, society instinctively assumes both the best and worst case scenarios. It's probably part of our survival instinct.
Case in point: here's a great video by Graham Stephan covering two different reports on AI, one that assumes the worst, the other the best.
The reality will land somewhere in the middle. That's where our experience, AI training, and proactive agility come into play.
Check it out.