The New Yorker has a long article on the possibility of an AI-driven singularity. It surveys many of the other news stories, open letters, and debates on the subject. The answer, really, is that nobody knows; but since it is an existential threat of unknown probability, it certainly belongs somewhere on the risk matrix.
I can see nearer-term problems too. Thinking back to the "flash crash" of 2010, relatively stupid algorithms reacting to each other's actions and making decisions at lightning speed nearly crashed the financial system. We recovered from that one, but what if new algorithms lead to a crash of financial or real infrastructure systems (electricity, internet, transportation, water, food) that we can't recover from? It doesn't take a total physical collapse to cause a depression, just a massive loss of confidence leading to panic. That scenario is not too hard for me to imagine.
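To make that feedback loop concrete, here is a toy sketch, nothing like real market microstructure, with made-up thresholds and impact numbers: two momentum-following algorithms each sell into any downtick, including the downticks the other one just caused, so a single small external dip spirals into a crash.

```python
# Toy illustration of an algorithmic feedback loop (NOT a model of the
# actual 2010 flash crash). Two naive momentum-following agents each
# sell whenever the last tick dropped more than THRESHOLD, and each
# sale is assumed to knock IMPACT off the price, feeding the next tick.

THRESHOLD = 0.001  # assumed: sell on any >0.1% downtick
IMPACT = 0.01      # assumed: each seller moves the price down 1%

prices = [100.0, 99.8]  # one small external downtick starts the cascade

for tick in range(15):
    last_move = (prices[-1] - prices[-2]) / prices[-2]
    # Both agents react to the same signal, amplifying each other.
    sellers = 2 if last_move < -THRESHOLD else 0
    prices.append(prices[-1] * (1 - IMPACT * sellers))
    print(f"tick {tick:2d}: price = {prices[-1]:.2f}")
```

Remove the initial downtick and the price never moves at all; the damage comes not from the external shock but from the agents reacting to each other, which is the unnerving part.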
I suspect that we are approaching the peak of the hype cycle when it comes to AI. Attention will build to a fever pitch, and then the bubble will burst (in the sense of our attention, at least). To the public it will seem as if nothing much is happening for a few years or even a decade, but in the background quiet progress will be made, AI will eventually and stealthily take over much of our daily lives, and we will shrug, like we always do.