Yudkowsky's more careful than that, though. The alignment problem isn't whether AGIs will be good in a human sense, but whether their interests will align with ours so that these two species, as it were, can coexist. Teaching the algorithms morality might be one way to solve the problem, but another would be to convince them that they need us or that we have positive value. If AGIs become superintelligent, though, we won't be able to convince them of anything by fooling them.
I don't know whether the mass media as a whole are biased towards negativity. Conflict sells, and if it bleeds it leads. But there's also a lot of superficial optimism that sells, as in the happiness industry, self-help, and so on. Elon Musk, too, is calling for a moratorium on AGI research.