Agreed. It's easy to assume that technology is good for society when it's fun to build it, ship it, and backwards-justify what you've done. Are we sure that AGI will be good for society? If not, why should we push to develop it as fast as possible?
I once saw a presentation from the Machine Intelligence Research Institute (MIRI). It listed global dangers to civilization, including AI, and then argued that, done right, AI could actually solve at least some of the other problems. That was the justification for MIRI's efforts.
Agreed that AI could have large positive or negative effects on society, and I think the kind of work MIRI does is incredibly important. I wish this application took MIRI's outcome-driven approach rather than being tech for tech's sake.