The author writes…
“It really shows how you can use the emerging technology of deep learning in an innovative manner to discover new chemistries.”
Ok, we want to discover new chemistries to save lives. Somebody else will want to discover new chemistries to take lives. Discovering new chemistries is therefore not automatically a good thing: there are pros and cons, benefits and risks, which have to be weighed against each other.
In the past we could usually confidently plunge ahead in unlocking some new secret and if there were downsides we figured we’d fix that later. And because in the past the new secrets were typically revealed slowly and in a limited manner, that usually worked.
That equation begins to change as we apply powerful new tools like machine-based deep learning to projects like discovering new chemistries. The benefits can now be greater, but so can the risks. As the tools of discovery become more and more powerful, the scale of the benefits increasingly expands, and so does the scale of the risks.
This issue of scale seems all important, because when any risk becomes big enough it threatens to erase all the benefits.
So for example, while one team solves the antibiotic problem, a huge benefit, another team learns how to create highly contagious fatal viruses which can be targeted at specific populations. If the fatal viruses are deployed and escape the control of their authors, then it won’t matter that the antibiotic problem has been solved.
This isn’t alarmist speculation. It’s history.
We unlocked the secret of the atom and developed a significant new form of clean energy, a huge benefit.
And at the same time we made it possible for one person clicking one button one time to erase modern civilization in less than an hour. Should that button ever be clicked, it won’t matter that we have clean energy. And any objective observation of thousands of years of persistent human conflict suggests that sooner or later somebody will probably click the button.
Assuming that safety issues and problems like toxic waste can be successfully resolved, clean nuclear energy could be a great benefit. But is it worth the price that one person can now destroy modern civilization in just a few minutes?
As powerful tools like machine learning bring new knowledge online at an ever faster pace, and as the scale of the new powers expands, the room for error steadily shrinks, and the odds that one of the new discoveries will bring down the entire system grow.
Thus, what I hope to find on Quanta are thoughtful articles which don't treat the development of new knowledge as if it were some kind of "one true way" religion. I hope to meet scientists and others who are willing to take the same kind of detached, objective, critical scrutiny that they routinely use in their work and apply it to the future of science itself.
How much power can human beings successfully manage?
I don't claim to know, but before we assume that the answer is any amount of power, we might recall that we are the species with thousands of hair-trigger hydrogen bombs aimed down our own throats, an ever-present self-extinction threat which we typically don't find interesting enough to discuss.
This is who we are handing all these new discoveries to: a species which can quite reasonably be labeled brilliant, but also insane.
If I were a genius with a gun in my mouth, would you hand me another gun?