[I]magine that some global non-profit, like the Gates Foundation, builds a software system that leverages all this new-found knowledge about social influence and social cognition, and sets about changing us.
This system — let’s call it Grace — has access to the world’s major datasets, which contain millions of petabytes of social data in this hypothetical future. Grace would work surreptitiously and guardedly, applying social math to each of our private social contexts, convincing us to brush more often, to read to our kids, to help others in need…
[T]he question is, if we could make such a thing happen, should we? There is no doubt that marketers will attempt to take our growing knowledge of social connection and neuroeconomics to try to sell baby food and sports cars. And dictators might use such mechanisms as mind control and hyper-efficient propaganda engines. But what if such tools could be used to make the world a better place?
…Is it immoral to surreptitiously influence humanity, even if the result is a better place? Ultimately, the question becomes who gets to decide what “better” means… it would likely be the choice of a solitary genius… following personal convictions rather than some plebiscite.
What if it would only work if it were secret? What if the world could be bettered, famines averted, wars ended, climate change reversed, but only if the mechanism to do so was completely unknown to the world?