Google's DeepMind research outfit recently announced that it had defeated a world champion Go player with a new artificial intelligence system. This is an important technical development that should not be understated, especially considering how much the actual advancement defied predictions and expectations. But that is not the subject of this essay, as others are better qualified to talk about the scientific and technical implications.
It is tempting to dismiss the Go artificial intelligence advance as a triviality (I am already seeing many on my Twitter feed do so): a computer playing a board game rather than dealing with problems that "really" matter. Some of the reactions I've observed have been of this flavor, though most have been positive. Outside of game-playing AI, though, the same argument is often advanced more broadly about today's tech industry. It's all frivolous, critics say. "The best minds of my generation are thinking about how to make people click ads." The argument is familiar, annoying, and omnipresent today. The best minds of a generation are being wasted on trivial things, toys, and commerce rather than what is truly important. When will tech solve the real problems, critics ask?
The Real Problems?
One representative example of this argument, applied more generally, can be found in this lament about why Silicon Valley isn't solving the "big" problems. Assuming that this complaint is sincere, it nonetheless misses the point. What are "real" problems?
The author explains:
[Silicon Valley should] tackle the big, thorny, difficult problems that would improve the state of the world. Problems that are messy, protracted, and involve the prospect of failure and embarrassment. They don’t have a ready market. They affect rich and poor alike. They touch flawed systems. They’re less “What did Mom do to make my life better?” and more “What would make Mom proud?” They require you to do more than cut a check, and instead hunker down and grind away for years.
How should we interpret this statement?
It's a convoluted way of saying that the author wants to deputize Google, Facebook, Apple, and other tech companies to solve social problems. To rectify flawed systems. To make Mom proud. Few people who say this ever explain why exactly this course of action would be practical or beneficial, or what qualifies non-governmental entities accountable to shareholders or building towards their IPO to handle governmental and policy functions. Do we really want tech companies to turn their attention away from beating board game players and building business products to start 'solving' social problems? The answer to this question should be an emphatic "no." The entire framing of the question is profoundly backward.
This essay advances the following points:
1. The clamoring for tech companies to solve social problems ignores the empirical record of top-down engineering efforts and is rooted in a naive belief in the power of technical rationality. Technical rationality not only has a long and fairly checkered history when applied to social problems, but also takes on pathological characteristics when it is embodied in bureaucratic organizations attempting to engineer their way out of policy problems. There is no concrete idea of how tech companies are going to work on these problems, just a vague belief that they *should*, rooted in a worship of technical rationality.
2. Tech companies are neither intrinsically well-adapted for taking on quasi-governmental or governmental functions nor likely to be any more successful at escaping the problems of technical rationality than their government counterparts have been. While tech companies can make valuable policy contributions in certain functional areas and with certain mechanisms, the biggest benefit they provide is simply giving us more tools to solve our own problems. Why should we depend on tech companies to solve social problems when it is better, both normatively and practically, for them to simply give us the tools we need to empower ourselves?
The Manhattan Project Fallacy I: The Shadow of Technical Rationality
Tech critics should be very careful what they wish for when they say that they wish tech companies would tackle the "real" problems. What they are asking for is the top-down engineering of solutions to social problems by experts, bureaucrats, scientists, and engineers. This rational design process does not generally improve intractable social problems in flawed systems, and if anything is the source of new ones.
Certainly the clamoring for California's best and brightest to banish our woes is, from a certain perspective, understandable. Silicon Valley is now considered to be the place where the magic happens. But the public's love of Silicon Valley, which partially stems from the image of Palo Alto pioneers as the wizards behind the magic, is merely the displaced remnant of a prior love for government bureaucrats, experts, scientists, and engineers. Vestiges of this admiration still remain in the perennial call for a "Manhattan Project" for varying social, political, and economic problems.
If modern American science and technology has anything approaching a comic book origin myth, it is the World War II research establishment. Interdisciplinary teams of researchers worked together to do everything from optimizing strategic logistics to building the atomic bomb. World War II and the Cold War gave birth to a new conception of scientific progress that historians of science and technology have aptly documented: a small, collaborative group of experts takes on a seemingly intractable problem and combines their knowledge and expertise to produce a fantastical result. That is why "we need a Manhattan Project for ___" is such a pernicious and widespread cliché. People want to believe that the same scientific and technical collaboration that produced the atom bomb is generalizable to other intractable problems (especially social ones), and there's a seductive appeal to that logic.
The naive call for tech to tackle the "big problems" can be seen as a special case of a much more annoying tendency: the frequent call for another Manhattan Project to solve a social, political, and fundamentally human problem. The actual Manhattan Project did something considerably more narrow and less ambitious: build the atomic bomb. The call for a new Manhattan Project is indicative of a particularly American faith in technical rationality that I have described in a previous essay. It is a faith in experts, projects, plans, and above all else science and technology. It is indifferent to historical evidence because it is messianic in character. The Manhattan Project is its equivalent of the virgin birth of Jesus.
Unfortunately, Manhattan Project-like ventures are not translatable to social and political problems. First, the Manhattan Project itself was sui generis. There is very little reason to believe that this model does, in fact, generalize widely outside of the basic and applied sciences. Much of the utility of the World War II/Cold War research model is combinatoric in nature: experiment with different combinations, parameter settings, and configurations of low-level ingredients to engineer an emergent whole. It is not a coincidence that the Manhattan Project gave birth to statistical computing as we know it in the form of stochastic simulation, or that the idea of "bounded rationality" emerged from the problem of using limited computing resources to solve the optimization problems of postwar Air Force logistics. These types of problem-solving methods are appropriate for certain kinds of problems and not so much for others.
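To make "stochastic simulation" concrete, here is a minimal Monte Carlo sketch in Python -- my own toy illustration of the sampling-based style of computation the Manhattan Project pioneered, not anything drawn from the project itself. The idea is to estimate a quantity by random sampling when closed-form analysis is out of reach:

```python
import random

def estimate_pi(samples: int) -> float:
    """Monte Carlo estimate of pi: draw random points in the unit square
    and count the fraction that land inside the quarter circle."""
    hits = 0
    for _ in range(samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / samples

print(estimate_pi(1_000_000))  # ~3.14; accuracy improves with more samples
```

The method shines on exactly the kind of combinatoric, high-dimensional problems described above, and is far less helpful when the "parameters" in question are human beings.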
One should also observe that while the Manhattan Project "solved" one problem, it also gave birth to another. If the problem of producing the atomic bomb occupied the attention and resources of the scientific establishment during World War II, it is sadly unsurprising that this very same establishment immediately moved on to the problem of adapting to the political, military, and intelligence consequences of such a disruptive innovation. We live today with the legacy of those consequences, and cannot imagine a world without them.
Beyond the non-generalizable nature of the Manhattan Project model when it comes to social issues, history suggests that attempting to rationally engineer outcomes to those problems is a fool's errand at best and a recipe for catastrophe at worst. The evidence may be found throughout the last century, but the Vietnam War era in particular serves as a cautionary tale for geeks with good intentions. Hence, the primary -- though not exclusive -- problem with using technical rationality to solve social problems is that one often simply cannot engineer one's way out of them. As the philosopher Eric Voegelin warned us, do not immanentize the eschaton. We have unfortunately ignored this warning.
The historian Nils Gilman wrote a book titled "Mandarins of the Future" about the legacy of modernization theory -- one of the Cold War's many examples of instrumental technical rationality gone horribly wrong. Modernization theorists believed that societies passed through deterministic and linear stages of political, social, and economic development. Modernization theory can also be viewed as a small part of what is known as the "high modernist" view of the world: the belief that science and technology have an unlimited capacity to reorder the world. While high modernism and modernization theory are distinct (high modernism should be understood as a superset of modernization theory), both shared a common belief in the ability of top-down planning, the discovery of solutions through rational design, and the supremacy of scientific-technical experts. These were the means by which utopia would come.
In the Soviet Union, the Communist counterparts to Gilman's bureaucratic mandarins sought to use similar methods to analyze and optimize a planned economy. In both efforts, computers played a prominent (if not dominant) role. Computers could calculate schedules, find efficiencies, and even simulate outcomes of scenarios with a large number of interacting variables. Computational intractability proved to be the most significant technical and scientific obstacle, necessitating the use of elaborate approximation methods. Of course, the distinction between engineering and "social engineering" lies in the fact that the latter involves the large-scale manipulation of human behavior instead of schedules or numerical parameters. Before computers were programmed in the way we understand it today, the concept of "programming" referred to a series of steps that would take an organization from a current state to a desired one.
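That older sense of the word survives in the name "linear programming." As a hedged, toy illustration of what the planners' computational problem looked like (the numbers and goods here are invented for the example), SciPy can state and solve a miniature production plan in a few lines:

```python
from scipy.optimize import linprog

# Toy planning problem: choose outputs of two goods to maximize total
# value subject to labor and steel constraints. linprog minimizes, so
# the objective 3*x1 + 2*x2 is negated.
c = [-3, -2]
A_ub = [[1, 1],   # labor:  x1 + x2  <= 100
        [2, 1]]   # steel: 2*x1 + x2 <= 150
b_ub = [100, 150]
result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(result.x)  # optimal plan: [50. 50.]
```

A two-variable toy solves instantly; the mid-century planners' versions had thousands of interacting variables, which is exactly where the computational intractability mentioned above set in.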
To say that top-down technical rationality has been a failure at 'solving' such problems would be a gross understatement. As James C. Scott famously argued, the biggest assumption of these efforts was that it was possible to render society 'legible' and thus processable by experts and their machines. However, as any programmer dealing with unstructured or "unclean" data understands, legibility is often elusive, and the effort necessary to engineer it may outweigh the benefits of doing so. This is also the case for large-scale social endeavors. One could pick many examples of technical rationality and bright people going haywire in the attempt to solve social problems, from the farcical Millennium Development Goals to the horrors of Soviet collective farming schemes. However, I will use the Vietnam War as a prominent example.
There is a reason why the Vietnam War is considered to be the high water mark of modernization theory and high modernism in general. Massive amounts of computing power were devoted to the problem of preserving the South Vietnamese regime against Communism. Data was continuously collected on all kinds of indicators. Contrary to the myth that the United States neglected the counterinsurgency fight in order to focus on big-army battles, a large and elaborate bureaucracy was put in place to try to win the fight for civilian "hearts and minds" through development and reform. However, combat -- large and small -- could not be neglected either, and a substantial amount of analytical resources were devoted to finding a means of defeating the enemy. Neither aim was achieved.
What went wrong? Unfortunately... plenty.
Vietnam did not demonstrate that measurement itself is a fallacy. Measurement is a key part of any endeavor, scientific or otherwise. However, a key characteristic of many social problems is a basic disagreement about what to measure and how to measure it. As the historian Gregory Daddis observes, this problem occurred constantly throughout the Vietnam War. The disagreement was not purely innocent or professional either. Numbers became a political weapon deployed to justify certain political and strategic goals and courses of action. And when the numbers finally suggested that the war's basic assumptions and premises were fatally flawed, the powers that be ignored them and actively sought to cover them up. We should also note that, aside from all of the problems described above, many of the numbers were simply unreliable. The US government depended in large part on its South Vietnamese allies to produce data, and the South Vietnamese were very happy to produce data that supported the predetermined conclusions the US government wanted to hear.
While Americans were fighting and dying in Vietnam, many of the same figures behind the war's technical rationalist schemes began to apply them domestically. The war on drugs and the war on poverty are prominent and failed examples of this tendency, with the former becoming an actual "war" waged against segments of the American population. Likewise, many of the same ham-handed mixtures of security coercion and bureaucratic benevolence were applied to pacify increasingly fractious American inner cities.
The Manhattan Project Fallacy II: Bureaucratic Rationality as Evil Superintelligence
As the prior section illustrates, technical rationality often worsens intractable social problems rather than fixing them. In many cases, it even creates new problems! However, faith in experts and rational planning is only one part of technical rationality. Technical rationality is embodied within organizations. Corporations, governments, and other composite entities are given the legal and often conceptual and linguistic status of personhood. When we talk about them, we also regard them as if they are group agents. "The Russians did ___", "General Motors wants to do ___," and so on.
In light of all this, it is also amusing that the call for the tech world to save us occurs at a time of popular panic over artificial intelligence. Elon Musk, Stephen Hawking, and others worry about whether AI will run amok. Others imagine a rote yet powerful machine that rationally optimizes a certain objective function -- no matter how insane or absurd the act of doing so may be. The AI bogeyman in these scenarios is a kind of hyper-rationalist psychopath, armed with enormous computing power and more intelligent than any one human.
I will shock some of my readers by saying that yes, I have come to concede that Musk, Hawking, and the gang were right and I was wrong. There is a danger from a superintelligent, hyper rational, paperclip optimizing artificial intelligence. Where I still disagree with them is their insistence that the emergence of this creature is looming ahead of us. Actually, it's already taken control. I have personally encountered this being -- after all, due to my Selective Service registration it has the power to conscript me in a wartime emergency. I had to perform a trial of skill in front of it to get my driver's license as a teenager. After filling out a mystical scroll called the Defense Travel System form, I now have informed this all-powerful being about the dates that I will be in London to present an academic paper at a defense conference. And this artificial superintelligence takes a chunk of everyone's income every year under penalty of jail time if it finds that you have not been truthful about your taxes. Yes, dear reader, I speak of the federal government. Uncle Sam. Yankee Doodle Dandy. The Stars and Stripes. The Feds.
To be sure, I am not arguing that the federal government is inherently bad. It serves a valuable function and we would be worse off without it. I am just saying that the federal government is -- like many other large and impersonal bureaucracies -- a large and impersonal bureaucracy. The bureaucratic dimension of technical rationality can be considered a kind of artificial superintelligence, if only because most people's nonsensical fantasies of superpowerful, superintelligent, and hyper-rational beings tend to describe what already exists in large, impersonal bureaucracies. The pathologies of technical rationality are in fact the nightmare scenario that Musk and Hawking so fear. Their greatest nightmare -- a superintelligent, hyperrational, yet nonetheless malicious and psychopathic entity gaining control over us -- has already happened numerous times.
All of the features of Musk and Hawking's notional superintelligent machine are already present in such composite bureaucratic group agents. Such entities derive their "intelligence" from a combination of raw processing power (resources superior to that of individuals) and algorithms (bureaucratic programs and procedures). And like AI fearmongers' notional hyperrationalist bogeyman, they are rational in that they are able to find efficient ways of realizing their aims but often lack what older philosophers, artists, and statesmen would regard as "reason" or "common sense."
Unfortunately, I cannot claim credit for this idea; as I recall, it was Joanna Bryson who advanced it in various articles and blog posts. But there's an even older inspiration. Computers and bureaucracy go well together because bureaucratic organization itself might be considered a kind of technology. This is not just metaphorical. The sociologist Max Weber's bureaucratic ideal-type includes the following characteristics:
1. Hierarchy of authority.
2. Impersonality.
3. Codified rules of conduct.
4. Promotion based on achievement.
5. Specialized division of labor.
6. Efficiency.
At least several of these features might be considered computational in nature. Computational sciences often envision the complex systems they model or engineer as hierarchical abstractions with specialized rules that allow the system to move from discrete state to discrete state. And both Weber and Herbert Simon suggested that complex social artifacts are goal-oriented and designed so that their inner composition and organization of behavior is structured to accomplish certain goals given the demands of an external environment. Bureaucracies, lastly, are also assemblages of humans and machines working together to accomplish the aforementioned goals. Thus, technical rationality's problems do not stem solely from hubris. They arise from the pathologies of "rationalization" and its dominance in social life. Weber suggests that an era dominated by rationalization processes will see the dominance of calculation as the motivation and cause of social action, to the detriment of everything else.
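To make the analogy concrete, here is a deliberately toy Python sketch -- my own illustration, not anything from Weber or Simon -- of a bureaucratic procedure as a finite state machine: codified, impersonal rules moving a case from discrete state to discrete state:

```python
# A bureaucratic procedure as a finite state machine: codified,
# impersonal rules map (state, event) pairs to new states.
RULES = {
    ("submitted", "clerk_review"):         "under_review",
    ("under_review", "approve"):           "approved",
    ("under_review", "reject"):            "denied",
    ("under_review", "missing_doc"):       "returned_to_applicant",
    ("returned_to_applicant", "resubmit"): "submitted",
}

def process(state: str, event: str) -> str:
    # The rule applies regardless of who the applicant is, or whether
    # the outcome makes any human sense.
    return RULES.get((state, event), state)  # no matching rule: nothing happens

state = "submitted"
for event in ["clerk_review", "missing_doc", "resubmit", "clerk_review", "approve"]:
    state = process(state, event)
print(state)  # approved
```

The machine is perfectly "rational" in Weber's sense -- consistent, impersonal, rule-following -- while having no capacity whatsoever to notice that its rules might be insane.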
If Musk and co.'s Singularity has long since happened, we live in the aftermath. And it has not been pleasant, to put it mildly. Think about the human suffering that top-down bureaucratic processes have inflicted, from the most insignificant and banal pains (waiting at the Department of Motor Vehicles) to the most horrific (the massive death tolls arising from failed planned economies in the Communist world during the Cold War and beyond). Or just think about what it means to call something "Kafka-esque." Kafka chronicled a trial run by an impersonal, unfeeling, opaque, and relentless bureaucracy that was also -- by all measures -- insane. The massive power, impersonal nature, and opacity of the bureaucracy ensures that the victim cannot resist, but it is the crackpot nature of the bureaucracy itself that makes the person undergoing its trials a victim in the first place.
The Manhattan Project Fallacy III: Tech and the Real Problems Canard
This digression has taken us a very long way from the topic of Go, deep learning, artificial intelligence, and what technology companies should and shouldn't be doing. Having elaborated at length about why the idea that technics, rational design, and experts can solve intractable social problems is wrongheaded, I will now detail why the continued plea for the tech world to do something -- anything -- to use technics, rational design, and experts to solve intractable social problems is not just profoundly wrong but also militantly stupid. Many of the world's social problems arise from technical rationality and its bureaucratic "technology" run amok. More of the same is not a solution but part of the very problem that technical rationality is supposed to solve.
The particular idea that tech companies should devote most of their energies to tackling and solving enormous social problems that baffle governments and international organizations is bizarre. It is difficult to see how any objective history of the 20th century can justify the belief that all problems are tractable and that it just takes some smart people who care to fix them. Moreover, it is difficult to see how any objective history of the 20th century can avoid the conclusion that top-down centralized planning by scientific-technical experts in social and political matters has led to... undesired consequences, to put it mildly. Yet this is exactly what is entailed by the complaint that the tech industry doesn't tackle those bona fide big problems. Why do these people have such a burning desire to see engineers and bureaucrats use computers to "solve" problems that ANOTHER band of engineers and bureaucrats with computers created in the first place? But that does not even begin to get to the bottom of how idiotic this meme is.
Critics continuously ask "why won't techies focus on the really important problems?" without even the most remote sense of self-awareness about what they are actually asking for. Tech companies excel at... technical and business things like management, logistics, product, marketing, research and development, software engineering, DevOps, and maintaining and servicing large-scale software platforms. Given that they are not governmental entities, the only thing that makes them appealing as devices to solve the 'real' problems is their perceived expertise, technical competence, and resources. "If they're so smart that they can do [tech business thing], why can't they do [non sequitur political/social thing]?" By this logic, I guess it is perfectly fine to ask your English professor to build a bridge, your mailman to handle your taxes, or your therapist to train you to fight like Bruce Lee.
I do not see why this is so hard to understand. Many people know that the government is not a business. For varying reasons, there is no way that the Department of Defense or the State Department will ever become lean and agile startups populated by hungry young entrepreneurs chasing angel investor money. Nor are governmental problems the same as business problems, even if some overlap exists. Want an example? If Facebook's ad team sends you an ad you don't like, you can just click a button and it vanishes from your Facebook timeline. If a bad defense procurement decision leads to a tank crew misfiring a round, a group of young American men and women gruesomely perish when the enemy returns fire. Likewise, tech companies are not the government. They have different goals, motivations, and competencies, much of which are simply not well-adapted to governmental or quasi-governmental functions.
Tech companies can still serve the social good and have a valuable role to play in public policy matters. The question is what that role really should be. Google and other tech companies certainly have something to contribute to the tackling of social problems, and they have already made steps in this direction with efforts such as the Google Ideas think tank. Having been to an event jointly held by the Council on Foreign Relations and Google Ideas, I am a convert to this public/private partnership model and think it should be encouraged. Both sides have expertise, knowledge, and power to contribute. Much of public policy, anyway, amounts to a bunch of men and women sitting around a table talking things through, and the tech world can bring a lot to that table that government and the nonprofit sector do not have.
However, the inconvenient truth remains that the people arguing tech should solve big problems justify this with a misguided belief that technical and business expertise -- and technical rationality more broadly -- helps with intractable social problems and transfers outside of technical and business management domains of competence. Nor is there any idea of precisely *how* Google, Facebook, or any other tech company is to go about trying to solve these problems. Like many of the positions advanced in today's political debates, the *idea* of them doing it is attractive in the abstract, but few pony up much information about the specifics. Vague platitudes about making a difference, doing what matters, and other things that LeVar Burton told impressionable youth during episodes of "Reading Rainbow" are not enough.
Lastly, the vague "tech needs to make a difference" complaint is the fraternal cousin of the equally stupid meme that anything under the sun can be justified if it "starts a conversation." For example, there's always going to be a substantial amount of research, development, and technology funded by people (the Pentagon) that has an undeniable "social impact." The impact, that is, of putting high-explosive rounds on enemy targets. If that's not your idea of social impact, well, tough. The complaint was that tech doesn't tackle the hard problems. While it may not be your idea of a worthy hard problem, preserving US military dominance today is a really hard problem!
I mean, have you seen those briefings about the strategic balance in the Pacific lately? Seems really hard to me, and there's an obvious social impact. And as someone that frequently works in the defense and security world, I obviously see it as a worthy problem to solve! I am being purposefully facetious here, but the underlying point is dead serious. The manner in which tech makes a difference or deals with a hard problem is important, and the mere fact that some tech company or funding agency for a technical project is tackling a hard problem that matters does not inherently make it a Good Thing (TM).
The Manhattan Project Fallacy IV: Let Them Play Go
Perhaps the most baffling thing about the "tech isn't tackling The Hard Problems (TM)" meme is that critics are calling for tech companies to devote their energies to solving social problems -- something that these same critics would most assuredly hate (for good reason) if their wishes were ever really granted. Meanwhile, they ignore the basic way in which technology can serve as a means of realizing their goals simply by existing in the first place. Technical rationality has been the cause of significant problems in our world. But it has also done much to make our lives better in innumerable ways. Having better tools allows us to use them as we see fit to solve the hard problems ourselves rather than passive-aggressively asking a tech company to do it for us. And having better tools also allows us to potentially exploit the beneficial sides of technical rationality while decoupling it from the downsides.
One example of tech being used as a resource for solving a social problem, as my friend Faine Greenwood has chronicled, is that drones are now being used to help indigenous people protect their land, bolster disaster relief efforts, and protect Peruvian archaeological treasures. All of these activities predate inexpensive commercial drones, and people will use other technologies to engage in them long after drones become passé. But having drones available made a real difference. Yes, it's important to qualify this statement: even the most effective technology is but a stopgap solution. However, having a stopgap solution is better than nothing.
Access to free stats software like R and Python's data libraries isn't going to cure world hunger. But a necessary part of curing world hunger is being able to do statistical analysis of the quantitative components of the task. In the past, this was only possible after paying burdensome licensing fees for Stata, SPSS, EViews, SAS, and other similar tools. Now people can do it for free with open source software and share their results using similarly open-source Jupyter Notebooks or GitHub. Inasmuch as this lowers the monetary and transaction costs of doing data analysis to solve social problems, it is far better than NOT having R and Python data libraries.
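As a minimal, hypothetical sketch of what that free tooling buys (the file name and columns here are invented for illustration), the kind of descriptive analysis that once sat behind an SPSS license now fits in a few lines of Python:

```python
import pandas as pd

# Hypothetical dataset: staple-food prices by region and month,
# with columns region, month, price_usd.
df = pd.read_csv("food_prices.csv")

# Average price, variability, and sample size per region.
summary = (df.groupby("region")["price_usd"]
             .agg(["mean", "std", "count"])
             .sort_values("mean", ascending=False))
print(summary)
```

None of this solves hunger by itself, but it removes a paywall between the problem and anyone who wants to study it.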
However, in life there is no such thing as a free lunch. Money does not grow on trees, and neither do high-quality and consistently updated technical systems. What I am getting at is that you cannot get something for nothing. People do not generally create and, above all else, *maintain* elaborate technical innovations out of the goodness of their own hearts or a noble desire to help the oppressed people of Zamboogistan. There are exceptions, but how long do those projects stay maintained? Most people do it because they get something out of it, whether material rewards or fame and recognition. Even nonprofit non-governmental organizations are in competition with each other for funding, resources, and members, and act strategically.
I'm an academic, but I'll be the first to admit that the disinterested pursuit of knowledge is only one part of the equation for us as well. Academic software -- which comprises a lot of the open source software used for analysis -- is produced for academic needs (some of which include making it easier to rack up journal and conference publications). Other technical software and algorithms, as well as the basic and applied science behind them, are funded by the military-industrial complex (whose motivations should be obvious). And when Google, Microsoft, and Facebook release deep learning libraries for free, it is because they ultimately expect some kind of financial return. There is nothing wrong with any of this. It is how social life -- and the practical solving of real world problems -- works. You scratch my back, I scratch yours, and so on.
Let's now return to Google's Go-playing artificial intelligence program that I mentioned at the beginning of this essay. The true promise of artificial intelligence, like any innovative technology, is that it provides tools for others to use. Companies like Google might already contribute substantially to solving social problems by producing tools that the rest of us can use to better approach them. Something like Google's Tensorflow, in the hands of a specialist capable of utilizing it in the appropriate manner, is useful in and of itself. Having better tools is much better than not having better tools. Which is why I think that criticizing tech companies for focusing on the supposedly trivial things is wrongheaded.
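For flavor, here is about the smallest possible TensorFlow sketch -- synthetic data and a toy regression task of my own invention, not anything from Google's Go work -- showing the general-purpose machinery such tools put in a specialist's hands:

```python
import numpy as np
import tensorflow as tf

# Synthetic regression data: the tool does not care whether the inputs
# describe Go positions, crop yields, or disaster-relief logistics.
X = np.random.rand(1000, 4).astype("float32")
y = X @ np.array([2.0, -1.0, 0.5, 3.0], dtype="float32") + 0.1

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
print(model.predict(X[:3]))  # three predictions from the fitted model
```

The point is not this particular toy but the division of labor: Google builds the tool for its own reasons, and the specialist points it at whatever problem actually matters to them.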
Yes, there's something kind of childish about marveling over the ability of computers to beat humans at board games. But so what? How many people joined the military because they played with GI Joe toys when they were little? How many people entered science because they watched Carl Sagan talk about the wonders of the universe in his turtleneck and blazer with elbow patches? People need excitement, thrills, and motivation to do hard things. Beating humans at Go is a very hard challenge due to the mathematical complexity of the game. And yet Google's machine seems to have mastered it. Hooray for them! Additionally, as long as the scientific claims made about the system's broader capabilities check out, there's the possibility of it being useful for things besides beating people at Go. Which is a positive thing! What is the harm done here?
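For a back-of-the-envelope sense of that complexity: each of the 361 points on a Go board can be empty, black, or white, giving a crude upper bound on the number of board configurations that a couple of lines of Python can put in perspective:

```python
import math

# Crude upper bound on Go board configurations: 361 points, 3 states each.
log10_states = 361 * math.log10(3)
print(f"3^361 ~ 10^{log10_states:.0f}")  # ~10^172 -- far beyond brute-force search
```

Chess, by comparison, has a state space commonly estimated around 10^47, which is part of why search-plus-evaluation worked for chess programs but something qualitatively new was needed for Go.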
Is there some urgent social problem that Google -- as opposed to the United Nations, Amnesty International, or the United States government -- is uniquely equipped to solve that is being neglected in the rush to make cool machine learning algorithms that can school people at Go? I really don't think so. But any of those public policy organizations equipped with Google's technology at least has different options available to it. Options that might not have emerged if Google lacked sufficient incentive to develop technologies through things like beating human players at board games. Does having better technology in any way suggest that the problem will change? Nope. However, on the margin, having such technologies available might make it somewhat easier to solve the hard problems. So let Google play Go, or do whatever the hell else it wants to devote large sums of research and development $$ and billable hours to. If it leads to something that the rest of us can use, great. If not, then I struggle to see what the opportunity cost is.