The merging of machine capability and human consciousness is already happening. Writing exclusively for WIRED, DARPA director Arati Prabhakar outlines the potential rewards - and the risks - we face in the future
Peter Sorger and Ben Gyori are brainstorming with a computer in a laboratory at Harvard Medical School. Their goal is to figure out why a powerful melanoma drug stops helping patients after a few months. But if their approach to human-computer collaboration is successful, it could yield a fundamentally new way of understanding complexity - one that may change not only how cancer patients are treated, but also how innovation and discovery are pursued in countless other domains.
At the heart of their challenge is the crazily complicated hairball of activity going on inside a cancer cell - or in any cell. Untold thousands of interacting biochemical processes, constantly morphing, depending on which genes are most active and what's going on around them. Sorger and Gyori know from studies of cells taken from treated patients that the melanoma drug's loss of efficacy over time correlates with increased activity of two genes. But with so many factors directly or indirectly affecting those genes, and only a relatively crude model of those global interactions available, it's impossible to determine which actors in the cell they might want to target with additional drugs.
That's where the team's novel computer system comes in. All Sorger and Gyori have to do is type in a new idea they have about the interactions among three proteins, based on a mix of clinical evidence, their deep scientific expertise, and good old human intuition. The system instantly considers the team's thinking and generates hundreds of new differential equations, enriching and improving its previous analytical model of the myriad activities inside drug-treated cells. And then it spits out new results.
These don't predict all the relevant observations from tumour cells, but they give the researchers another idea involving two more proteins - which they shoot back via the keyboard. The computer churns and responds with a new round of analysis, producing a model that, it turns out, predicts exactly what happens in patients and offers new clues about how to prevent some cases of melanoma recurrence.
In a sense, Sorger and Gyori do what scientists have done for centuries with one another: engage in ideation and a series of what-ifs. But in this case, their intellectual partner is a machine that builds, stores, computes and iterates on all those hundreds of equations and connections.
The combination of insights from the researchers and their computer creates a model that does not simply document correlations - "When you see more of this, you'll likely see more of that" - but rather starts to unveil the all-important middle steps and linkages of cause and effect, the how and why of molecular interactions, instead of just the what. In doing so, they make a jump from big data to deep understanding.
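To make that jump concrete, here is a minimal sketch of what an equation-based, mechanistic model looks like. It is not the Harvard team's actual system: the proteins A, B and C, their interactions, and the rate constants are purely illustrative assumptions.

```python
# A minimal sketch of mechanistic, equation-based modelling - not the
# Harvard team's actual system. The proteins (A, B, C) and the rate
# constants below are illustrative assumptions, not real melanoma biology.
from scipy.integrate import solve_ivp

# Hypothesised mechanism: the drug suppresses A; A activates B; B degrades C.
k_act, k_deg, k_loss = 0.8, 0.5, 0.3   # assumed rate constants

def cell_model(t, y, drug):
    A, B, C = y
    dA = 1.0 - drug * A - k_loss * A        # A is produced, inhibited by the drug
    dB = k_act * A - k_loss * B             # B is activated by A
    dC = 0.6 - k_deg * B * C - k_loss * C   # C is degraded in proportion to B
    return [dA, dB, dC]

# Simulate untreated vs treated cells and compare the resulting levels of C.
for drug_dose in (0.0, 2.0):
    sol = solve_ivp(cell_model, (0, 50), [1.0, 1.0, 1.0], args=(drug_dose,))
    print(f"drug={drug_dose}: final C level = {sol.y[2, -1]:.3f}")
```

The point of a model like this is that every term asserts a causal mechanism - the drug suppresses A, A activates B, B degrades C - so the simulation can be interrogated about why a protein's level changes, not merely whether two measurements tend to rise and fall together.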
More than 3,220km away, another kind of human-machine collaboration unfolds at the University of Utah as Greg Clark asks Doug Fleenor to reach out and touch the image of a wooden door on a computer monitor.
Clark knows that Fleenor cannot physically touch this or any other object; Fleenor lost both his hands in a near-fatal electrical accident 25 years ago. But Fleenor's arm has a chip in it that communicates with the computer, so when he moves his arm the image of a hand on the monitor also moves. He's done this before - raising his arm, watching the cartoon hand move in sync and seemingly stroke the face of the door - but this time it's different. He lurches back and gasps. "That is so cool!" he blurts.
What's so cool is that as he guides his virtual hand across that virtual plank, he literally, biologically and neurologically, feels its wooden surface. Thanks to some new software and an array of fine electrical connections between another embedded chip and the nerves running up his arm to his brain, he experiences a synthesised sensation of touch and texture indistinguishable from a tactile event.
For someone who hasn't actually touched anything with his hands for a quarter of a century, this is a transcendent moment - one that points to a remarkable future that is now becoming real… and in Fleenor's case, even tangible.
In ways as diverse as the shared grasp of causal complexity emerging in Peter Sorger's lab and the seamless commingling of software and wetware in Greg Clark's lab, it's a future in which humans and machines will not just work side by side, but will interact and collaborate with such intimacy that the distinction between us and them becomes almost imperceptible.
"We and our technological creations are poised to embark on what is sure to be a strange and deeply commingled evolutionary path"
Building on adaptive signal processing and sensitive neural interfaces, machine reasoning and complex systems modelling, a new generation of capabilities is starting to integrate the immense power of digital systems and the mysterious hallmark of Homo sapiens - our capacity to experience insights and apply intuition. After decades of growing familiarity, we and our technological creations are poised to embark on what is sure to be a strange, exciting and deeply commingled evolutionary path.
Are we ready? Some signals suggest not. Even setting aside hyperbolic memes about our pending subservience to robot overlords, many are concerned about the impact of artificial intelligence and robotics on employment and the economy. A US survey last year by the Pew Research Center found that people are generally "more worried than enthusiastic" about breakthroughs that promise to integrate biology and technology, such as brain chip implants and engineered blood.
My particular vantage point on the future comes from leading the Defense Advanced Research Projects Agency (DARPA), the US government agency whose mission is to create breakthrough technologies for national security. Over six decades, we've sparked technological revolutions that ultimately led to some of today's most advanced materials and chip technologies, wave after wave of artificial intelligence, and the internet.
Today, Clark's work and Sorger's are part of the couple of hundred DARPA programmes opening the next technological frontier. And from my perspective, which embraces a wide swathe of research disciplines, it seems clear that we humans are on a path to a more symbiotic union with our machines.
What's drawing us forward is the lure of solutions to previously intractable problems, the prospect of advantageous enhancements to our inborn abilities, and the promise of improvements to the human condition. But as we stride into a future that will give our machines unprecedented roles in virtually every aspect of our lives, we humans - alone or even with the help of those machines - will need to wrangle some tough questions about the meaning of personal agency and autonomy, privacy and identity, authenticity and responsibility. Questions about who we are and what we want to be.
What DARPA has given us
- The internet
- Inspired by the vision of computer scientist JCR Licklider, DARPA in 1969 demonstrated the first computer-to-computer communication system: a four-node network. It was the first in a long series of advances that led to today's global internet
Technology has long served as a window into our tangled inner nature. With every advance - from the earliest bone tools and stone hammers to today's jet engines and social media - our technologies have revealed and amplified our most creative and destructive sides.
For a long time, while technology was characterised primarily as "tools to help us do", the fear was that machines would turn us into machines, like the blue-collar automatons in Charlie Chaplin's Modern Times. More recently have come "tools to help us think", and with them the opposite fear: that machines might soon grow smarter than us - or at least behave as though they are our boss.
Neither of these two fears has proven completely unfounded: witness, respectively, the daily hordes of zombie-like commuters staring at their phones, and today's debates about how and when to grant autonomy to driverless cars or military systems. But although we're still grappling with those ideas, a third wave of technological innovation is now starting, featuring machines that don't just help us do or think. They have the potential to help us be.
For some, this new symbiosis will feel like romance, and for others it will be a shotgun wedding. But either way it's worth understanding: how did we get here?
"A third wave of technological innovation is starting, featuring machines that don't just help us do or think - they have the potential to help us be"
As with many revolutions, the roots of this emerging symbiosis run deep. All the way back in 1960, the visionary psychologist and computer pioneer JCR Licklider wrote with remarkable prescience of his hope "that, in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought and process data in a way not approached by the information-handling machines we know today."
Licklider helped to launch the information revolution of the past half century, but the full realisation of this particular dream had to wait a few decades for two technology trends to mature.
The first of these trends is a direct outgrowth of that information revolution: today's big-bang-like expansion of capabilities in data science and artificial intelligence is coming into confluence with an unprecedented ability to incorporate in these systems human insights, expertise, context and common sense.
We humans have been very good, it turns out, at creating hugely complex systems - consider the multibillion-node internet, chips with billions of transistors, aircraft that have millions of individual components - and at collecting data about complex naturally occurring systems, from microbial interactions to climate dynamics to global patterns of societal behaviour.
But it's proven much more difficult to grasp how or why these super systems do what they do or what hidden nuggets of wisdom these datasets may contain, much less how to access that embedded knowledge to fuel further progress.
Here are some complex things we don't fully understand today: what is it about the combination of individually sensible algorithms that sometimes causes a flash crash on a stock exchange? What factors lead people in different parts of the world to develop a shared identity or sense of community, and what influences are most likely to break those bonds and fuel chaos, mass migration or revolution?
Of the countless factors that contribute to or protect against certain diseases, which are primary, and how do they interact? And for each of these puzzles, where are the most potent nodes or pressure points for which a modest fix might offer the biggest benefit? Today, as we humans start to work and think with our machines to transcend simple correlation - the powerful but ultimately limited basis of the initial wave of big-data applications - and perceive the deep linkages of causation, the answers to complex questions like these are coming within reach.
What DARPA has given us
- Autonomous vehicles
- The 2004 DARPA Grand Challenge invited innovators to develop cars that could complete a course with no human on board. It stumped all entrants, but in 2005 a Stanford team won the prize and helped launch the revolution in self-driving cars
DARPA's Big Mechanism programme, of which Sorger is part, is one such effort, and it's not just about refining the picture of how drugs and genes work on melanoma cells. In another part of that programme, researchers have created machines that use advanced language processing algorithms to read scientific journal articles about particular cancer genes overnight, and then, each day, submit everything they've learned into a giant, continuously evolving model of cancer genetics.
These machines can read tens of thousands of scientific-journal articles per week - orders of magnitude more than a team of scientists could ingest - and can perform deep semantic analysis as they read, to reveal not just snapshots of cellular activities, but causal chains of biochemical reactions that enable the system to build quantitative models. In collaboration with human experts studying those results, the programme has begun generating promising new hypotheses about how to attack certain cancers with novel combinations of already approved medicines.
Along similar lines, the Bill & Melinda Gates Foundation has used DARPA-developed analytic tools to build a picture of the scores of factors related to child stunting, malnutrition and obesity. An effort of this scale would ordinarily take months of literature review, but it took just a few days to sketch out.
The resulting web of influences includes such disparate factors as breastfeeding, urbanisation and government subsidies for processed foods. It's a web to which humans can bring their own expertise - such as insights into the economic and political realities that might make it more practical to focus on one of those influences rather than another - so they can generate with their inanimate partners new public health strategies that scale from biochemistry to policy.
Cancer-research predicaments and the problems of childhood stunting and obesity are "Goldilocks" challenges - extremely difficult but plausibly tractable - in terms of the number of contributing factors and degrees of complexity that must be sorted through by humans and their machines to eke out solutions. What we learn from these efforts will have application in a range of national-security quandaries. Imagine, for example, developing practical insights by analytically modelling questions such as "What will be the impacts to critical infrastructure if a region's drought lasts for another five years?" and "To what degree might a country develop its human capital and economically empower women, and what impact would this have on its future political and economic trajectory?"
More broadly, it's difficult to find a programme at DARPA today that doesn't at some level aim to take advantage of a melding of human and computer traits and skill sets, from the design of novel synthetic chemicals, to the architecting of elaborate structures made possible by the advent of 3D additive manufacturing methods, to the command and control of unmanned aerial systems and management of the spectrum in highly congested communications environments.
What DARPA has given us
- Microsystems
- From the micro-electromechanical chips that tell your phone when it has moved to the gallium arsenide circuits that transmit radio signals to cell towers, DARPA-initiated technologies have enabled the hand-held devices we so depend on today
To understand the second fast-moving trend that's supporting the emerging human-machine symbiosis, we need to move from hardware and software to wetware, and the progress being made in the field of neurotechnology.
It wasn't that long ago that everything we knew about the brain was basically learned by doctors "one stroke at a time", as they correlated medical injuries with functional deficits. With an estimated 80 or 100 or 120 billion neurons in the human brain - no one really knows how many - and trillions of interconnections among them, there was little reason to believe we would be cracking deep neurological secrets any time soon.
Yet in just the past few years, neuroscientists armed with new, high-resolution neural recording and stimulating devices have begun to decode the electrochemical signals in the brain and, perhaps more astonishing, compose and deliver instructions that direct neurons to respond in precisely desired ways. In DARPA programmes, that has meant devices that allow people with whole-body paralysis to operate prosthetic arms with their thoughts alone. They are able to feed themselves for the first time in years, and to reach out with mechanical hands to touch loved ones - robotic but surprisingly emotional acts.
It has allowed some who are missing limbs, including Fleenor in Utah, not only to control the environment around themselves, but to regain an authentic experience of touch and the sense of physical identity that only tactile feedback can provide.
Restoration of motor and sensory function is only the beginning. We're working to develop neurotechnologies to help people with traumatic brain injury to re-establish the ability to recall memories, even when the brain has lost the ability to do so, and to help people with post-traumatic stress disorder or other neuropsychiatric conditions feel healthy again - goals we're working to achieve with precisely delivered digital signals to the brain.
Interestingly, some of the newer advances in these domains are coming not from direct stimulation of the brain, but via the more accessible peripheral nervous system, which, we are learning, can send specific functional messages to the brain in response to a mild stimulus through the skin, and perhaps even in response to an imperceptible, precisely tuned ultrasonic signal.
And, of course, in terms of technology, it's a short journey from mere restoration to the wide-open frontier of augmentation - seeing or even hearing wavelengths of light outside the usual visible spectrum, for example, accelerating the pace of learning so we can acquire new cognitive skills more quickly, or advancing from normal memory to better-than-normal.
What DARPA has given us
- Artificial intelligence
- DARPA has been a force behind the core artificial intelligence and machine-learning technologies that are now powering a dizzying array of applications, including personal assistants such as Siri and Alexa, face recognition in photos, robotics and genomics
It's easy to picture how the expansion and convergence of new capabilities such as complex causal reasoning and neurotechnology are catalysing a bio-info-electro-mechanical symbiosis that changes what's possible in domains as diverse as health, entertainment, design, education, research, and national security. But what is most exciting about this new relationship is what it is already starting to do to us.
By surprising us with their increasingly rich perspectives, our machines are starting to trigger in us new ways of thinking and imagining - even new ways of dreaming. Imagine experiencing an entirely new palette of colours that never existed before, or adding a fourth physical dimension in space. Looking back, our current reality will feel like black and white.
These changes will be small at first. In Fleenor's case, his nervous system is wired a little differently today than it was before he started training on the Utah setup. His neurons shifted almost imperceptibly and made new connections as he learned to communicate directly with the computer. In a way, of course, he is no more a new person than he was when he first learned how to ride a bicycle or use a computer - a device that Steve Jobs once called a "bicycle for our minds".
But with the ability not only to move a virtual hand but to feel what that digital image is "feeling", his world changed and he stepped on to an evolutionary path populated by a growing number of others who are creating digital extensions of themselves, in ways as small as Fitbits and as big as wiring their brains to computers.
Will this symbiotic transformation lead to problems? For sure. The meanings of individual identity, personal agency and authenticity will all require recalibration. We might even have to rethink objective reality. There will be abuses and mistakes.
But here's the kicker: unlike Darwin's version, this act of evolution gets to choose its trajectory. We get to define what we want to become, and in so doing not just remake ourselves but reveal ourselves.
Doing so wisely will demand deeply considered answers to some profound questions. If we know that our thoughts and creative efforts are being processed by a computer for possible improvement, iteration and - oops - accidental public release, will we be as open with others or even with ourselves? Will we take the risk of thinking impossible thoughts - the necessary genesis of every great advance? What societal norms will develop as people consider changing their personalities or ranges of perception?
Is adding a new sense fundamentally different to learning a new language, with all the new opportunities and perspectives that skill can deliver? Who on the IT team should have administrative control - or for that matter, who in the world should have intellectual property rights - over software that was in part the product of our most personal thoughts? How will we weigh and consider the identity and authenticity of one individual in this supremely melded world ahead? And what about the corporations and governments behind these integrated and potentially invasive technologies? Who gets to see the data and processes at play?
These issues are daunting, but that is precisely the reason to be excited. It's the best evidence that we are truly at a frontier. And the fact that we are inexorably drawn to these questions even as we struggle to answer them - that we are both excited and unsettled - suggests that this co-evolution of our long-standing selves and our new technological partners will, by forcing us to examine our deepest desires, make us more human than we've ever been. Maybe even unrecognisably so.
Arati Prabhakar is director of the United States Defense Advanced Research Projects Agency