The rise of the machines
Technology is providing ever more innovative ways to augment us – human beings – enabling us to respond better to a world moving at a faster pace, and to more easily secure that all-important competitive edge for businesses and industry. Institutions and governments are facing mounting social costs, while communities and individuals alike are looking for healthier, happier lives. These challenges and aspirations press researchers to push past technology’s boundaries and develop smarter machines, since these intelligent constructs represent the most practical way to augment our capabilities.
Since the dawn of Homo sapiens, we have crafted machines to help us; what we are seeing today is an acceleration of that process. Part of this hastening is the result of having reached a tipping point: we are no longer required to transfer human knowledge to a machine for it to become smarter. We have forged machines that can learn on their own by observing us, making sense out of big data, and watching experiences as they unfold on the web. Just as important is the ability of these clever creations to test their newfound knowledge both against other machines and internally, through the use of virtual clones.
To be sure, Artificial Intelligence (AI) has progressed significantly, though it’s more in terms of its applicability than in terms of absolute theoretical progress. AI can do far more today because processing power is abundant, cheap and (almost) ubiquitous. This, along with pervasive communications, and sensors and storage capabilities, has led to an inflection point in the availability of data. The consequence is twofold: the data space is so big that it is beyond human comprehension, and it is fueling machines’ capabilities, intelligence, and continuous learning.
This process has just begun, and there’s no end in sight. Machines have evolved beyond their clockwork origins, and are likely to surpass humans in a growing number of areas.
Man over machine
Until now, building a better mousetrap has served to improve the human condition. We have consistently benefitted from machines and have always had the power to shut them off…and sometimes we did. Consider the devastating weapons that, while immensely effective, could lead us down the path to wholesale destruction.
The growing issue is that the vital role played by machines – both as single entities and collectively as infrastructures – means we basically no longer have the option of simply “shutting them down”. Think about the power grid, with its hundreds of thousands of people working around the clock to ensure that it stays on. The idea of turning it off (e.g., to reduce CO2 emissions) is simply no longer an option.
Surgery is becoming progressively robotized, and medical diagnostics have become fully machine-dependent. Automated machines today manufacture drugs. Pulling the plug on these operations would have dire consequences for thousands, if not millions, of people worldwide.
Yet, we can still claim that we are using machines as extensions of ourselves, leveraging them as stronger, faster, and cheaper hands. Because of this, humans still triumph over the machine, but it is up to us to decide where we go from here.
More recently, machines have risen to become more than merely our augmented hands; they are now beginning to amplify our cognitive capabilities. It’s subtle, but it is happening.
Most of us have a symbiotic relationship with our smartphone. But how often do you really use it to call other people? More often than not, we’re using our devices to call machines. Need help reaching a destination? Let your smartphone show you the way. Want to impress your guests by cooking a gourmet dinner? Get out your smartphone and look for a recipe. Feeling unwell? Dr. Smartphone can check your pulse, face color, and other health cues, and then offer advice.
These are but a few examples of how we’re starting to engage with machines, but the list is growing rapidly. In the coming years, we can expect more people to have a digital doppelgänger flanking them. On one hand, this twin will be a digital mirror allowing for monitoring of any telltale signs of health problems. On the other hand, it will take on a life of its own in cyberspace, becoming a repository of our existence, a sort of black box that can be queried at will. It could even become a means of simulating next steps, whether in business, education, health, or even entertainment, allowing us to determine the best path forward.
Machine over man
Extending our sphere of knowledge by using machines as tools is a grand idea…as long as we’re on the winning side. But are we truly still in command of our intelligent machine creations? If the answer to this is “no,” we may find that we have a lot to lose.
That human jobs are being lost to machines is nothing surprising or new. The revolution in agricultural machine technologies has multiplied yields, making it possible to feed 7 billion people better, on average, than 1 billion people were fed two centuries ago. There’s no going back now – if we were to turn off the machines that produce fertilizers and insecticides, and that make it possible for one person to do the harvesting work of 10, the penalties would be famine and, quite possibly, death on a massive scale.
If robots have eaten human jobs like candy, then autonomous vehicles could be like setting a glutton loose at an all-you-can-eat buffet. From this perspective, we’re losing to machines, because they perform better than we do at certain things. The digital transformation promises to be even worse. During the late industrial revolution, when machines stepped in to take our jobs, it was all about automation. With the digital revolution, jobs aren’t being replaced by machines; they’re simply disappearing. There’s no longer a need for automated paper shuffling because there’s no paper anymore.
Sure, AI can replace a few brains; more critically, however, it makes those brains redundant. Think about self-driving, autonomous vehicles. You no longer need operators – which equates to the potential loss of 3 million or more driving jobs in the U.S. alone – but you also no longer require that many vehicles. Today, we buy a car and then leave it parked along the sidewalk or in a garage 90 percent of the time. With their commoditization, a mere 40 percent of today’s vehicles would be ample to satisfy all of our transportation needs. This translates into reduced manufacturing demand, along with fewer red lights, traffic signs, and police.
A world permeated by machines performing in the knowledge space, a world customized for machines, is quite different from the one we know now. The corollary is that we lose along several fronts, and they win.
But hang on a sec – it’s clear that in such a world, we would lose…but doesn’t winning require some sort of awareness or sentience? Would a machine ever be motivated by winning to pursue a strategy that would allow it to do so? It might seem far-fetched, but we’re now seeing the first examples of this motivation, leading machines to devise strategies for winning on their own.
For example, DeepMind’s AlphaGo did just that, leading to its defeat of world Go champion Lee Sedol. I am also fairly certain that militaries worldwide are building smart weapon systems capable of pursuing their (assigned) goals using strategies of their own making. Bots deployed in the financial markets are acquiring a degree of autonomy, finding new means of meeting their programmed targets.
Now, we can take some comfort from the “assigned” part of this narrative; however, notice how I included “assigned” in parentheses. How long can we trust an intelligent autonomous system to play within human-assigned boundaries? This isn’t just about software bugs or hacking. The very concept of “autonomous and intelligent” means these machines are taking on lives of their own. And in many areas, a fully performing autonomous system needs leeway to wholly exploit its intelligence and logic capabilities.
It’s a sort of catch-22: the more intelligence and autonomy you allow a machine, the better it will perform and the more useful it will be. But at the same time, the more leeway you grant, the less control you have.
As noted at the start of this piece, we’ve already reached a turning point where humans alone are no longer able to extract all the potential value from cyberspace; we can do that only through the use of machines. For machines to be able to do that, they need to be, well, better than us. So we’re already in that catch-22 situation.
Man and machines together
So far, the division between humans and machines has been clear – I’m here, the machine is there – but that boundary is getting fuzzier. Smart prosthetics fuse seamlessly with our bodies, making up for lost limbs or providing additional strength, stability, or resilience, as seen in exoskeletons donned by assembly line workers.
We use our smartphones symbiotically, but what if they were integrated directly into our bodies? Think of a smartphone in the form of a contact lens capable of transparently delivering augmented reality images straight to the brain. Think it sounds like science fiction? Think again. The first prototypes have already been built.
Soon, brain-computer interfaces could become seamless as well, creating a new synergistic relationship between the cloud and us. At that point, the question of who knows what would be moot; you ask me a question and I know the answer. Sometimes that answer will be stored in my own neural circuitry, but most of the time it would come from the connection of my neurons to the web.
Of course, the real problem is not where the knowledge is stored, as long as it is seamlessly accessible. The real problem lies in where the decision-making process takes place! The answer to this is very complex; it’s already an issue today and, truthfully, it has been an issue for centuries.
Think about it. Our brain’s decision process is influenced by the way it has been “educated” by its cultural context. These external factors influence our decision processes to the point that, in certain situations, we can legitimately claim the influence has been so strong that our brains can’t be held accountable for the choices made.
The point I’m trying to make is that we humans are in symbiosis with our cultural environment, and the tools – both physical and conceptual – that we have been taught to use.
In a way, it will be no different in the coming decades. Our context will change, becoming permeated by intelligent machines. Much like we do today with our fellow humans, we’ll have to contend with and negotiate our decisions with these smarter mechanized constructs. And just like with humans, there will be advantages and disadvantages alike.
My guess is that the transformation will be subtle. We’ll neither realize it is happening nor notice that it has happened. How deeply do we comprehend that our decision to buy a certain brand of toothpaste was influenced by a commercial that we saw a month ago and have since forgotten?
This isn’t a win or lose situation. We’re going to wind up as a partner to our smarter machines, and that partnership will be fostered by our augmentation through technology. Machines will play an essential role in this augmentation and, as with any successful technology, they will fall below our level of perception. In the end, the revolution will be silent and invisible.