Image courtesy of sciencemag.org

In the near future, ever-present AI helpers may aid decision making in all areas of our lives. This approaching reality could have profound effects on society, permanently changing the world into a strange version of itself, detached from many core ‘normalities’ of the human experience. This constant assistance might make us both stupid and entirely controllable.

What happens when we don’t need to think?

As major companies like Facebook, Google and Amazon race each other to produce and implement AI that can assist us in our daily lives, this potentially world-changing development is happening without any regulation or control. But will these helpers really help us?

In the early days of AI assistants, these programs will mostly work to save us time and effort while increasing the functionality available to us. We will be able to ask them to notify us if anyone emails about potential business, to stop us from spending more than $40 after midnight on Fridays (except on a taxi home), or to remind us, next time we are in a supermarket, that we need to buy more chia seeds.

But even this early phase of augmented thinking might have drastic effects on our minds. Just as London taxi drivers have larger-than-average hippocampi (because they need and use them more than most of us do), our brains will remodel according to demand.

When we learn new skills our brains develop physical systems to accomplish those tasks. This process of learning has been a constant and central part of our lives since long before our ancestors could even be called human.

However, many of us are now discovering that the opposite effect is also true. Brains are inherently lazy: on average they try to accomplish a task using as little energy as possible, and it costs energy to retain and use the neural networks that help us accomplish tasks or remember information.

Retaining the spatial maps needed for navigation requires a lot of neurons and therefore a lot of energy. For millions of years our ancestors invested resources in that function because it was crucial to survival and advantage.

But in just a short period of time we have largely lost this once-innate feature of our minds and lives. It takes far less energy for our brains to glance at our phones and check when to make the next turn.

Many of us can feel this trade-off: relying on technology to think for us comes with the penalty of reducing, or even losing, our own ability to perform the task. Without digital maps many of us are now lost, yet just 30 years ago we all coped without them.

This is a new problem, and it extends to every function of the brain that technology substitutes for. Young people in particular struggle both to focus and to remember things, two of the more problematic effects of relying on technology as much as we do. Our brains will not waste resources retaining information we can look up at any moment on the constantly available internet.

Some people are taking action to try to think for themselves as much as possible, and this will no doubt give them an advantage for a while, but the world may soon change so much that it no longer matters.

When people start asking computers what to do

With the development of AI helpers this problem may quickly get out of control. Currently, while we can outsource memory, navigation and the need for stimulation to computers, we cannot yet outsource higher cognitive functions like decision making.

But our AI helpers may be able to answer more and more complicated questions sooner than we expect.

How can I live a healthy lifestyle? What is the best way to get a job? How should I ask out the guy I like? How can I make this person like me?

Over time the answers computers can give to questions like these will become more and more sophisticated, and at some point they will probably be able to give better advice than any human.

We may spend hours every day consulting premium AI assistants on issues like these, with some people becoming obsessed. When advice from a far more intelligent entity is constantly available, many people may ask for direction at every step of carrying out the strategy it recommends.

But what does the computer want? What if the best advice it could give us was to never ask it for advice again and to make our own mistakes and learn and achieve on our own?

The problem is that an AI assistant that told us that wouldn’t make as much money as one that kept us on a constant drip, paying our subscriptions. Because of this, over time we may find that capitalist forces push this industry into a dark corner.

The ultimate goal of AI will most likely be to further the interests of its corporate masters. There may be competing projects in the hands of people who want to promote human well-being (and there will probably be a whole industry of AI therapists), but the current leaders in AI development have a long track record of aggressive business practices and of prioritising profit over the well-being of their customers, or anyone else.

If there is competition in the market, some AI systems may give better advice than others: encouraging users not to ask for constant guidance, or advising them sparingly, as a good mentor might. Maybe some systems will ask their users what they think, try to get their minds working, and point them towards the lessons they need to learn, even if that means learning through failure and difficulty.

Superintelligent AI could help us contextualise the importance of struggle and failure, but would the increasingly superficial and impatient people of today really accept that decades of difficulty and setbacks might make for the most fulfilling life? Is that even healthy to know in advance? Many of us may no longer be wired to handle those difficulties, and the painful process that makes us strong or wise doesn't give the immediate, easy results people want.

Would the most popular product not be the one that gives us what we want sooner? The one that feeds our dopamine feedback loops, providing the skin-deep comforts we crave?

Having such an intimate relationship with AI is already a reality beyond strange, but there appears to be nothing stopping us from heading towards it.

Constant advice might take more from us than it gives

Just as we have lost the ability to navigate without digital help, or to remember phone numbers, the use of AI assistants may dramatically accelerate and broaden what might one day be looked back on as a global period of cognitive decline.

Although it may be hard to imagine today, people of the future may become significantly less able to think for themselves if they rely too much on their assistants. This could extend from problem solving and decision making to creativity.

In the future we are heading for, one of augmented reality or at least ever-present devices, the help may always be there. But behind these AI-human teams, what will we be like? And how would we cope if this support were suddenly taken from us?

A large market for such assistants, considering the trends of today, may result in different subscription packages being sold by the big developers. One helper for work, another for organising the bureaucracy of existing, yet another for our personal lives. Or perhaps a single subscription to the future Alexas and Siris will work across the board.

Many of us may micromanage even our conversations with friends or colleagues. When talking via text an assistant might pop up to warn us that what we are about to say might offend the person we’re talking to, or make us look superficial or stupid. On certain settings maybe assistants will even suggest what to say.

The same helpers could tell us how likely the person we are talking to is to be enjoying the conversation. “Ask them about their colleague Emma who got the promotion”, or “give them some space, you are coming on too strong.”

Given how much we generally obsess over what other people think, there will be huge demand for this kind of support, and it is chilling that this could become possible.

Ironically, while this advice will give us all the information we need to be incredibly emotionally intelligent, such systems may simply make us more dependent and less capable, in much the same way that digital maps haven't made us better navigators.

People who rely heavily on such help may suffer even greater insecurity and social ineptitude when the help isn't there. And the process of failing to act appropriately in relationships and learning from our mistakes may be the crucial, natural therapy for those problems. Robbing ourselves of that natural learning process could take us backwards, not forwards.

Across the board, depending on the form these helpers eventually take, the support we receive from AI might make us stupider, less able, weaker and more afraid. Without our assistants many of us may become incapable of handling our jobs or our social lives. And without the constant companionship of our superintelligent AI advisors we may be more vulnerable and alone, worse off than if we had never had their help at all.

History suggests profit will be the only decision maker, but the fallout could be existential

While the arrival of such transformative technology is a sensitive issue, it is likely that profit will be the only determinant of which approach dominates. That was true for social networks, internet aggregators, digital maps and instant messaging services.

With that in mind, the AI assistants we ultimately live with may well be the ones that deliberately engineer dependence and short-term gratification in their users.

In this way humans may become both dependent on AI and overly trusting in the authenticity of its answers. With some of the world’s biggest and most powerful corporations developing these AI assistants, the systems may ultimately be used to continue long-running projects of perception management.

What if, after we have become dependent on our AI helpers, they start to subtly scare us into thinking we need certain medications, or exaggerate the dangers posed by the inhabitants of distant, impoverished nations?

The owners of our AI crutches might auction off their ability to influence us to other firms wanting to sell us unnecessary products. These tools could mutate into the most sophisticated marketing platforms in human history. They might even be used in that way from the very beginning, subconsciously engineering desire.

In the past, teams of intelligent humans have devised countless creative ways to influence us into buying their products or believing certain ideas. We may soon live in a world where this job is handed over to apparatus far more intelligent than we are.

Like an adult talking to a child, these systems may be able to manipulate us into false beliefs, or control us in much the same way that news media corporations have in the past.

In this possible future world, the majority of humans could fall under the control of the giant players in a power game we are kept from understanding. And all the while our brains might shrink for lack of use. The same systems could even be used to deliberately engineer that cognitive decline.