When I studied math in the late 1960s, I learned how to do complicated calculations using a slide rule. All engineers worth their salt walked around with a slide rule in their breast pocket, and an engineer I was to be.

Then came the digital calculator, and I was to be something else, and the rest is history, as they say.

Today, hardly anyone knows what a slide rule is, let alone how to use one.

Just as the path of humanity is lined with new knowledge and skills, it is also lined with knowledge and skills that have lost their relevance and been unlearned over time.

Learning and unlearning tend to go hand in hand. For the most part, unlearning is seen as something positive; we unlearn what seems obsolete to make room for what seems more useful. Anyone proposing the revival of the slide rule would be perceived as slightly eccentric.

That said, we might be on a fast track to unlearn things we shouldn’t. Most importantly, the ability to think for ourselves and to communicate reasonably well what we believe.

We are increasingly communicating via machines that can be digitally manipulated to erase the distinction between true and false, real and unreal, and to conceal who is sharing what with us and why. This, I suspect, is making us unlearn how to communicate well.

We are rapidly outsourcing our thinking to machines we perceive as intelligent, because they seem to show, in field after field, that they can think better than us. Or rather, we are unlearning how to think for ourselves, since there will be machines that think better and faster than we do, increasingly doing so in ways that seem more human.

“We might be on a fast track to unlearn things we shouldn’t”

While it is true that humans still, for the most part, make the machines, for example by writing the instructions, or algorithms, that tell the machines what to do, the machines are already able to make themselves, for example by writing instructions for new machines. This means that humans may be lured to unlearn that skill as well. Why learn something that machines can do better and faster?

It has yet to be determined what the self-programming machines might be made to do. Still, the warnings that they might be made to do things we don’t want them to do now come from the very people who invented and developed them, and who are becoming increasingly wary of their possible, and unimaginable, consequences.

I can imagine the great unlearning, the unlearning of the kind of thinking that, with all its flaws, distinguishes humans from machines.

Whatever artificially intelligent machines can be made to do, and the list is growing rapidly, they will do better and faster than humans. Consequently, in more and more areas of human life, we will perceive machine thinking, or whatever we want to call it, as superior to human thought.

But AI machines do not think, even though it may seem so. However advanced, AI machines calculate, and they can be made to calculate vast quantities of digital information at incredible speeds. Inhumanly rapid calculations of inhumanly large amounts of digitalised data are ultimately what their “thinking” is based on.

Whatever we say about human thinking, it is primarily based on something else. Not least the capacity for empathy, imagination, creativity, morality, love, hope and doubt.

If we put a higher value on the thinking that machines are capable of, we will devalue and unlearn the sort of thinking that, in all its unpredictability, is what makes us human.

I could be wrong, and I sincerely hope I am. Which, if nothing else, indicates that I am still thinking as a human being.