By Jeffrey Tobias, Managing Director, The Strategy Group
It’s 1982. I stand in front of a class of computer science students at the University of New South Wales. The course I am teaching? Artificial Intelligence (AI). What are the topics? Machine learning, robotics, natural language understanding, face and voice recognition and virtual personal assistants. Sound like current buzzwords? Yes, thirty-six years ago.
Last Wednesday the Australian Financial Review quoted Ginni Rometty, CEO of IBM, asserting that “the world, and the company [IBM], is at the start of a seismic technological shift that occurs only once every 25 years and will be driven by artificial intelligence.” She went on: “You might call it a watershed moment, an urgent moment, a critical moment, but none of them seem truly sufficient for this very unique moment.”
I spent a number of years at Cisco, an excellent company. Almost every year John Chambers, the CEO at the time, would get up at company meetings and talk about how we were at an "inflexion point", a "pivotal point", and how this time was just so very different from any other time in history.
So what is hype and what is real? Let’s examine some of the facts:
1. The term artificial intelligence was coined by the computer scientist John McCarthy for the 1956 Dartmouth workshop that launched the field. So it's not a new concept. McCarthy believed that:
“Every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
2. The cost of memory has plummeted in recent years while computer processing power has skyrocketed. Memory has fallen from over $12 a gigabyte in 2000 to approximately 0.4 of a cent in 2017, a 3,000-fold decrease, and over the same period processing power has increased roughly 10,000-fold. These technical factors are enabling a vast range of outcomes, including AI. The acceleration of change cannot be denied.
3. The invention of the smartphone, coupled with the above technological improvements, has allowed some of the theory taught in 1982 to be brought to life at the consumer level for the first time, for example Siri and Google Home voice technology. Remember, however, that natural language understanding and voice technology were not invented this year or last year, as the technology "futurists" would have us believe. What has happened is that research already underway in 1982, and for many years beforehand, has finally been commercialised and made available at a personal level.
4. We need to distinguish "artificial intelligence" from what is just "smart computing". Computers can now do things faster than ever before, but just being faster is not intelligence. Being able to store large numbers of facts about you in a database and personalise offers at the point of sale is smart, but it's not AI. Almost every startup nowadays proclaims that it uses machine learning and artificial intelligence, but my contention is that most actually don't.
5. Artificial intelligence can be said to be at play when a computer cannot be programmed with all the data needed for certainty, but can instead use some kind of heuristic to generate an answer that works for practical purposes. A heuristic is an approach to problem solving, learning, or discovery that generates an approximate outcome: one that isn't guaranteed to be perfect, but is sufficient for the immediate goal.
Human intelligence makes use of heuristics, and computers can be programmed to use them too. Take a simple example. You need to paint your house. The painter comes to provide a quote, walks from room to room, and says, "Well, the price to paint the house will be $xx". Why did the painter not measure each room down to the centimetre and calculate the price exactly? Because she uses a heuristic, a "rule of thumb": a best guess that, in her experience, will come close enough to covering her costs. And over the years, as she paints more and more houses, her best guess becomes even better!
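To make the painter's rule of thumb concrete, here is a minimal sketch in Python. The flat rate per room, the adjustment factor and the dollar figures are illustrative assumptions, not a real pricing model:

```python
# A toy "rule of thumb" estimator: price a paint job from a rough room
# count instead of exact measurements, then nudge the rule after each
# completed job based on what the work actually cost.

class PainterHeuristic:
    def __init__(self, rate_per_room=400.0):
        # Starting rule of thumb: a flat rate per room (illustrative figure).
        self.rate_per_room = rate_per_room

    def quote(self, num_rooms):
        # Best guess: no measuring tape, just the rule of thumb.
        return self.rate_per_room * num_rooms

    def learn(self, num_rooms, actual_cost):
        # After a job, shift the rule of thumb a little toward the observed
        # cost per room, so future quotes get closer to reality.
        observed_rate = actual_cost / num_rooms
        self.rate_per_room += 0.2 * (observed_rate - self.rate_per_room)


painter = PainterHeuristic()
print(painter.quote(5))            # initial best guess for a 5-room house
painter.learn(5, actual_cost=2300) # the job turned out to cost a bit more
print(painter.quote(5))            # the guess improves with experience
```

The point is not the numbers but the shape of the logic: guess from a rough rule, then adjust the rule with experience.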
Machine learning and artificial intelligence use heuristics to generate a best guess that is good enough for real-world problems.
A good example is face recognition. When the immigration officer holds your passport near your face to validate your identity at an airport, he is in fact using his own rule of thumb and best guess: he doesn't have a set of all possible facial images in his brain with which to verify that the passport picture really is a picture of you and not someone else. If we try to have a computer match a passport photo to a traveller, it is impossible to build a database of every face on the planet, and the computer would take too long finding a perfect match anyway. However, it is possible to program heuristics so that the best guess, over time, as more and more faces are matched, gets as close to certainty as possible (but is never, in fact, certainty). The same goes for voice recognition.
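As a rough illustration of that kind of best-guess matching, here is a minimal Python sketch. It assumes some model has already turned each face image into a short list of numbers (an embedding); the vectors and the 0.9 threshold below are invented for the example, not taken from any real border-control system:

```python
import math

def cosine_similarity(a, b):
    # 1.0 means the vectors point the same way; lower means less alike.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def same_person(passport_vec, camera_vec, threshold=0.9):
    # The heuristic: we never prove the faces match, we simply accept the
    # match when the vectors are "close enough". Raising the threshold
    # trades fewer false accepts for more false rejects.
    return cosine_similarity(passport_vec, camera_vec) >= threshold

# Illustrative vectors only; a real system would use hundreds of dimensions
# produced by a trained face-embedding model.
passport = [0.12, 0.80, 0.35, 0.41]
traveller = [0.10, 0.78, 0.33, 0.45]
print(same_person(passport, traveller))  # True: close enough, never certain
```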
So, if AI can imitate human judgement using heuristics, will it ravage our workforce? Some doomsayers clearly think so.
“No question, the impact of artificial intelligence and automation will be profound… we need to prepare for a future in which job losses reach 99%”
Calum McClelland, 2016
Seriously? 99%? New technology, all the way back to stone tools and the wheel, has always made some work redundant. A changing workforce as a result of technological change is not new, especially since the advent of the personal computer and the internet. Look at the way industries such as photography, accounting, travel, accommodation, healthcare and financial services have been seriously impacted by technology and automation. Many jobs have been lost – and many new ones that we could not have imagined have been created.
With respect to AI, there will doubtless be many individuals who will have to negotiate the instability and loss that will result from machines having human-like capacities for judgement based on heuristics. But on a societal level, the adoption of increasingly sophisticated technology will, as it always has done, open up unimaginable new ventures and activities, with a vast number of new work opportunities.
What do you think? Is the future with AI bright, or is it just hype? Let me know your thoughts via Twitter or LinkedIn.