Artificial Intelligence: What’s Hype and What’s Not
So what is hype and what is real? Let’s examine some of the facts:

1. The term artificial intelligence was coined by the computer scientist John McCarthy in 1956, so it’s not a new concept. McCarthy believed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”

2. The cost of memory has plummeted in recent years while computer processing power has skyrocketed. Memory has fallen from over $12 a gigabyte in 2000 to approximately 0.4 of a cent in 2017, a 3,000-fold decrease over this short interval. Over the same period, there has been a 10,000-fold increase in processing power. These technical factors are enabling a vast range of outcomes, including AI. The acceleration of change cannot be denied.

3. The invention of the smartphone, coupled with the technological improvements above, has allowed some of the theory taught back in 1982 to be brought to life at the consumer level for the first time, e.g. Siri and Google Home voice technology. Remember, however, that natural language understanding and voice technology were not invented this year or last year, as the technology “futurists” would have us believe. What has happened is that research already underway in 1982, and for many years beforehand, has been commercialised and made available at a personal level.

4. We need to distinguish “artificial intelligence” from what is just “smart computing”. Computers can now do things faster than ever before, but just being faster is not intelligence. Being able to store large numbers of facts about you in a database and personalise offers at the point of sale is smart, but it’s not AI. Almost every startup nowadays proclaims that it uses machine learning and artificial intelligence, but my contention is that most actually don’t.

5. Artificial intelligence can be said to be at play when a computer cannot be programmed with all the data needed for certainty, but can use some kind of heuristic to generate an answer that works for practical purposes. A heuristic technique (or heuristic) is an approach to problem solving, learning or discovery that generates an approximate outcome: not guaranteed to be perfect, but sufficient for the immediate goals. Human intelligence makes use of heuristics, and computers can be programmed to use them too. Take a simple example. You need to paint your house. The painter comes to provide a quote, walks from room to room, and says “Well, the price to paint the house will be $xx”. Why did the painter not measure each room down to the centimetre and calculate the price exactly? Because she uses a heuristic, a “rule of thumb”: a best guess that, in her experience, will be close enough to cover her costs. And over the years, as she paints more and more houses, her best guess becomes even better!
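The painter’s rule of thumb can be sketched in a few lines of code. This is a minimal illustration, not a real pricing model: the class name, the $400-per-room starting rate and the job figures are all invented. The point is that the quote is a fast approximation, and that each finished job nudges the guess closer to reality.

```python
# A toy version of the painter's rule of thumb: quote a flat rate per room,
# then refine that rate as each finished job reveals the true cost.
# All figures are invented for illustration.

class PainterHeuristic:
    def __init__(self, rate_per_room=400.0):
        self.rate_per_room = rate_per_room  # initial "best guess" in dollars
        self.jobs_seen = 0

    def quote(self, num_rooms):
        # No measuring tape: a fast, approximate answer that is good enough.
        return self.rate_per_room * num_rooms

    def learn(self, num_rooms, actual_cost):
        # After the job, nudge the per-room rate toward what the work really
        # cost (a running average), so future guesses get closer to the mark.
        observed_rate = actual_cost / num_rooms
        self.jobs_seen += 1
        self.rate_per_room += (observed_rate - self.rate_per_room) / self.jobs_seen

painter = PainterHeuristic()
print(painter.quote(5))   # first best guess for a 5-room house: 2000.0
painter.learn(5, 2250.0)  # the job actually cost more than quoted
print(painter.quote(5))   # the next quote reflects that experience: 2250.0
```

Notice that the heuristic never becomes exact; it simply becomes a better approximation with experience, which is precisely the painter’s advantage.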
Machine learning and artificial intelligence make use of heuristics to generate a best guess that is good enough for real-world problems. A good example is face recognition. When an immigration officer holds your passport near your face to validate your identity at an airport, he is in fact using his own rule of thumb and best guess, since he doesn’t have a set of all possible facial images in his brain with which to verify that the passport picture really is a picture of you and not someone else. If we try to have a computer match a passport photo to a traveller, it is impossible to build a database of every face on the planet, and the computer would take too long finding a perfect match anyway. However, it is possible to program heuristics so that a best guess, over time, as more and more faces are matched, comes as close to certainty as possible (but is not, in fact, certainty). The same applies to voice recognition.
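One common way such a best guess is implemented is to reduce each face to a list of numbers (an “embedding”) and declare a match when two embeddings are close enough. The sketch below assumes that step has already happened: the three-number vectors and the 0.9 similarity threshold are invented for illustration, whereas real systems learn both from data.

```python
import math

# Toy sketch of heuristic face matching: two faces are declared a "match"
# when their numeric embeddings are sufficiently similar. This is a best
# guess, not certainty; the vectors and threshold below are invented.

def cosine_similarity(a, b):
    # Similarity of two vectors: 1.0 means identical direction, 0.0 unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_same_person(passport_vec, camera_vec, threshold=0.9):
    # The heuristic: similarity above the threshold is treated as a match
    # for practical purposes, even though it is never a guarantee.
    return cosine_similarity(passport_vec, camera_vec) >= threshold

passport = [0.8, 0.1, 0.5]      # embedding from the passport photo
traveller = [0.79, 0.12, 0.52]  # embedding from the airport camera
stranger = [0.1, 0.9, 0.2]      # embedding of a different person

print(is_same_person(passport, traveller))  # True
print(is_same_person(passport, stranger))   # False
```

The threshold is where the “good enough for practical purposes” trade-off lives: raise it and you reject more genuine travellers; lower it and you wave through more impostors.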
So, if AI can imitate human judgement using heuristics, will it ravage our workforce? Some doomsayers clearly think so. “No question, the impact of artificial intelligence and automation will be profound… we need to prepare for a future in which job losses reach 99%” (Calum McClelland, 2016). Seriously? 99%? New technology, all the way back to stone tools and the wheel, has always made some work redundant. A changing workforce as a result of technological change is not new, especially since the advent of the personal computer and the internet. Look at the way industries such as photography, accounting, travel, accommodation, healthcare and financial services have been seriously impacted by technology and automation. Many jobs have been lost, and many new ones that we could not have imagined have been created. With respect to AI, there will doubtless be many individuals who will have to negotiate the instability and loss that result from machines having human-like capacities for judgement based on heuristics. But at a societal level, the adoption of increasingly sophisticated technology will, as it always has done, open up unimaginable new ventures and activities, with a vast number of new work opportunities. What do you think? Is the future with AI bright, or is it just hype? Let me know your thoughts via Twitter or LinkedIn.
Managing Director, The Strategy Group
Dr Jeffrey Tobias is an accomplished innovation consultant and entrepreneurship strategist, drawing expertise from the academic, entrepreneurial and corporate worlds. Jeffrey’s commercial and business experience is particularly focussed on lean startup, design thinking and leadership. Prior to The Strategy Group, Jeffrey was Cisco’s Global Lead for Innovation in the Internet Business Solutions Group, helping Fortune Global 500 companies improve customer experience and grow revenue by transforming how they do business.
Jeffrey is a professor of innovation and entrepreneurship teaching MBA students at the Australian Graduate School of Business at the University of New South Wales. An active angel investor, Jeffrey is on the board of various well-known startups. Jeffrey’s corporate background includes leading global innovation strategy at Cisco, and working with large corporates such as Adobe, Westpac, Telstra, Woolworths and Perpetual.