“Do we really need home computers?” This was the headline of an article written by William F Buckley Jr in June 1982. In it, he mused about the potential of having an entire dictionary programmed into the machine, but remarked that this was not worth the personal computer’s price tag, especially given its limited utility compared with other household gadgets such as the radio or the blender.
Before computers revolutionised how we live and communicate, they were thought to be everything from an existential threat to humanity to a fad that would die out in a couple of years. Even Steve Wozniak, the American computer pioneer, inventor, and co-founder of Apple, publicly doubted their usefulness. This pessimism toward technological advancement was not new at the time, nor did it fade in the years that followed. Every invention, from the telegraph in the 1840s to the smartphone in your hand today, has been met with a degree of scepticism from the public and some doom-and-gloom speculation from the media. Will artificial intelligence (AI) be any different?
The much-touted era of AI is finally here, and people don’t know what to think (or believe). Is AI a harbinger of doom, bringing with it killer robots that will enslave humanity? Or is it a tool we can use to make everything we care about better? Before we get into that debate, let’s start by describing what AI actually is. According to ChatGPT, artificial intelligence refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning, reasoning, and self-correction. Essentially, AI will impact any field of endeavour in which human intelligence is applied to solve a task. This is because AI offers us the opportunity to profoundly augment human intelligence, making the outcomes of our efforts much better in a much shorter time.
This augmentation has already begun. Large language models like ChatGPT and Google Bard are quickly changing the way academic and administrative work is done. Schools are scrambling to adapt to these changes, with hastily drafted policies outlining acceptable uses of AI in the classroom, while corporate offices around the world are trying to upskill their employees and find an edge over competitors. As we speak, artificial intelligence is being used across various industries to automate tasks, improve efficiency, and make better decisions.
In manufacturing, AI is being used to automate tasks such as quality control, predictive maintenance, and logistics. In healthcare, doctors use it to diagnose diseases, develop new treatments, and provide personalised care. In finance, AI is being applied to fraud detection, risk management, and personalised financial advice. Considering all these use cases, and the countless others yet to be discovered, a logical question arises: will AI take all our jobs?
The fear of losing jobs to machines is an old one. It can perhaps be attributed to the “lump of labour” fallacy: the misconception that there is a fixed amount of work (a lump of labour) to be done within an economy, which can be divided up to create more or fewer jobs. On this view, if machines do the work, there will be no work left for people. While the reasoning makes sense on the surface, it is incorrect. In practice, when technology is properly applied to production, we get productivity growth – an increase in output without an increase in inputs. This results in lower prices for goods and services, extra spending power for consumers, and increased demand in the economy. Increased demand in turn drives the creation of new products and new industries, which leads to new jobs for the people who were displaced by machines in the first place. We have nothing to worry about with AI then, right?
Well, according to a recent report published by the investment bank Goldman Sachs, AI could replace the equivalent of 300 million full-time jobs, meaning 18 per cent of work globally could be automated. The report highlighted that advanced economies will be more heavily affected than emerging markets, predicting that two-thirds of jobs in the US and Europe are exposed to some degree of AI automation and that around a quarter of all jobs could be performed by AI entirely. A recent study from the auditing firm Deloitte noted, however, that while AI enterprise systems can perform repetitive tasks at scale and process large amounts of data to a high level of accuracy, such technologies have not yet developed to the point where they can fully replace humans in the workplace.
So, while we shouldn’t worry about AI taking all our jobs, it is likely to automate many of them, which could lead to job losses in some industries. However, AI is also likely to create new jobs and to liberate workers from routine, time-consuming tasks, enabling them to focus on more creative and strategic work – a benefit to both the individual worker and the organisation. With all this in mind, and with the future largely unknown, it is important to remember that AI is a tool, and like any tool, it is owned and controlled by people. Rather than fearing it, we must decide how best to use it. If we use AI wisely, it can help us create a better future for everyone.