Generative AI will upend the professions

June 18, 2023

The writers are the authors of ‘The Future of the Professions’

ChatGPT opened a new chapter in the artificial intelligence story we have been working on for more than a decade. Our research has focused on the impact of AI on professional work, looking at technologies across eight sectors, including medicine, law, teaching and accountancy.

Overall, the narrative laid out in our book, The Future of the Professions, has been optimistic. At a time when professional advice is too expensive, and our health, justice, education and audit systems are often failing us, AI offers the promise of easier access to the best expertise. Understandably, some professionals find this threatening because the latest generative AI systems are already outperforming human professionals in some tasks — from writing effective code to drafting compelling documents.

Contrary to many predictions that AI would be “narrow” for years yet, the latest systems have far wider scope than those that came before, as happy diagnosing illnesses as they are designing beautiful buildings or drawing up lesson plans.

They emphatically refute the idea that AI systems must “think” in order to undertake tasks that require “creativity” or “judgment” — a common line of defence from the old guard. High-performing systems do not need to “reason” about the law like a lawyer to produce a solid contract, nor “understand” anatomy like a doctor to deliver useful medical advice.

How do professionals react? Our original research and more recent work suggest a familiar response pattern. Architects are inclined to embrace new possibilities. Auditors dive for cover because the threats to their data-driven activities are clear. Doctors can be dismissive of non-doctors, while management consultants prefer to advise on transformation rather than change themselves.

With generative AI, though, business leaders seem to be less dismissive than in the past.

Some are interested in how to use these technologies to streamline existing operations: a recent study conducted by researchers at MIT found ChatGPT increased productivity on white-collar writing tasks, such as composing a sensitive company-wide email or a punchy press release, by almost 40 per cent. Others are preoccupied with simply reducing headcount: the US online learning company Domestika, for instance, is reported to have sacked almost half its Spanish staff in the hope that those working on content translation and marketing material could be replaced with ChatGPT.

Although such cuts seem hasty, research by Goldman Sachs predicted that as many as 300mn full-time jobs around the world could be threatened by automation. However, few professionals accept that AI will take on their most complex work. They continue to imagine AI systems will be confined to their “routine” activities: the straightforward, repetitive parts of their jobs, such as document review, administrative tasks and everyday grunt work. But for complex activities, many professionals argue, people will surely always want the personal attention of experts.

Each element of this claim is open to challenge. The capabilities of the GPTs already go well beyond the “routine”. As for personal attention, we can learn from tax.

Few people who submit their tax returns using online tools rather than human experts lament the loss of social interaction with their tax advisers.

To claim clients want expert, trusted advisers is to confuse process and outcome. Patients do not want doctors; they want good health. Clients do not want litigators; they want to avoid pitfalls in the first place. People want trustworthy solutions, whether those come from flesh-and-blood professionals or from AI.

This leads to wider questions. How do existing professionals adapt and what are we training younger professionals to become? The worry is that we are nurturing 20th-century craftsmen, whose knowledge will soon be redundant. Today’s and tomorrow’s workers should be acquiring the skills needed to build and operate the systems that will replace their old ways of working — knowledge engineering, data science, design thinking and risk management.

Some argue that teaching people to code is the priority. But this is an activity at which AI systems are already impressive — AlphaCode, developed by DeepMind, outperformed almost half the contestants in major coding competitions. Instead, we should be alive to the emergence of unfamiliar new roles, such as the all-important prompt optimisers — those who, for now, are the most adept at instructing and securing the best responses from generative AI systems.

There are of course risks with the latest AI. A recent technical paper on GPT-4 acknowledges that such systems can “amplify biases and perpetuate stereotypes”. They can “hallucinate”. They can also be plain wrong, and they raise the spectre of technological unemployment. Hence the frenzy of ethical and regulatory debate. At some stage, though, as performance improves and the benefits become unarguable, the threats and shortcomings will frequently be outweighed by the improved access AI provides.

The professions are unprepared. Many companies are still focused on selling the time of their people, and their growth strategies are premised on building larger armies of traditional lawyers, auditors, tax advisers, architects and the rest.

The great opportunities surely lie elsewhere — not least in becoming actively involved in developing generative AI applications for their clients.

Source: Financial Times