Weekend Australian, 30th September 2023
OPINION PIECE
ARTIFICIAL INTELLIGENCE BOOM POSES BIG RISKS FOR COMPETITION
The rise of AI engines has been remarkable. To reach 100 million users, the telephone took 75 years. The mobile phone took 16 years. ChatGPT took two months.
The progress of these systems can be seen by looking at their performance on tasks that humans find difficult. Compare ChatGPT’s old version 3.5 (released in 2022) with its new version 4 (released in 2023). Faced with the New York bar exam, the old model scored at the 10th percentile. The new model aced the test at the 90th percentile. And that’s just one year’s improvement.
People are using AI in ingenious ways. A garden designer who specialises in ultra-high-density planting arrangements uses AI to provide inspiration in choosing species that fit together. Software engineers use AI for everything from writing complex Excel formulas to debugging computer code. If you’ve never programmed a website, AI will write the code for you.
But what will AI do for competition? On the optimistic side, AI can be a valuable competitive force in product and service markets. It might help small firms scale faster – placing pressure on lazy incumbents. If all firms have access to similar quality AI, it can have a democratising effect on the economy, potentially boosting dynamism.
However, AI also poses five risks for competition.
First, costly chips. Currently only a handful of companies have the cloud and computing resources necessary to build and train AI systems. This means that any rival AI start-up must pay to license server infrastructure from these firms. Developing AI systems also relies on computing power, which in turn depends on access to semiconductor chips that are currently costly and scarce. In generative AI chips, Nvidia’s market share now exceeds 70 percent.
Second, private data. The latest AI models from Google and Meta are trained on about one trillion words. Companies that hold large troves of data, including Reddit and X, are blocking open access for those who might use it to train their large language models. Unequal access to data could push the AI market towards a less competitive outcome.
Third, network effects. By observing which responses meet user demands, generative AI systems are able to steadily improve the relevance and accuracy of their outputs. Just as Google search learns from users, so too does Google’s AI engine, Bard. This means that network effects could fuel market power, entrenching the position of the strongest platforms. If AI engines are natural monopolies, then competition regulators ought to worry.
Fourth, immobile talent. Few skills are more sought after than the ability to design AI systems. Research in the United States and Australia has found that one in five workers is presently bound by a non-compete clause, constraining their ability to move to another employer. Limiting job mobility constrains the ability of new firms to challenge incumbents. The next AI start-up may find that it struggles not only to get the requisite chips and data, but also to hire the right workers.
The fifth competitive challenge is the potential for AI systems to operate on an ‘open first, closed later’ model. The US Federal Trade Commission warns that companies may use an open-source approach to initially lure in new business and fresh streams of data, building scale advantages before closing off their ecosystems to ‘lock-in customers and lock-out competition’. This kind of conduct is not straightforward for competition regulators to control. If it is legal for firms to charge for access, and legal for firms to offer free access, then regulators alleging a ‘bait and switch’ may find themselves having to persuade a court that the pricing change is a form of anti-competitive conduct. In a complex market where true costs are difficult to discern, that may prove challenging.
What are the answers to these challenges? I don’t have all of them, and I’m afraid that ChatGPT doesn’t either. Still, it is vital that new regulation is alive to competition challenges. A further risk, as economist Joshua Gans points out, is that the largest AI companies may welcome onerous licensing and regulatory barriers if they act as a competitive moat to keep new entrants at bay.
After an era in which productivity has languished while inequality has worsened, AI offers the potential for massive economic gains. AI might also be an equalising force by providing better education to struggling students, communication tools to migrants, or health care in regional areas.
But it’s not all upside. Many digital markets have started as fiercely competitive ecosystems, only to consolidate over time. We should beware of incumbents asserting their right to train AI models on their own user data, while denying data access to competitors.
Just as competition laws needed to be updated to deal with the misbehaviour of the oil titans and rail barons of nineteenth-century America, so too we may need to make changes to competition laws to address the challenges that AI poses.
Getting AI regulation right won’t be as straightforward as getting Claude to create a dinner party menu for a group of gluten-free vegans who are trying the Atkins diet. With a technology that is moving this fast, it’s unlikely we’ll find a solution that is perfect the first time. But with AI having huge potential to transform our society and economy, it’s critical to consider its competitive challenges now. Only by doing so will we ensure that countries reap the greatest social and economic benefits of AI.
END