Price-setting algorithms play a major role in today’s economy. But some experts worry that, without careful checks, these programs might inadvertently learn to discriminate against minority groups and possibly collude to artificially inflate prices. Now a new study suggests that an economic tool dating back to ancient Rome could help curb this very modern concern.
Algorithms currently set prices for entire product lines at tech-heavy corporations such as Amazon and compute fares around the clock for ride-sharing services, including Uber and Lyft. Such programs do not always rely solely on supply-and-demand data: an algorithm can also mine massive sets of consumers’ personal information to calculate exactly how to offer individuals their most coveted products while maximizing the company’s profits.
In the past few years, a number of studies have suggested that pricing algorithms can learn to offer different prices to different consumers based on their purchasing history or preferences. And some research suggests that this strategy, known as “personalized pricing,” can unintentionally lead an algorithm to set higher prices for disadvantaged minority groups. For instance, brokers often charge higher interest rates to racial and ethnic minorities, and one possible factor is where borrowers live: programs could target areas that have less competition. Other studies show that, under certain experimental conditions, such algorithms can learn to collude with one another to create price-fixing schemes.
When algorithms adopt such tactics in pursuit of maximum profits, experts often describe their aggressive approach as “greedy.” For years, policy makers and tech executives have sought to balance the inherent greediness of algorithms’ logic with the fairness expected of human decision-makers. A new preprint study, released online in February by researchers at Beijing’s Tsinghua University, may offer a surprisingly simple solution: price controls, among the oldest and most elementary tools for regulating commerce, could prevent the economic discrimination that greedy pricing algorithms can produce while still leaving the companies that use them reasonable profits.
Officially imposed price controls have existed as long as economies themselves. In their most basic form, they act as upper or lower limits on how much a seller is allowed to charge for a certain good or service. Theoretically, they promote fairness and protect smaller businesses by preventing market leaders from forming monopolies and manipulating prices. Over the past few years, this once common regulatory tool has attracted fresh attention, in part because of ride-sharing companies’ use of “surge” pricing strategies. These businesses can use demand in a given area at a given time to modify their prices so drivers (and companies) earn as much as possible. This approach has occasionally spiraled into fares of several hundred dollars for a ride from an airport to a town or city, for example, and has prompted calls for stronger regulation. A spokesperson for Uber, who asked to remain anonymous, says the company maintains its support for the current strategy because “price controls would mean … lower earnings for drivers and less reliability.” (Lyft and Amazon had not responded to requests for comment at the time of publication.)
Interest in price controls has also been fueled by record-high inflation. When COVID-19 forced many American businesses to close, the U.S. federal government offset losses with stimulus checks and small-business loans. These injections of money contributed to price inflation, and one way to rein that inflation in would be for the federal government to simply limit what a company can charge.
The authors of the new Tsinghua University paper sought scientific evidence that such controls could not only protect consumers from algorithmic price discrimination but also allow companies using these digital tools to maintain reasonable profits. The researchers also wanted to see how price controls would affect the “surplus” of both the producers and consumers. In this context, a surplus refers to the entire monetary benefit each party derives from a transaction. For example, if a consumer would have been willing to pay up to $5 for a good but purchases it for $3, the consumer’s surplus is $2.
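The surplus arithmetic above can be sketched in a few lines of Python. The buyer’s $5 willingness to pay and $3 price come from the example; the seller’s $1 cost is an invented figure added only to show the producer’s side of the same transaction:

```python
# Toy surplus calculation for a single transaction.
willingness_to_pay = 5.0  # the most the buyer would pay (from the example)
price = 3.0               # the price actually paid (from the example)
seller_cost = 1.0         # hypothetical cost of providing the good

consumer_surplus = willingness_to_pay - price  # the buyer's gain
producer_surplus = price - seller_cost         # the seller's gain
print(consumer_surplus, producer_surplus)      # 2.0 2.0
```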
“Personalized pricing has become common practice in many industries nowadays due to the availability of a growing amount of consumer data,” says study co-author Renzhe Xu, a graduate student at Tsinghua University. “As a result, it is of paramount importance to design effective regulatory policies to balance the surplus between consumers and producers.” Xu and his colleagues provided formal mathematical proofs to show how price controls could theoretically balance the surplus between consumers and sellers who use artificial intelligence algorithms. The team also analyzed data from previously published price-setting studies to see how such controls might achieve that balance in the real world.
For example, in one often-cited study from 2002, researchers in the German city of Kiel measured consumers’ willingness to purchase a snack: either a can of Coke on a public beach or a slice of pound cake on a ferry. In the experimental setup, participants stated the price they would be willing to pay for the good before drawing marked balls from an urn to determine the price they would actually be offered. If their stated price was at least as high as the drawn one, they could purchase the snack; otherwise, they lost the opportunity. The experiment demonstrated that this scenario—in which participants knew they would receive a randomly selected offer after naming their price—made buyers far more willing to disclose the true price they were willing to pay than traditional methods such as simply surveying individuals. And part of the experiment’s value to later studies, such as the new Tsinghua paper, lies in the fact that it produced a valuable data set about real people’s “willingness to pay” (WTP) in realistic situations.
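One round of the ball-draw rule described above can be sketched as follows. The set of candidate prices and the uniform draw are illustrative assumptions, not details taken from the Kiel study:

```python
import random

def bdm_round(stated_price, possible_prices, rng=random):
    """Draw an offered price at random; the participant buys only if
    their stated price at least matches the draw."""
    drawn = rng.choice(possible_prices)
    return drawn, stated_price >= drawn
```

Under this rule, naming anything other than one’s true maximum willingness to pay can only hurt: overstating it risks buying above one’s valuation, while understating it risks losing a purchase one would have valued.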
When a human rather than a random number generator sets the cost, knowing a consumer’s WTP in advance allows the seller to personalize prices—and to charge more to those who the seller knows will be willing to pony up. Pricing algorithms achieve a similar advantage when they estimate an individual’s or group’s WTP by harvesting data about them from big tech companies, such as search engine operators or social media platforms. “The purpose of algorithmic pricing is to precisely assess consumers’ willingness to pay from the highly granular data of consumers’ characteristics,” Xu says. To test the potential impact of price controls in the real world, the researchers used the WTP data from the 2002 study to estimate how such controls would shift the trade-off of the sellers’ and buyers’ surplus. They found that the advantage that the experimental cake and Coke sellers achieved from their knowledge of consumers’ WTP would have been erased by a simple control on the range of prices considered legal. At the same time, the price controls would not prevent the sellers from earning profits.
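A stylized sketch of that trade-off (my own simplification, not the paper’s analysis): suppose a seller knows every buyer’s WTP exactly, has zero costs, and charges each buyer the lesser of their WTP and a legal price cap. The WTP values below are invented:

```python
# How a price ceiling shifts surplus under perfectly personalized pricing.
# Deliberately simplified: zero seller cost, and every buyer still buys.
def surpluses(wtp_values, cap=float("inf")):
    producer = consumer = 0.0
    for wtp in wtp_values:
        price = min(wtp, cap)   # personalized price, truncated by the cap
        producer += price       # zero-cost seller keeps the whole price
        consumer += wtp - price
    return producer, consumer

wtp = [1.0, 2.0, 3.0, 4.0, 5.0]
print(surpluses(wtp))            # uncapped: the seller extracts everything
print(surpluses(wtp, cap=2.5))   # capped: buyers keep part of the surplus
```

Even in this crude model the cap erases the seller’s informational advantage for every buyer whose WTP exceeds it, yet the seller still earns money—mirroring, qualitatively, what the researchers found.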
This balance in power comes with some drawbacks, however. By achieving a fairer distribution of surpluses between algorithms (or, in the case of the Kiel experiment, sellers operating under a set of algorithmic rules) and consumers, the range constraint dampens the total surplus realized by all participants. For this reason, many economists argue that such regulations prevent a market from reaching true equilibrium—the point where supply matches demand and prices adjust accurately in real time. Meanwhile some behavioral economists contend that price controls can, ironically, encourage collusion among market leaders, who seek to fix prices as close to the permitted limit as possible. “Internet and power companies, for example, overcharge when they can because they are effectively monopolies,” says Yuri Tserlukevich, an associate professor of finance at Arizona State University, who was not involved in the new study.
For many of today’s algorithmic pricing agents, however, such price-fixing concerns carry less weight. That is because most modern pricing algorithms still lack the ability to communicate effectively with one another. Even when they can share information, it is often difficult to forecast how an AI program will behave when asked to communicate with another algorithm of a substantially different design. Collusion is also hindered by the fact that many pricing algorithms are wired with a “present bias,” meaning they value only immediate returns and ignore the potential future gains an action might bring. (In many ways, present-biased algorithms could also be described as a type of greedy algorithm, although they continually lower the price rather than raising it.) AIs that have a present bias often converge quickly to fair, competitive pricing levels.
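That undercutting dynamic can be illustrated with a toy simulation (my own construction, not from the study): two myopic sellers of an identical good each round post the price that maximizes only that round’s profit, given the rival’s last price.

```python
# Two "present-biased" sellers in a simple market. Assumptions: linear
# demand D(p) = 100 - p, unit cost 20, integer prices; the cheaper seller
# captures all demand and ties split it evenly.
COST = 20
GRID = range(COST, 101)  # pricing below cost never pays

def profit(p, q):
    """One round's profit for a seller posting p against a rival posting q."""
    demand = max(0, 100 - p)
    if p < q:
        return (p - COST) * demand
    if p == q:
        return (p - COST) * demand / 2
    return 0

def best_response(q):
    """A present-biased seller picks the price maximizing today's profit."""
    return max(GRID, key=lambda p: profit(p, q))

p_a = p_b = 90
for _ in range(100):        # alternate myopic best responses
    p_a = best_response(p_b)
    p_b = best_response(p_a)
print(p_a, p_b)             # both sellers settle just above cost
```

Each seller slightly undercuts the other until neither can profitably go lower, so prices race down to near cost; an agent that instead weighed future profits might resist undercutting, which is where the collusion risk re-enters.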
Ultimately, algorithms can behave only as ethically as a programmer sets them up to act. With slight changes in design, algorithms might learn to collude and fix prices—which is why it is important to study restraints such as price controls. There are “several research directions open,” says the new study’s co-author Peng Cui, an associate professor of computer science and technology at Tsinghua University. He suggests future work could focus on how price controls would influence more complex situations, such as scenarios in which privacy constraints limit companies’ access to consumer data or markets where only a few companies dominate. More research might emphasize the idea that sometimes the simplest solutions are most effective.