Better Incentives for Efficient Transmission: The Potential Contribution of Price Cap Regulation
This paper analyzes the regulatory barriers to adopting grid-enhancing technologies as an alternative to building more transmission lines.
Abstract
A low-cost method for increasing transmission capacity is to use grid-enhancing technologies (GETs). Setting transmission rates on the basis of cost may lead transmission providers to choose to install lines at greater cost than GETs. Price cap regulation (PCR) adjusts rates over time on the basis of inflation and expected (but not actual) cost reductions, thus giving the regulated firm an incentive to reduce costs, such as by adopting GETs. Allowed rates are likely to eventually diverge from costs enough to warrant regulatory recalibration, reducing the advantages of PCR. PCR is also not designed to incentivize quality, such as resilience. PCR can handle the multiplicity of rates over different nodes and times, but it will likely take more time for such rates to converge to efficient levels than it would take for regulators to adjust the rates accordingly. Because new transmission lines will likely be required, regulators will have to set an initial price for PCR, reintroducing rates based on cost. Nevertheless, regulators should consider PCR, given the importance of maximizing the efficiency of the transmission system and the use of GETs to achieve that efficiency.
1. Introduction
To reduce concentrations of greenhouse gases in the atmosphere that lead to climate change, emissions of those gases need to be drastically reduced if not eliminated. The primary means for doing so is the substitution of electricity generated without emissions of greenhouse gases—in the past, hydroelectric and nuclear power and, more recently, wind and solar power—for energy sources that lead to those emissions. The substitution does not just involve different means of generating electricity, such as using wind and solar power to displace electricity produced by coal and natural gas plants. It also includes substituting “clean” electricity in cases where power is largely provided by means other than electricity, including automobile transportation and home heating.
Absent enormous gains in energy efficiency—that is, a decrease in the amount of energy it takes for heating, lighting, transportation, and the other tasks that require it—substitution of clean electricity for these other uses will require substantial increases in the amount of electricity generated, primarily by wind and solar power. Because production of large amounts of wind and solar power is constrained to be from locations with substantial steady wind and copious sunlight, transmission grids will have to be expanded to deliver that electricity. Higher-capacity transmission will also be necessary to improve the ability to use electricity generated at present, especially that from clean sources. The demand for additional transmission capacity may be driven by climate or environmental policies to reduce fossil fuel use or simply by the increased cost advantages of generating electricity by wind and solar power.
This expansion could be achieved by building more transmission lines. However, a different option would be to employ grid-enhancing technologies (GETs), which generally allow more electricity to be delivered through a given transmission line or grid. There are many such technologies (Lafoyiannis et al. 2024), with different cost and performance profiles. That there are so many is important, for reasons explained in Section 4. Before we get to that, a primary issue with adopting GETs is that the price of transmission is regulated in a way that discourages their adoption. Section 2 discusses why the price of transmission is regulated and how that price is set on the basis of the cost of providing transmission. Because cost-of-service regulation may encourage regulated firms to overinvest in constructing additional expensive transmission lines or may reduce incentives to operate efficiently, a different form of regulation known as price cap regulation (PCR) should be considered. Section 3 describes the workings and rationales for PCR, essentially adjusting price over time on the basis of expected but not actual cost, and presents its disadvantages. Section 4 examines the implications of PCR for transmission and adoption of grid-enhancing technologies, and Section 5 looks at its limitations in that context. Section 6 concludes.
2. Conventional Regulation: Why, How, and What Can Go Wrong?
To understand what price cap regulation is and why a transmission regulator might want to turn to it, we should first consider how regulation is traditionally done.
2.1. Why Regulate Price at All?
This section begins by reviewing the economic problem that price regulation is designed to solve. In some sectors, electricity transmission among them, natural monopoly conditions exist—that is, one firm can supply a good or service at lower cost than two or more firms could, because the average cost of production falls as output grows, at least within the range of production one could see in the market. Multiple firms with these advantages of scale may compete against each other with different but reasonably close products; an example might be cookbooks, where any one cookbook can be produced at lower cost if more copies are published, but different cookbooks can compete with each other. If natural monopoly conditions are sufficiently strong, though, the market would be left with one provider. Consequently, competition cannot be relied on to keep prices reasonably close to cost, and the attendant higher prices would lead to too little of the service being provided.
A solution is to set prices at a lower level than the one this monopoly would otherwise charge; this response is generically called price regulation. Having a natural monopoly is a necessary but not sufficient condition to warrant price regulation. If competition between enterprises whose costs fall with output holds prices down, as in the cookbook example, any improvement a regulator could achieve would be unlikely to be worth the trouble. Costs must also be relatively stable; otherwise, the regulator will have a difficult time determining when the price is reasonably close to cost. The product and the demand for it also need to be relatively stable so that these do not change while the regulator is attempting to determine the price. This is why utilities—water, electricity, natural gas delivery, and telephones (in the era before the internet and mobile devices)—have seen regulation, while computer operating systems, which have similar if not more pronounced monopoly cost conditions, have not.
In principle, the target for any regulator would be to match what competitive markets typically do and set a price equal to the marginal cost of the monopoly firm. However, this solution creates its own problem. When costs fall with output, as is necessary to justify price regulation, marginal cost will be less than average cost. Setting the price equal to marginal cost means that it is below average cost, and the supplier will lose money and will not find it worthwhile to operate.
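A one-line illustration of the gap, assuming a cost function with fixed cost F and constant marginal cost c (the functional form is illustrative, not drawn from the paper):

```latex
C(q) = F + cq \quad\Longrightarrow\quad MC = c \;<\; AC(q) = c + \frac{F}{q},
```

so pricing at P = MC = c brings in revenue cq against cost F + cq, a loss of exactly the fixed cost F.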
Some method is therefore needed to provide the regulated firm with enough revenue to cover its cost. In theory, the best method would be to charge customers a fixed fee on top of the per-unit price. These “two-part tariffs” are practical when there are customer-specific fixed costs to cover, such as running an electricity distribution line to a customer’s home, but they can otherwise be controversial. If the fixed fee is based on usage, for example, it essentially becomes part of the unit price, which leaves us with the original problem.
A second option would be to cover the firm’s costs from general government revenue, but this not only would be politically controversial but also would entail higher taxes that raise prices further above marginal cost. This leaves the third and prevailing option: recover the costs from the customers of the regulated firm.
2.2. The Traditional “Cost-of-Service” Solution
If all revenues need to come from the customers, the regulator’s problem is to find the lowest price at which the firm’s revenues are just large enough to cover its cost. It is easier to describe the outcome with a picture than in words alone (Figure 1). The target price (Preg) will be where the demand curve (D) for the regulated product intersects the average cost (AC) curve for the regulated firm, producing output under the regulation (Qreg). At a higher price than Preg, the firm would be getting more than enough revenue to cover its cost from the sales it would make at that price. The price could be reduced, promoting economic efficiency and benefiting customers, without leaving the regulated firm unable to cover its cost. At a lower price, including the ideal where the efficient price (Peff) equals marginal cost (MC), leading to the efficient level of output (Qeff), the firm is unable to cover its total cost.
Figure 1. Price Versus Cost

This is the conceptual underpinning of traditional cost-of-service regulation, currently used for transmission (Joskow 2024). The regulator attempts to determine the level of average cost that would meet demand, often assuming that demand is relatively insensitive to price for services whose consumers are sufficiently vulnerable to price increases to warrant regulation. The regulator then sets the price equal to that average cost, hence the term cost-of-service regulation (COSR). To do so, the regulator needs to know the costs and demand with reasonable accuracy within the time it takes to determine an appropriate price.
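To make that target concrete, here is a minimal numerical sketch of finding the lowest price at which revenue just covers cost, the intersection of D and AC in Figure 1. The linear demand curve, the cost function, and the numbers are assumptions for illustration only, not taken from the paper.

```python
import math

# Break-even price under assumed linear demand Q = a - b*P and assumed cost
# C(Q) = F + c*Q, so average cost lies above marginal cost c. Illustrative only.
def cosr_price(a, b, F, c):
    """Lowest price at which revenue just covers cost: (P - c) * (a - b*P) = F."""
    # Rearranged, this is b*P**2 - (a + b*c)*P + (a*c + F) = 0; take the smaller root.
    A, B, C = b, -(a + b * c), a * c + F
    disc = B * B - 4 * A * C
    if disc < 0:
        raise ValueError("demand is too weak to cover the fixed cost at any price")
    return (-B - math.sqrt(disc)) / (2 * A)

p_reg = cosr_price(a=100.0, b=2.0, F=400.0, c=10.0)
q_reg = 100.0 - 2.0 * p_reg
print(round(p_reg, 2), round(q_reg, 2))   # about 15.86 and 68.28
```

With these assumed numbers, the break-even price is about 15.9, well above the marginal cost of 10 but just high enough to recover the fixed cost of 400 at the quantity demanded.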
COSR is also often referred to as “rate of return” regulation. Long-standing Supreme Court doctrine requires that regulated firms be given a “fair opportunity” to earn a “just and reasonable return.” Federal Power Commission v. Hope Nat. Gas Co., 320 US 591 (1944); Bluefield Water Works v. West Virginia Public Service Commission, 262 US 679 (1923). Unlike other costs incurred by the regulated firm, the rate of return its investors should get needs to be estimated from that earned by investors in enterprises with similar risk profiles, as the cost of capital is essentially a function of risk and expected inflation. For a regulated firm, that cost cannot be directly observed by the regulator before setting the price, since that price determines what investors earn. In addition, a major reason for observing declining average cost, which creates the condition for price regulation, is a high fixed cost, and that typically entails a large capital investment, such as a transmission line. Consequently, the rate of return for that investment will be a large component of the total cost of providing the service to be regulated.
2.3. What Can Go Wrong—Consequences for Electricity Transmission
Let’s assume there is a product with relatively stable demand and cost, so regulation of a monopoly provider might be feasible. Even so, cost-of-service regulation has proven difficult. The foundational argument, going back over sixty years (Averch and Johnson 1962), involves the consequences of setting the allowed rate of return on investment too high. Recall that a regulator cannot observe the cost of providing capital directly because the actual return depends on the price it sets, so error is inevitable. If the regulator were expected to set the rate of return too low, the firm would find it impossible to recover its investment, and the transmission line or other utility service would not be provided in the first place.
If the allowed rate of return is too high, the regulated firm’s reported average costs go up, and prices are thus higher to users than they need to be. Moreover, the regulated utility has an incentive to use too much capital on which it earns a return relative to other inputs, such as labor. This exacerbates excessive pricing from the allowed capital cost being too high. In the extreme, a utility could engage in “gold plating”—that is, making capital investment that has no productive value, just to get the excessive rate of return.
Perhaps surprisingly, a second, potentially more severe set of problems arises if the regulator manages to set the allowed rate of return correctly. In this case, the regulated firm just covers its cost regardless of what it does. If prices always just cover the average cost of production, the firm has no incentive to produce efficiently. It need not waste capital; it may waste any of its inputs. It may forgo opportunities to reduce the cost of the inputs it uses to deliver its services. To prevent this, the regulator must not only accurately measure the cost of capital but also oversee the regulated firm’s investment and operational decisions to ensure that it is producing efficiently.
This has a number of consequences for transmission. If the allowed rate of return is too high, a regulated transmission utility will have an incentive to build bigger lines than it needs. For a recent discussion, see Gearino (2024). If the rate of return is just right and prices just cover costs, a transmission utility has no incentive to employ lower-cost methods for expanding transmission capacity—grid-enhancing technologies—and instead may inefficiently build bigger lines than is necessary. That concern motivates the search for a better way to regulate.
3. Price Caps to the Rescue—Maybe
3.1. How Price Cap Regulation Works and Why
In principle, regulators could minimize this problem by not just setting price but also overseeing the regulated firm’s production to ensure that it is investing appropriately, operating efficiently, and minimizing cost. Doing so, however, would not only require the regulated firm to convey its investment and operating decisions to the regulator but also require that the regulator have the expertise to determine whether the regulated firm could have done better by making different investment and operational decisions. Under COSR, regulators typically have the authority to review firm decisions as to whether they were “prudent” and whether the investments were “used and useful.” Expecting regulators to induce firms to reveal accurate cost information, and to have the expertise to ascertain whether what the firm did was the least-cost way to deliver its services, is unrealistic. Sappington and Sibley (1988) propose mechanisms to induce a regulated firm to provide accurate information, essentially by paying it the difference between the benefits from the regulator’s initial guess and the true costs that only the regulated firm knows. To my knowledge, no regulator has used such a method to pay a firm directly as an inducement to reveal its true cost.
The fundamental problem with COSR is that rates are based on costs. If the allowed rate of return exceeds the risk-adjusted cost of investing in the regulated firm, the effect of that higher rate on price is what gives the firm the incentive to use too much capital. If the regulator manages to find the right rate of return, the firm’s revenues just cover its cost, including an appropriate return on investment, and thus it lacks any financial incentive to deliver its services economically.
Suppose instead that a regulator set rates without regard to the firm’s reported costs. This is the essential principle behind price cap regulation. PCR was first proposed publicly by Stephen Littlechild (1983; see also 2024), while working for the UK government, as a way to regulate a newly privatized British Telecom. It generally works as follows: The regulator begins with a regulated price from a certain year, perhaps a legacy price from employing COSR, then commits to adjusting the price over time on the basis of just two parameters. The first is the change in a price index of some sort, typically the consumer price index (CPI), to allow the regulated firm to adjust its rate to keep up with inflation. If available, the regulator may want to use a producer price index or perhaps a price index more representative of the inputs used by the regulated firm. The second is a percentage price reduction, constant over time, reflecting how the regulated firm would share expected increases in productivity; this constant is typically referred to as X. Accordingly, PCR is referred to as CPI-X regulation, where CPI refers to the change in the price index reflecting inflation and X is a “productivity” factor.
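The arithmetic of the cap itself is simple. The sketch below, with purely illustrative numbers, shows how a starting rate would be ratcheted forward each year by realized inflation less a fixed X, with no reference to the firm’s actual costs:

```python
# Minimal sketch of a CPI - X price path: the cap is adjusted each year by
# realized inflation less the fixed productivity factor X. Figures are illustrative.
def price_path(initial_price, inflation_rates, x):
    prices = [initial_price]
    for cpi in inflation_rates:
        prices.append(prices[-1] * (1.0 + cpi - x))
    return prices

# Starting from a hypothetical legacy COSR rate of $25/MWh, 3% inflation each year, X = 1%:
print([round(p, 2) for p in price_path(25.0, [0.03, 0.03, 0.03], x=0.01)])
# [25.0, 25.5, 26.01, 26.53]
```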
Crucially, neither CPI nor X is based on the actual cost or changes in cost faced by the regulated firm. Under PCR, the regulated firm would act as if its production and supply choices had no effect on the price it receives for its product. The regulated firm would then act just like a competitive firm from an economics textbook. The price set by the regulator would determine output, based on the demand for the regulated service. However, if the regulated firm reduces the cost of delivering that output by a dollar, it gets to keep that dollar. It thus has the incentive to minimize cost and produce efficiently. The excessive investment incentives and indifference to waste disappear.
3.2. Relatively Subtle Advantages
This increased cost efficiency partly explains why the productivity factor X is part of price cap regulation. PCR gives the regulated firm an incentive to be efficient by letting it keep the profits from cutting costs. In principle, these savings could be shared with the customers of the regulated firm through price reductions. Reducing the price by X percent per year is a way to do that. Notably, even if price remains above cost, the loss to customers from the price exceeding the theoretical ideal is typically considerably smaller than the benefit from reducing the regulated firm’s cost, because the welfare loss from somewhat higher prices is generally outweighed by the cost savings. The argument is akin to the assessment of merger efficiencies in Williamson (1968).
The X percentage adjustment (over and above inflation) has other rationales. In the United States, “market dominant” postal services, mainly letter mail postage, have been regulated since 2006 by PCR. Postal Accountability and Enhancement Act, P.L. 109-435 (2006), §201. However, demand for sending mail in the United States (and around the world) has fallen substantially with the advent of electronic mail, online bill paying, and other substitutes for hard-copy communication. Because price will typically be above marginal cost, falling demand creates unexpected losses. Brennan and Crew (2016) proposed a percentage increase in the cap—like shrinking X or making it negative—based on (rough) estimates of the reduction in demand, the fraction of costs that are fixed, and demand elasticity. At the request of the US Postal Regulatory Commission’s public representative, I filed two declarations explaining this argument to the commission when it was fulfilling its statutory requirement to reassess how price cap regulation was working for postal services. The logic also works in the reverse direction: A reduction in price can maintain the regulated firm’s expected solvency when demand is increasing. Consequently, an argument for an X factor apart from sharing the gains from increased productivity is that it provides a method for sharing with customers the profit effects of changing demand.
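To see the direction of the argument, consider a deliberately rough sketch, which is not the formula Brennan and Crew derive: when price exceeds marginal cost, a fall in demand shrinks the contribution available to cover fixed costs, and the cap would have to rise enough to restore it. The calculation below ignores the further demand response to that price increase and uses illustrative numbers.

```python
# Back-of-envelope only (not the Brennan-Crew (2016) formula): percentage price
# increase that keeps the contribution to fixed costs, (P - MC) * Q, constant
# when demand falls exogenously, ignoring the demand response to the increase.
def required_price_increase(price, marginal_cost, demand_drop):
    old_margin = price - marginal_cost
    new_margin = old_margin / (1.0 - demand_drop)   # spread the same contribution over fewer units
    return (new_margin + marginal_cost) / price - 1.0

# Example: price 10, marginal cost 4, demand falls 5% -> roughly a 3.2% increase in the cap.
print(f"{required_price_increase(10.0, 4.0, 0.05):.3%}")
```

Brennan and Crew’s treatment also accounts for demand elasticity and the fraction of costs that are fixed, as noted above.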
A further advantage is that price cap regulation can attenuate inefficient and perhaps anticompetitive incentives to cross-subsidize unregulated services. Because a regulated firm has to charge a price below the monopoly level, it has market power it might like to exercise by entering unregulated markets as well (Brennan 1987). This concern supported the 1984 divestiture by AT&T to settle a decade-long antitrust case by separating competitive equipment and (then different) long-distance telephone markets from (then) regulated telephone service. It also lay behind the requirement of the Federal Energy Regulatory Commission that where wholesale electricity markets were open to entry, electricity generators could have no control over regulated transmission grids (Brennan et al. 2002). One tactic following from this entry is cross-subsidization—allocating costs of the unregulated service to the regulated side of the business. Under COSR, this misallocation means that the firm can both increase the rates its customers pay for the regulated service and create an artificial cost advantage in the unregulated market (Brennan 1990; Brennan and Palmer 1994). That artificial cost advantage can be sufficiently great to preempt competition in the unregulated market, like a predatory price, but where the costs of predation are in effect borne by the customers of the regulated service. By divorcing regulated rates from cost, PCR eliminates this concern. In both the telephone divestiture and opening wholesale electricity markets, an additional and perhaps more compelling consideration was that a regulated firm in a related unregulated market might deny or delay rivals from obtaining equal quality access to its regulated service. This concern applies under any price regulation, whether by COSR, PCR, or any other regulatory or statutory method.
A last advantage that may be the most important, if perhaps insufficiently recognized, is that PCR minimizes the need for regulators to obtain information to do their job. A critical observation regarding regulation and the role of government in the economy in general, from Hayek (1945), is that error is inevitable because the government cannot get information that is known to market participants. With price regulation, the problem with COSR is that it requires the government to obtain this information on a regular basis. This need for information is avoided with PCR, since once the initial price, inflation price index, and productivity factor are chosen, the regulator no longer requires verifiable and verified cost information. PCR may be viewed as a response to the Hayek critique: While the price may not be the theoretical ideal, the lack of need for cost information makes reasonable regulation feasible. The perfect need not be the enemy of the good.
3.3. Disadvantages
In principle, once PCR is in place, the regulator’s job is over. Unfortunately, in practice, it will not be so simple. Over time, the regulated firm’s path of earnings under PCR may not conform to expectation: The earnings can be either too high or too low. If earnings are too high, a regulator will find it difficult to resist pressure to reduce the cap to bring earnings closer to publicly acceptable levels. If earnings are too low, the firm in the extreme case will go out of business; before it reaches that point, it can claim that PCR is not giving it a fair opportunity to earn a just and reasonable return. See the Hope and Bluefield decisions cited in Section 2.2 on the regulatory obligation to provide a fair opportunity to earn a just and reasonable return. The doctrines in those cases provide a bulwark against a regulator setting the price just high enough to keep the firm operating but too low to allow it to recover its investment. Without a commitment to provide expected returns sufficient to compensate investors for investing in the regulated firm, the regulator will have no firm to regulate. These Supreme Court decisions supply that commitment. The possibility of returns being either too high for public acceptance or too low to compensate investors calls into question the ability of a regulator to commit to PCR and not resort to COSR at some point.
This inability to commit has at least three related implications. First, to the extent that the regulated firm expects its rates to be based on costs at some point, the incentives for it to operate efficiently—and not to cross-subsidize any operations in unregulated markets—will be attenuated. In some cases, periodic reviews of rates under PCR, such as every five years, will be part of establishing PCR. A recent example is the Postal Regulatory Commission’s Order 5763, described in “PRC Adopts Final Rules to Modify the Rate System for Classes of Market Dominant Products,” Nov. 30, 2020. If so, the only significant difference between PCR and COSR is the time between rate reviews. For this reason, PCR may be regarded as better suited as a transitional regulatory method for a firm in a market where competition is expected in the near future, before a rate review would arise, with fewer advantages where regulation is expected to be relatively permanent. Sappington and Weisman (2016) argue that PCR was adopted in telecommunications but not electricity in the United States for this reason.
A second issue with PCR is that it may not lead to service at quality levels where the benefits exceed the cost. Under PCR, a regulated firm has an incentive to maintain quality, since it boosts demand for its service. However, because the firm cannot raise its price just because it increases quality, it will typically set quality below the standard that would be efficient at the price level under PCR (Brennan 2019; see also Sappington 2005, 131n18). COSR also need not result in optimal service quality, although if returns to capital are excessive, the regulated firm could find it profitable to make investments that enhance quality, regardless of how much customers value those enhancements.
The third point is specific to electricity regulation. In recent years, many have advocated for increased energy efficiency—that is, encouraging and sometimes requiring electricity customers to install appliances that use less electricity and, if they already have them, to reduce how much electricity they use, such as by installing better insulation in their homes. Those advocating such policies have successfully encouraged many state electricity regulators to adopt “decoupling” policies that essentially preserve a local electricity distribution utility’s profit when its customers use less electricity as a result of these policies (Brennan 2010). Because the prices paid for distribution are typically above the low-to-zero marginal cost of distributing electricity, utilities lose money as demand falls, so they would be expected to oppose such measures. Decoupling takes away that incentive to oppose them. In terms of comparing different types of regulation, decoupling is essentially COSR with automatic adjustment, a step away from PCR. I note here, although I did not at the time (Brennan 2010), that revenue protection could be provided under PCR through the price adjustment mechanism in Brennan and Crew (2016), discussed in Section 3.2.
4. Implications for Transmission
As noted in Section 1, to achieve the goal of substantially decarbonizing the economy, it is imperative to maximize the capacity of the transmission grid to deliver electricity. This becomes even more critical as lines are needed to carry electricity from sites conducive to generating electricity through wind and solar energy. Because of the importance of transmission to decarbonization, government regulators and clean energy advocates have recommended extensive long-term planning over coming decades to expand transmission capacity. See Gramlich and Caspary (2021); Federal Energy Regulatory Commission, Building for the Future Through Electric Regional Transmission Planning and Cost Allocation, 18 CFR Part 35, Docket No. RM21-17-000, Order No. 1920 (May 13, 2024).
However, long-term transmission planning is fraught with difficulties (Brennan 2022). Regulators have no way to predict which generation technologies, and how much generation, will be needed in the next twenty years. Costs of error may be considerable, with foreseeable political controversies as to who will bear those costs. Moreover, attempts to gather information from stakeholders run a risk of turning competitive wholesale electricity markets into a collectively planned network.
The planning concern here involves how transmission networks should be designed. It is not enough to note that “a new FERC rule should require transmission planning entities to evaluate all available solutions, including new physical infrastructure options and grid-enhancing technologies, within regional transmission plans to more efficiently serve customers” (Gramlich and Caspary 2021, 11). The question is how regulators are supposed to do that. As we have seen, under cost-of-service regulation, transmission companies may have an incentive to install expensive lines on which they could earn a return above cost rather than adopt less capital-intensive means to control costs, including grid-enhancing technologies. Notably, some GETs may avoid or reduce difficult regulatory hurdles, including obtaining construction permits or certificates of need (Reed et al. 2020). Federal Energy Regulatory Commission Order 1920 requires transmission companies to “consider the use” of GETs (FERC 2024).
The compelling problem for regulators seeking to verify that GETs are being deployed as needed, and in the most beneficial manner, is the plethora of options under that heading. A recent Electric Power Research Institute (EPRI) presentation described four different categories of GETs: advanced conductors that operate at higher temperatures, dynamic line rating that allows transmission lines to increase capacity depending on prevailing weather conditions, power flow controllers that change impedance to reroute power to higher-capacity lines, and topology optimization to redirect power to less congested lines (Slaria et al. 2023; Lafoyiannis et al. 2024). Each of these technologies has different properties, with different knowledge bases, amenability to standardized performance criteria, and needs for field trials. Focusing only on dynamic line rating, the presentation noted forty-four different options in the marketplace or undergoing testing, some software-based, some hardware-based, and some exploiting advances in optical technology. A similar breadth of options may apply to the other families of GETs described by EPRI. For more detail on using advanced conductors to increase transmission capacity, see Chojkiewicz et al. (2024).
Attempts by regulators and planners to get and apply information on transmission engineering necessary to make appropriate design choices are likely to be futile. Price cap regulation may eliminate the need for these attempts. With price separated from cost, transmission utilities would have the incentive to apply their knowledge and expertise regarding the many GET options to decide which works best for them. EPRI explains that many of these technologies can work together; it is not simply a matter of choosing one among many. A utility could decide which is the best method or combination of methods, choosing to use what works best on the basis of characteristics most pertinent to the portion of the grid for which it is responsible.
This advantage of PCR cannot be overstated. The call for transmission planning is already asking implicitly, and perhaps explicitly, for regulators to make decisions in the face of enormous information shortcomings. GETs promise enormous benefits but exacerbate these information shortcomings. Exploiting regulatory methods used in other sectors but largely neglected for electricity may alleviate this need. As discussed in the next section, PCR cannot solve all the planning problems transmission expansion objectives raise for operators and regulators, but in light of the help PCR can provide, it is worthy of consideration.
5. Challenges and Limitations
Although applying price cap regulation to transmission has significant advantages, especially with regard to encouraging implementation of grid-enhancing technologies, it also has significant complications that will have to be recognized in any implementation.
5.1. Relatively Permanent, Not Transitional Regulation
As discussed in Section 3.3, the advantages of PCR in general are diminished because divergence of realized cost from the CPI-X adjustment path, which one would expect over time, will lead to profits for the regulated firm that are unacceptably low or high. This problem is less consequential when price regulation can be replaced by market pricing, with sufficient competition forthcoming in the near future. However, transmission is not likely to be competitive in the foreseeable future. Even when different companies own different lines, the physical fact that electricity takes all available paths from where it is generated to where it is used—a phenomenon known as loop flow or parallel flow—implies that an interconnected grid is essentially a single entity (Brennan et al. 1996, chap. 4).
A piece of the grid could, in principle, disconnect from the rest of the grid, but this would both be operationally inefficient, preventing electricity from different areas from meeting demand and electricity in that area from meeting demand elsewhere, and leave a monopoly within the isolated region. A firm might have independently built and owned “merchant transmission” lines, but they would become part of the monopoly grid. Unless electricity generated close to or at the users’ locations displaces electricity shipped over long distances, transmission rates will continue to be regulated, implying that those rates will at some point be adjusted on the basis of real cost, including a prescribed rate of return. This eventual adjustment will diminish incentives to be efficient and, perhaps, to implement grid-enhancing technologies when the benefits exceed the cost.
5.2. Multiple Dimensions of Transmission Quality
Transmission networks do not simply deliver electricity between two points. There are many dimensions of quality associated with that task. One is reducing “line loss,” the difference between the amount of energy injected where it is generated and how much is available where it is used. A second dimension is reliability, which we can think of as maximizing the time the full capacity of a transmission line is available. This has become especially significant in parts of the country where transmission lines pose, and are vulnerable to, risk of fires when they go through forested land. A third, of increasing import in recent years, is resilience, which we can define as the speed with which a line that has failed can be returned to service. There undoubtedly are other dimensions of quality as well.
PCR provides some incentive to maintain quality, since a regulated firm, including a transmission operator, makes more money with greater use of its service. The lower the line loss and the more reliable and resilient a transmission line is, the more electricity it will deliver, and the more generators and distributors will want to use it. However, this incentive will be less than ideal. For transmission, the challenge is that the gain to an operator from improving a line reflects only the increased output it can sell, not other values customers place on quality, such as a greater willingness to pay for reliability that reduces losses from power failures. Although PCR restores an incentive to install the particular grid-enhancing technologies best suited for a transmission line, the transmission regulator may have reason to order additional GETs to be installed. My guess is that this need is likely to be very difficult to ascertain in practice, even if the regulator can identify the possibility in theory.
5.3. Variation over Space and Time
The arguments for PCR in this paper envision a single product for which the cost of production can be minimized. Transmission, however, does not provide a single product. Geographically, a transmission grid connects numerous generators in a multitude of locations. On top of that, the product varies over time. Until storage at users’ locations becomes inexpensive and ubiquitous, a transmission system has to deliver electricity when it is used. This implies that electricity delivered at one time is not a substitute for electricity delivered at a different time.
In electricity, this shows up as congestion pricing, which means considerably different prices for delivering electricity at different times. It is easier to see this by thinking about an individual line. In times of relatively low demand, the cost of delivering an additional amount of power will be negligible (other than line loss). When demand is high enough that the line cannot carry any more power, the cost in effect would be the cost of having installed a bigger line. An analogy to summer resort hotels may be useful. At other times of year, when tourist demand is low, the cost of making an additional room available is relatively small, based on the cost of cleaning and heating. During the summer peak, however, the cost of an additional room is basically those costs plus the cost of building another hotel to provide that room. In the near term, the price of using the line will be whatever price is high enough that the demand to use it just equals its capacity. The difference between the low-demand off-peak price and the high-demand peak price is the congestion price. In practice, with multiple nodes and time periods, congestion prices send signals about which transmission pathways to take and are considerably more complex than a difference between a peak and an off-peak price.
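A minimal sketch of that pricing logic for a single line, with assumed linear demand curves and illustrative numbers: off-peak, the price is just the (near-zero) marginal delivery cost; at the peak, the price must rise until quantity demanded equals the line’s capacity, and the gap between the two is the congestion price.

```python
# Illustrative only: linear demand Q = intercept - slope * P for each period.
def clearing_price(intercept, slope, capacity, marginal_cost):
    """Return the higher of marginal cost and the price at which demand just equals capacity."""
    price_at_capacity = (intercept - capacity) / slope
    return max(marginal_cost, price_at_capacity)

mc = 0.5                                  # near-zero marginal delivery cost
capacity = 100.0                          # power the line can carry
off_peak = clearing_price(80.0, 2.0, capacity, mc)    # demand never reaches capacity
peak = clearing_price(300.0, 2.0, capacity, mc)       # demand exceeds capacity at mc
print(off_peak, peak, peak - off_peak)    # 0.5, 100.0, 99.5 -> the congestion price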
From an efficiency perspective, this leaves open the question of how large to build that line. If the revenues from peak sales plus off-peak sales need to be just large enough to cover the entire cost of the line, then in general the off-peak price will be above the negligible or zero cost of transmitting additional electricity, and the peak price will exceed the marginal cost of building a bigger line. How much these will be above their respective costs would ideally depend on the elasticities of demand for electricity off-peak and during the peak. This is an application of the efficient Ramsey pricing result in Baumol and Bradford (1970).
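For independent demands across periods, the standard textbook statement of that result is the inverse-elasticity rule (the notation here is generic, not drawn from Baumol and Bradford):

```latex
\frac{p_i - mc_i}{p_i} \;=\; \frac{\lambda}{1+\lambda}\cdot\frac{1}{\varepsilon_i},
```

where ε_i is the own-price elasticity of demand in period i (peak or off-peak) and λ is the shadow price on the constraint that revenues cover the cost of the line: markups above marginal cost are larger in the periods where demand is less elastic.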
This complexity is multiplied if we look at all the pathways between generators, distribution companies, and nodes in between in a transmission grid. In principle, PCR can work with multiple products. If a weighted average of the prices of all the products, here including congestion prices and off-peak prices at all nodes, with the weights being previous-period quantities, has to remain under the price cap, then over time the set of prices will converge to values that maximize consumer benefit, subject to the amount of profit the regulated firm is able to obtain over time by raising some prices while reducing others (Brennan 1989). However, the speed of this convergence is indeterminate, and the process is unlikely to achieve even this level of efficiency before other adjustments in the face of low or high profits would be likely. So even with PCR, regulators are unlikely to avoid questions of rate design across nodes and over time, and of how much capacity any given segment, even with grid-enhancing technologies, should have.
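A minimal sketch of the kind of multi-product cap described here, using lagged-quantity weights; the rates and the two-product setup are illustrative, and any actual tariff’s weighting scheme would differ.

```python
# A proposed set of differentiated prices is admissible if their average, weighted
# by last period's quantities, stays within last period's average adjusted by CPI - X.
def satisfies_cap(old_prices, new_prices, old_quantities, cpi, x):
    old_index = sum(p * q for p, q in zip(old_prices, old_quantities))
    new_index = sum(p * q for p, q in zip(new_prices, old_quantities))
    return new_index <= old_index * (1.0 + cpi - x)

old_p = [40.0, 5.0]          # peak and off-peak rates last period (illustrative $/MWh)
old_q = [100.0, 300.0]       # corresponding deliveries last period (MWh)
new_p = [44.0, 4.0]          # proposed rebalancing: raise peak, cut off-peak
print(satisfies_cap(old_p, new_p, old_q, cpi=0.03, x=0.01))   # True
```

Here the proposed rebalancing raises the peak rate and lowers the off-peak rate, and it is admissible because the lagged-quantity-weighted average stays within the 2 percent net allowance (3 percent inflation less an X of 1 percent).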
5.4. Handling Expansion: Setting Initial Prices
Recognizing the multiplicity of “products” a transmission grid offers, at every node and at multiple times, does not exhaust the complexity. The set of nodes is not static. The push to decarbonize the economy entails constructing new transmission lines to locations conducive to producing electricity with sunlight and wind, locations that for the most part have not previously been extensively connected to the transmission grid.
The complexity is not just an extension of the multiplicity problem, although that problem persists. It also involves getting PCR initialized. PCR involves precommitted adjustments based on inflation and expected cost savings. These adjustments require an initial price. When PCR is brought to a setting where cost-of-service regulation had been in place, that initial price is already available and has been validated through the regulatory process.
This initial price is not available for brand new lines, where there has never been a regulated price. That price will likely have to be set through some form of cost-of-service regulation. As a consequence, regulators intent on adopting PCR will still have to go through a cost verification process. This process may discourage adoption of grid-enhancing technologies when prices are first set. However, PCR going forward may restore incentives to implement GETs in an efficient manner.
6. Conclusions
The prospect of decarbonization of the electricity sector and the economy as a whole has increased focus on the value of increasing the size and capacity of the transmission sector. One relatively low-cost set of methods for increasing the capacity of installed lines involves employing a variety of possible grid-enhancing technologies. Because setting transmission rates on the basis of cost may lead transmission providers to choose to install lines at greater cost than that of GETs, a different regulatory method, price cap regulation, merits consideration. In theory, PCR takes an initial rate and adjusts it over time on the basis of inflation and expected (but not actual) cost reductions, thus giving the regulated firm an incentive to adopt cost-reducing methods for delivering service—in this case, GETs. PCR also implies that regulators need not be burdened with having to determine and prescribe how transmission providers should deploy GETs and which of the many varieties to deploy.
PCR is no panacea. Because transmission prices will likely continue to face regulatory oversight, allowed rates are likely to eventually diverge from costs enough to warrant regulatory recalibration. This attenuates the advantages of PCR, which also is not designed to properly incentivize reliability, resilience, line loss, and other quality dimensions. PCR can handle the multiplicity of rates over different nodes and different times, but the amount of time for such rates to theoretically converge to efficient levels is likely to exceed the time it would take for regulators to adjust rates when profits under price caps become unacceptably high or low. Finally, because new transmission lines will likely be required, regulators will have to set an initial price for PCR rather than take a prior regulated price as the starting point, also reintroducing rates based on cost.
Nevertheless, the importance of maximizing the efficiency of the transmission system in general, and the use of GETs to achieve that efficiency, justifies an assessment of the virtues of PCR by transmission regulators.