In a previous post, I explored the potential impact of batteries with a total capacity of 1,350 MW / 2 GWh when replacing dispatchable power sources with intermittent ones in a grid. That exercise taught me that the battery capacity was far too small to absorb the variability of the intermittent output. At a low share of intermittent power, not much surplus was produced, but there was always a deficit. The higher the share, the lower the deficit, but also the higher the surplus production. It took an incredibly high share for the deficit to reach zero, and that point corresponded to very high levels of surplus production.
That made me wonder whether it would be possible to determine the point where the batteries are used optimally, meaning the point with the least amount of surplus combined with a still reasonable amount of deficit. This would allow me to determine a more realistic share of intermittent power for this battery capacity and, more importantly, how much dispatchable power this intermittent share would actually displace.
In that previous post, I used the example of a maximum dispatchable power of 6,000 MW, basically just topping off the peaks. When I run the model in a loop, iterating over an increasing share of intermittent power, this is the relation between surplus and deficit as a function of intermittent power output:
This shows that the deficit indeed decreases very slowly, while the surplus shoots up fast. The point with not too much unused power and a not too high deficit lies just below 1.4 times the current intermittent power output; more precisely, a tad above 1.38.
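The loop described above can be sketched roughly as follows. The post's actual demand and intermittent-output series are not available, so the synthetic data, the hourly time slots, and the simple charge/discharge rule below are all assumptions for illustration only; only the 6,000 MW cap and the 1,350 MW / 2 GWh battery limits come from the post.

```python
import numpy as np

# Synthetic stand-ins: the post's real demand and intermittent series are
# not given, so these only illustrate the mechanics of the sweep.
rng = np.random.default_rng(0)
T = 14_007                                                # time slots, as in the post
demand = 9_000 + 1_500 * np.sin(np.arange(T) / 24) + rng.normal(0, 300, T)   # MW
intermittent = np.clip(rng.normal(2_000, 1_200, T), 0, None)                 # MW

DISPATCH_CAP = 6_000   # MW, cap on dispatchable power in this scenario
BATT_POWER   = 1_350   # MW, battery charge/discharge limit
BATT_ENERGY  = 2_000   # MWh, battery capacity (2 GWh); 1-hour slots assumed

def run_model(multiplier):
    """One pass over the series; returns total surplus and deficit in MWh."""
    soc = 0.0                       # battery state of charge, MWh
    surplus = deficit = 0.0
    for d, w in zip(demand, intermittent * multiplier):
        gap = d - w                 # demand left after intermittent output
        if gap > DISPATCH_CAP:      # even full dispatchable power falls short
            need = gap - DISPATCH_CAP
            discharge = min(need, BATT_POWER, soc)
            soc -= discharge
            deficit += need - discharge
        elif gap < 0:               # intermittent output exceeds demand
            excess = -gap
            charge = min(excess, BATT_POWER, BATT_ENERGY - soc)
            soc += charge
            surplus += excess - charge
    return surplus, deficit

# Sweep over an increasing share of intermittent power
for m in (1.0, 1.38, 2.0):
    s, d = run_model(m)
    print(f"multiplier {m}: surplus {s:,.0f} MWh, deficit {d:,.0f} MWh")
```

Even with made-up data, the sweep reproduces the qualitative pattern: the deficit falls slowly with the multiplier while the surplus grows quickly.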
When I throw this multiplier into the model, it spits out this graph (click the image to enlarge):
It is indeed clear that there is very little surplus (gray peaks), much less than in the example from the previous post (for 2x the current intermittent power output). The battery also seems somewhat better used.
Other maximum dispatchable power values give worse results than this. For example, this is the model output for a maximum of 1,000 MW of dispatchable power. Its optimal balance point is 4.32x the current output of intermittent power sources, which gives this graph (click the image to enlarge):
This shows even more clearly how underpowered the battery is for this amount of replacement. The battery not only fills up quickly (so it cannot absorb the surplus production), it also drains quickly (so it cannot fill in the deficit). This leads to large deficits as well as a lot of surplus production.
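One way to quantify how often the battery is pinned at its limits is to track its state of charge over time and count the slots where it sits completely full or completely empty. The 1,350 MW / 2 GWh limits below are the post's; the net-load series and the hourly slots are made-up assumptions for illustration.

```python
import numpy as np

def soc_trace(net_load, batt_power=1_350, batt_energy=2_000):
    """Track battery state of charge (MWh) over a net-load series (MW).
    net_load > 0 means a shortfall (discharge), < 0 an excess (charge).
    Assumes 1-hour time slots; limits are the post's 1,350 MW / 2 GWh."""
    soc, trace = 0.0, []
    for n in net_load:
        if n > 0:
            soc -= min(n, batt_power, soc)
        else:
            soc += min(-n, batt_power, batt_energy - soc)
        trace.append(soc)
    return np.array(trace)

# Illustrative net load: alternating large excesses and shortfalls, 100 slots
net = np.tile([-3_000, -3_000, 4_000, 4_000], 25)   # MW
trace = soc_trace(net)
full  = np.mean(trace >= 2_000)   # fraction of slots pinned full
empty = np.mean(trace <= 0)       # fraction of slots pinned empty
print(f"full {full:.0%} of slots, empty {empty:.0%} of slots")
# prints: full 25% of slots, empty 25% of slots
```

With swings this large relative to the battery, it bangs against full and empty in half of all slots, which is exactly the behavior the graph shows.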
Now, what capacity of dispatchable power is needed to avoid this deficit at the balance point? In the scenario with a 6,000 MW cap on dispatchable power, an extra 4,265 MW of dispatchable capacity is still needed. This capacity will not be used much: there are only 13 time slots (out of a total of 14,007) with a deficit greater than 4,000 MW, and only 4 time slots with a deficit greater than 4,200 MW. Capacity that needs to be built, operated, and maintained will be expensive when it is rarely used, but that aside.
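The bookkeeping behind these numbers amounts to taking the per-slot deficit series from the model, sizing the extra capacity at the single worst slot, and counting how rarely the top of that capacity is touched. A sketch with a hypothetical deficit series (the real series comes out of the model and is not reproduced here):

```python
import numpy as np

# Hypothetical deficit series in MW per slot; stands in for the model output.
rng = np.random.default_rng(1)
deficit = np.clip(rng.normal(500, 1_200, 14_007), 0, 4_265)

extra_capacity = deficit.max()                 # MW: sized for the worst slot
slots_over_4000 = int((deficit > 4_000).sum()) # slots needing > 4,000 MW
slots_over_4200 = int((deficit > 4_200).sum()) # slots needing > 4,200 MW
avg_utilisation = deficit.mean() / extra_capacity

print(f"extra capacity needed: {extra_capacity:,.0f} MW")
print(f"slots over 4,000 MW: {slots_over_4000}, over 4,200 MW: {slots_over_4200}")
print(f"average utilisation of that capacity: {avg_utilisation:.1%}")
```

The point of the exercise is the shape, not the exact counts: the capacity is dimensioned on the tail of the distribution, so its average utilisation is low.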
Finally, what is the gain? In the 6,000 MW cap scenario, the total needed capacity is 6,000 + 4,265 = 10,265 MW (in the 1,000 MW cap scenario it is 1,000 + 8,695 = 9,695 MW). The maximum dispatchable power capacity in the reference period was 10,468 MW. This means that, in the end, the extra intermittent capacity combined with the 2 GWh of batteries displaced just 1.9% of dispatchable capacity in the 6,000 MW cap scenario (and 7.4% in the 1,000 MW cap scenario). That is surprisingly little.
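The arithmetic in this paragraph can be checked directly; all numbers below are taken from the post itself.

```python
REFERENCE_CAP = 10_468   # MW, max dispatchable capacity in the reference period

for cap, extra in ((6_000, 4_265), (1_000, 8_695)):
    total = cap + extra                                  # MW still needed
    displaced = (REFERENCE_CAP - total) / REFERENCE_CAP  # fraction displaced
    print(f"{cap:,} MW cap: total {total:,} MW, displaced {displaced:.1%}")
# prints:
# 6,000 MW cap: total 10,265 MW, displaced 1.9%
# 1,000 MW cap: total 9,695 MW, displaced 7.4%
```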