The paper on the 10% increase in lithium-ion battery life from operating in a vehicle-to-grid system (see previous post) is an interesting read. I was initially fascinated by the validation of their battery degradation model, but the actual result came from the integration of that model into a smart grid algorithm. This algorithm was then used in a simulation of load balancing of a building by means of electric cars, and that simulation produced the 10% battery-life increase figure.
That number is therefore not obtained by measuring battery degradation in reality; it is the outcome of a mathematical model. Personally, I don’t have a problem with models, and this particular model seems to have potential (the battery degradation part is validated). Models are useful for sure, but that doesn’t mean they are necessarily right. It depends, for example, on the data that goes into the model and the assumptions that are made. It seems that this is where it went wrong in this simulation.
The data that was fed to the algorithm came, among other things, from an actual building (the International Digital Laboratory). This is the description of that building:
The International Digital Laboratory (IDL) is four story office building located on the University of Warwick campus near Coventry. The University is situated in the centre of England, adjacent to the city of Coventry and on the border with Warwickshire. The building compromises of a 100-seater auditorium, two electrical laboratories, a boardroom, 3 teaching laboratories, eight meeting rooms and houses approximately 360 researchers and administration staff.
That is not a small building and it draws quite some electricity (my emphasis):
The buildings electricity demand is in excess of 0.8 GWh/year, with a daily typical consumption of 2.2 MWh/day, resulting in a total spend on electricity of around £82 k p.a. The electricity demand for a week in July 2016 is shown in Fig. 12 with a visible reduction in energy demand during weekends from weekdays. Electricity demand peaks between 6.30am and 9pm on weekdays with a smaller embedded peak at 7–9pm likely to be associated with the use of kitchen facilities. The average daily power demand for the building is 93 kW.
Figure 12 shows the electricity demand of that building during the week starting July 25, 2016 (blue color) compared with the grid demand when including vehicle-to-grid simulation with the smart grid algorithm (brown color):
Okay, all nice and well, but something seems not right here. If electricity demand is somewhat more than 0.8 GWh per year and “typical” demand is 2.2 MWh per day, then this “typical” demand is in fact the “average” demand (800 MWh / 366 days = 2.186 MWh/day ≈ 2.2 MWh/day, which is also confirmed by the 93 kW average).
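The arithmetic is easy to check. A quick back-of-the-envelope calculation (my own numbers, not code from the paper; 366 days because 2016 was a leap year):

```python
# Sanity check of the paper's stated averages.
annual_demand_mwh = 800   # 0.8 GWh/year, expressed in MWh
days = 366                # 2016 was a leap year

daily_avg_mwh = annual_demand_mwh / days
print(f"Average daily demand: {daily_avg_mwh:.3f} MWh/day")  # 2.186

# Convert MWh/day to average power: MWh/day -> kWh/day -> kW
avg_power_kw = daily_avg_mwh * 1000 / 24
print(f"Average power demand: {avg_power_kw:.0f} kW")        # ~91
```

Note that 91 kW is slightly below the paper’s 93 kW, which fits with the annual demand being stated as “in excess of” 0.8 GWh: 93 kW sustained over a leap year corresponds to about 0.82 GWh.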
That made me wonder why they considered the week starting July 25, 2016 as a week with typical demand. Did they select this week because the average daily demand is equal to the yearly average of 2.2 MWh/day? That seems to be the case.
That would be odd, because July (and August) are vacation months and therefore not typical when it comes to electricity demand. There will be fewer staff present in the building, probably also fewer researchers, fewer ongoing experiments, few or no students attending classes, less food to prepare in the kitchen, less lighting needed in summer than in winter, and so on.
If this week in July is taken as a “typical” week of the year, then the demand of the other weeks is underestimated, especially in winter, when it is dark and cold and electricity demand is higher.
Averaging is also done for the supply of electricity by the batteries of the electric cars to the building (my emphasis):
For the week depicted in Fig. 12, EVs provide 2.8 MWh of energy; assuming the same level of network support for every week in the year, this equates to 0.145 GWh of annual clean energy support which is just over 18% of IDLs annual energy demand.
Okay, I can understand that if electric cars delivered 2.8 MWh during that week, and if that week is representative of every week of the year, then yearly production is 2.8 × 52 = 145.6 MWh = 0.1456 GWh, which is indeed about 18% of the total electricity demand of 0.8 GWh. This seems to confirm that they simply multiplied this specific week in July to arrive at the yearly numbers.
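The extrapolation is easy to reproduce. A sketch of the arithmetic I suspect they used (the variable names are mine):

```python
# Extrapolating the July week to a full year, as the paper appears to do.
weekly_ev_supply_mwh = 2.8   # EV energy supplied during the Fig. 12 week
annual_demand_gwh = 0.8      # IDL's stated annual electricity demand

annual_ev_supply_gwh = weekly_ev_supply_mwh * 52 / 1000
share = annual_ev_supply_gwh / annual_demand_gwh

print(f"Annual EV supply: {annual_ev_supply_gwh:.4f} GWh")  # 0.1456
print(f"Share of annual demand: {share:.1%}")               # 18.2%
```

That this reproduces the paper’s “just over 18%” exactly is what makes me think the annual figure is a straight multiplication of one week, not the output of a year-long simulation run.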
If the model had actually run over an entire year, it would have been easy to report the total modeled contribution over that year. Yet the authors did the calculation based on averages from this specific week in July 2016, which again suggests that they ran their algorithm only for this week and extrapolated the yearly result from this one week with a “typical” electricity demand.
But this is only valid if ALL weeks are the same, and that is probably not the case, because this week in the summer vacation is a below-average week when it comes to demand. There will be a higher demand for electricity in winter, yet less electricity provided by those electric cars, because the cars will have less charge to spare (some of it is used for lights, heating,…). So in the winter months, this weekly 2.8 MWh will not be reached.
There is even more that confirms my suspicion that they used extrapolation. This is what the authors assumed about the temperature used in their simulation (my emphasis):
Using the representative, distributed data presented in Fig. 11 and assuming an ambient temperature for Coventry of 18 °C, we study the impact V2G can have for load levelling IDLs power demand shown in Fig. 12 and simultaneously extending participant EVs battery service life where applicable (i.e., reduce CF an [sic] PF) through the routine presented in Fig. 9.
Later in the paper, this 18 °C is confirmed as the annual average temperature (my emphasis):
An ambient temperature of 18 °C corresponding to the annual average temperature of Coventry was assumed in this work.
Did they really use 18 °C in the algorithm to calculate the degradation at that temperature? That would make sense if they took that one week and extrapolated it over the entire year, but in the real world this has consequences. For example, the efficiency of a lithium-ion battery is (almost) optimal at this temperature, so they overestimate the electricity supplied by the car batteries during the rest of the year. In the real-world Coventry, the efficiency of the electric-car batteries will be lower outside this week in July, and the share of electricity supplied by those cars to the building will then be less than the 18% stated in the paper.
Nothing even suggests that the simulation was done for a complete year. Everything points to it being done for one week in July (presumably because it is assumed to be a “typical” week of the year) and at a temperature of 18 °C (the average temperature in Coventry).
If this is really what they did, then they assumed:
- a period with less (maybe even the least) electricity demand of the year
- a period with more (maybe even the maximum) charge still in the battery
- (close to) optimal temperature:
- when charging/discharging
- to avoid battery degradation
- the maximum possible population of electric car drivers over the entire year.
They then didn’t take into account that:
- electricity demand will be higher in the rest of the year, especially in winter
- there will be less charge left in the battery in the rest of the year, especially in winter
- there will be less favorable temperatures in the rest of the year, especially in winter (cold temperatures), but also in summer (cars heating up when not in shadow)
- the population of electric car drivers will vary over the year and will not be maximum in the vacation period.
They basically took an (impossible) best-case scenario and extrapolated it over the whole year, thereby overestimating the contribution of the electric-car batteries to the building. That way it becomes very easy to show a big impact of electric-car batteries on the load balancing of a building…
If that is really what they did, then this 10% less degradation is not even the maximum battery-life gain the electric cars could get by operating in a vehicle-to-grid system, but just a virtual result derived from data that has no bearing on reality. Then that 10% less degradation is just a meaningless figure.