TY - UNPB
T1 - Interpretable machine learning for power systems
T2 - establishing confidence in SHapley Additive exPlanations
AU - Hamilton, Robert
AU - Stiasny, Jochen
AU - Ahmad, Tabia
AU - Chevalier, Samuel
AU - Nellikkath, Rahul
AU - Murzakhanov, Ilgiz
AU - Chatzivasileiadis, Spyros
AU - Papadopoulos, Panagiotis N.
PY - 2022/9/13
Y1 - 2022/9/13
N2 - Interpretable Machine Learning (IML) is expected to remove significant barriers to the application of Machine Learning (ML) algorithms in power systems. This letter first showcases the benefits of SHapley Additive exPlanations (SHAP) for understanding the outcomes of ML models, which are increasingly being used in power systems. Second, we demonstrate that SHAP explanations are able to capture the underlying physics of the power system. To do so, we show that the Power Transfer Distribution Factors (PTDFs), a physics-based linear sensitivity index, can be derived from SHAP values: we take the derivatives of the SHAP values of an ML model trained to learn line flows from generator power injections, using a simple DC power flow case on the 9-bus, 3-generator test network. By demonstrating that SHAP values can be related back to the physics that underpins the power system, we build confidence in the explanations SHAP can offer.
DO - 10.48550/ARXIV.2209.05793
M3 - Working paper
ER -