Stop! Exploring Bayesian Surprise to Better Train NILM.

Published in The 5th International Workshop on Non-Intrusive Load Monitoring (NILM ’20), 2020

Recommended citation: Richard Jones, Christoph Klemenjak, Stephen Makonin, and Ivan V. Bajić. 2020. Stop! Exploring Bayesian Surprise to Better Train NILM. In The 5th International Workshop on Non-Intrusive Load Monitoring (NILM ’20), November 18, 2020, Virtual Event, Japan. https://mobile.aau.at/publications/klemenjak-nilm20-surprise.pdf

Abstract:

In Non-Intrusive Load Monitoring (NILM), as in many other machine learning problems, significant computational resources and time are spent training models using as much data as possible. This is perhaps driven by the preconception that more data leads to more accurate models and, eventually, better-performing algorithms. When has enough prior training been done? When has a NILM algorithm encountered new, unseen data? This work applies the notion of Bayesian surprise to answer these important questions for both supervised and unsupervised algorithms. We compare the performance of several NILM algorithms to establish a suggested threshold on two combined measures of surprise: postdictive surprise and transitional surprise. We validate the use of transitional surprise by exploring the performance of a particular Hidden Markov Model as a function of surprise threshold. Finally, we explore the use of a surprise threshold as a regularization technique to avoid overfitting in cross-house performance. We provide preliminary insights and clear evidence of a point of diminishing returns for model performance with respect to dataset size, which has implications for future model development and dataset acquisition, and can aid model flexibility during deployment.
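Bayesian surprise is conventionally defined as the Kullback-Leibler divergence between an observer's prior and posterior beliefs after seeing new data. The following sketch illustrates the general idea of surprise-thresholded training (it does not reproduce the paper's postdictive or transitional measures): a conjugate Gaussian model of one appliance state's mean power is updated batch by batch, and training stops once the posterior barely moves. The stream, batch size, and threshold value are all invented for this example.

```python
import math
import random

def kl_gauss(m1, s1, m2, s2):
    """Closed-form KL divergence KL( N(m1, s1^2) || N(m2, s2^2) )."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

random.seed(0)

# Hypothetical setup: noisy power readings (watts) from one appliance state.
obs_sigma = 10.0          # assumed known observation noise
mu, sigma = 0.0, 100.0    # broad Gaussian prior over the state's mean power
threshold = 0.05          # illustrative surprise threshold, not the paper's value

stream = [random.gauss(60.0, obs_sigma) for _ in range(2000)]
batches = [stream[i:i + 50] for i in range(0, len(stream), 50)]

surprises = []
for batch in batches:
    # Conjugate Gaussian update (known variance): combine prior with the batch.
    n = len(batch)
    post_prec = 1 / sigma**2 + n / obs_sigma**2
    post_sigma = math.sqrt(1 / post_prec)
    post_mu = (mu / sigma**2 + sum(batch) / obs_sigma**2) / post_prec

    # Bayesian surprise: how far this batch moved the posterior from the prior.
    surprises.append(kl_gauss(post_mu, post_sigma, mu, sigma))
    mu, sigma = post_mu, post_sigma

    if surprises[-1] < threshold:
        break  # further data barely changes beliefs: a point of diminishing returns

print(f"stopped after {len(surprises)} batches; last surprise = {surprises[-1]:.4f}")
```

Early batches are highly surprising because the prior is broad; surprise then decays roughly like 1/t, so a fixed threshold yields a natural stopping point well before the data is exhausted.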

Index Terms— NILM, Bayesian Surprise, Overfitting, Training, Energy Datasets
