All new in NILMTK: A Summary!


Dear all, there is a lot of exciting news on NILMTK: important bug fixes, new APIs, new reference algorithms, and a paper at ACM BuildSys! In this post, I'd like to summarise the latest updates to NILMTK. To the best of my abilities, I aim to paraphrase details from Nipun Batra's latest NILMTK paper to give you an idea of how significant these updates are for scholarship in NILM. However, I make no claim to completeness. It should also be noted that I am neither part of the NILMTK development team nor do I receive any benefit from publishing this post. To put it simply: I'm just another happy researcher who is excited by open-source tools like NILMTK.

Brand-New Conference Paper on NILMTK!

Nipun Batra et al. present a full conference paper at BuildSys '19:

Non-intrusive load monitoring (NILM) or energy disaggregation is the task of separating the household energy measured at the aggregate level into constituent appliances. In 2014, the NILM toolkit (NILMTK) was introduced in an effort towards making NILM research reproducible. Despite serving as the reference library for data set parsers and reference benchmark algorithm implementations, few publications presenting algorithmic contributions within the field went on to contribute implementations back to the toolkit. This paper describes two significant contributions to the NILM community in an effort towards reproducible state-of-the-art research: i) a rewrite of the disaggregation API and a new experiment API which lower the barrier to entry for algorithm developers and simplify the definition of algorithm comparison experiments, and ii) the release of NILMTK-contrib; a new repository containing NILMTK-compatible implementations of 3 benchmarks and 9 recent disaggregation algorithms. We have performed an extensive empirical evaluation using a number of publicly available data sets across three important experiment scenarios to showcase the ease of performing reproducible research in NILMTK.

You can find their paper here.

The Experiment API

The creators of NILMTK introduce a new Experiment API that aims to simplify the definition of experiments, which is particularly interesting for new users. The interface follows a declarative style, inspired by declarative visualisation libraries, so that training and test parameters are encapsulated and all settings are gathered in one place. As a result, new users benefit from a considerably reduced workload when conducting experiments.

As the code below shows, it is now possible to easily set power types, define the sampling interval, and select appliances as well as disaggregation algorithms. We can also set training and test parameters and choose performance metrics right away!

d = {
    # Power types to load for mains and appliance meters
    'power': {
        'mains': ['apparent', 'active'],
        'appliance': ['apparent', 'active']
    },
    # Resample all meters to a 60-second interval
    'sample_rate': 60,
    # Appliances to disaggregate
    'appliances': ['fridge', 'air conditioner', 'electric furnace', 'washing machine'],
    # Algorithms to compare, each with its (hyper-)parameters
    'methods': {
        'Mean': {},
        'DSC': {'learning_rate': 5 * 1e-10, 'iterations': 100},
        'AFHMM': {},
        'AFHMM_SAC': {}
    },
    # Training data: buildings 10 and 15 of Dataport, roughly three weeks each
    'train': {
        'datasets': {
            'Dataport': {
                'path': '../dataport.hdf5',
                'buildings': {
                    10: {
                        'start_time': '2015-04-04',
                        'end_time': '2015-04-24'
                    },
                    15: {
                        'start_time': '2015-04-30',
                        'end_time': '2015-05-20'
                    }
                }
            }
        }
    },
    # Test data and the performance metrics to report
    'test': {
        'datasets': {
            'Dataport': {
                'path': '../dataport.hdf5',
                'buildings': {
                    10: {
                        'start_time': '2015-04-25',
                        'end_time': '2015-05-01'
                    },
                    15: {
                        'start_time': '2015-05-20',
                        'end_time': '2015-05-27'
                    }
                }
            }
        },
        'metrics': ['mae']
    }
}

This code snippet is adapted from the NILMTK repository on GitHub.
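
To actually run such an experiment, the dictionary is passed to the API entry point. The sketch below assumes the nilmtk.api.API class from the NILMTK tutorials; note that, depending on the NILMTK version, the entries under 'methods' may need to be instantiated algorithm objects (e.g. Mean({})) rather than plain parameter dictionaries:

from nilmtk.api import API

# Run the experiment defined by the dictionary above; training,
# disaggregation, and metric computation all happen inside API()
results = API(d)

# Inspect per-method errors (attribute names follow the tutorial notebooks)
for key, error_df in zip(results.errors_keys, results.errors):
    print(key)
    print(error_df)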

A Refactored Disaggregation API

The latest version of NILMTK introduces a new model interface for developers. To contribute a new NILM technique to NILMTK, developers now only need to be familiar with Pandas, NumPy, and Scikit-learn. Furthermore, only two functions have to be implemented: partial_fit and disaggregate_chunk. For further details on the new disaggregation API, I refer the reader to the BuildSys paper, as going into detail would exceed the scope of this blog post.
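
To give a flavour of the interface, here is a minimal sketch of a toy algorithm. The base-class import path and the exact argument structure (lists of DataFrames, (name, dataframes) pairs) are my assumptions based on the NILMTK-contrib repository, so treat the signatures as illustrative rather than authoritative:

import pandas as pd
from nilmtk.disaggregate import Disaggregator  # assumed import path

class ToyMeanDisaggregator(Disaggregator):
    """Hypothetical example: predicts every appliance at its mean power."""

    def __init__(self, params=None):
        self.MODEL_NAME = 'ToyMean'
        self.means = {}

    def partial_fit(self, train_main, train_appliances, **kwargs):
        # train_appliances is assumed to be a list of (name, [DataFrame, ...])
        for name, dfs in train_appliances:
            self.means[name] = float(pd.concat(dfs).mean().iloc[0])

    def disaggregate_chunk(self, test_mains):
        # test_mains is assumed to be a list of mains DataFrames; return one
        # prediction DataFrame (appliances as columns) per chunk
        return [
            pd.DataFrame({name: [mean] * len(chunk)
                          for name, mean in self.means.items()},
                         index=chunk.index)
            for chunk in test_mains
        ]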

Many Little Things

New installation method

The NILMTK team now provides a permanent installation method that makes use of the Anaconda ecosystem. Released NILMTK versions are published as Conda packages on a dedicated channel, and NILMTK's dependencies can be pulled from the conda-forge community repository. The formerly tricky installation process now simplifies to:

#!/usr/bin/env bash

# Create new Conda env
conda create -n nilmtk-env python=3.6
conda config --add channels conda-forge

# Install NILMTK
source activate nilmtk-env
conda install -c nilmtk nilmtk

New dataset converters

Batra et al. have fixed many issues in the existing dataset converters, e.g. problems with format conversion, dataset loaders, metadata handling, and HDF5 stores. In addition, they provide new converters for the DRED and Smart* datasets.
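
As a usage sketch: converting a raw dataset into NILMTK's HDF5 format is a one-liner. The example below uses the long-standing REDD converter as the pattern; I assume the new DRED and Smart* converters follow the same call convention:

from nilmtk.dataset_converters import convert_redd

# Convert the raw REDD low-frequency data into a NILMTK-compatible HDF5 store
convert_redd('/path/to/redd/low_freq', 'redd.h5')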

Documentation

The NILMTK team has revised the majority of the manuals. Furthermore, users will find new data-exploration plots and brand-new descriptions of the important MeterGroup and elec objects.
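
For readers unfamiliar with these objects: elec is a building's MeterGroup, which bundles the site meter(s) and the appliance-level meters. A minimal exploration sketch, assuming a converted dataset stored as redd.h5:

from nilmtk import DataSet

redd = DataSet('redd.h5')
elec = redd.buildings[1].elec   # the building's MeterGroup
print(elec)                     # overview of all meters in the building
print(elec.mains())             # site-level (aggregate) meter(s)
print(elec.submeters())         # appliance-level meters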

A Summary of Algorithms

Thanks to the NILMTK-contrib efforts, NILMTK now includes more disaggregation algorithms than ever!

  1. Mean: This algorithm was designed to serve as a simple benchmark for more complex NILM solutions. A trained model calculates and stores only the mean power value for each appliance, updating it dynamically each time the appliance is encountered during training. Prediction is performed as follows: for each value of the aggregate reading, the mean model predicts all appliances to be ON and returns each appliance's mean power value.

  2. Edge Detection: This technique divides the time series into steady and transient periods. An edge is defined as the magnitude difference between two steady states; that difference corresponds to a state switch. The implemented edge detection algorithm goes back to Hart's seminal work (known in NILMTK as Hart85). Further information.

  3. Combinatorial Optimisation (CO): This approach is related to the knapsack problem. The goal of CO is to assign states to appliances such that the difference between the aggregate signal and the sum of the appliances' power usage is minimised (a toy brute-force sketch follows this list).

  4. Discriminative Sparse Coding: Sparse coding aims to approximate an energy matrix through a representation of over-complete bases and their activations. Discriminative Sparse Coding modifies the sparse coding bases to produce activations that are close to the optimal solution of the optimisation problem. Further information.

  5. Exact Factorial Hidden Markov Model (ExactFHMM): In this approach, every appliance is represented by a hidden Markov model with K states, so that the component signal has a finite set of states. The model parameters are the state means, initial probabilities, and transition probabilities.

  6. Approximate FHMM: Inference in the exact FHMM is computationally expensive and likely to get stuck in local optima. Approximate FHMM aims to overcome these issues by relaxing the state values and transforming the inference problem into a convex program. Further information.

  7. FHMM with Signal Aggregate Constraints (FHMM+SAC): This approach extends the baseline FHMM with the prior expectation that each appliance consumes a certain total amount of energy over a given time period. Further information.

  8. Denoising Autoencoders (DAE): DAEs were designed to extract information from noisy input signals. Kelly et al. proposed a DAE for NILM that regards the aggregate power signal as a noisy input and the appliance signal as the information of interest; the aggregate signal is seen as a composition of the appliance signal of interest and noise. For further information, see the contributions of Kelly et al. and Bonfigli et al.

  9. Recurrent Neural Network (RNN): RNNs are neural networks with recurrent connections, i.e. feedback loops between neurons that allow information to persist across time steps. They are a popular choice for time-series problems and are said to deliver decent performance. Kelly et al. proposed an RNN that takes a sequence of aggregate power readings and outputs a single power value for a given appliance. To overcome the vanishing gradient problem, Kelly utilised LSTM units. Further information.

  10. Sequence-to-Sequence (Seq2Seq): This technique learns a regression map from windows of the aggregate power signal to the corresponding windows of the target appliance signal. Further information.

  11. Sequence-to-Point (Seq2Point): This technique considers a time window of the aggregate power signal and outputs the target appliance's value at the midpoint of that window. The idea behind this architecture is to exploit the correlation between a power value and its neighbours within the time series. It is claimed that Seq2Point learning can be viewed as non-linear regression. Further information.

  12. Online GRU: The online GRU disaggregator builds on the RNN implementation of Kelly et al., replacing LSTM units with more lightweight Gated Recurrent Units (GRUs). Furthermore, the layer sizes and overall architecture were optimised to reduce redundancy compared with Kelly et al.'s implementation. Further information.
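
To make the idea behind Combinatorial Optimisation concrete, here is a toy brute-force sketch. The appliance state models and power levels are made up for illustration; real implementations learn the states from training data and search far more efficiently:

import itertools

# Hypothetical appliance state models: each appliance has a small set of
# discrete power levels (0 W means OFF)
states = {
    'fridge': [0, 120],
    'kettle': [0, 2000],
}

def co_disaggregate(aggregate_watts, states):
    """Pick the state combination whose summed power best matches the aggregate."""
    names = list(states)
    best_combo, best_err = None, float('inf')
    for combo in itertools.product(*(states[n] for n in names)):
        err = abs(aggregate_watts - sum(combo))
        if err < best_err:
            best_combo, best_err = dict(zip(names, combo)), err
    return best_combo

print(co_disaggregate(2110, states))  # -> {'fridge': 120, 'kettle': 2000}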

Overview of the latest contributions

I hope this summary will be useful to some of you. Please report any misinterpretations or incorrect statements.

Best,

Christoph
