AGL has a guide to help customers read their meters manually. These instructions divide the meters into four groups each for electricity and gas. While this information gave us a starting point, we did not know much about the proportions of these groups among the meters in use, or what challenges the individual types might pose.
AGL provided us with a large dataset of images from all groups of meter types which we had to classify manually in separate stages. Regular discussions with subject matter experts at AGL were invaluable to set priorities and identify pitfalls and opportunities in classifying and reading the individual meters.
The images supplied by AGL were a genuine reflection of the customers’ challenges, and it quickly became obvious that the analog electricity meters with clock faces were the most difficult to read. At the same time, they are the most ubiquitous non-digital model still in use. We therefore decided that the POC should demonstrate how the most complex and most widespread problem could be tackled successfully, and focussed on developing a solution to identify all meters and to read clock-faced meters specifically.
It wasn’t just the meter read that needed to be captured through the app: it also had to locate the label on the meter that links it to the specific customer. This was especially important to help customers manage the complexity of modern peak/off-peak contracts, which require multiple analog meters to record power consumption for different times of the day.
We realised that, due to the complexity of the problem space and the number of data labels we would require, we had to not only understand how to read the meters ourselves but also find ways to quickly teach large groups of people to do the same, so the labelling process could be distributed. The transfer of human knowledge and understanding into a machine-readable format is still one of the most time-consuming and error-prone steps in machine learning. A model will only be as well calibrated as the data it is trained on, and it is tempting to take shortcuts because the labelling task is repetitive and tedious for humans. We solved this problem in collaboration with AGL by running multiple short labelling sessions with a limited number of volunteers, each focussed on an individual task or label type, to speed up the learning process and minimise confusion and the risk of error.
Given the target of an interactive, real-time application on a mobile platform, we decided to use the live camera output of the phone as an input stream of images for the app to read sequentially. This greatly improved the app’s resilience to glare, reflections or other effects that could make a single image unreadable.
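The idea of treating the camera output as a stream, and simply skipping frames ruined by glare or reflections, can be sketched as follows. This is a minimal illustration with hypothetical names (`FrameResult`, `read_stream`); the actual app would wrap the phone's camera API and a recognition model.

```python
from dataclasses import dataclass
from typing import Iterable, Iterator, Optional

@dataclass
class FrameResult:
    """Hypothetical per-frame result: the recognised reading (or None
    when glare or blur made the frame unreadable) and a confidence."""
    reading: Optional[str]
    confidence: float

def read_stream(frames: Iterable[FrameResult],
                min_confidence: float = 0.5) -> Iterator[FrameResult]:
    """Yield only frames whose reading cleared the confidence threshold;
    unreadable or low-confidence frames are silently dropped, so a single
    bad frame never produces a bad result."""
    for frame in frames:
        if frame.reading is not None and frame.confidence >= min_confidence:
            yield frame
```

Because the stream keeps flowing, one washed-out frame simply contributes nothing, and the next clean frame takes its place.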
The objects we wanted to detect were very complex, so we chose to divide the model into smaller, specialised subunits that would only be used when required. The first component of the architecture would therefore be an object detector that identified elements of each frame and passed only the relevant regions on to the appropriate subunit. A final logic layer would statistically evaluate the results from multiple frames in real time to reduce errors and produce a result with a detailed confidence value for each component.
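The two ideas in this architecture — routing detected elements to specialised subunits, and statistically combining per-frame results — can be sketched roughly as below. All names here are hypothetical illustrations, and the "statistical evaluation" is shown as a simple majority vote with the agreement ratio serving as the confidence value; the real system may use a more sophisticated scheme.

```python
from collections import Counter

# Hypothetical specialised readers; in the real app each would wrap a
# dedicated model (e.g. a clock-face dial reader, a label OCR unit).
def read_clock_face(crop):
    return "clock reading"

def read_label(crop):
    return "label text"

SUBUNITS = {"clock_face": read_clock_face, "label": read_label}

def process_frame(detections):
    """Route each detected element (kind, cropped region) to its
    specialised subunit; elements with no subunit are ignored."""
    return {kind: SUBUNITS[kind](crop)
            for kind, crop in detections if kind in SUBUNITS}

def aggregate(readings):
    """Majority vote over per-frame readings; the share of agreeing
    frames doubles as a simple confidence value."""
    if not readings:
        return None, 0.0
    value, count = Counter(readings).most_common(1)[0]
    return value, count / len(readings)
```

With this split, a single misread frame (say one of five frames reporting "04281" instead of "04231") is outvoted by the others, and the lowered confidence signals that the user should hold the camera steady a moment longer.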