
Large Scale Surveillance

We are interested in the problems that arise when we examine hundreds of cameras. This big data setting opens up many new questions: Given limited operator capacity, how do I select which camera to look at? How do I select models and parameters for heterogeneous stream data, for example, video feeds captured under varying environmental conditions? How do I evaluate algorithms at this scale, where training sets may exceed two weeks of video and no realistic ground truth can be obtained from operators? How do we compare algorithms?

Our Contributions: Focusing on the issues of large scale surveillance, we have developed new techniques to model “normal” data from static video cameras. This allows us to detect “abnormal events” in real time and thus enables operators to focus on the 1% of events in a video feed that warrant attention. Our algorithms drive the start-up iCetana’s anomaly detection software, which uses ideas from Compressed Sensing to enable simultaneous surveillance of many cameras deployed in diverse settings. A local city council has used our algorithms to detect loitering, anti-social behaviour and traffic violations. For more information see iCetana. The technology was: Winner, The Broadband Innovation Award, Tech23, 2010; Winner, 2011 WA Innovator of the Year.
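
To make the general idea concrete, here is a minimal sketch of anomaly detection over randomly projected video features. It is not iCetana’s implementation: the Gaussian “normal” model, the feature dimensions, and the names (random_projection, NormalModel) are hypothetical illustrations of how a compressed-sensing-style random projection can shrink high-dimensional per-frame features before modelling what “normal” looks like.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection(dim_in, dim_out, rng):
    """Gaussian random projection matrix (the compressed-sensing ingredient:
    a low-dimensional sketch that approximately preserves distances)."""
    return rng.normal(0.0, 1.0 / np.sqrt(dim_out), size=(dim_out, dim_in))

class NormalModel:
    """Fits a Gaussian to projected 'normal' frames; flags outliers."""

    def fit(self, Z):
        self.mu = Z.mean(axis=0)
        cov = np.cov(Z, rowvar=False) + 1e-6 * np.eye(Z.shape[1])
        self.prec = np.linalg.inv(cov)
        # Threshold at the 99th percentile of training distances, so that
        # roughly 1% of events are surfaced to operators.
        self.thresh = np.percentile(self._dist(Z), 99)
        return self

    def _dist(self, Z):
        # Squared Mahalanobis distance from the 'normal' mean.
        diff = Z - self.mu
        return np.einsum('ij,jk,ik->i', diff, self.prec, diff)

    def is_abnormal(self, z):
        return self._dist(z[None, :])[0] > self.thresh

# Toy usage: 10,000-dim per-frame motion features compressed to 64 dims.
P = random_projection(10_000, 64, rng)
train = rng.normal(size=(5000, 10_000))   # stand-in for "normal" frames
model = NormalModel().fit(train @ P.T)
frame = 5.0 * rng.normal(size=10_000)     # an unusually energetic frame
print(model.is_abnormal(P @ frame))       # expected: True
```

The design point the sketch illustrates is that the expensive statistics are computed in the 64-dimensional projected space rather than the raw feature space, which is what makes simultaneous monitoring of many camera streams tractable.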