This article is about estimating rainfall from ordinary surveillance camera footage, using computer algorithms to process the videos. Measuring rainfall with physical rain gauges is subject to substantial error, and so far the main way to reduce the uncertainty is to add more gauges, which raises costs. Radar can fill in what is happening in the spaces between rain gauges, but radar-based estimates are ultimately still calibrated against the gauges. New methods that improve accuracy for a given gauge coverage, or that reduce cost and gauge coverage while maintaining accuracy, would therefore be welcome.
“Opportunistic sensing” is an appealing idea for collecting unconventional data with broad spatial coverage and high resolution, but few studies have explored its feasibility in hydrology. This study develops a novel approach to measuring rainfall intensity in real‐world conditions based on videos acquired by ordinary surveillance cameras. The proposed approach employs a convex optimization algorithm to decompose a rainy image into two layers: a pure rain‐streak layer and a rain‐free background layer, where the rain streaks represent the motion blur of falling raindrops. It then estimates the instantaneous rainfall intensity via geometrical optics and photographic analyses. We investigated the effectiveness and robustness of our approach through synthetic numerical experiments and field tests. The major findings are as follows. First, the decomposition‐based identification algorithm can effectively recognize rain streaks against complex backgrounds with many disturbances. Compared to existing algorithms that consider only the temporal changes in grayscale between frames, the new algorithm prevents false identifications by also considering the intrinsic visual properties of rain streaks. Second, the proposed approach demonstrates satisfactory estimation accuracy and is robust across a wide range of rainfall intensities. It has a mean absolute percentage error of 21.8%, which is significantly lower than those of existing approaches reported in the literature, even though our approach was applied to a more complicated scene acquired with a lower‐quality device. Overall, the proposed low‐cost, high‐accuracy approach to vision‐based rain gauging significantly enhances the prospect of using existing surveillance camera networks for opportunistic hydrological sensing.
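To make the two-layer idea concrete, the sketch below separates a stack of grayscale video frames into a rain-free background and per-frame rain-streak layers. This is a much-simplified stand-in for the paper's convex-optimization decomposition, not the authors' method: here the background is assumed to be the per-pixel temporal median, and the rain layer is the positive residual (streaks are typically brighter than the scene behind them). The function names, the streak threshold, and the area-fraction proxy for intensity are all illustrative assumptions.

```python
import numpy as np

def decompose_rainy_frames(frames):
    """Split a (T, H, W) stack of grayscale frames into a rain-free
    background and per-frame rain-streak layers.

    Simplified illustration only: background = per-pixel temporal median
    (streaks are transient, so the median suppresses them); rain layer =
    positive residual above that background.
    """
    frames = np.asarray(frames, dtype=float)
    background = np.median(frames, axis=0)          # rain-free layer estimate
    rain = np.clip(frames - background, 0.0, None)  # pure rain-streak layers
    return background, rain

def streak_area_fraction(rain, threshold=10.0):
    """Fraction of pixels covered by streaks in each frame -- a crude proxy
    for instantaneous rainfall intensity (the paper instead derives intensity
    from geometrical optics and photographic analysis of the streaks)."""
    return (rain > threshold).mean(axis=(1, 2))

# Tiny synthetic example: a static 8x8 scene with one bright streak in frame 0.
frames = np.full((5, 8, 8), 50.0)
frames[0, 2, :] = 200.0
bg, rain = decompose_rainy_frames(frames)
fractions = streak_area_fraction(rain)
```

In this toy example the median background recovers the static scene exactly, and only frame 0 shows a nonzero streak area, mimicking how the decomposition isolates transient streaks from a stable background before any intensity estimation.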