Optimizing Cloud-Based Video Crowdsensing
Wearable and mobile devices are widely used for crowdsensing because they carry many sensors and accompany users everywhere. Among the sensed data, videos annotated with temporal-spatial metadata contain a huge amount of information but consume precious storage space. We study the problem of optimizing cloud-based video crowdsensing in three steps. First, we study the optimal transcoding problem on wearable and mobile cameras, and propose an algorithm that optimally selects coding parameters to fit more videos at higher quality on these cameras. Second, we investigate the throughput of different file transfer protocols from wearable and mobile devices to cloud servers, and propose a real-time algorithm that selects the best protocol under diverse network conditions, so as to leverage intermittent WiFi access.
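As a rough illustration of the protocol-selection idea (not the paper's actual algorithm), a client could keep a smoothed throughput estimate per protocol and upload over whichever currently looks fastest. The protocol names, the exponential moving average, and all parameters below are assumptions for the sketch:

```python
def update_ema(prev, sample, alpha=0.3):
    """Exponentially weighted moving average of throughput samples."""
    return sample if prev is None else alpha * sample + (1 - alpha) * prev

class ProtocolSelector:
    """Tracks a smoothed throughput estimate per candidate transfer
    protocol and picks the best one for the next upload."""

    def __init__(self, protocols):
        self.estimates = {p: None for p in protocols}

    def report(self, protocol, throughput_kbps):
        # Called after each transfer with the measured throughput.
        self.estimates[protocol] = update_ema(
            self.estimates[protocol], throughput_kbps)

    def best(self):
        # Probe any protocol that has no measurement yet, so each
        # gets at least one sample under the current network.
        unmeasured = [p for p, e in self.estimates.items() if e is None]
        if unmeasured:
            return unmeasured[0]
        return max(self.estimates, key=self.estimates.get)

# Hypothetical measurements under one WiFi access window.
sel = ProtocolSelector(["http", "ftp", "scp"])
sel.report("http", 800)
sel.report("ftp", 1200)
sel.report("scp", 600)
print(sel.best())  # ftp
```

A real implementation would also re-probe periodically, since throughput under intermittent WiFi access can change between measurement and upload.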