Wireless Sensor Networks (WSNs), composed of low-power wireless devices integrated with sensors and actuators, are emerging as a new computing paradigm that promises to seamlessly integrate the cyber and physical worlds. WSNs have great potential for many existing and novel application areas such as environmental monitoring, industrial and manufacturing automation, healthcare, and the military.
Heterogeneous Sensor Networks (HSNs), whose nodes differ in computation resources, wireless link properties, power capacities, or sensing modalities, are the natural next step in the evolution of WSNs. This evolution is driven by several factors, such as support for multiple applications, incorporation of legacy hardware, hierarchical deployment and architecture, and monitoring of multimodal phenomena. HSNs have been increasingly used in surveillance applications such as monitoring and tracking. Such applications require an information fusion framework that incorporates the data from multiple sensors. Classical target tracking approaches, such as probabilistic data association filtering and multiple hypothesis tracking, perform decision-level information fusion: local decisions are made at the sensors and then fused at a centralized location for global decision making and tracking. Such approaches suffer from poor discrimination and exponential complexity, especially for multiple targets. Target tracking approaches based on signal-level information fusion, wherein the entire stream of raw sensor data is utilized for tracking, are not feasible in WSNs due to limited communication bandwidth.
The goal of this dissertation is to develop feature-level information fusion methods for target tracking in HSNs. Feature-level information fusion, wherein features extracted from the raw data are used for tracking, is well suited to HSNs because it requires less communication bandwidth than signal-level fusion while maintaining good target discrimination capability. We design and implement a multimodal, multisensor information fusion system for target tracking in an urban environment using an HSN of audio and video sensors, and we demonstrate the system operating online in real time. Further, we extend the audio sensing component of the multimodal system by including multiple acoustic features, and we develop and implement a feature-based approach to collaborative localization of multiple acoustic sources in WSNs. We also extend the video sensing component of the system to include multiple video features, and we develop and implement an approach for collaborative target tracking in 3D space using a wireless network of smart cameras.