Previous Conferences

T23 Information Fusion in Resource-Limited Camera Networks

Length: 3 hours

Intended Audience: Researchers, system designers and developers from industry and academia working on data captured by cameras and camera networks; those who want to learn about results and methods for fusing data produced by, or requested from, smart cameras in applications such as distributed decision making, collaborative robotics, (mobile) surveillance and generic multi-camera target tracking; and those who wish to learn to use a holistic simulator for the comprehensive modelling of these rapidly emerging applications.

Description: Recent hardware advances such as multi-core high-speed platforms allow smart cameras to perform multiple tasks whilst observing a scene. Smart-camera networks are becoming ubiquitous and have the potential to enable a wide range of real-time services for vehicular ad-hoc networks, smart cities, wide-area surveillance, smart homes and disaster management. These networks of cameras with in-built processing and communication capabilities produce high-dimensional signals, exchange high-data-rate messages and generally operate with limited resources.

This tutorial will introduce the key features of modern visual sensor networks and explore the issues commonly found in such networks, which have recently become central to several applications. For smart-camera networks to enable these emerging applications, they need to adapt to unforeseen conditions and varying tasks under constrained resources. The tutorial will offer theoretical explanations followed by examples using the WiseMNet++ simulator; a small standalone sketch of one such fusion rule is given below.
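To give a flavour of the fusion rules the tutorial covers, the following sketch implements covariance intersection for scalar estimates in C++, the language assumed by the prerequisites. It is a self-contained toy, not part of WiseMNet++: the Estimate structure, the fuse_ci function and the weight heuristic are all invented for this illustration.

#include <cstdio>

struct Estimate {
    double x;  // estimated target position along one axis (e.g. metres)
    double P;  // variance (uncertainty) of that estimate
};

// Covariance intersection (CI): a fusion rule that stays consistent even
// when the cross-correlation between the two estimates is unknown, which
// is typical when camera nodes exchange already-fused tracks.
//   1/Pf = w/Pa + (1-w)/Pb
//   xf   = Pf * ( w*xa/Pa + (1-w)*xb/Pb ),   with w in [0,1].
// In practice w is chosen by minimising the trace or determinant of the
// fused covariance; here a simple heuristic trusts the sharper estimate more.
Estimate fuse_ci(const Estimate& a, const Estimate& b) {
    const double w = b.P / (a.P + b.P);              // heuristic weight
    const double info = w / a.P + (1.0 - w) / b.P;   // fused information
    Estimate f;
    f.P = 1.0 / info;
    f.x = f.P * (w * a.x / a.P + (1.0 - w) * b.x / b.P);
    return f;
}

int main() {
    Estimate cam_a{10.4, 4.0};   // distant camera: noisier estimate
    Estimate cam_b{ 9.8, 1.0};   // nearby camera: sharper estimate
    Estimate fused = fuse_ci(cam_a, cam_b);
    std::printf("fused position %.2f m, variance %.2f\n", fused.x, fused.P);
}

If the two estimates were known to be independent, the same structure would reduce to standard inverse-variance fusion; CI is of interest in camera networks precisely because such independence rarely holds once nodes exchange fused tracks.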

Prerequisites: Attendees are expected to be familiar with basic concepts in probability and statistics. For the practical part of the tutorial, attendees will benefit from knowledge of C/C++ programming.

Presenters: Andrea Cavallaro and Juan C. SanMiguel

Andrea Cavallaro is Professor of Multimedia Signal Processing and Director of the Centre for Intelligent Sensing at Queen Mary University of London, UK. He received his Ph.D. in Electrical Engineering from the Swiss Federal Institute of Technology (EPFL), Lausanne, in 2002. He was a Research Fellow with British Telecommunications (BT) in 2004/2005 and was awarded the Royal Academy of Engineering Teaching Prize in 2007; three student paper awards on target tracking and perceptually sensitive coding at IEEE ICASSP in 2005, 2007 and 2009; and the best paper award at IEEE AVSS 2009. Prof. Cavallaro is Associate Editor for the IEEE Transactions on Image Processing. He is an elected member of the IEEE Signal Processing Society Image, Video, and Multidimensional Signal Processing Technical Committee, and chair of its Awards Committee. He served as an elected member of the IEEE Signal Processing Society Multimedia Signal Processing Technical Committee, as Area Editor for the IEEE Signal Processing Magazine, as Associate Editor for the IEEE Transactions on Multimedia and the IEEE Transactions on Signal Processing, and as Guest Editor for seven international journals. He was General Chair for IEEE/ACM ICDSC 2009, BMVC 2009, M2SFA2 2008, SSPE 2007 and IEEE AVSS 2007, and Technical Program Chair of IEEE AVSS 2011, the European Signal Processing Conference (EUSIPCO 2008) and WIAMIS 2010. He has published more than 150 journal and conference papers, one monograph on video tracking (2011, Wiley) and three edited books: Multicamera Networks (2009, Elsevier); Analysis, Retrieval and Delivery of Multimedia Content (2012, Springer); and Intelligent Multimedia Surveillance (2013, Springer).

In addition to giving more than 40 invited conference talks and industrial and university seminars, he has served as chair, organizer and editor of related conferences, journal special issues, tutorials and short courses (CVPR, ICASSP, ICIP, ICDSC) and of conference special sessions, and he has a well-published research record in distributed signal processing and tracking, including IEEE award-winning publications.
For details of previous talks and tutorials, see http://www.eecs.qmul.ac.uk/~andrea/talks.html

Juan C. SanMiguel is Associate Professor (interim) in the Department of Electronic Technology and Communications at Universidad Autónoma de Madrid, Spain. He received the M.S. degree in Electrical Engineering ("Ingeniero de Telecomunicación") in 2006 and the Ph.D. in Computer Science and Telecommunication in 2011, both from Universidad Autónoma de Madrid. Since 2005 he has been with the Video Processing and Understanding Lab (VPU-Lab) at Universidad Autónoma de Madrid as a researcher and teaching assistant. From June 2013 to June 2014 he was a postdoctoral researcher at Queen Mary University of London (UK) under a Marie Curie IAPP fellowship, and in 2015 he visited the Institute of Computing Technology (ICT) of the Chinese Academy of Sciences (CAS) in Beijing, China. He serves as a reviewer for several international journals (IEEE TIP, IEEE CSVT, Elsevier IMAVIS, …) and conferences (IEEE ICIP, IEEE WACV, IEEE AVSS, …). He has published more than 35 journal and conference papers. His current research interests focus on multi-camera activity understanding and performance evaluation, oriented to target detection and tracking.

He has also lectured at the summer school on video surveillance (2008 and 2010 editions) and in training courses for the Spanish law enforcement agency (Guardia Civil) in 2013 and 2015, and gave an invited conference talk at the X Conference on Science and Technology ESPE 2015 (Quito, Ecuador) and an invited seminar at the Chinese Academy of Sciences (CAS) in Beijing, China, in 2015. He received the award for the best Ph.D. thesis from the School of Electrical Engineering, Universidad Autónoma de Madrid, in 2014, and was runner-up for the best Ph.D. thesis in multimedia award, given by the Spanish electrical engineering association, in 2013.

