* [[Special_Sessions#ss9| SS9 Sequential Monte Carlo Methods for Complex Systems]]
* [[Special_Sessions#ss10| SS10 Multi-Level Fusion: bridging the gap between high and low level fusion]]
* [[Special_Sessions#ss11| SS11 Kalman Filter Based Nonlinear Estimation]]
* [[Special_Sessions#ss12| SS12 Advances in Distributed Kalman Filtering]]
* [[Special_Sessions#ss13| SS13 Applications of Data Analytics and Information Fusion to Finance, Business, and Marketing]]
* [[Special_Sessions#ss14| SS14 Sensor, Resources, and Process Management for Information Fusion Systems]]
* [[Special_Sessions#ss15| SS15 Multistatic Tracking]]
* [[Special_Sessions#ss16| SS16 Multimodal Image Processing and Fusion]]
* [[Special_Sessions#ss17| SS17 Maritime Domain Awareness]]
* [[Special_Sessions#ss18| SS18 Positioning in Wide Area Networks]]
* [[Special_Sessions#ss19| SS19 Evaluation of Technologies for Uncertainty Reasoning]]
* [[Special_Sessions#ss20| SS20 Extended Object and Group Tracking]]
* [[Special_Sessions#ss21| SS21 Information Fusion in Multi-Biometric Systems]]
* [[Special_Sessions#ss22| SS22 Situational Understanding Through Equivocal Sources]]
</div>
|}
'''Description:''' The exploitation of all relevant information originating from a growing mass of heterogeneous sources, both device-based (sensors, video, etc.) and human-generated (text, voice, etc.), is a key factor in producing a timely, comprehensive and maximally accurate description of a situation or phenomenon. There is a growing need to effectively identify the relevant information from the mass available, and to exploit it through automatic fusion for timely, comprehensive and accurate situation awareness. Even when exploiting multiple sources, most fusion systems are developed for combining just one type of data (e.g. positional data) in order to achieve a certain goal (e.g. accurate target tracking), without considering other relevant information that could be of different origin and type, and with possibly very different representation (e.g. a priori knowledge, contextual knowledge, mission orders, risk maps, availability and coverage of sensing resources, etc.), but still highly significant for augmenting the knowledge about observed entities. This latter type of information typically belongs to different fusion levels and rarely ends up being systematically and automatically exploited. The result is often stove-piped systems dedicated to a single fusion task with limited robustness, caused by the lack of an integrative approach that processes sensor data (low-level fusion) and semantically rich information (high-level fusion) in a holistic manner, thus effectively implementing a multi-level processing architecture and fusion process. The proposed special session will bring together researchers working on fusion techniques and algorithms often considered to be at different and disjoint levels, thus fostering discussion on the commonalities and differences in their research methodologies, and proposing viable multi-level fusion solutions to address challenging problems and relevant applications.

'''Organizers:''' Lauro Snidaro, Jesus Garcia Herrero, Wolfgang Koch
|-
|}
|-
<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss11"></div>
<!-- SS11 Kalman Filter Based Nonlinear Estimation -->
| class="MainPageBG" style="width:100%; border:1px solid #bdd6c6; background:#e7f7e7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#e7f7e7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#d6efd6; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #bdd6c6; text-align:left; color:#000; padding:0.2em 0.4em;">SS11 Kalman Filter Based Nonlinear Estimation</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Nonlinear state estimation is an important component of navigation, robotics, object tracking, and many other current research fields. Besides popular but computationally expensive particle filters, variants of nonlinear Kalman filters and LMMSE estimators are widely used methods for state estimation. Such filters include, for example, the unscented Kalman filter, the divided difference filter, the cubature Kalman filter, and iterated Kalman filters. The proposed session aims to cover recent advances in the area of nonlinear Kalman filters, with an emphasis on sampling and sigma-point set design, linearization techniques, and the performance evaluation and applications of nonlinear Kalman filter based estimators.
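To make the sigma-point idea mentioned above concrete, here is a minimal sketch of the unscented transform that underlies the unscented Kalman filter; the function name, the choice of a Cholesky square root, and the scaling parameter `kappa` are illustrative assumptions, not part of the session description:

```python
import numpy as np

def unscented_transform(mean, cov, f, kappa=0.0):
    """Propagate (mean, cov) through a nonlinearity f using 2n+1 sigma points.

    Minimal sketch of the classic unscented transform; `kappa` is the
    usual scaling parameter (an illustrative choice, not prescribed here).
    """
    n = mean.size
    # Columns of S are the sigma-point offsets: S @ S.T = (n + kappa) * cov.
    S = np.linalg.cholesky((n + kappa) * cov)
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])  # shape (2n+1, n)
    # Standard weights: kappa/(n+kappa) for the mean point, 1/(2(n+kappa)) else.
    w = np.full(2 * n + 1, 1.0 / (2.0 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    # Push each sigma point through f and recombine the first two moments.
    ys = np.array([f(s) for s in sigmas])
    y_mean = w @ ys
    d = ys - y_mean
    y_cov = (w[:, None] * d).T @ d
    return y_mean, y_cov
```

For a linear map the transform is exact, which is a handy sanity check when experimenting with sigma-point set designs.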
+ | |||
+ | '''Organizers:''' Jannik Steinbring, Jindřich Duník, Uwe D. Hanebeck, and Ondřej Straka | ||
+ | |- | ||
+ | |} | ||
+ | | style="border:1px solid transparent;" |<br /> | ||
+ | |- | ||
+ | |||
<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss12"></div>
<!-- SS12 Advances in Distributed Kalman Filtering -->
| class="MainPageBG" style="width:100%; border:1px solid #fff784; background:#fffff7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#fffff7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#fff7bd; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #fff784; text-align:left; color:#000; padding:0.2em 0.4em;">SS12 Advances in Distributed Kalman Filtering</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' The rapid advances in sensor and communication technologies are accompanied by an increasing demand for distributed state estimation methods. Centralized implementations of Kalman filter algorithms are often too costly in terms of communication bandwidth, or simply inapplicable, for instance when mobile ad-hoc networks of autonomously operating state estimation systems are considered. Compared to centralized approaches, distributed or decentralized Kalman filtering is considerably more involved. In particular, the treatment of dependent information shared by different state estimation systems is a central issue. This special session provides a platform to discuss recent developments and to share ideas on distributed Kalman filtering and related topics.
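One classic answer to the "dependent information" problem mentioned above is covariance intersection, which fuses two estimates without knowing their cross-correlation. The sketch below is illustrative only; the grid search over the weight is an assumed, simple optimization choice, not a method prescribed by the session:

```python
import numpy as np

def covariance_intersection(x_a, P_a, x_b, P_b, n_grid=100):
    """Fuse two estimates (x_a, P_a), (x_b, P_b) with unknown cross-correlation.

    Covariance intersection forms a convex combination of the information
    matrices; the weight is picked here by a simple grid search minimizing
    the trace of the fused covariance (an illustrative criterion).
    """
    I_a, I_b = np.linalg.inv(P_a), np.linalg.inv(P_b)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid + 1)[1:-1]:  # skip degenerate endpoints
        P = np.linalg.inv(w * I_a + (1.0 - w) * I_b)
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), w, P)
    _, w, P = best
    x = P @ (w * I_a @ x_a + (1.0 - w) * I_b @ x_b)
    return x, P
```

Unlike a naive Kalman-style combination, the fused covariance stays consistent even when the two estimates share unmodeled common information.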
+ | |||
+ | '''Organizers:''' Benjamin Noack, Felix Govaers, Uwe D. Hanebeck, and Wolfgang Koch | ||
+ | |- | ||
+ | |} | ||
+ | | style="border:1px solid transparent;" |<br /> | ||
+ | |- | ||
+ | |||
<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss13"></div>
<!-- SS13 Applications of Data Analytics and Information Fusion to Finance, Business, and Marketing -->
| class="MainPageBG" style="width:100%; border:1px solid #d6bdde; background:#f7eff7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f7eff7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:4px; background:#e7deef; font-family:inherit; font-size:125%; font-weight:bold; border:1px solid #d6bdde; text-align:left; color:#000; padding:0.2em 0.4em;">SS13 Applications of Data Analytics and Information Fusion to Finance, Business, and Marketing</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' This special session addresses the application of data fusion and predictive analytics to finance, business, and marketing. Finance and business are critical application areas for information fusion and data analytics, and many of the techniques discussed in the information fusion community are directly applicable to this emerging and important application area. The goal of the session is to open up a forum for data scientists and engineers to share their latest experience and insights on applying predictive modeling and data analytics techniques to applications in the finance and business areas.

'''Organizers:''' KC Chang and Zhi Tian
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss14"></div>
<!-- SS14 Sensor, Resources, and Process Management for Information Fusion Systems -->
| class="MainPageBG" style="width:100%; border:1px solid #f36766; background:#f9d6c9; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f9d6c9;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#f5baa3; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f36766; text-align:left; color:#000; padding:0.2em 0.4em;">SS14 Sensor, Resources, and Process Management for Information Fusion Systems</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Continuously increasing performance requirements create the need to gain and exploit information optimally, giving rise to a broad field of optimization problems under uncertainty. Advances in communication, information and sensor technologies are driving a trend towards complex, adaptive and reconfigurable sensor systems. Such a sensor system can have a large scope for online reconfiguration, which typically exceeds the management capability of a human operator. In addition, the sensor system can face a variety of fundamental resource limitations, such as a limited power supply, a finite total time budget, a narrow field of sight, limited on-board processing capability, or constraints on the communication channels between the sensor nodes. Consequently, effective sensor scheduling and resource management is a key factor for the performance of the emerging generation of adaptive and reconfigurable sensor systems.

In the case of stationary sensors, it is usually desirable to schedule measurements so as to maximize the benefit with respect to the objectives of the sensor system, whilst avoiding redundant measurements. This benefit can be quantified by an appropriate metric, for example a task-specific metric, information gain, or utility. For mobile sensors it is also necessary to consider the sensor platform navigation (including its uncertainties), as the sensor-scenario geometry can significantly affect the performance of current and future information acquisition, e.g., for coordinated exploration in disaster areas. Additionally, considering the uncertainty of the controlled variable, either explicitly or implicitly, allows the coupling of navigation and sensing to exploit the dual effect and improve control performance.

'''Organizers:''' Christof Chlebek, Fotios Katsilieris, Maxim Dolgov, and Uwe D. Hanebeck
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss15"></div>
<!-- SS15 Multistatic Tracking -->
| class="MainPageBG" style="width:100%; border:1px solid #a3babf; background:#f5fdff; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f5fdff;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#ceecf2; font-family:inherit; font-size:125%; font-weight:bold; border:1px solid #a3babf; text-align:left; color:#000; padding:0.2em 0.4em;">SS15 Multistatic Tracking</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' This special session focuses on information fusion and target tracking algorithms for multistatic sonar and radar. Recent years have seen increasing interest in fusion and tracking algorithms for multistatic systems. Challenges include the effective treatment of bistatic sensor nodes, non-linear measurements, and false alarm overloading. Recent progress in multistatic tracking has been facilitated by the Multistatic Tracking Working Group (MSTWG), an International Society of Information Fusion (ISIF) working group whose purpose is to evaluate the large variety of multistatic tracking algorithms available amongst its members on common data sets with common metrics. The reporting of these results and of other related multistatic topics has been of great value to the MSTWG and ISIF, in the form of numerous papers and similar special sessions at previous FUSION conferences since 2006. A special session on multistatic sonar/radar tracking at FUSION’16 will enable current MSTWG outputs, as well as contributions from others outside the group, to be documented.

'''Organizers:''' Garfield R. Mellema and David W. Krout
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss16"></div>
<!-- SS16 Multimodal Image Processing and Fusion -->
| class="MainPageBG" style="width:100%; border:1px solid #bdd6c6; background:#e7f7e7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#e7f7e7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#d6efd6; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #bdd6c6; text-align:left; color:#000; padding:0.2em 0.4em;">SS16 Multimodal Image Processing and Fusion</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Since the launch of the first version of the Microsoft Kinect in 2010, setting up networks based on multimodal image sensors has become extremely popular. The novelty of these devices includes the availability of not only color information, but also infrared and depth information of a scene, at a price affordable to laymen. The combination of multiple sensors and image modalities has many advantages, such as simultaneous coverage of large environments, increased resolution, redundancy, multimodal scene information, and robustness against occlusion. However, in order to exploit these benefits, multiple challenges also need to be addressed: synchronization, calibration, registration, multi-sensor fusion, large amounts of data, and last but not least, sensor-specific stochastic and set-valued uncertainties. This Special Session addresses fundamental techniques, recent developments and future research directions in the field of multimodal image processing and fusion.

'''Organizers:''' Florian Faion, Antonio Zea, and Uwe D. Hanebeck
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss17"></div>
<!-- SS17 Maritime Domain Awareness -->
| class="MainPageBG" style="width:100%; border:1px solid #fff784; background:#fffff7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#fffff7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#fff7bd; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #fff784; text-align:left; color:#000; padding:0.2em 0.4em;">SS17 Maritime Domain Awareness</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' According to the International Maritime Organization, Maritime Domain Awareness (MDA) is the effective understanding of any activity associated with the maritime environment that could impact security, safety, the economy or the environment. A particular interest in the field of MDA lies in the collection of essential information about individuals, groups of people or organizations acting in the maritime domain. This information is used to monitor activities in such a way that trends can be identified and anomalies differentiated. The goal is the protection of territorial waters against threats such as military interventions or terrorist attacks, and the sustainment of global trade, which depends on the safety of the oceans. Achieving this goal involves a number of different facilities, equipment and technologies. Most important are the sensor carriers, which are necessary for information gathering. Among them are such diverse types as submarines, surface vessels, unmanned or autonomous underwater and surface vehicles, airplanes and even satellites, sharing information through a communication network. The sensors in use comprise sonar sensors, electro-optical and infrared cameras, ESM sensors, radars of various kinds, AIS transponders and receivers, and many more. The data produced by these sensors alone are insufficient: in order to allow operational decision makers to anticipate threats and take the initiative to defeat them, the data must be collected and analyzed with the aid of computer analysis algorithms. Many of these algorithms build on modern sensor data and information fusion methods.

This Special Session addresses fundamental techniques, recent developments and future research directions in the field of MDA sensor data fusion. It brings together academic, industry and government experts working on topics related to the field of MDA.

'''Organizers:''' Antonio Zea, Florian Faion, and Uwe D. Hanebeck
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss18"></div>
<!-- SS18 Positioning in Wide Area Networks -->
| class="MainPageBG" style="width:100%; border:1px solid #d6bdde; background:#f7eff7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f7eff7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:4px; background:#e7deef; font-family:inherit; font-size:125%; font-weight:bold; border:1px solid #d6bdde; text-align:left; color:#000; padding:0.2em 0.4em;">SS18 Positioning in Wide Area Networks</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Positioning of devices in wireless networks is becoming ubiquitous and has many applications, such as surveillance, the Internet of Things, health care, intelligent transportation systems, and logistics. The devices can be mobile, stationary or nomadic. Estimating the position of devices is subject to several challenges, including deployment aspects, the proper reception of sufficiently many radio signals, measurement error modelling, motion model aspects, and energy and resource efficiency. Furthermore, the analysis of a large number of positioned devices and their associated movements is highly relevant in some use cases.

This special session calls for both theoretically and practically oriented work in the different domains of positioning for devices in indoor and/or outdoor environments.

'''Organizers:''' Fredrik Gunnarsson, Carsten Fritsche, Fredrik Gustafsson, Lyudmila Mihaylova, Martin Ulmke, Feng Yin, and Hans Driessen
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss19"></div>
<!-- SS19 Evaluation of Technologies for Uncertainty Reasoning -->
| class="MainPageBG" style="width:100%; border:1px solid #f36766; background:#f9d6c9; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f9d6c9;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#f5baa3; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #f36766; text-align:left; color:#000; padding:0.2em 0.4em;">SS19 Evaluation of Technologies for Uncertainty Reasoning</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' The session will focus on three topics: (1) summarizing the state of the art in uncertainty analysis, representation, and evaluation; (2) discussing metrics for uncertainty representation; and (3) surveying uncertainty at all levels of fusion. The impact to the ISIF community would be an organized session with a series of methods in uncertainty representation coordinated with evaluation. The techniques discussed and the questions and answers would be important for researchers in the ISIF community; the bigger impact, however, would be for the customers of information fusion systems, who need to determine how to measure, evaluate, and approve systems that assess the situation beyond Level 1 fusion. The customers of information fusion products would gain guidelines for drafting requirements documentation, for assessing the gain of fusion systems over current techniques, and for identifying issues that are important in information fusion system designs. One of the main goals of information fusion is uncertainty reduction, which is dependent on the representation chosen. Uncertainty representation differs across the various levels of information fusion (as defined by the JDL/DFIG models). Given the advances in information fusion systems, there is a need to determine how to represent and evaluate situational (Level 2 fusion), impact (Level 3 fusion) and process refinement (Level 5 fusion) uncertainty, which is not well standardized for the information fusion community.

'''Organizers:''' Paulo Costa, Kathryn Laskey, Anne-Laure Jousselme, Erik Blasch, Jürgen Ziegler, Valentina Dragos, Pieter DeVilliers, and Gregor Pavlin
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss20"></div>
<!-- SS20 Extended Object and Group Tracking -->
| class="MainPageBG" style="width:100%; border:1px solid #a3babf; background:#f5fdff; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#f5fdff;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#ceecf2; font-family:inherit; font-size:125%; font-weight:bold; border:1px solid #a3babf; text-align:left; color:#000; padding:0.2em 0.4em;">SS20 Extended Object and Group Tracking</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' Typical object tracking algorithms assume that the object can be modeled as a single point without an extent. However, there are many scenarios in which this assumption is not reasonable. For example, when the resolution of the sensor device is higher than the spatial extent of the object, a varying number of measurements from spatially distributed reflection centers is received. Furthermore, a collectively moving group of point objects can be seen as a single extended object because of the interdependency of the group members.

This Special Session addresses fundamental techniques, recent developments and future research directions in the field of extended object and group tracking.
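As a toy illustration of the point above (my own sketch, not a method endorsed by the session), several detections assumed to stem from one extended object already carry shape information: the centroid and a sample scatter matrix give a crude extent estimate.

```python
import numpy as np

def centroid_and_extent(detections):
    """Estimate centroid and a scatter-matrix extent from detections assumed
    to stem from a single extended object (toy sketch, not a full tracker)."""
    Z = np.asarray(detections, dtype=float)   # shape (k, dim)
    centroid = Z.mean(axis=0)
    d = Z - centroid
    # Sample scatter matrix; its eigen-structure hints at the object's shape.
    extent = d.T @ d / max(len(Z) - 1, 1)
    return centroid, extent
```

Full extended-object trackers go further and propagate such an extent matrix over time alongside the kinematic state.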
+ | |||
+ | '''Organizers:''' Marcus Baum, Uwe D. Hanebeck, Peter Willett, and Wolfgang Koch | ||
+ | |- | ||
+ | |} | ||
+ | | style="border:1px solid transparent;" |<br /> | ||
+ | |- | ||
+ | |||
<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss21"></div>
<!-- SS21 Information Fusion in Multi-Biometric Systems -->
| class="MainPageBG" style="width:100%; border:1px solid #bdd6c6; background:#e7f7e7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#e7f7e7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#d6efd6; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #bdd6c6; text-align:left; color:#000; padding:0.2em 0.4em;">SS21 Information Fusion in Multi-Biometric Systems</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' This session will focus on the latest innovations and best practices in the emerging field of multi-biometric fusion. Biometrics aims to make identity recognition decisions based on the physical or behavioral characteristics of individuals. Multi-biometrics aims to outperform conventional biometric solutions by increasing accuracy and robustness to intra-person variations and noisy data. It also reduces the effect of the non-universality of biometric modalities and the vulnerability to spoof attacks. Fusion is performed to build a unified biometric decision based on the information collected from different biometric sources. This unified result must be constructed in a way that guarantees the best possible performance while taking into account the efficiency of the solution.

The topic of this special session, Information Fusion in Multi-Biometrics, requires the development of innovative and diverse solutions. Those solutions must take into account the nature of the biometric information sources as well as the level of fusion suitable for the application at hand. The fused information may include more general, non-biometric information, such as the estimated age of the individual or the environment in the background.

This special session will be supported by the European Association for Biometrics (EAB). The EAB will provide technical support by engaging experts for reviews and will help with the dissemination and exploitation of the event.

'''Organizers:''' Naser Damer and Raghavendra Ramachandra
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-

<!-- FUSION 2016 Accepted Special Sessions -->
{| id="mp-upper" style="width: 80%; margin:4px 0 0 0; background:none; border-spacing: 0px;"
<div id="ss22"></div>
<!-- SS22 Situational Understanding Through Equivocal Sources -->
| class="MainPageBG" style="width:100%; border:1px solid #fff784; background:#fffff7; vertical-align:top; color:#000;" |
{| id="mp-left" style="width:100%; vertical-align:top; background:#fffff7;"
| style="padding:2px;" | <h2 id="mp-tfa-h2" style="margin:3px; background:#fff7bd; font-family:inherit; font-size:120%; font-weight:bold; border:1px solid #fff784; text-align:left; color:#000; padding:0.2em 0.4em;">SS22 Situational Understanding Through Equivocal Sources</h2>
|-
| style="color:#000;" | <div id="mp-tfa" style="padding:2px 5px">
'''Description:''' In contrast to traditional sensing sources, the proliferation of soft information sources, especially multimodal social media, has made them a viable medium for obtaining insights about events and their evolution in the environment. Fusing information from such sources could improve the situational understanding of decision makers, enabling them to make informed decisions in rapidly changing, complex environments. However, the equivocal nature of such sources makes decision-making challenging, especially in critical situations where information reliability plays a key role.

The aims of this special session are thus to: (a) discuss how different strands of information can be processed, analysed, and combined to model the equivocality in information; (b) investigate how such models can be exploited to improve the credibility and reliability of the fused information; and (c) explore frameworks that combine such information to assist decision makers, be they central or edge users.

'''Organizers:''' Geeth de Mel, Murat Sensoy, Lance Kaplan, and Tien Pham
</div>
|-
|}
| style="border:1px solid transparent;" |<br />
|-
+ | |} | ||
+ | | style="border:1px solid transparent;" |<br /> | ||
+ | |- | ||
__NOTOC____NOEDITSECTION__
Revision as of 10:05, 22 February 2016