TUTORIAL SCHEDULE

 MONDAY, 6 JULY

Duration per tutorial: 3 hours

Fusion 2020 will be hosted on the Whova virtual conferencing app. Tutorial presentations will be made available as a series of 20-minute videos to mitigate screen fatigue. Tutorial registration includes access to the recorded videos and to two live Q&A sessions of 40 minutes each, scheduled to accommodate attendees in different time zones. The Q&A sessions are scheduled as follows:

Eastern Session:
6 AM UTC (4 PM Sydney, 8 AM Pretoria), for attendees from Australasia and Asia
Western Session:
5 PM UTC (5 PM London, 7 PM Pretoria, 10 AM Los Angeles, 12 PM New York), for attendees from Africa, Europe and the Americas

During each of these Q&A slots there will be 20 parallel virtual rooms (one per tutorial), which attendees can drop in and out of to ask questions. The live Q&A sessions will be recorded and made available to attendees afterwards. While watching the tutorial videos, attendees are encouraged to post questions on the Whova discussion board; these can be answered asynchronously by the presenters or discussed during the live Q&A sessions.

The tutorial material and presentations will be released towards the end of the week before the conference, giving attendees sufficient time to watch the videos before the live Q&A sessions on Monday 6 July.


CLICK HERE TO INDICATE WHICH TUTORIALS YOU WILL BE ATTENDING


CLICK HERE TO DOWNLOAD INFORMATION ON TUTORIALS

Tutorial No. | Title | Presenters
T1 | Registration and Fusion of Multiple Sensors for the 3D Reconstruction of the Environment with Classical and Deep Learning Methods | Nina Felicitas Heide
T2 | Multitarget Tracking and Multisensor Information Fusion | Yaakov Bar-Shalom
T4 | Overview of High-Level Information Fusion Theory, Models, and Representations | Erik Blasch
T5 | An Introduction to Track-to-Track Fusion and the Distributed Kalman Filter | Felix Govaers
T7 | Deep Convolutional Neural Network-based Multisensor Fusion for Computer Vision: Opportunities and Challenges | Fahimeh Farahnakian
T8 | Practical use of Belief Function Theory: Tools and examples of applications | Sylvie Le Hegarat-Mascle
T9 | Multisensor Data Fusion for Industry 4.0 | Claudio de Farias and José Brancalion
T10 | Evaluation of Technologies for Uncertainty Reasoning | Paulo Costa, Kathryn Laskey and Erik Blasch
T11 | Estimation of Noise Parameters in State Space Models: Overview, Algorithms, and Comparison | Ondrej Straka and Jindrich Dunik
T12 | Stone Soup: an open source tracking and state estimation framework; principles, use and applications | Richard Green, Jordi Barr, Steven Hiscocks, David Kirkland and Lyudmil Vladimirov
T13 | Fusion using belief functions: source reliability and conflict | Frédéric Pichon and Anne-Laure Jousselme
T14 | Poisson multi-Bernoulli mixtures for multiple target tracking | Angel Garcia-Fernandez, Yuxuan Xia, Karl Granstrom and Lennart Svensson
T16 | Context-enhanced Information Fusion | Lauro Snidaro and Erik Blasch
T17 | Tutorial on Robust Kalman Filtering | Florian Pfaff and Benjamin Noack
T18 | Localization-of-Things: Foundations and Data Fusion | Moe Win and Andrea Conti
T21 | Deep Feature Learning to Model Brain Network Activities | Narges Norouzi

TUTORIAL CO-CHAIRS

Denis Garagić, Stefano Coraluppi
Email: tutorials@fusion2020.org