
IEEE International School of Imaging Keynote Lectures

  • Distinguished Professor of Computer Science at the Graduate Center/CSI, CUNY
  • IEEE Fellow, Fellow SPIE, Fellow of IS&T, Fellow AAAS, Fellow AAIA

Visual Perception-Driven Image Quality Measurements: Principles, Future Trends, and Applications

Bio-inspired computer vision is about learning computer vision algorithms from computational neuroscience, cognitive science, and biology, and applying them to the design of real-world image-processing systems. More specifically, this field aims to give computers the ability to “see” just as humans do. Recently, many useful image processing algorithms have been developed with varying degrees of correspondence with biological vision studies. This is natural, since a biological system can provide a source of inspiration for new computationally efficient and robust vision models and measurements. At the same time, image processing tools may give new insights for understanding biological visual systems. Digital images are subject to various distortions during acquisition, processing, transmission, compression, storage, and reproduction. How can we automatically and quantitatively predict perceived image quality? In this talk, we present image quality measurements originating in visual perception studies: their principles, future trends, and applications. We will also present our recent research and a synopsis of the current state-of-the-art results in image quality measurement, and discuss future trends in these technologies and the associated commercial impact and opportunities.
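As a concrete example of a perception-driven quality measure, the widely used SSIM index compares luminance, contrast, and structure between a reference and a distorted image. The sketch below is illustrative only (the helper name `global_ssim` is hypothetical): it computes a simplified single-window variant over the whole image rather than the sliding-window average used in practice.

```python
import numpy as np

def global_ssim(x, y, data_range=255.0):
    """Global (single-window) SSIM between two grayscale images.

    Standard SSIM averages the index over local sliding windows;
    this simplified sketch computes the statistics once, over the
    whole image.
    """
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    c1 = (0.01 * data_range) ** 2   # stabilizing constants from the
    c2 = (0.03 * data_range) ** 2   # original SSIM formulation
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
print(global_ssim(img, img))                                   # ≈ 1.0
print(global_ssim(img, img + rng.normal(0, 25, img.shape)))    # noticeably below 1
```

Identical images score 1, and additive noise lowers the score, which is the basic behavior any perception-driven full-reference metric must exhibit.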
  • PhD, O.ONT, FCAHS, FCCPM, FCOMP, FSPIE, FIEEE, FAAPM, FIOMP, FAIUM
  • Professor and Chair of the Division of Imaging Sciences of the Department of Medical Imaging at Western University
  • Founder and past Director of the Imaging Research Laboratories (IRL) at the Robarts Research Institute

Applications and trends for the use of 3D ultrasound in image-guided interventions and point-of-care diagnostic applications

Our laboratory has been developing 3D ultrasound imaging systems that can be used for a variety of diagnostic and interventional applications. Our approach is to use a motorized fixture to translate, tilt, or rotate the ultrasound transducer with predefined, user-controlled spatial and angular spacing. Any manufacturer’s ultrasound transducer can be housed in the fixture, and images from the ultrasound machine are acquired into a computer via a digital frame grabber. The acquired images are reconstructed into a 3D image as they are acquired during a 6-10 s scan. In this lecture, we describe the use of 3D ultrasound in point-of-care diagnostic applications and image-guided cancer therapy, and discuss potential trends in the technology.

Point of care: As our population ages, there is an increasingly critical need for innovative and low-cost diagnostic and treatment methods. A rapidly aging population, combined with an over-burdened healthcare system, urgently requires the development of accurate and low-cost diagnostic imaging systems. Using our 3D ultrasound platform technology, we are able not only to improve the accuracy, effectiveness, and efficiency of diagnosis of breast, thyroid, and oral cancers as well as vascular diseases, but also to develop commercially viable, novel imaging-based devices that are cost-effective. Furthermore, combining 2D and 3D ultrasound imaging with AI tools has the potential to identify pathology more efficiently and accurately, which can lead to better decision-making, better care, and lower healthcare delivery costs.

Image-guided interventions: Procedures such as prostate and gynecologic brachytherapy and focal liver tumor ablation require positional accuracy of the needles delivering the therapy while sparing critical organs.
However, accurate guidance and placement of these needles in soft tissue are challenging, as needles can deflect due to heterogeneous tissue properties, tissue/organ deformation and movement, and a lack of accurate tracking. Although 2D ultrasound imaging is used extensively, it has limitations: because the images are two-dimensional, planning and verifying the needle positions requires a 3D view of the anatomy. Our laboratory has been developing complete 3D ultrasound-guided interventional systems, including robotic approaches and software tools that allow physicians to guide needles to deliver therapy accurately.
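The reconstruction step, turning a sweep of tilted 2D frames into a Cartesian image, can be illustrated with a toy nearest-neighbour scan converter. This is a sketch only: the function name `fan_to_cartesian` is hypothetical, the example works on a single slice, and the actual systems use proper interpolation and full 3D geometry.

```python
import numpy as np

def fan_to_cartesian(frames, angles_deg, n_out=128, max_depth=None):
    """Nearest-neighbour scan conversion of a tilt sweep.

    frames:     (n_frames, n_depth) samples, one scan line per tilt angle
    angles_deg: sorted tilt angle of each line about the transducer face
    Returns an (n_out, n_out) Cartesian slice; a full 3D volume is the
    same idea with one more axis.
    """
    n_frames, n_depth = frames.shape
    if max_depth is None:
        max_depth = n_depth - 1
    half = np.deg2rad(max(abs(angles_deg.min()), abs(angles_deg.max())))
    xs = np.linspace(-max_depth * np.sin(half), max_depth * np.sin(half), n_out)
    zs = np.linspace(0.0, max_depth, n_out)
    z, x = np.meshgrid(zs, xs, indexing="ij")
    r = np.hypot(x, z)                        # distance from transducer face
    theta = np.degrees(np.arctan2(x, z))      # tilt angle of each output pixel
    fi = np.clip(np.searchsorted(angles_deg, theta), 0, n_frames - 1)
    ri = np.clip(np.round(r).astype(int), 0, n_depth - 1)
    out = frames[fi, ri]
    out[(theta < angles_deg[0]) | (theta > angles_deg[-1])] = 0.0  # outside fan
    return out

angles = np.linspace(-30.0, 30.0, 61)             # one scan line per degree
frames = np.tile(np.arange(100.0), (61, 1))       # toy signal: value = depth
slice_ = fan_to_cartesian(frames, angles)
```

With the toy "value equals depth" frames, pixels inside the fan take values close to their distance from the transducer, and pixels outside the swept fan are zeroed.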
  • PhD
  • Senior Vice President and Chief Technology Officer, Ashland Global Holdings

Navigating Innovation in Global Markets

Sustained innovation is an essential element of the long-term survival and growth of all corporations. Well-established product lines with relatively high capital and technology barriers to entry may provide companies with a false sense of security. Even when a market need continues to exist over a long period of time, all products are ultimately displaced by lower-cost or higher-performing alternatives. The challenge of continually providing innovative solutions to customers is especially acute when companies compete in a global market with diverse product lines serving a broad range of industries and applications. This presentation provides a specialty chemicals company’s perspective on the considerations and steps required to generate sustained innovation in our business.
  • IEEE Fellow
  • 2019 George S. Glinski Award for Excellence in Research
  • Professor, School of Electrical Engineering and Computer Science, University of Ottawa

Quantifying Uncertainty in Machine Learning Based Measurement

Like any science and engineering field, Instrumentation and Measurement (I&M) is currently experiencing the impact of the recent rise of applied AI and, in particular, Machine Learning (ML). In fact, the relationship between I&M and ML has reached new levels: I&M is used to collect data, which is used to train an ML model, which is then used in a measurement system. Uncertainty is accumulated at every stage, and quantifying it is crucial. But I&M and ML use terminology that sometimes sounds or looks similar, though the terms might be only marginally related or even be false friends. Therefore, understanding the terminology used by both communities is of crucial importance for understanding the influence of ML and I&M on each other. In this talk, we will give an overview of ML’s contribution to measurement error and how to avoid confusion with the said terminology, in order to better understand the application of ML in I&M. We then use that understanding and terminology to show how to quantify the uncertainty introduced by ML in a measurement system, and we go over some specific examples in imaging.
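One standard way to quantify the uncertainty a model-based stage passes along is Monte Carlo propagation in the style of GUM Supplement 1: sample the input according to its measurement uncertainty, push the samples through the trained model, and take the spread of the outputs as the propagated standard uncertainty. Everything below (the toy least-squares "model", the chosen uncertainties) is illustrative, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy "measurement model": predict a measurand from two raw sensor
# readings with a least-squares fit, standing in for a trained ML model.
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, -0.7])
y = X @ true_w + rng.normal(scale=0.05, size=200)    # calibration data
w = np.linalg.lstsq(X, y, rcond=None)[0]             # "trained" model

def predict(x):
    return x @ w

# Monte Carlo propagation (GUM Supplement 1 style): sample the input
# according to its standard uncertainty, push samples through the model.
x0 = np.array([0.8, 1.2])        # measured input readings
u_x = np.array([0.02, 0.05])     # standard uncertainty of each reading
samples = rng.normal(loc=x0, scale=u_x, size=(10_000, 2))
y_samples = predict(samples)

y_hat = y_samples.mean()
u_y = y_samples.std(ddof=1)      # propagated standard uncertainty
print(f"y = {y_hat:.3f} ± {u_y:.3f}")
```

Note that this captures only the uncertainty propagated from the input measurements; the model's own contribution (training-data noise, limited capacity) would be quantified separately, for example with ensembles or held-out calibration data.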
  • Fellow of the IEEE
  • ONR Distinguished Faculty Fellow
  • Professor and Chair, Electrical and Computer Engineering, Manhattan College, NY
  • Director, Laboratory for Quantum Cognitive Imaging and Neuromorphic Engineering (CINE)
  • Augmented Intelligence and Bioinspired Vision Systems

Innovations in Unresolved Resident Space Objects (RSO) Detection: From Analog to Neuromorphic Detectors

The “Laboratory for Quantum Cognitive Imaging and Neuromorphic Engineering (CINE), Bioinspired Vision Research”, founded by Professor Giakos, is a state-of-the-art, multifunctional laboratory; located in the heart of NYC, it is fully dedicated to the education, training, and research advancement of our students and the NYC community. By integrating physics, engineering, and bioinspired distributed architectures, it aims to enhance cognitive vision using spiking networks and machine learning. During the last fifteen years, several discoveries have come out of this lab, with emphasis on the detection, characterization, and discrimination of unresolved resident space objects (RSOs). With the support of the Air Force Research Laboratory (AFRL), through research contracts and awards, it has been possible to design a fully automated, multifunctional polarimetric platform, consisting of a suite of multispectral polarimetric sensors and innovative control and image processing algorithms, including artificial intelligence and machine learning, for surveillance, imaging, and the characterization and discrimination of space objects.

Professor Giakos’s presentation is organized into two parts:

Firstly, the research team of Prof. Giakos pioneered the design of single-pixel, linear-mode avalanche photodiode architectures, operating under polarimetric principles, for space object detection and identification. These linear avalanche photodetectors (APs) operate at a bias slightly below breakdown, providing linear amplification with negligible afterpulsing. In addition, they exhibit high dynamic range, high speed, and high responsivity in the infrared (IR). Combining these sensors with polarimetric single-pixel detection allows one to focus on a few pixels of the object, obtaining information with high scatter rejection, decoupled from any interfering signals (noise) that may arise from the pixels adjacent to the target. It is therefore a very effective method for detecting polarimetric signatures from cluttered or unresolved targets, with high sensitivity and high background rejection. Single-pixel detection keeps spatial-frequency variation within a single pixel, while the polarization states of light offer unique advantages for a wide range of detection and classification problems, due to their intrinsic potential for high contrast and high dynamic range (2007-present).
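The polarimetric signature extraction alluded to here can be illustrated with the classic four-angle Stokes recipe: intensities measured behind a polarizer at 0°, 45°, 90°, and 135° give the linear Stokes parameters, from which the degree and angle of linear polarization follow. This is textbook polarimetry, not the lab's specific pipeline, and the function name is hypothetical.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four polarizer-angle intensities."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)            # total intensity
    s1 = i0 - i90                                  # 0°/90° preference
    s2 = i45 - i135                                # 45°/135° preference
    dolp = np.hypot(s1, s2) / s0                   # degree of linear polarization
    aop = 0.5 * np.degrees(np.arctan2(s2, s1))     # angle of polarization
    return s0, dolp, aop

# Fully linearly polarized light at 0°: all intensity passes the 0°
# polarizer, none the 90° one, half at 45°/135° (Malus's law).
s0, dolp, aop = linear_stokes(1.0, 0.5, 0.0, 0.5)
print(dolp, aop)   # → 1.0 0.0
```

A high degree of linear polarization against an unpolarized background is exactly the kind of contrast that lets a single-pixel polarimetric sensor separate a target's signature from clutter.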

Secondly, Giakos and coworkers introduced novel and efficient bioinspired vision architectures operating on polarimetric neuromorphic detection principles, in conjunction with efficient deep learning architectures: namely, the polarimetric Dynamic Vision Sensor (pDVS), which integrates human cognition capabilities, such as computation- and memory-emulating neurons and synapses, with polarization-of-light principles. The pDVS could ultimately give rise to the next generation of highly efficient augmented-intelligence vision systems, with potential applications in space research. The experimental results clearly indicate that both high computational efficiency and high classification accuracy can be achieved in detecting the shape, texture, and motion patterns of resident space objects (RSOs) and space debris, while operating with low bandwidth, low memory, and low storage (2016-present).

  • Research Staff Member (RSM)
  • IBM T.J. Watson Research Center, New York

Quantum State Tomography: An integrated approach

In Quantum State Tomography (QST), we reconstruct the density matrix representing the state of an n-qubit quantum device from measurements. In this seminar, we detail all steps in an end-to-end QST pipeline, including the preparation of a target quantum state, the collection of measurements to input to the QST algorithm, and the evaluation of the reconstruction it produces. We focus on the case of low-rank quantum states and design a projected gradient descent algorithm for QST that is both parallelizable and critically integrates efficient projection primitives. We apply our approach to quantum circuits consisting of various combinations of Hadamard, CNOT, and U3 gates and report on our findings. Our developments are based on Qiskit, an open-source quantum computing framework, which also allows direct interaction with real quantum processors that are made publicly available by IBM Research.
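The projection primitive and the projected-gradient loop can be sketched for a single qubit in plain NumPy. This is a toy stand-in for the Qiskit-based pipeline described above: ideal Pauli expectations play the role of measurement data, and the projection maps each Hermitian iterate back onto the set of density matrices.

```python
import numpy as np

def project_density(rho):
    """Project a Hermitian matrix onto the density-matrix set
    (positive semidefinite, unit trace) by projecting its spectrum
    onto the probability simplex."""
    rho = 0.5 * (rho + rho.conj().T)
    vals, vecs = np.linalg.eigh(rho)
    u = np.sort(vals)[::-1]                       # spectrum, descending
    css = np.cumsum(u) - 1.0
    k = np.arange(1, len(u) + 1)
    idx = np.max(np.where(u - css / k > 0)[0])
    tau = css[idx] / (idx + 1)                    # simplex threshold
    new_vals = np.maximum(vals - tau, 0.0)
    return (vecs * new_vals) @ vecs.conj().T

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = [I, X, Y, Z]

# Target: the pure state |+>; "measurements" are ideal Pauli expectations.
rho_true = np.array([[0.5, 0.5], [0.5, 0.5]], complex)
p = np.array([np.trace(P @ rho_true).real for P in paulis])

# Projected gradient descent on f(rho) = sum_k (Tr(P_k rho) - p_k)^2.
rho = np.eye(2, dtype=complex) / 2                # maximally mixed start
lr = 0.2
for _ in range(200):
    grad = sum(2 * (np.trace(P @ rho).real - pk) * P
               for P, pk in zip(paulis, p))
    rho = project_density(rho - lr * grad)

print(np.round(rho.real, 3))
```

For n qubits the same loop applies with up to 4^n Pauli observables, which is where the parallelizable, low-rank-aware primitives mentioned in the abstract become essential.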
  • IEEE Fellow
  • Full Professor of Electromagnetic Fields
  • Director of the Department of Electrical, Electronic, Telecommunications Engineering and Naval Architecture (DITEN).
  • University of Genoa, Genoa, Italy

MICROWAVE IMAGING TECHNIQUES AND APPLICATIONS: FROM BASIC CONCEPTS TO RECENT DEVELOPMENTS

Microwave imaging (MI) is a class of nondestructive and noninvasive techniques aimed at inspecting targets starting from measurements of the electromagnetic field they scatter when illuminated with incident radiation at microwave frequencies. The aim is to extract information about some of the geometrical and physical properties of the targets under test (e.g., the distributions of their dielectric properties), often provided to the user in the form of images. However, the underlying inverse-scattering problem poses significant theoretical, numerical, and practical challenges. MI has thus been considered an emerging field for a long time. Over the years, engineers and scientists in universities and many other institutions have devoted significant effort to developing effective measurement systems and data-processing algorithms. Recent developments, however, allow MI to be considered a promising tool in several applications, such as nondestructive testing and evaluation, subsurface prospection, security, and medical imaging. In this lecture, MI techniques and their applications in different fields will be reviewed. After an introduction to the basic concepts of the electromagnetic inverse problem (the theoretical foundation of MI methods), some of the commonly adopted approaches are discussed, together with information about the related systems. Specific examples in different application fields will also be provided. Finally, recent developments and future trends will be addressed.
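To see why the inverse-scattering step is challenging, consider a linearized (Born-approximation) toy problem: the scattered field depends on the dielectric contrast through an ill-conditioned operator, so naive inversion amplifies measurement noise and some regularization is needed. The sketch below uses Tikhonov regularization on synthetic 1D data; all names and numbers are illustrative, not from the lecture.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linearized (Born-approximation) model: scattered field = A @ chi,
# with A an ill-conditioned smoothing kernel standing in for the
# discretized Green's-function operator of the scattering problem.
n = 60
x = np.linspace(0.0, 1.0, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05 ** 2))
chi_true = np.zeros(n)
chi_true[20:30] = 1.0                                 # contrast of the target
e_s = A @ chi_true + rng.normal(scale=1e-3, size=n)   # noisy "measurements"

# A has tiny singular values, so the problem is badly ill-posed; a
# Tikhonov-regularized normal-equation solve stabilizes the inversion.
alpha = 1e-3
chi_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ e_s)

rel_err = np.linalg.norm(chi_reg - chi_true) / np.linalg.norm(chi_true)
print(f"relative reconstruction error: {rel_err:.2f}")
```

Real MI inversion is nonlinear (the field inside the target depends on the unknown contrast itself), which is precisely why the measurement systems and data-processing algorithms discussed in the lecture are nontrivial; the regularization idea, however, carries over.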
  • Full Professor of Electromagnetic Fields
  • Department of Electrical, Electronic, Telecommunications Engineering and Naval Architecture (DITEN).
  • University of Genoa, Genoa, Italy

MICROWAVE IMAGING TECHNIQUES AND APPLICATIONS: FROM BASIC CONCEPTS TO RECENT DEVELOPMENTS

Microwave imaging (MI) is a class of nondestructive and noninvasive techniques aimed at inspecting targets starting from measurements of the electromagnetic field they scatter when illuminated with incident radiation at microwave frequencies. The aim is to extract information about some of the geometrical and physical properties of the targets under test (e.g., the distributions of their dielectric properties), often provided to the user in the form of images. However, the underlying inverse-scattering problem poses significant theoretical, numerical, and practical challenges. MI has thus been considered an emerging field for a long time. Over the years, engineers and scientists in universities and many other institutions have devoted significant effort to developing effective measurement systems and data-processing algorithms. Recent developments, however, allow MI to be considered a promising tool in several applications, such as nondestructive testing and evaluation, subsurface prospection, security, and medical imaging. In this lecture, MI techniques and their applications in different fields will be reviewed. After an introduction to the basic concepts of the electromagnetic inverse problem (the theoretical foundation of MI methods), some of the commonly adopted approaches are discussed, together with information about the related systems. Specific examples in different application fields will also be provided. Finally, recent developments and future trends will be addressed.
  • Fellow IET
  • Deputy Head: Department of Production and Management Engineering
  • Director: Laboratory of Robotics and Automation
  • Democritus University of Thrace

Embedded vision systems for emergency management tasks

The role of intelligent robotics in security and intervention applications has been gradually upgraded during the last decade, since it puts forward high-end solutions for keeping human lives safe. Over the last years, considerable achievements have been made worldwide in providing worthy solutions to scientific and technological problems involving the intervention of robots in emergency management. Simultaneously, owing to the increased availability of computational power, cameras have become the primary sensor unit for most robotic apparatuses; vision is the primary sense of any intelligent robot agent. The main concern is that, until now, the computing power demanded by computer vision tasks was not compatible with the mobility requirements of search and rescue missions. However, recent advancements in dedicated embedded AI hardware platforms provide powerful solutions and allow complicated vision algorithms and networks to be fully deployed on a single board suitable for robotics applications.
  • Technical University of Crete
  • Professor at the School of Electrical and Computer Engineering
  • Vice Rector at the Technical University of Crete

Medical Image Segmentation, Radiomics and Radio-transcriptomics: applications in cancer diagnosis

Early diagnosis of cancer in its initial stages, when the tumor is confined to a small area, increases the probability of survival. Screening of at-risk populations suspected of cancer development is suggested by doctors as a vital tool for diagnosing the disease at an early stage, when treatment has more chances to succeed. The most popular screening tests include Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Positron Emission Tomography (PET), and PET/CT scans. The area of ‘radiomics’ has focused on the extraction of quantitative features from medical images in order to reveal the development and progression of cancer, providing valuable information for clinical diagnosis and treatment planning. In radiomics studies, imaging characteristics are accumulated in a massive set of features, which comprise either qualitative and quantitative image descriptors measured within the segmented volume of interest or features “engineered” through advanced AI techniques and deep learning. Such features may be considered byproducts and manifestations of the genomic variation at the cellular level, which controls the specific disease phenotype and/or response to treatment. Conversely, the genotypic markers from molecular biology reflect many aspects of gene and protein interactions across a variety of cellular processes relevant to cancer diagnosis and prognosis. Considering the imaging/radiomic features as “surrogates” of the genetic substrate, this talk addresses the intersection of the traditional analysis of differential gene expression and feature extraction from medical images. More specifically, we consider the interaction between imaging features (e.g., size and/or shape features, image intensity histogram metrics, texture features) and RNA transcript measurements (expression of selected genes), which can be used to enhance predictive power in the diagnosis of cancer. Following this consideration, we explore radio-transcriptomics by modeling the associations between these two multiscale modalities, i.e., imaging and genomics.
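The feature-extraction step at the heart of radiomics can be illustrated with a few first-order features computed inside a segmented region of interest. This is a minimal sketch on a synthetic image (the helper name `first_order_features` is hypothetical); real pipelines add shape, texture (e.g. GLCM), wavelet, and deep-learning features.

```python
import numpy as np

def first_order_features(image, mask, bins=32):
    """A few first-order radiomic features inside a segmented ROI."""
    roi = image[mask > 0].astype(np.float64)
    hist, _ = np.histogram(roi, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    mu, sigma = roi.mean(), roi.std()
    return {
        "mean": mu,
        "std": sigma,
        "entropy": -(p * np.log2(p)).sum(),              # intensity heterogeneity
        "skewness": ((roi - mu) ** 3).mean() / sigma ** 3,
    }

rng = np.random.default_rng(7)
img = rng.normal(100.0, 20.0, size=(64, 64))   # synthetic grayscale image
mask = np.zeros((64, 64), int)
mask[20:40, 20:40] = 1                         # the segmented "lesion"
feats = first_order_features(img, mask)
print({k: round(v, 2) for k, v in feats.items()})
```

Feature vectors of this kind, collected across patients, are what gets correlated against RNA transcript measurements in the radio-transcriptomics modeling described above.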
  • Associate Professor, Department of Electrical Engineering, Technical University of Denmark
  • Head of Studies, MSc in Autonomous Systems

Towards more robust Autonomous Systems through automating image annotation

In this talk, I will discuss what defines an Autonomous System and how research can pave the way towards Autonomous Robots. Robustness is essential for the real-world deployment of such systems. One way of improving robustness is through properly trained deep-learning perception models, and automating the process of image annotation can support this direction. I will present concrete examples from ongoing projects in our research group where automating the annotation process has been beneficial for Autonomous Systems’ operation.
