Videos of the day [TinyML and WACV]

Event-based sensing and computing for efficient edge artificial intelligence and TinyML applications
Federico CORRADI, Senior Neuromorphic Researcher, IMEC

The advent of neuro-inspired computing represents a paradigm shift for edge Artificial Intelligence (AI) and TinyML applications. Neurocomputing principles enable the development of neuromorphic systems that meet the strict energy and cost constraints of signal-processing applications at the edge. In these applications, the system must respond accurately to sensed data in real time, at low power, directly in the physical world, and without resorting to cloud-based computing resources.
In this talk, I will introduce the key concepts underpinning our research: on-demand computing, sparsity, time-series processing, event-based sensory fusion, and learning. I will then showcase examples of a new generation of sensing and computing hardware that applies these neuro-inspired principles to achieve efficient and accurate TinyML applications. Specifically, I will present novel computer architectures and event-based sensing systems that employ spiking neural networks with specialized analog and digital circuits.
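To make the event-based sensing idea concrete, here is a minimal sketch of send-on-delta (level-crossing) encoding, one common way event-driven sensors achieve sparsity: a sample becomes an event only when the signal has changed significantly, so a static input produces no output at all. The function name and threshold below are illustrative assumptions, not details from the talk.

```python
def to_events(samples, threshold=0.1):
    """Send-on-delta encoding: emit an event only when the signal has
    moved by more than `threshold` since the last emitted event.

    Illustrative sketch of the event-based sensing principle; the
    threshold and event format are assumptions, not the talk's hardware.
    """
    events = []
    ref = samples[0]
    for i, x in enumerate(samples[1:], start=1):
        if abs(x - ref) >= threshold:
            events.append((i, +1 if x > ref else -1))  # (index, polarity)
            ref = x
    return events

signal = [0.0, 0.02, 0.15, 0.16, 0.40, 0.41, 0.41]
print(to_events(signal))  # sparse: only significant changes produce events
```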
These systems use an entirely different model of computation from that of our standard computers. Instead of relying on software stored in memory and fast central processing units, they exploit real-time physical interactions among neurons and synapses and communicate using binary pulses (i.e., spikes). Furthermore, unlike software models, our specialized hardware circuits consume little power and naturally perform on-demand computing only when input stimuli are present. These advancements offer a route toward TinyML systems composed of neuromorphic computing devices for real-world applications.
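The on-demand computation described above can be illustrated in software with an event-driven leaky integrate-and-fire neuron: the membrane state is only updated when a spike arrives, and between events no work is done at all. This is a minimal Python sketch of the general principle, not the analog/digital circuits from the talk; the time constant, threshold, and event list are assumptions.

```python
import math

class LIFNeuron:
    """Leaky integrate-and-fire neuron, updated only when a spike arrives.

    Illustrative sketch only: the talk describes specialized hardware,
    not this Python model. Constants here are assumptions.
    """

    def __init__(self, tau=20.0, threshold=1.0):
        self.tau = tau              # membrane time constant (ms)
        self.threshold = threshold  # firing threshold
        self.v = 0.0                # membrane potential
        self.last_t = 0.0           # time of the last input event (ms)

    def on_spike(self, t, weight):
        """Event-driven update: apply the leak lazily, then integrate."""
        self.v *= math.exp(-(t - self.last_t) / self.tau)  # decay since last event
        self.last_t = t
        self.v += weight
        if self.v >= self.threshold:   # emit a binary pulse (spike)
            self.v = 0.0               # reset after firing
            return True
        return False

# Sparse input: (time in ms, synaptic weight). No input -> no computation.
events = [(1.0, 0.6), (2.0, 0.6), (40.0, 0.6)]
neuron = LIFNeuron()
for t, w in events:
    if neuron.on_spike(t, w):
        print(f"spike emitted at t={t} ms")
```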



Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning

Authors: Abdullah Abuolaim (York University)*; Mahmoud Afifi (Apple); Michael S Brown (York University) 
 
Many camera sensors use a dual-pixel (DP) design that operates as a rudimentary light field, providing two sub-aperture views of a scene in a single capture. The DP sensor was developed to improve how cameras perform autofocus. Since its introduction, researchers have found additional uses for DP data, such as depth estimation, reflection removal, and defocus deblurring. We are interested in the latter task of defocus deblurring. In particular, we propose a single-image deblurring network that incorporates the two sub-aperture views into a multi-task framework. Specifically, we show that jointly learning to predict the two DP views from a single blurry input image improves the network's ability to learn to deblur the image. Our experiments show that this multi-task strategy achieves a +1 dB PSNR improvement over state-of-the-art defocus deblurring methods. In addition, our multi-task framework allows accurate DP-view synthesis (e.g., ~39 dB PSNR) from the single input image. These high-quality DP views can be used for other DP-based applications, such as reflection removal. As part of this effort, we have captured a new dataset of 7,059 high-quality images to support training for the DP-view synthesis task.
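The core idea, one shared encoder supervised by three reconstruction targets (the deblurred image plus the two DP views), can be sketched as below. This is an illustrative PyTorch sketch of the multi-task setup, not the paper's architecture: the layer sizes, L1 losses, and the view_weight balancing term are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class MultiTaskDeblurNet(nn.Module):
    """Sketch of the multi-task idea: a shared encoder feeding three
    heads (sharp image, left DP view, right DP view).

    Placeholder layers for illustration; the real network is in the paper.
    """

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # One decoder head per task.
        self.deblur_head = nn.Conv2d(32, 3, 3, padding=1)
        self.left_head = nn.Conv2d(32, 3, 3, padding=1)
        self.right_head = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, blurry):
        feats = self.encoder(blurry)
        return (self.deblur_head(feats),
                self.left_head(feats),
                self.right_head(feats))

def multitask_loss(outputs, sharp, dp_left, dp_right, view_weight=0.5):
    """Joint loss: deblurring plus DP-view synthesis as auxiliary tasks."""
    pred_sharp, pred_l, pred_r = outputs
    l1 = nn.functional.l1_loss
    return (l1(pred_sharp, sharp)
            + view_weight * (l1(pred_l, dp_left) + l1(pred_r, dp_right)))

# Toy forward/backward pass with random tensors standing in for data.
net = MultiTaskDeblurNet()
blurry = torch.rand(1, 3, 64, 64)
sharp = torch.rand(1, 3, 64, 64)
loss = multitask_loss(net(blurry), sharp,
                      torch.rand_like(sharp), torch.rand_like(sharp))
loss.backward()
```

The auxiliary DP-view heads act as extra supervision on the shared encoder, which is the mechanism the abstract credits for the deblurring gain.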



