Project: Artificial Intelligence and Computer Vision (AICV)


The aim of this project is to propose and develop intelligent algorithms for interpreting the visual world, using techniques such as pattern recognition, deep learning and image processing to improve the performance of computer vision tasks, particularly in terms of accuracy, speed and robustness. The work will be applied to various domains such as facial recognition, pedestrian detection, autonomous navigation, safety monitoring, augmented reality, etc. In addition, the project aims to support the national strategy for deploying systems that control and secure both individuals and spaces requiring surveillance (airports, supermarkets, government buildings, etc.). Particular attention will be devoted to intelligent embedded solutions for access control based on biometric technologies such as face and fingerprint recognition.

A further aim is to develop approaches for the identification and authentication of individuals from biometric measurements, based on recent artificial intelligence (AI) and deep learning (DL) technologies. These approaches are relevant to applications such as securing access to protected infrastructure and combating fraud, crime and terrorism.
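
Purely as an illustration of the identification (1:N) versus authentication (1:1) distinction, the sketch below compares fixed-length embeddings with cosine similarity; the 128-dimensional vectors, gallery identities and acceptance threshold are hypothetical placeholders for the output of a real deep face or fingerprint encoder, not the project's method.

    import numpy as np

    def cosine_similarity(probe, gallery):
        # Cosine similarity between one probe embedding and all gallery embeddings.
        probe = probe / np.linalg.norm(probe)
        gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
        return gallery @ probe

    def identify(probe, gallery, labels):
        # 1:N identification: return the identity of the closest gallery embedding.
        return labels[int(np.argmax(cosine_similarity(probe, gallery)))]

    def authenticate(probe, enrolled, threshold=0.6):
        # 1:1 authentication: accept if similarity to the enrolled template passes a threshold.
        score = float(cosine_similarity(probe, enrolled[np.newaxis, :])[0])
        return score >= threshold

    # Toy usage with random 128-D "embeddings" standing in for a real encoder's output.
    rng = np.random.default_rng(0)
    gallery = rng.normal(size=(3, 128))
    labels = ["alice", "bob", "carol"]
    probe = gallery[1] + 0.05 * rng.normal(size=128)   # noisy view of "bob"
    print(identify(probe, gallery, labels))            # -> bob
    print(authenticate(probe, gallery[1]))             # -> True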

The project also aims to develop a platform for collecting, managing and storing digital data (images, audio, video) from public or private databases, under open or regulated licenses, or acquired from the sensors and cameras at our disposal. These data, which are essential to our studies, are constantly growing in volume, reaching Big Data scale, and changing in characteristics (size, resolution, modality), which calls for a platform that facilitates their exploration and exploitation. The approaches developed will also be given concrete form as hardware and software tools. These tools are the building blocks of embedded platforms and systems providing real-time solutions for acquisition, analysis, object tracking, action recognition and decision-making.
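
As a minimal sketch of the cataloguing layer such a platform would need, the snippet below indexes media files in an SQLite table recording modality, size, resolution and license; the schema, field names and directory layout are assumptions for illustration, not the project's actual design.

    import sqlite3
    from pathlib import Path

    # Hypothetical minimal catalogue: one row per media file, keyed by path,
    # recording modality, size and resolution so datasets can be filtered later.
    SCHEMA = """
    CREATE TABLE IF NOT EXISTS media (
        path TEXT PRIMARY KEY,
        modality TEXT,          -- 'image', 'audio' or 'video'
        size_bytes INTEGER,
        width INTEGER,          -- NULL for audio
        height INTEGER,         -- NULL for audio
        license TEXT            -- 'open', 'regulated', ...
    );
    """

    def index_file(db, path, modality, width=None, height=None, license="open"):
        # Insert or update one media file's metadata in the catalogue.
        db.execute(
            "INSERT OR REPLACE INTO media VALUES (?, ?, ?, ?, ?, ?)",
            (str(path), modality, path.stat().st_size, width, height, license),
        )

    if __name__ == "__main__":
        con = sqlite3.connect("aicv_catalogue.db")
        con.executescript(SCHEMA)
        # Example: register every PNG under ./data as an image (resolution left NULL here).
        for p in Path("data").glob("**/*.png"):
            index_file(con, p, "image")
        con.commit()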

When used in medicine, many augmented reality systems consider only a rigid registration of the preoperative model and therefore do not take deformations into account, which reduces the reliability of the augmented view. In practice, the patient's position changes between acquisition of the preoperative image and the operation itself, during which the table is often tilted so that the abdominal viscera slide to the bottom of the abdominal cavity and access to the organs is easier. Heartbeats also cause displacement, and the surgical instruments move and deform the organs, requiring new augmented reality methods to visualize these organ deformations during the operation. Various methods have been proposed to non-rigidly realign the preoperative model with an intraoperative surface of the target organ, but they do not yet achieve good overall accuracy. The aim of this part of the project is to explore new techniques for non-rigid registration.
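
As one classical baseline for this kind of non-rigid alignment (not the new technique the project will develop), the sketch below fits a thin-plate-spline displacement field to a small set of corresponding landmarks, assumed to be given, and warps the whole preoperative model accordingly; it relies on scipy's RBFInterpolator, and the synthetic points stand in for segmented organ surfaces.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def warp_preoperative_model(model_vertices, src_landmarks, dst_landmarks, smoothing=1e-3):
        # Non-rigidly deform a preoperative surface so that source landmarks map onto
        # their intraoperative counterparts, using a thin-plate-spline displacement field.
        displacement = dst_landmarks - src_landmarks                  # (n, 3) known offsets
        tps = RBFInterpolator(src_landmarks, displacement,
                              kernel="thin_plate_spline", smoothing=smoothing)
        return model_vertices + tps(model_vertices)                   # warp every vertex

    # Toy usage with synthetic data.
    rng = np.random.default_rng(1)
    model = rng.uniform(-1, 1, size=(500, 3))        # preoperative mesh vertices
    src = rng.uniform(-1, 1, size=(12, 3))           # landmarks on the preoperative model
    dst = src + 0.05 * rng.normal(size=src.shape)    # same landmarks seen intraoperatively
    warped = warp_preoperative_model(model, src, dst)
    print(warped.shape)                              # (500, 3)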

Although these technologies hold great promise for improving our current mobility habits and reducing the number of road accidents, Intelligent Transport Systems (ITS) still face serious safety challenges. Although on-board intelligence in the automotive field has produced impressive results, particularly for environmental perception, these systems remain unable to handle complex traffic situations. Reliable perception of the environment requires a real-time collaborative perception scheme in which vehicles no longer limit themselves to data acquired by their own on-board sensors but also exploit data from remote sensors. Vehicles are then no longer seen as isolated systems controlled solely by their drivers, but as intelligent nodes distributed within a complex interconnected system. With vehicular wireless communication, ITS applications are supported by several high-performance communication technologies such as ITS-G5 and 5G, so vehicles can interact actively and cooperatively through on-board units and roadside units. In this project, we target aspects such as pedestrian detection, multi-object detection and recognition of driver distraction actions.
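
For the pedestrian detection aspect, a single-vehicle baseline can be sketched with an off-the-shelf COCO-pretrained detector, keeping only the 'person' class. This assumes a recent torchvision, uses a random tensor as a placeholder for a camera frame, and is only a starting point, not the collaborative perception scheme itself.

    import torch
    import torchvision

    # Baseline single-vehicle perception: a COCO-pretrained Faster R-CNN used as a
    # pedestrian detector by keeping only the 'person' class (label 1 in COCO).
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    image = torch.rand(3, 480, 640)      # placeholder frame; in practice a camera image in [0, 1]
    with torch.no_grad():
        output = model([image])[0]

    keep = (output["labels"] == 1) & (output["scores"] > 0.5)
    pedestrian_boxes = output["boxes"][keep]          # (k, 4) boxes in xyxy format
    print(f"{len(pedestrian_boxes)} pedestrians detected")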

The aim is to develop feature extraction approaches for the recognition of hand gestures captured with a Leap Motion Controller. Over the last few decades, hand gesture recognition (HGR) has become one of the most important research topics, due to its wide range of applications in computer vision. Various approaches and techniques have been suggested and evaluated, achieving significant results on HGR. However, despite the success of state-of-the-art methods, they fail to account for the non-linearity and non-stationarity of time-series data, including raw Leap Motion Controller (LMC) data. In this work, we propose a new method for extracting and selecting relevant, discriminating features using the Hilbert-Huang transform (HHT) and Fisher discriminant analysis. The LMC time-series signals are first decomposed by empirical mode decomposition; the HHT is then applied to generate the resulting Hilbert marginal spectrum, and Fisher discriminant analysis is performed for efficient feature selection.
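
A simplified sketch of this pipeline is given below, assuming the PyEMD package for empirical mode decomposition and scipy for the Hilbert transform; the Fisher-score ranking shown is a common surrogate for the full Fisher discriminant analysis, and the synthetic signals merely stand in for real LMC channels.

    import numpy as np
    from scipy.signal import hilbert
    from PyEMD import EMD   # pip install EMD-signal

    def hht_features(signal):
        # Decompose one LMC channel with EMD, then summarise each IMF by the
        # energy of its Hilbert amplitude envelope (a crude marginal-spectrum proxy).
        imfs = EMD()(signal)                        # (n_imfs, n_samples)
        envelopes = np.abs(hilbert(imfs, axis=1))   # instantaneous amplitudes
        return (envelopes ** 2).sum(axis=1)         # one energy value per IMF

    def fisher_scores(X, y):
        # Fisher score of each feature: between-class variance / within-class variance.
        classes = np.unique(y)
        overall_mean = X.mean(axis=0)
        num = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall_mean) ** 2 for c in classes)
        den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
        return num / (den + 1e-12)

    # Toy usage: two gesture classes, one synthetic 1-D channel per sample.
    rng = np.random.default_rng(2)
    t = np.linspace(0, 1, 256)
    signals = [np.sin(2 * np.pi * (5 + c) * t) + 0.1 * rng.normal(size=t.size)
               for c in (0, 1) for _ in range(10)]
    labels = np.repeat([0, 1], 10)
    # Coerce each sample to a fixed number of IMF energies (np.resize truncates or repeats).
    feats = np.array([np.resize(hht_features(s), 4) for s in signals])
    ranking = np.argsort(fisher_scores(feats, labels))[::-1]
    print("features ranked by Fisher score:", ranking)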

The idea for this project stems from one observation: the imbalance between demand for and production of electrical energy is growing exponentially, while available resources are being depleted at an alarming rate. At the same time, the customer's role in the face of this challenge remains passive. It is therefore necessary for the customer to become an active participant in the system (a consumer-actor). The aim of this project is to develop AI-based predictive algorithms and apply them to the energy optimization of Smart Grids (SG).
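
As a deliberately simple illustration of such predictive modelling (lag features feeding an off-the-shelf regressor, with a synthetic hourly load series standing in for real smart-meter data), short-term demand forecasting could be prototyped as follows; the project itself would use richer models and real consumption data.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def make_lag_features(load, n_lags=24):
        # Turn an hourly load series into (X, y) pairs: the previous n_lags hours
        # are the features, the next hour is the target.
        X = np.stack([load[i:i + n_lags] for i in range(len(load) - n_lags)])
        y = load[n_lags:]
        return X, y

    # Synthetic hourly demand with a daily cycle, standing in for real data.
    rng = np.random.default_rng(3)
    hours = np.arange(24 * 60)
    load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(scale=3, size=hours.size)

    X, y = make_lag_features(load)
    split = int(0.8 * len(X))
    model = GradientBoostingRegressor().fit(X[:split], y[:split])
    print("test MAE:", np.mean(np.abs(model.predict(X[split:]) - y[split:])))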

This innovative research project focuses on signal reconstruction using new multi-wavelet-based approaches. By exploring advanced filters and algorithms, the research aims to improve the ability to reconstruct complex signals, with an emphasis on applications in the biomedical field. In particular, the study focuses on the analysis of electrocardiographic (ECG) signals, which are essential for the early detection of heart disease. The project also investigates signals associated with the coronavirus pandemic, notably medical data collected from COVID-19 patients. The expected results will offer a deeper understanding of signal reconstruction, paving the way for better methods of detecting cardiac abnormalities, as well as essential insights into variations in complex biomedical signals and their correlation with coronavirus cases.
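
Since the multi-wavelet filters are precisely what the project sets out to design, the sketch below only illustrates the generic decompose/threshold/reconstruct workflow, using a standard single wavelet from PyWavelets and a synthetic trace standing in for an ECG signal.

    import numpy as np
    import pywt

    def wavelet_denoise(signal, wavelet="db4", level=4):
        # Decompose, soft-threshold the detail coefficients, and reconstruct.
        # A single-wavelet stand-in for the project's multi-wavelet filter banks.
        coeffs = pywt.wavedec(signal, wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise estimate from finest scale
        threshold = sigma * np.sqrt(2 * np.log(len(signal)))    # universal threshold
        coeffs[1:] = [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
        return pywt.waverec(coeffs, wavelet)[: len(signal)]

    # Synthetic noisy "ECG-like" trace: a train of sharp peaks plus baseline noise.
    rng = np.random.default_rng(4)
    t = np.linspace(0, 10, 2000)
    clean = np.exp(-((t % 1.0 - 0.5) ** 2) / 0.002)             # one peak per second
    noisy = clean + 0.2 * rng.normal(size=t.size)
    reconstructed = wavelet_denoise(noisy)
    print("residual RMS:", np.sqrt(np.mean((reconstructed - clean) ** 2)))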