This project pursues the goal of helping people who have lost their voices through illness or injury.
Specialized Silent Speech interfaces are created for two patient groups: laryngectomees
(who have lost their voice box but can still perform articulatory movements) and persons with neurodegenerative diseases (who may not even be able to move their mouth).
In the first case, we rely on capturing electromyographic activity (i.e. traces of muscle activity) from the user's face: while
proofs of concept of this method have already been presented, this project will perform one of the first large-scale user studies, tackling many roadblocks
that have so far prevented the creation of practically useful voice prostheses, ideally replicating the person's own voice.
The second user group will be able to recreate speech by means of cerebral recordings (electrocorticography, ECoG).
This is a collaborative project with Prof. Tanja Schultz at the Cognitive Systems Lab at Bremen University, Germany. It is funded by the Swiss National Science Foundation and by the German Research Foundation.
I am also a consulting partner in the RessINT project, which receives national funding from the Agencia Estatal de Investigación (Spain) and is a companion project to MyVoice.
Neural networks (NN) have become the most important method for pattern recognition and decision making in artificial intelligence (AI), yet it is difficult to understand their behavior, to trace how a particular result emerges, to ensure the absence of unfair biases, and to give guarantees of their performance. Their use as black boxes has a drastic negative impact on their security and trustworthiness, rendering them particularly problematic in sensitive areas. In this project, we develop methods to explain the role of certain structures (e.g., sets of features and layers) in the classification process. We draw on methods of Formal Reasoning (in particular, Satisfiability Modulo Theories) to provide certifiable guarantees of our results, in order to lay a robust foundation for the emerging field of Explainable Artificial Intelligence.
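As a toy illustration of the kind of guarantee such verification aims at (a sketch only, not the project's actual method: the project builds on SMT solving, while this example uses simple interval bound propagation, and all weights are hypothetical), one can soundly certify that a tiny ReLU network keeps the same prediction for every input inside a given box:

```python
# Illustrative sketch: sound interval over-approximation certifying that a
# tiny hand-crafted ReLU network predicts class 0 for ALL inputs in a box --
# the kind of statement SMT-based neural network verifiers establish exactly.

def affine_bounds(W, b, lo, hi):
    """Propagate interval bounds [lo, hi] through y = W @ x + b."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

def certify_class_0(W1, b1, W2, b2, lo, hi):
    """True if class 0 provably wins for EVERY input in the box [lo, hi]."""
    l, h = affine_bounds(W1, b1, lo, hi)
    l, h = relu_bounds(l, h)
    l, h = affine_bounds(W2, b2, l, h)
    # certified if class 0's lower bound beats every other class's upper bound
    return all(l[0] > h[k] for k in range(1, len(l)))

# Hypothetical weights for a 2-2-2 network
W1 = [[1.0, -1.0], [0.5, 1.0]]; b1 = [0.0, 0.0]
W2 = [[2.0, 0.1], [-1.0, 0.2]]; b2 = [0.0, 0.0]
print(certify_class_0(W1, b1, W2, b2, lo=[1.0, -0.5], hi=[1.2, -0.3]))  # True
```

Because interval propagation over-approximates, a `True` answer is a sound certificate, while a `False` answer is inconclusive; SMT-based encodings, as used in the project, resolve such cases exactly.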
This project is a collaboration with the USI Formal Verification and Security Lab of Prof. Sharygina.
Optimising biological activity and ADME (absorption, distribution, metabolism, excretion) properties while minimising toxicity are central objectives when developing new compounds.
Advanced machine learning methods are indispensable to this process. The project develops and benchmarks representation learning approaches,
addressing their accuracy and explainability, using public and in-house data for endpoints ranging from chemical reactions to toxicity.
The program is developed with the target users' needs in mind: large companies, regulatory agencies, and SMEs.
The PI of this project is Jürgen Schmidhuber. For further information refer to the Project Homepage.
The application of Artificial Intelligence in the chemical and pharmaceutical industry is a highly topical subject, in light of recent innovations in machine learning and the fast pace of development in chemistry.
Therefore, there is a strong need to train a new generation of scientists who have competence in both machine learning and chemistry. This project, funded by the
European Union (Marie Skłodowska-Curie European Industrial Doctorate, grant agreement No 956832) will provide a remedy:
Sixteen excellent PhD candidates will be recruited by more than 20 partners, working on challenging topics selected to cover the key innovative directions in machine learning in chemistry.
They will be supervised by academics with excellent complementary expertise, who have contributed some of the fundamental AI algorithms used billions of times per day around the world,
and by industry researchers at leading EU pharma companies. Beyond the scientific work of the individual fellows, the AIDD network will offer comprehensive, structured training through a
carefully elaborated curriculum.
The PI of this project is Jürgen Schmidhuber. For further information refer to the Project Homepage.
Although IADs (Inherited Arrhythmogenic Diseases of the heart) are rare, they account for 50% of deaths related to cardiac diseases. In this project, we will develop the first personalised digital cardiac monitoring and alert system focused on IADs, based on breakthrough machine learning and signal processing algorithms, compatible with existing devices and connectable to local emergency services. SUPSI's research is executed within the newly founded MeDiTech institute; project partners include highly innovative companies in Italy and Switzerland, as well as the EOC CardioCentro and the Ticino Cuore foundation.
INPUT strives to make the control of complex upper limb prostheses simple and natural, so that amputees can use them effortlessly on a daily basis right after donning: "don and play".
For this purpose, residual myoelectric signals (electrical traces of muscle activity) are collected from the user's arm and fed into a neural network-based system
which transforms the muscular activity into motor commands for a state-of-the-art prosthesis.
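A minimal sketch of the underlying idea (hypothetical and greatly simplified, not the INPUT system itself, which uses a learned neural network): classical proportional myoelectric control maps the smoothed amplitudes of two antagonist muscle channels to a signed velocity command for the hand.

```python
# Illustrative sketch only: proportional myoelectric control. Two antagonist
# EMG channels (e.g. flexor/extensor) are rectified and smoothed, and their
# difference drives a signed hand open/close velocity command.

def envelope(samples, alpha=0.1, state=0.0):
    """Rectify and low-pass filter raw EMG to obtain a smooth activation level."""
    for x in samples:
        state += alpha * (abs(x) - state)
    return state

def hand_velocity(flexor_env, extensor_env, gain=1.0, deadband=0.05):
    """Positive -> close, negative -> open; tiny activations are ignored."""
    diff = flexor_env - extensor_env
    return 0.0 if abs(diff) < deadband else gain * diff

# Synthetic stand-ins for short raw EMG bursts on each channel
flexor = [0.8, -0.9, 0.85, -0.7, 0.9] * 20      # strong, noisy activity
extensor = [0.05, -0.04, 0.06, -0.05, 0.03] * 20  # near-silent channel
cmd = hand_velocity(envelope(flexor), envelope(extensor))
print(cmd > 0)  # flexor dominates, so a "close" command is issued
```

The deadband suppresses spurious commands from baseline muscle noise, a common design choice in myoelectric controllers; learned approaches such as INPUT's replace the fixed mapping with one adapted to the individual user.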
The project consortium includes leading laboratories in the field.
INPUT was funded under the EU H2020 program (grant agreement No 687795) from 2016 to 2020; for further information refer to the
Project Homepage.
The next generation of electronic user interfaces such as touch pads will rely on haptic
feedback for improved user experience and fine-grained control. In this project, funded under the EU FP7 program (Marie Skłodowska-Curie Initial Training Network,
grant agreement No 317100) from 2013 to 2017,
a group of young researchers tackled together the challenges in the field, from a multidisciplinary perspective spanning biomedicine, haptics, hardware development,
and machine learning and signal processing. As a postdoc, I was an ITN fellow in Prototouch from 2014 to 2016, taking care of a variety of machine-learning-related tasks.
This project (a predecessor to RessINT) dealt with advancing signal capturing and machine learning methods for a Silent Speech interface based on (surface) electromyography:
bioelectric activity of the facial muscles which are involved in the speech process is captured by means of a nonintrusive surface electrode patch and fed into
a machine learning system which autonomously learns to convert these signals into the underlying speech, which is output in text form (i.e. to be printed on a monitor,
or to be resynthesized by a standard text-to-speech system).
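The first processing step of such a pipeline can be sketched as follows (an illustration only, with synthetic data; the actual project's feature extraction and recognizer are far more elaborate): the raw EMG stream is cut into overlapping frames, and a simple activity feature such as the root-mean-square amplitude is computed per frame before anything is fed to a learned model.

```python
# Illustrative sketch only: framing a raw (here: synthetic) EMG channel into
# overlapping windows and extracting a per-frame RMS feature, the typical
# front end before a learned EMG-to-text model.
import math

def frame_signal(signal, frame_len, hop):
    """Split a 1-D signal into overlapping frames of frame_len samples."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def rms(frame):
    """Root-mean-square amplitude, a standard measure of EMG activity."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

# Synthetic stand-in for one EMG channel: a burst of activity (samples
# 200-399) embedded in low-amplitude background
emg = [math.sin(0.05 * n) * (1.0 if 200 <= n < 400 else 0.1) for n in range(600)]
features = [rms(f) for f in frame_signal(emg, frame_len=50, hop=25)]
# Frames overlapping the active region show clearly higher RMS than quiet ones
print(len(features), max(features) > 5 * min(features))
```

In the real system, sequences of such frame-level features (typically from multiple electrode channels) form the input to the recognizer that maps muscle activity to text.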
Maps was funded by the German Research Foundation from 2013 to 2015 and was carried out at Karlsruhe Institute of Technology in the group
of Prof. Tanja Schultz (now University of Bremen, Germany). As a PhD student, I took a major part in the preparation and planning of the
project, as well as in the execution of the research itself.