Dr. Palina Salanevich

Hans Freudenthalgebouw
Budapestlaan 6
Kamer 701
3584 CD Utrecht

Assistant Professor
Mathematical Modeling
p.salanevich@uu.nl

Palina Salanevich is a mathematician working at the interface of harmonic and time-frequency analysis, geometric functional analysis, signal and image processing, and high-dimensional probability theory. Her research is inspired by signal processing problems such as phase retrieval, quantization, and compressive sensing. These problems are motivated by exciting "real world" applications and lead to beautiful and insightful mathematics at the intersection of different fields. Studying these problems allows us to build rich new connections between random matrix theory, frame theory, and signal processing, and to develop new tools in these areas.

 

For the list of student projects I am currently offering for Bachelor and Master students, see https://uu.konjoin.nl and also this page. If you are interested in one of these projects, or have a project of your own in mind, contact me by email!

 

 

Research directions:

 

Geometric properties of random frames and matrices

Frames have proved to be a powerful tool in many areas of applied mathematics, computer science, and engineering, as they provide a redundant, stable way of representing a signal. The investigation of various geometric properties of frames that reflect their "quality" plays a crucial role in signal processing problems such as compressive sensing, phase retrieval, and quantization. Such properties are well studied for randomly generated Gaussian frames with independent frame vectors; moreover, Gaussian frames often possess, with high probability, properties that are optimal for applications. At the same time, the concrete application for which a signal processing problem is studied usually dictates the structure of the frame used to represent a signal. This motivates the study of the properties of structured, application-relevant frames, such as Gabor frames.
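As a small numerical illustration (not the construction from the poster below), the following Python sketch builds the full Gabor system of a random window in C^d, selects a random subset of its time-frequency shifts, and computes the frame bounds as the extreme squared singular values of the synthesis matrix, comparing them with those of a complex Gaussian frame of the same size. The dimension, window, subset size, and normalizations are arbitrary illustrative choices.

```python
import numpy as np

def gabor_system(g):
    """All d^2 time-frequency shifts M_l T_k g of the window g,
    returned as the columns of a d x d^2 synthesis matrix."""
    d = len(g)
    n = np.arange(d)
    cols = []
    for k in range(d):                      # translations T_k
        shifted = np.roll(g, k)
        for l in range(d):                  # modulations M_l
            cols.append(np.exp(2j * np.pi * l * n / d) * shifted)
    return np.column_stack(cols)

def frame_bounds(Phi):
    """Optimal frame bounds of the frame given by the columns of Phi:
    the extreme squared singular values of Phi."""
    s = np.linalg.svd(Phi, compute_uv=False)
    return s[-1] ** 2, s[0] ** 2            # (lower bound A, upper bound B)

rng = np.random.default_rng(0)
d, m = 16, 64                               # dimension and number of frame vectors (illustrative)

# random complex window, normalized
g = rng.standard_normal(d) + 1j * rng.standard_normal(d)
g /= np.linalg.norm(g)

# Gabor frame: m time-frequency shifts of g, chosen uniformly at random
full = gabor_system(g)
subset = rng.choice(d * d, size=m, replace=False)
A_gab, B_gab = frame_bounds(full[:, subset])

# comparison: complex Gaussian frame of the same size
G = (rng.standard_normal((d, m)) + 1j * rng.standard_normal((d, m))) / np.sqrt(2 * m)
A_gau, B_gau = frame_bounds(G)

print(f"Gabor   : A = {A_gab:.3f}, B = {B_gab:.3f}")
print(f"Gaussian: A = {A_gau:.3f}, B = {B_gau:.3f}")
```

The closer the ratio B/A is to 1, the more stable the corresponding representation; studying how this ratio behaves for structured frames such as the Gabor frame above is exactly the kind of question this research direction addresses.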

Poster on "Frame bounds for Gabor frames in finite dimensions."

 

Phase Retrieval

Phase retrieval is the non-convex inverse problem of signal reconstruction from intensity measurements that arises naturally in many applications within science and engineering. Even though it has been studied for decades, until recently very little was known about how to achieve stable and efficient reconstruction, and the existing methods lacked rigorous mathematical understanding. 

Recent technological advances and the pressing need for fast, high-precision methods in imaging and audio processing have inspired active interest in phase retrieval. Nowadays, the case of random measurement frames with independent vectors is well studied. However, the case of structured, application-relevant frames remains wide open. An important example of such frames is the class of time-frequency structured (Gabor) frames, which arise in ptychography, speech recognition, and other applications.

One of the main goals of my research is to develop a toolbox for the analysis of phase retrieval with Gabor frames. 

My main approach is to translate questions in phase retrieval into questions about geometric properties of the measurement frame. I use these properties to develop a novel method for the stability analysis of phaseless measurement maps and to design an efficient reconstruction algorithm for time-frequency structured measurements.
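For concreteness, here is a minimal sketch of the phaseless measurement map with a finite-dimensional Gabor frame. It only illustrates the measurements and the global phase ambiguity that makes the problem non-convex; it is not the reconstruction algorithm mentioned above, and the dimension, window, and signal are arbitrary.

```python
import numpy as np

def gabor_system(g):
    """Columns are all d^2 time-frequency shifts M_l T_k g of the window g."""
    d = len(g)
    n = np.arange(d)
    cols = []
    for k in range(d):
        shifted = np.roll(g, k)
        for l in range(d):
            cols.append(np.exp(2j * np.pi * l * n / d) * shifted)
    return np.column_stack(cols)

def phaseless_measurements(x, Phi):
    """Phase retrieval measurement map: b_i = |<x, phi_i>|^2."""
    return np.abs(Phi.conj().T @ x) ** 2

rng = np.random.default_rng(1)
d = 8
g = rng.standard_normal(d) + 1j * rng.standard_normal(d)   # generic window
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)   # unknown signal

Phi = gabor_system(g)
b = phaseless_measurements(x, Phi)

# The map is invariant under a global phase: x and exp(i*theta)*x produce
# identical measurements, so reconstruction can only be up to a global phase.
theta = 0.7
b_rotated = phaseless_measurements(np.exp(1j * theta) * x, Phi)
print(np.allclose(b, b_rotated))   # True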

Presentation on "Phase Retrieval with Gabor Frames: Stability and Reconstruction Algorithms" (CWI, 2022)

 

Machine learning with randomization

Randomness is an important element in machine learning. It helps to eliminate inherent biases and to build machine learning models that generalize well. Randomized ML models allow us to reduce the number of learned parameters and to improve run time and memory usage. They also provide the tools needed to understand how ML models learn. This motivates the study of their properties.

RVFL networks (joint with D. Needell, A. Nelson, R. Saab, and O. Schavemaker)

Deep neural networks often have thousands of parameters, which are learned using training algorithms such as back-propagation. Optimizing this many parameters is computationally challenging, and training algorithms often get stuck in local minima and are sensitive to the training data.

To address some of the difficulties associated with deep neural networks, both researchers and practitioners have attempted to randomize part of the parameters. One popular randomization-based neural network architecture is the Random Vector Functional Link (RVFL) network: a single-layer feed-forward neural network (SLFN) in which the input-to-hidden layer weights are selected randomly and independently from a suitable domain, while the remaining hidden-to-output layer weights are learned.
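The following is a minimal numpy sketch of an RVFL network under illustrative choices (tanh activation, uniformly distributed random weights, a direct input-to-output link, and plain least squares for the output weights); it is not meant to reproduce the exact setup studied in the project.

```python
import numpy as np

class RVFL:
    """Minimal Random Vector Functional Link network (a sketch):
    input-to-hidden weights are drawn at random and kept fixed,
    only the hidden-to-output weights are fit by least squares."""

    def __init__(self, n_hidden=200, scale=1.0, direct_link=True, seed=0):
        self.n_hidden = n_hidden
        self.scale = scale               # range of the random weights (illustrative)
        self.direct_link = direct_link   # RVFL also feeds inputs directly to the output
        self.rng = np.random.default_rng(seed)

    def _features(self, X):
        H = np.tanh(X @ self.W + self.b)            # random nonlinear features
        return np.hstack([H, X]) if self.direct_link else H

    def fit(self, X, y):
        d = X.shape[1]
        # random, fixed input-to-hidden weights and biases
        self.W = self.rng.uniform(-self.scale, self.scale, size=(d, self.n_hidden))
        self.b = self.rng.uniform(-self.scale, self.scale, size=self.n_hidden)
        # only the output weights are learned (least squares here)
        self.beta, *_ = np.linalg.lstsq(self._features(X), y, rcond=None)
        return self

    def predict(self, X):
        return self._features(X) @ self.beta

# toy usage: approximate a smooth function on [0, 1]
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(4 * np.pi * X[:, 0])
model = RVFL(n_hidden=300, scale=5.0).fit(X, y)
print("max abs error:", np.max(np.abs(model.predict(X) - y)))
```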

Although RVFL networks are proving their usefulness in practice, the supporting theoretical framework is currently lacking. The most notable result in the existing literature is due to B. Igelnik and Y.H. Pao, who showed that RVFL networks universally approximate continuous functions on compact sets. However, there is a sizable gap between theory and practice. We aim to bridge this gap, bringing the mathematical theory behind RVFL networks into the modern spotlight.

Presentation on "Random vector functional link neural networks as universal approximators."

 
Data processing using non-negative matrix factorization (joint with D. Needell, H. Lyu, M. Perlmutter, and A. Sack)

Dictionary learning is a principal tool in data processing that allows us to obtain efficient data-driven representations. Non-negative matrix factorization provides a powerful mathematical setting for dictionary learning problems: it generates interpretable "additive" features that compose the data much like building blocks. In this project, we use the interpretable data representations obtained using (online) non-negative matrix factorization to solve source separation problems arising in audio and EEG signal processing. In the latter case, the main question we aim to answer is: can ONMF effectively parse event-related neural responses into their underlying neural components?
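As a toy illustration of the idea (using the batch NMF implementation from scikit-learn rather than the online variant studied in the project), the sketch below factors a synthetic "spectrogram" built from two additive sources and recovers a small dictionary together with its activations; all sizes and parameters are arbitrary.

```python
import numpy as np
from sklearn.decomposition import NMF

# Synthetic "magnitude spectrogram": a nonnegative matrix built from
# two additive sources, each with its own spectral template and activation.
rng = np.random.default_rng(0)
freqs, frames = 64, 200
templates = np.abs(rng.standard_normal((freqs, 2)))        # spectral building blocks
activations = np.abs(rng.standard_normal((2, frames)))     # when each source is active
X = templates @ activations + 0.01 * rng.random((freqs, frames))

# NMF factors X into W H with W, H >= 0, so the columns of W act as
# interpretable additive dictionary atoms and the rows of H as their activations.
model = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(X)     # learned dictionary (freqs x 2)
H = model.components_          # activations (2 x frames)

print("relative reconstruction error:",
      np.linalg.norm(X - W @ H) / np.linalg.norm(X))
```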

Presentation on "Online Non-negative Matrix Factorization as a Tool in Data Processing" (1st AIM workshop, 2022)

 

Harmonic and time-frequency analysis on graphs

In many signal processing applications, such as social and economic networks, brain imaging, epidemiology, and traffic networks, high-dimensional data is naturally associated with the vertices of a weighted graph that represents the relations between data units. Since such data should be processed and analyzed taking these relations into account, extending basic operators and signal processing methods to the graph setting is one of the main challenges. I am particularly interested in developing time-frequency analysis in the setting of graph-based signals. Since a time-frequency representation of signals on a graph domain should respect both the underlying graph structure and “classical” properties, I combine algebraic graph theory and harmonic analysis to obtain novel methods for analyzing and processing signals.
As part of this project, we aim to study the uncertainty principle for graph-based signals and its dependence on the underlying graph structure.
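To fix notation, the sketch below computes the graph Fourier transform in the simplest common setting: the Fourier basis is taken to be the eigenbasis of the combinatorial graph Laplacian L = D - A, and its eigenvalues play the role of frequencies. The cycle graph and the test signal are purely illustrative.

```python
import numpy as np

def graph_fourier_basis(A):
    """Graph Fourier basis: eigenvectors of the combinatorial Laplacian
    L = D - A of an undirected weighted graph with adjacency matrix A.
    The eigenvalues play the role of (squared) frequencies."""
    L = np.diag(A.sum(axis=1)) - A
    eigvals, U = np.linalg.eigh(L)     # L is symmetric positive semidefinite
    return eigvals, U

# small example: a cycle graph on 8 vertices
d = 8
A = np.zeros((d, d))
for i in range(d):
    A[i, (i + 1) % d] = A[(i + 1) % d, i] = 1.0

eigvals, U = graph_fourier_basis(A)

# a smooth signal on the graph and its graph Fourier transform
x = np.cos(2 * np.pi * np.arange(d) / d)
x_hat = U.T @ x                        # analysis:  x_hat = U^T x
x_back = U @ x_hat                     # synthesis: x = U x_hat
print("perfect inversion:", np.allclose(x, x_back))
print("graph frequencies:", np.round(eigvals, 3))
```

On the cycle graph this recovers the classical discrete Fourier setting; for general graphs, how notions such as translation, modulation, and the uncertainty principle carry over is precisely what this research direction investigates.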