Transfer Learning Allows Accurate RBP Target Site Prediction with Limited Sample Sizes


Notice

This publication does not fall under the Institute of Computer Science but under the Faculty of Science. The official publication page is on the muni.cz website.
Authors

VACULÍK Ondřej, CHALUPOVÁ Eliška, GREŠOVÁ Katarína, MAJTNER Tomáš, ALEXIOU Panagiotis

Year of publication: 2023
Type: Article in a peer-reviewed journal
Journal / Source: Biology
MU Faculty / Unit: Faculty of Science

Citation
www: https://www.mdpi.com/2079-7737/12/10/1276
DOI: http://dx.doi.org/10.3390/biology12101276
Keywords: RNA-binding protein; CLIP-seq; deep learning; transfer learning; interpretation
Description: RNA-binding proteins are vital regulators in numerous biological processes. Their dysfunction can result in diverse diseases, such as cancer or neurodegenerative disorders, making the prediction of their binding sites highly important. Deep learning (DL) has brought about a revolution in various biological domains, including the field of protein–RNA interactions. Nonetheless, several challenges persist, such as the limited availability of experimentally validated binding sites to train well-performing DL models for the majority of proteins. Here, we present a novel training approach based on transfer learning (TL) to address the issue of limited data. Employing a sophisticated and interpretable architecture, we compare the performance of our method trained using two distinct approaches: training from scratch (SCR) and utilizing TL. We also benchmark our results against current state-of-the-art methods and tackle the challenges associated with selecting appropriate input features and determining optimal interval sizes. Our results show that TL enhances model performance, particularly on datasets with minimal training data, where satisfactory results can be achieved with just a few hundred RNA binding sites. Moreover, we demonstrate that integrating both sequence and evolutionary conservation information leads to superior performance. Finally, we showcase how incorporating an attention layer into the model facilitates the interpretation of predictions within a biologically relevant context.
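The description outlines a transfer-learning workflow: a model is first trained on a data-rich protein and its learned feature extractor is then reused for a protein with only a few hundred validated binding sites, with one-hot-encoded sequence combined with a conservation track as input. The sketch below is not the authors' published code; it is a minimal illustration of that general idea under stated assumptions (TensorFlow/Keras, an illustrative 150-nt window, and randomly generated placeholder arrays standing in for encoded binding-site data).

```python
# Minimal transfer-learning sketch (assumptions: Keras; placeholder data; not the paper's code).
# Input windows are encoded as (length, 5): 4 one-hot sequence channels + 1 conservation track.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

WINDOW = 150  # interval size around the binding site (illustrative value)

def build_model():
    # Small convolutional classifier over sequence + conservation channels.
    inputs = keras.Input(shape=(WINDOW, 5))
    x = layers.Conv1D(64, 12, activation="relu", name="conv1")(inputs)
    x = layers.MaxPooling1D(4)(x)
    x = layers.Conv1D(64, 6, activation="relu", name="conv2")(x)
    x = layers.GlobalMaxPooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)  # bound vs. unbound site
    return keras.Model(inputs, outputs)

# 1) "Source" model: trained from scratch on a data-rich protein (placeholder arrays).
x_large = np.random.rand(5000, WINDOW, 5).astype("float32")
y_large = np.random.randint(0, 2, size=(5000, 1))
source = build_model()
source.compile(optimizer="adam", loss="binary_crossentropy", metrics=[keras.metrics.AUC()])
source.fit(x_large, y_large, epochs=2, batch_size=128, verbose=0)

# 2) Transfer: reuse the convolutional feature extractor and re-train only the head
#    on a protein with only a few hundred validated binding sites.
x_small = np.random.rand(300, WINDOW, 5).astype("float32")
y_small = np.random.randint(0, 2, size=(300, 1))
target = build_model()
target.set_weights(source.get_weights())      # start from the pretrained weights
for layer in target.layers:
    if layer.name in ("conv1", "conv2"):
        layer.trainable = False               # freeze the low-level motif detectors
target.compile(optimizer=keras.optimizers.Adam(1e-4),
               loss="binary_crossentropy", metrics=[keras.metrics.AUC()])
target.fit(x_small, y_small, epochs=10, batch_size=32, verbose=0)
```

Whether the convolutional layers are frozen or merely initialized from the source model and fine-tuned at a lower learning rate is a design choice that depends on how much target data is available; the sketch shows the frozen-feature-extractor variant only as one common option.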
Related projects:
