Please use this identifier to cite or link to this item:
http://hdl.handle.net/20.500.11861/7408
Title: | Classical scoring functions for docking are unable to exploit large volumes of structural and interaction data |
Authors: | Li, Hong-Jian; Peng, Jiangjun; Sidorov, Pavel; Leung, Yee; Prof. LEUNG Kwong Sak; Wong, Man-Hon; Lu, Gang; Ballester, Pedro J. |
Issue Date: | 2019 |
Source: | Bioinformatics, October 2019, vol. 35 (20), pp. 3989–3995 |
Journal: | Bioinformatics |
Abstract: | Motivation: Studies have shown that the accuracy of random forest (RF)-based scoring functions (SFs), such as RF-Score-v3, increases with more training samples, whereas that of classical SFs, such as X-Score, does not. Nevertheless, the impact of the similarity between training and test samples on this matter has not been studied in a systematic manner. It is therefore unclear how these SFs would perform when trained only on protein-ligand complexes that are highly dissimilar or highly similar to the test set. It is also unclear whether SFs based on machine-learning algorithms other than RF can also improve in accuracy with increasing training set size, and to what extent they learn from dissimilar or similar training complexes. Results: We present a systematic study of how the accuracy of classical and machine-learning SFs varies with the similarity of protein-ligand complexes between training and test sets. We considered three types of similarity metrics, based on the comparison of protein structures, protein sequences or ligand structures. Regardless of the similarity metric, we found that incorporating a larger proportion of similar complexes into the training set did not make classical SFs more accurate. In contrast, RF-Score-v3 was able to outperform X-Score even when trained on just 32% of the most dissimilar complexes, showing that its superior performance owes considerably to learning from training complexes dissimilar to those in the test set. In addition, we generated the first SF employing Extreme Gradient Boosting (XGBoost), XGB-Score, and observed that it also improves with training set size while outperforming the rest of the SFs. Given the continuous growth of training datasets, the development of machine-learning SFs has become very appealing. |
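The abstract's core protocol is to restrict the training set to complexes below a chosen similarity cutoff to the test set (e.g. 32% most dissimilar) before fitting each SF. A minimal sketch of that filtering step is shown below; the function name, complex identifiers and similarity values are hypothetical, and a real implementation would compute similarities from protein structures, protein sequences or ligand fingerprints as the paper describes.

```python
# Illustrative sketch (not the paper's code): keep only training
# complexes whose maximum similarity to any test complex is below
# a cutoff, mimicking the "dissimilar training set" protocol.

def filter_dissimilar(train_ids, test_ids, similarity, cutoff):
    """Return training complexes whose maximum similarity to any
    test complex (e.g. ligand Tanimoto) is below `cutoff`."""
    kept = []
    for t in train_ids:
        max_sim = max(similarity[(t, s)] for s in test_ids)
        if max_sim < cutoff:
            kept.append(t)
    return kept

# Toy pairwise similarities between training and test complexes
# (all identifiers and values are made up for illustration).
similarity = {
    ("1abc", "9xyz"): 0.15, ("1abc", "8pqr"): 0.20,
    ("2def", "9xyz"): 0.90, ("2def", "8pqr"): 0.35,
    ("3ghi", "9xyz"): 0.10, ("3ghi", "8pqr"): 0.05,
}
train = ["1abc", "2def", "3ghi"]
test = ["9xyz", "8pqr"]

print(filter_dissimilar(train, test, similarity, cutoff=0.5))
# → ['1abc', '3ghi']
```

The retained subset would then be used to train each SF (classical or machine-learning) before evaluating on the untouched test set, so any accuracy gain cannot come from near-duplicate complexes.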
Type: | Peer Reviewed Journal Article |
URI: | http://hdl.handle.net/20.500.11861/7408 |
ISSN: | 1367-4811 |
DOI: | 10.1093/bioinformatics/btz183 |
Appears in Collections: | Applied Data Science - Publication |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.