Please use this identifier to cite or link to this item: http://hdl.handle.net/20.500.11861/7408
DC Field | Value | Language
dc.contributor.author | Li, Hong-Jian | en_US
dc.contributor.author | Peng, Jiangjun | en_US
dc.contributor.author | Sidorov, Pavel | en_US
dc.contributor.author | Leung, Yee | en_US
dc.contributor.author | Leung, Kwong-Sak | en_US
dc.contributor.author | Wong, Man-Hon | en_US
dc.contributor.author | Lu, Gang | en_US
dc.contributor.author | Ballester, Pedro J | en_US
dc.date.accessioned | 2023-02-22T06:12:33Z | -
dc.date.available | 2023-02-22T06:12:33Z | -
dc.date.issued | 2019 | -
dc.identifier.citation | Bioinformatics, October 2019, vol. 35 (20), pp. 3989–3995 | en_US
dc.identifier.issn | 1367-4811 | -
dc.identifier.uri | http://hdl.handle.net/20.500.11861/7408 | -
dc.description.abstract | Motivation: Studies have shown that the accuracy of random forest (RF)-based scoring functions (SFs), such as RF-Score-v3, increases with more training samples, whereas that of classical SFs, such as X-Score, does not. Nevertheless, the impact of the similarity between training and test samples on this matter has not been studied in a systematic manner. It is therefore unclear how these SFs would perform when trained only on protein-ligand complexes that are highly dissimilar or highly similar to the test set. It is also unclear whether SFs based on machine-learning algorithms other than RF can also improve accuracy with increasing training set size, and to what extent they learn from dissimilar or similar training complexes. Results: We present a systematic study investigating how the accuracy of classical and machine-learning SFs varies with the similarity of protein-ligand complexes between training and test sets. We considered three types of similarity metrics, based on the comparison of either protein structures, protein sequences or ligand structures. Regardless of the similarity metric, we found that incorporating a larger proportion of similar complexes into the training set did not make classical SFs more accurate. In contrast, RF-Score-v3 was able to outperform X-Score even when trained on just the 32% most dissimilar complexes, showing that its superior performance owes considerably to learning from training complexes that are dissimilar to those in the test set. In addition, we generated the first SF employing Extreme Gradient Boosting (XGBoost), XGB-Score, and observed that it also improves with training set size while outperforming the other SFs. Given the continuous growth of training datasets, the development of machine-learning SFs has become very appealing. | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Bioinformatics | en_US
dc.title | Classical scoring functions for docking are unable to exploit large volumes of structural and interaction data | en_US
dc.type | Peer Reviewed Journal Article | en_US
dc.identifier.doi | 10.1093/bioinformatics/btz183 | -
item.fulltext | No Fulltext | -
crisitem.author.dept | Department of Applied Data Science | -
Appears in Collections: Applied Data Science - Publication
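The abstract above outlines a similarity-aware training protocol (filtering training complexes by their similarity to the test set) and an XGBoost-based scoring function, XGB-Score. The following is a minimal, hypothetical sketch of that idea, not the authors' implementation: it assumes precomputed per-complex feature arrays (X_train, X_test), binding-affinity labels (y_train, y_test) and ligand SMILES strings, and it uses Morgan-fingerprint Tanimoto similarity as the ligand-structure metric; all function names and hyperparameters are illustrative.

```python
# Hedged sketch (not the paper's code): filter the training set by ligand
# similarity to the test set, then fit an XGBoost regressor as a scoring
# function. Feature vectors, labels and hyperparameters are assumed.
import numpy as np
from scipy.stats import pearsonr
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem
from xgboost import XGBRegressor


def ligand_fingerprints(smiles_list, radius=2, n_bits=2048):
    """Morgan fingerprints as a stand-in ligand-structure descriptor."""
    mols = [Chem.MolFromSmiles(s) for s in smiles_list]
    return [AllChem.GetMorganFingerprintAsBitVect(m, radius, nBits=n_bits) for m in mols]


def max_similarity_to_test(train_fps, test_fps):
    """Highest Tanimoto similarity of each training ligand to any test ligand."""
    return np.array([max(DataStructs.BulkTanimotoSimilarity(fp, list(test_fps)))
                     for fp in train_fps])


def most_dissimilar_subset(X_train, y_train, similarities, keep_fraction=0.32):
    """Keep only the most dissimilar fraction of training complexes
    (cf. the 32% setting discussed in the abstract)."""
    order = np.argsort(similarities)          # ascending: most dissimilar first
    keep = order[: int(len(order) * keep_fraction)]
    return X_train[keep], y_train[keep]


def train_xgb_scoring_function(X_train, y_train):
    """Gradient-boosted regression trees on per-complex descriptors
    (e.g. RF-Score-style contact counts); hyperparameters are placeholders."""
    model = XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05,
                         subsample=0.8, colsample_bytree=0.8, random_state=0)
    model.fit(X_train, y_train)
    return model


def evaluate(model, X_test, y_test):
    """Pearson correlation between predicted and measured binding affinities."""
    r, _ = pearsonr(model.predict(X_test), y_test)
    return float(r)
```

Protein-structure and protein-sequence similarity, the other two metrics considered in the paper, would require different tooling (e.g. structural alignment or sequence-identity calculations) and are not sketched here.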

SCOPUS™ Citations: 72 (checked on Dec 15, 2024)
Page view(s): 60; Last Week: 0 (checked on Dec 20, 2024)

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.