Please use this identifier to cite or link to this item:
http://hdl.handle.net/20.500.11861/10693
Title: | Classification of oil painting using machine learning with visualized depth information |
Authors: | Kim, J.; Jun, Ji Young; Hong, M.; Shim, Hyeseung; Ahn, Jaehong |
Issue Date: | 2019 |
Publisher: | ISPRS |
Source: | Kim, J., Jun, J. Y., Hong, M., Shim, H., & Ahn, J. (2019). Classification of oil painting using machine learning with visualized depth information. In Gonzalez-Aguilera, D., Remondino, F., Toschi, I., Rodriguez-Gonzalvez, P., & Stathopoulou, E. (Eds.), 27th CIPA International Symposium "Documenting the Past for a Better Future" (Volume XLII-2/W15), Ávila, Spain (pp. 617-623). ISPRS. |
Conference: | 27th CIPA International Symposium |
Abstract: | In the past few decades, a number of scholars have studied painting classification based on image processing and computer vision technologies. As machine learning technology has rapidly developed, painting classification using machine learning has also been carried out. However, because photographs lack information about brushstrokes, typical models cannot exploit the more precise information contained in a painter's style. We hypothesized that visualized depth information of brushstrokes is effective in improving the accuracy of machine learning models for painting classification. This study proposes a new data utilization approach in machine learning with Reflectance Transformation Imaging (RTI) images, which maximize the visualization of the three-dimensional shape of brushstrokes. An artist's unique brushstrokes can be revealed in RTI images but are difficult to capture in regular photographs. If these new types of images are used as training data for a machine learning model, classification can draw on not only the shape and color but also the depth information. We used Convolutional Neural Networks (CNNs), models optimized for image classification, with the VGG-16, ResNet-50, and DenseNet-121 architectures. We conducted a two-stage experiment using the works of two Korean artists. In the first experiment, we captured key parts of the paintings as both RTI data and photographic data. In the second experiment, on the second artist's work, a larger quantity of data was acquired and the whole artwork was captured. The results showed that the RTI-trained models achieved higher accuracy than the non-RTI-trained models. In this paper, we propose a method that uses machine learning and RTI technology to analyze and classify paintings more precisely, verifying our hypothesis. |
Type: | Conference Paper |
URI: | http://hdl.handle.net/20.500.11861/10693 |
DOI: | 10.5194/isprs-archives-XLII-2-W15-617-2019 |
Appears in Collections: | Sociology - Publication |
Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.