ISSN : 1598-6721(Print)
ISSN : 2288-0771(Online)
The Korean Society of Manufacturing Process Engineers, Vol. 19, No. 7, pp. 7-15
DOI : https://doi.org/10.14775/ksmpe.2020.19.07.007

Defect Classification of Cross-section of Additive Manufacturing Using Image-Labeling

Jeong-Seong Lee*, Byung-Joo Choi*, Moon-Gu Lee*, Jung-Sub Kim**, Sang-Won Lee**, Yong-Ho Jeon*#
*Department of Mechanical Engineering, Ajou University
**School of Mechanical Engineering, Sungkyunkwan University
#Corresponding Author : princaps@ajou.ac.kr Tel: +82-31-219-3652, Fax: +82-31-219-2528
Received 07/05/2020; Revised 26/05/2020; Accepted 29/05/2020

Abstract


Recently, the fourth industrial revolution has been presented as a new paradigm and additive manufacturing (AM) has become one of the most important topics. For this reason, process monitoring for each cross-sectional layer of additive metal manufacturing is important. In particular, deep learning can train a machine to analyze, optimize, and repair defects. In this paper, image classification is proposed by learning images of defects in metal cross sections using the convolutional neural network (CNN) image labeling algorithm. Defects were classified into three categories: crack, porosity, and hole. To overcome a lack-of-data problem, the amount of learning data was augmented using a data augmentation algorithm. This augmentation algorithm can transform one image into 180 images, increasing the learning accuracy. The number of training and validation images was 25,920 (80 %) and 6,480 (20 %), respectively. An optimized case, combining the fully connected layer, optimizer, and loss function, achieved a model accuracy of 99.7 % and a success rate of 97.8 % for 180 test images. In conclusion, image labeling was successfully performed and it is expected to be applied to automated AM process inspection and repair systems in the future.




    © The Korean Society of Manufacturing Process Engineers. All rights reserved.

    This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

    1. Introduction

    Recently, as the fourth industrial revolution has been presented as a new paradigm, interest in big data analysis, robotics, and artificial intelligence has been on the rise. Therefore, various studies are being conducted on artificial intelligence to achieve rational, human-like thinking[1]. In particular, the possibility of using this enhanced artificial intelligence to improve process efficiency by applying it to data storage, processing, automation, and optimization is being studied. Furthermore, analysis techniques using deep learning algorithms[2], which extract common properties from signals and conduct data-based inference, can be applied to the detection of surface defects[3] and image classification[4]. In addition, data processing speed has improved with the advancement of computing power, accelerating the development of deep learning algorithms[5]. In this paper, the classification of metal cross-sectional defects is performed using the convolutional neural network (CNN) algorithm[4], which is specialized in extracting and classifying image characteristics. Defects are common problems in typical metal additive manufacturing processes such as powder bed fusion (PBF) and direct metal deposition (DMD); therefore, studies have been conducted to solve them as follows.

    Scime et al.[6] detected metal defects during the powder redistribution process based on monitoring images and classified their causes into six categories: recoater hopping, recoater streaking, debris, super-elevation, part damage, and incomplete spreading.

    Additionally, Gaja et al.[7] detected and classified defects in the DMD process, such as cracking and porosity, by analyzing the peak amplitude, rise time, duration, and counts of the acoustic emission (AE) sensor signals.

    When inexperienced people attempt to classify metal defects, it is difficult for them to decide which class to use. Thus, research has been done in other areas to assist human judgment with deep learning algorithms[8]. As a result, metal defects can also be detected more accurately if humans and AI interact closely. Furthermore, if the accuracy of such algorithms is improved, defects can be almost perfectly detected without the use of human resources. Therefore, if defects in the cross sections of metal additive manufacturing can be accurately classified using deep learning, the reliability of the final products can be improved and manpower efficiency increased.

    In this paper, unspecified defects were classified using image labeling based on a deep learning CNN. Defects were labeled as “crack”, “hole”, and “porosity”, and studies were conducted to classify them more accurately. In the process of preparing the training data, a method of augmenting a small amount of data is presented, and the process of converting the images into binary data is described. To improve the accuracy of the algorithm, a case study was performed considering the structure of the fully connected layer, the optimizer, and the loss function. Furthermore, to analyze problems such as overfitting or underfitting, the model accuracy derived from the image labeling algorithm was compared with the accuracy of a classification test using new images.

    2. Training Data

    Before building the learning data, images of various sizes were resized to 16,384 (128 × 128) pixels to increase learning efficiency. Thus, each image had three RGB channels and consisted of 49,152 data values.
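
    As a rough illustration of this preprocessing step, the sketch below resizes a cross-section image to 128 × 128 RGB values; the paper does not name its image library, so Pillow and NumPy are assumed, and the file name is hypothetical.

```python
# Minimal sketch of the resizing step described above (Pillow/NumPy assumed).
import numpy as np
from PIL import Image

def load_cross_section(path, size=(128, 128)):
    """Load a cross-section image, force 3 RGB channels, and resize to 128x128."""
    img = Image.open(path).convert("RGB").resize(size)
    # 128 x 128 x 3 = 49,152 values per image, scaled to [0, 1]
    return np.asarray(img, dtype=np.float32) / 255.0

# Hypothetical file name, for illustration only:
# x = load_cross_section("crack_001.png")   # x.shape == (128, 128, 3)
```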

    To construct a useful deep learning model, both training and validation errors must be reduced. Training large amounts of data in algorithms with many layers yields high labeling performance, but it is difficult to obtain many defect images. Insufficient data reduces classification accuracy by causing overfitting of the classification regression line[9]. Moreover, in terms of obtaining training data, it is difficult to collect enough defect images from metal cross sections. To solve this lack of data, a data augmentation method that increases the amount of training data is applied[10]. Images can be rotated, warped, transformed geometrically and in color, randomly erased, or used for adversarial training to amplify the data. Figure 1 shows an image rotated in 5° steps about its center to create 36 rotated images from one image. Each rotated image is also shifted from the center along the x, -x, y, and -y axes by up to 1 pixel, giving five variants including the reference image, so that a single image yields 180 augmented images.
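
    A minimal sketch of this 36-rotation, five-shift scheme is given below; SciPy's image transforms are assumed, while the 5° step, ±1 pixel shifts, and 180-image count follow the description above.

```python
# Sketch of the 36-rotation x 5-shift augmentation described above (SciPy assumed).
import numpy as np
from scipy.ndimage import rotate, shift

def augment_180(image):
    """Return 180 variants of one 128x128x3 image: 36 rotations x 5 translations."""
    shifts = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]  # center, +/-x, +/-y (1 pixel)
    out = []
    for k in range(36):                                  # 0, 5, ..., 175 degrees
        rot = rotate(image, angle=5 * k, axes=(0, 1), reshape=False, mode="nearest")
        for dy, dx in shifts:
            out.append(shift(rot, (dy, dx, 0), mode="nearest"))
    return np.stack(out)                                 # shape (180, 128, 128, 3)
```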

    Algorithm training not only extracts more information by recognizing the augmented images as different images but also produces an effect equivalent to training with several images[11]. Additionally, the augmented training images are stored as matrix data, the binary data set shown in Figure 2a. This format has been used mainly in the field of computer vision as a way of feeding multiple data into machine learning[12]. The generated binary data contain pixel data and image label data. In conclusion, 25,920 (80 %) and 6,480 (20 %) image data sets were organized for training and validation, respectively.
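
    The packing of pixel and label data into one binary set could look like the sketch below; the paper only states that the set contains pixel data and label data, so NumPy's .npz container is assumed here, and the 0/1/2 label convention follows Section 5.

```python
# Sketch of packing augmented images and integer labels into one binary dataset
# file (NumPy .npz assumed; the label convention 0=crack, 1=porosity, 2=hole
# follows Section 5).
import numpy as np

def save_dataset(images, labels, path="defect_dataset.npz"):
    """images: (N, 128, 128, 3) float32; labels: (N,) integer class indices."""
    np.savez_compressed(path,
                        images=np.asarray(images, dtype=np.float32),
                        labels=np.asarray(labels, dtype=np.int64))

def load_dataset(path="defect_dataset.npz"):
    data = np.load(path)
    return data["images"], data["labels"]
```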

    3. Structure of the CNN Model

    A CNN, which is a type of deep learning tool mainly used for image recognition, extracts image features by repeating convolution and pooling layers. Depending on the purpose, fully connected layers for classification, segmentation, or object detection are then applied[13]. In this paper, an image labeling algorithm that extracts image characteristics and classifies their labels is constructed (Figure 2b). Moreover, to prevent overfitting during algorithm training, dropout layers that randomly discard nodes in the network are added[14]. In addition, a case study was conducted on the convergence speed of training and the accuracy of the labeling algorithm according to the number of nodes in the fully connected layer, the type of optimizer, and the loss function of the algorithm.
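
    The following Keras/TensorFlow sketch illustrates this kind of structure; the framework, filter counts, and dropout rate are assumptions, since the paper does not publish the exact layer configuration.

```python
# Minimal sketch of an image-labeling CNN of the kind described above
# (Keras/TensorFlow and the specific layer sizes are assumptions).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(dense_nodes=512, num_classes=3, output_activation="softmax"):
    model = models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),                    # repeated conv + pooling extract features
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dropout(0.5),                      # dropout layer to reduce overfitting
        layers.Dense(dense_nodes, activation="relu"),
        layers.Dense(num_classes, activation=output_activation),
    ])
    return model
```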

    First, the fully connected layer is structured according to its number of layers and nodes, and information is exchanged between the nodes as interconnected neurons. Each node has a weight and a bias that converge so that the output follows a probability distribution similar to that of the training data. Therefore, a larger fully connected layer gives better learning accuracy but requires more learning time[15].

    Second, the loss function can be divided into cross-entropy and mean square error, with the former being used mainly because it converges faster than the latter[16]. Within cross-entropy, binary cross-entropy performs binary classification and categorical cross-entropy performs multi-class classification. The major difference between the two is the activation function: binary cross-entropy uses the sigmoid function shown in Equation (1), and categorical cross-entropy uses the softmax function in Equation (2).

    f(x) = \frac{1}{1 + e^{-x}}    (1)

    f(x_i) = \frac{e^{x_i}}{\sum_{j} e^{x_j}}    (2)
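
    A small NumPy example of the two activation functions, evaluated on arbitrary example logits, is given below.

```python
# Worked example of the activations in Equations (1) and (2), in plain NumPy.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))            # Equation (1)

def softmax(x):
    e = np.exp(x - np.max(x))                  # subtract max for numerical stability
    return e / e.sum()                         # Equation (2)

logits = np.array([2.0, 1.0, 0.1])             # arbitrary example values
print(sigmoid(logits))                         # independent per-label probabilities
print(softmax(logits))                         # probabilities over three labels, summing to 1
```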

    Third, in the gradient descent method, which is used to find the global minimum of the loss, the speed and direction of the optimizer are very important. Representative examples are RMSProp, which has a fast convergence speed, and Adam, which is the most widely used because of its fast convergence speed and stable update direction[17].

    The cases compare fully connected layer sizes of 32, 128, and 512 nodes, together with the loss function (binary or categorical cross-entropy) and the optimizer (RMSProp or Adam), as shown in Table 1.
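
    The case-study configurations could be assembled as in the sketch below, which reuses the build_model sketch from Section 3; the exact pairings of Table 1 are not reproduced, and the sigmoid/softmax pairing follows the loss-function discussion above.

```python
# Sketch of how a case-study configuration might be compiled; build_model refers
# to the Section 3 sketch above, and Table 1's exact pairings are not reproduced.
dense_options = [32, 128, 512]
loss_options = ["binary_crossentropy", "categorical_crossentropy"]
optimizer_options = ["rmsprop", "adam"]

def compile_case(dense_nodes, loss, optimizer):
    # sigmoid pairs with binary cross-entropy, softmax with categorical (Section 3)
    activation = "sigmoid" if loss == "binary_crossentropy" else "softmax"
    model = build_model(dense_nodes=dense_nodes, output_activation=activation)
    model.compile(optimizer=optimizer, loss=loss, metrics=["accuracy"])
    return model

# Example: a Case-5-like configuration (512 nodes, binary cross-entropy, Adam)
# case5 = compile_case(512, "binary_crossentropy", "adam")
```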

    4. Training and Testing

    4.1 Algorithm Training

    Previous research on CNNs reported that significant learning time (hours to dozens of hours) is required to train image data[18]. Although high-performance hardware was required to process the data from 32,400 images in this study, the computation time was reduced to minutes by using the Python 3 GPU accelerator of Google Colab (Open Cloud, Google). The image labeling algorithms configured for each case were trained and validated for 20 epochs with 25,920 (80 %) and 6,480 (20 %) images, respectively.
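
    A training call consistent with this description might look like the sketch below, reusing the earlier load_dataset and compile_case sketches; the batch size is an assumption, as the paper does not state one.

```python
# Sketch of the training step: 20 epochs with an 80 % / 20 % split
# (load_dataset and compile_case refer to the earlier sketches; batch size assumed).
import tensorflow as tf

images, labels = load_dataset()                              # 32,400 images in total
labels_onehot = tf.keras.utils.to_categorical(labels, num_classes=3)

model = compile_case(512, "binary_crossentropy", "adam")     # Case-5-like setup
history = model.fit(images, labels_onehot,
                    validation_split=0.2,                    # 25,920 train / 6,480 validation
                    epochs=20, batch_size=64)
```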

    Convergence was also assessed according to the type of optimizer. Figures 3a and 3b show the results of the Adam and RMSProp optimizers. Briefly, Adam combines the advantages of the AdaGrad and RMSProp optimizers; because Adam’s step size is not affected by rescaling of the gradient, any objective function can converge stably. As a result, Adam and RMSProp showed nearly the same convergence, but Adam fluctuated less during the convergence process[19]. The training time in all cases was nearly the same, taking between 6 and 7 seconds per epoch on the Google Colab GPU. The model accuracy of all cases was derived as shown in Table 2. Table 2 shows that Case 5, which consists of a 512-node fully connected layer and uses the binary cross-entropy loss function together with the Adam optimizer, has the highest model accuracy of 99.7 %.

    4.2 Training Model Performance Test

    From a variance and bias point of view, the accuracy of the image labeling model needs to be checked for underfitting or overfitting. Thus, an accuracy test was performed using 60 new input images showing crack, porosity, and hole defects. In the analysis of Case 5, the test accuracy can be determined as shown in Figure 4. For the three labels, labeling is considered successful if the predicted probability exceeds 50 %; if this criterion is not satisfied, the labeling fails. Accordingly, the test accuracy of each case is given in Table 2, and the test accuracy of each class is shown in Figure 5. Case 1 shows a higher model accuracy than Case 2, but the accuracy test confirmed that Case 2 was more accurate. When the probability determined by the algorithm is higher, the actual performance tends to be better. The model accuracy can be overestimated with respect to the actual accuracy owing to problems such as overfitting in the learning and validation process. Thus, the test accuracy was measured to check it against the model accuracy. As a result, the test accuracy of Case 5 was 98 % compared with its model accuracy of 99.7 %, and it was selected as the best labeling algorithm.
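
    The 50 % pass/fail criterion can be expressed as in the following sketch, with illustrative variable names.

```python
# Sketch of the pass/fail criterion for the test images: labeling counts as a
# success when the predicted probability of the true class exceeds 50 %
# (variable names are illustrative only).
import numpy as np

def test_accuracy(model, test_images, test_labels):
    probs = model.predict(test_images)                      # (N, 3) label probabilities
    success = probs[np.arange(len(test_labels)), test_labels] > 0.5
    return success.mean()

# e.g. test_accuracy(case5_model, new_images, new_labels) gave about 0.98 for Case 5
```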

    5. Result and Discussion

    For the trained model of Case 5, the images in Figures 6a, 6b, and 6c were introduced and labeled as shown in Figure 7. A blurry image was classified accurately as long as its resolution retained the image features, as shown in Figure 7c. Consequently, the numerical classification results 0, 1, and 2 were obtained for the label data, corresponding to the “crack”, “porosity”, and “hole” labels, and an additional solution was introduced to suggest the following process step for each class. If this is applied to an automated process, the system can detect defects and resolve them independently. In other words, it can support the detection and classification of defects that currently depend on human judgment, or it can perform them independently.
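
    The class-to-label mapping and the follow-up suggestion could be organized as in the sketch below; the 0/1/2 mapping follows the text above, while the specific repair actions are hypothetical placeholders, not the paper's actual post-processes.

```python
# Sketch of the class-to-label mapping and follow-up suggestion step; the
# suggested actions are hypothetical placeholders for illustration only.
import numpy as np

CLASS_NAMES = {0: "crack", 1: "porosity", 2: "hole"}
FOLLOW_UP = {                       # hypothetical repair suggestions
    "crack": "re-melt the affected region",
    "porosity": "adjust energy density and re-scan the layer",
    "hole": "deposit additional material and re-scan",
}

def label_and_suggest(model, image):
    """Classify one 128x128x3 image and return its label and a suggested next step."""
    probs = model.predict(image[np.newaxis, ...])[0]
    label = CLASS_NAMES[int(np.argmax(probs))]
    return label, FOLLOW_UP[label]
```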

    Additionally, in this paper, a numerical basis was needed to determine whether the algorithm was learning correctly. Thus, the reliability of the learning results was verified by labeling new images and comparing the resulting test accuracy with the model accuracy. The advantage of the image labeling CNN algorithm is that it allows an easy classification of different types of defects by learning the characteristic features of the images. However, if multiple defects appear in one image, it detects only one type of defect, the one with the most dominant characteristics. In other words, when the representative defect is what matters in the processing system, image labeling is appropriate. However, to detect individual defects in detail, multi-object detection models such as YOLOv3 or RetinaNet could detect all of them. Multi-object detection has the disadvantage that defect bounding boxes must be annotated manually to produce the training data, but it has the advantage of being able to find and label all the defects in an image[20].

    6. Conclusion

    In this paper, to detect and repair defects in a layer-by-layer process such as AM, defects in cross-section images were classified and automatic restoration was suggested using the CNN algorithm. To increase model accuracy, large amounts of training data were generated from a small number of images using the data augmentation method, and the CNN image labeling algorithm was trained with the augmented training data. Afterward, to find the optimal configuration of the labeling algorithm, a case study was performed by combining the size of the fully connected layer, the loss function, and the optimizer. As a result, the optimal model in the case study achieved a model accuracy of 99.7 %, with the trained algorithm converging to a probability distribution close to that of the training data.

    However, as the learning model may have an overfitting or underfitting problem, its actual accuracy must also be evaluated. In each case, 60 new images were introduced and the actual accuracy was tested. Comparing Cases 1 and 2, the ranking by test accuracy was reversed relative to the ranking by model accuracy. Therefore, testing is essential for estimating the algorithm's performance more precisely. As a result, Case 5 was verified to have the highest accuracy; in the other cases, images were misjudged depending on the optimizer or an insufficient number of nodes in the fully connected layer. It was also confirmed that binary cross-entropy can distinguish between classes with common features, such as porosity and hole, which have similar circularity.

    Finally, labeling and post-processing suggestions were demonstrated with the most accurate model, Case 5, and real defects were estimated by it. As a result, it was observed that the CNN algorithm can classify defects and suggest post-processes according to the defect type. Although a numerical judgment standard for the algorithm was provided by the test accuracy, the model could have been judged more accurately if the training process of the model algorithm had been visualized. However, it was difficult to visualize the learning process of the algorithm model in real time. Therefore, in future work, a judgment standard for the algorithm model will be proposed by visualizing the training steps of the algorithm.

    In summary, this study made it possible to classify defects based on images and confirmed that post-processing can be performed continuously if a solution is introduced according to the class of the defect. If the image labeling algorithm is applied to an automated process in the future, the system is expected to find defects and perform the follow-up process by itself, consequently improving the quality of the final result.

    Acknowledgement

    This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2019R1A6A3A01 096831).

    Figure

    Fig. 1 Image augmentation by rotating from 0 to 180 degrees
    Fig. 2 Configuration of image labeling algorithm
    Fig. 3 Learning curves by loss reduction
    Fig. 4 Test accuracy of Case 5
    Fig. 5 Comparison between model accuracy and test accuracy for all cases
    Fig. 6 Input images for testing
    Fig. 7 Image labeling results using the Case 5 algorithm

    Table

    Table 1 List of algorithm configurations
    Table 2 Comparison between model accuracy and test accuracy

    Reference

    1. Liu, Y. K. and Xu, X., "Industry 4.0 and Cloud Manufacturing: A Comparative Analysis," Journal of Manufacturing Science and Engineering, Vol. 139, No. 3, 2016.
    2. Arel, I., Rose, D. C. and Karnowski, T. P., "Deep Machine Learning-A New Frontier in Artificial Intelligence Research," IEEE Computational Intelligence Magazine, Vol. 5, No. 4, pp. 13-18, 2010.
    3. Lee, M. K. and Seo, K. S., "Comparison of Region-based CNN Methods for Defects Detection on Metal Surface," The Transactions of the Korean Institute of Electrical Engineers (KIEE), Vol. 67, No. 7, pp. 865-870, 2018.
    4. Ahn, J. H., "Multi-class Image Classification Techniques Based on CNN," Master's Thesis, Hanyang University, Republic of Korea, 2018.
    5. Cui, H., Zhang, H., Ganger, G. R., Gibbons, P. B. and Xing, E. P., "GeePS: Scalable Deep Learning on Distributed GPUs with a GPU-specialized Parameter Server," Proceedings of the Eleventh European Conference on Computer Systems, No. 4, pp. 1-16, 2016.
    6. Scime, L. and Beuth, J., "A Multi-scale Convolutional Neural Network for Autonomous Anomaly Detection and Classification in a Laser Powder Bed Fusion Additive Manufacturing Process," Additive Manufacturing, Vol. 24, pp. 273-286, 2018.
    7. Gaja, H. and Liou, F., "Defect Classification of Laser Metal Deposition Using Logistic Regression and Artificial Neural Networks for Pattern Recognition," The International Journal of Advanced Manufacturing Technology, Vol. 94, pp. 315-326, 2017.
    8. Choi, E., "Doctor AI: Interpretable Deep Learning for Modeling Electronic Health Records," Retrieved May 23, 2018, from http://hdl.handle.net/1853/60226, 2018.
    9. Kim, J. H., "Feedback-supervised Learning with A Small Dataset," Master's Thesis, Hanyang University, Republic of Korea, 2019.
    10. Shorten, C. and Khoshgoftaar, T. M., "A Survey on Image Data Augmentation for Deep Learning," Journal of Big Data, Vol. 6, No. 60, pp. 1-48, 2019.
    11. Krizhevsky, A., Sutskever, I. and Hinton, G. E., "ImageNet Classification with Deep Convolutional Neural Networks," Communications of the ACM, Vol. 60, No. 6, pp. 84-90, 2017.
    12. Han, K. S., Lim, J. H. and Im, E. G., "Malware Analysis Method Using Visualization of Binary Files," Proceedings of the 2013 Research in Adaptive and Convergent Systems, pp. 317-321, 2013.
    13. Kim, J. B. and Seo, K. S., "Performance Analysis of Data Augmentation for Surface Defects Detection," The Transactions of the Korean Institute of Electrical Engineers P (KIEE P), Vol. 67, No. 5, pp. 669-674, 2018.
    14. Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I. and Salakhutdinov, R., "Dropout: A Simple Way to Prevent Neural Networks from Overfitting," Journal of Machine Learning Research, Vol. 15, pp. 1929-1958, 2014.
    15. Mazumdar, M., Sarasvathi, V. and Kumar, A., "Object Recognition in Videos by Sequential Frame Extraction Using Convolutional Neural Networks and Fully Connected Neural Networks," International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS), pp. 1485-1488, 2017.
    16. Hampshire, J. B. and Waibel, A. H., "A Novel Objective Function for Improved Phoneme Recognition Using Time-Delay Neural Networks," IEEE Transactions on Neural Networks, Vol. 1, No. 2, pp. 216-228, 1990.
    17. Bello, I., Zoph, B., Vasudevan, V. and Le, Q. V., "Neural Optimizer Search with Reinforcement Learning," Proceedings of the 34th International Conference on Machine Learning, Vol. 70, pp. 459-468, 2017.
    18. Dupre, R., Fajtl, J. and Argyriou, V., "Improving Dataset Volumes and Model Accuracy With Semi-Supervised Iterative Self-Learning," IEEE Transactions on Image Processing, Vol. 29, pp. 4337-4348, 2020.
    19. Kingma, D. P. and Ba, J. L., "Adam: A Method for Stochastic Optimization," Proceedings of the 3rd International Conference on Learning Representations (ICLR), pp. 1-15, 2015.
    20. Tian, Y., Yang, G., Wang, Z., Wang, H., Li, E. and Liang, Z., "Apple Detection During Different Growth Stages in Orchards Using the Improved YOLO-V3 Model," Computers and Electronics in Agriculture, Vol. 157, pp. 417-426, 2019.