ISSN : 1598-6721(Print)
ISSN : 2288-0771(Online)
The Korean Society of Manufacturing Process Engineers Vol.21 No.4 pp.60-69
DOI : https://doi.org/10.14775/ksmpe.2022.21.04.060

3D Vision Implementation for Robotic Handling System of Automotive Parts

Ji Hun Nam*, Won Ock Yang**, Su Hyeon Park**, Nam Guk Kim***, Chul Ki Song****, Ho Seong Lee**#
*Dellics Co.
**Department of Mechanical Convergence Engineering, Gyeongsang National University
***R&D Center, Cybernetics Imaging Systems
****School of Mechanical Engineering, ERI, Gyeongsang National University
#Corresponding Author : hoslee@gnu.ac.kr Tel: +82-55-250-7301, Fax: +82-55-250-7399
Received 01/03/2022; Revised 03/03/2022; Accepted 11/03/2022

Abstract


To keep pace with Industry 4.0, it is imperative for companies to redesign their working environments by adopting robotic automation systems. Automation lines employ the latest cutting-edge technologies, such as 3D vision and industrial robots, to outdo competitors by reducing costs. Given the nature of the manufacturing industry, a time-saving workflow and smooth linkage between processes are vital. At Dellics, productivity could be raised with only a few improvements to the working process, without installing any additional new equipment in the automation lines. Three requirements are the development of gripping technology using a 3D vision system to recognize the shape and location of materials, research on lighting projectors suited to long distances and high illumination, and testing of algorithms and software to improve measurement accuracy and identify products. With the functional requisites mentioned above, improved robotic automation systems should provide an improved working environment and maximize overall production efficiency. In this article, the ways in which such a system can become the groundwork for establishing an unmanned working infrastructure are discussed.



    © The Korean Society of Manufacturing Process Engineers. All rights reserved.

    This is an Open-Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.

    1. Introduction

    Bin-picking technology that adopts 3D vision scanning recognizes the position information of materials. Vision tools help robots identify the item to be manipulated and its orientation in the working area, as well as the region of space where items should be placed after manipulation[1]. Automating this procedure allows the company to maintain a seamless flow of nonstop operation, which in turn increases overall productivity, reduces costs, and improves working conditions. Fig. 1 below shows the manufactured products: Kappa cylinder block (a), G15 DTF head (b), oil pan (c), and spindle (d), all components used to build engines for commercial automobiles and heavy equipment.

    Before 3D vision was applied to the automation line, workers on the shop floor had to manually load the materials onto the slotting belt lines. If the materials are too heavy, air hoists are used to help transport them.

    This paper explores how 3D vision, robot operations, and material loading stations can be connected to complete the already established automation lines, improving productivity and reducing facility downtime.

    • 1) Installation of 3D vision will replace the manual labor used to load materials into the automation lines.

    • 2) Through scanning technology, 3D vision will serve as the eyes of the robot operations to execute bin-picking methods.

    • 3) Algorithms and binarization methods will perform object region detection to operate the robot hands and grippers.

    These solutions facilitate quality control in production lines, improve process efficiency and productivity, lower manufacturing costs for higher profit margins, and help respond better to customer requests[2].

    2. 3D Vision System

    2.1 3D Vision System for Automation

    To facilitate the robotic automation system, we have developed hand grippers and introduced the 3D vision system into the plant, both of which are essential for the automation of material handling.

    Detailed information on this process is as follows:

    • 1) The repetitive positioning accuracy of the robot is targeted at within 0.11 mm, and its core procedure is to maintain the horizontal level of the table and bed.

    • 2) Product perpendicularity is targeted at within 0.5 mm, and its core procedure is to adopt either the contact method of the robot gripper or the practical use of a scale sensor.

    • 3) Two robot grippers are used instead of one, and rotary motion is incorporated.

    • 4) The scanning scope is targeted at 300 mm or more, in conjunction with the camera and scanning technology; a minimal configuration sketch of these targets follows this list.
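    As a sketch only, the targets above could be recorded and checked as follows; the field names and check logic are assumptions for illustration, not the plant's actual configuration format.

```python
from dataclasses import dataclass

@dataclass
class HandlingRequirements:
    """Target values from the list above; a hypothetical structure."""
    repeatability_mm: float = 0.11     # robot positioning repeatability
    perpendicularity_mm: float = 0.5   # product perpendicularity
    gripper_count: int = 2             # two grippers with rotary motion
    scan_range_mm: float = 300.0       # minimum scanning scope

    def check(self, measured_repeat_mm, measured_perp_mm):
        """Return True if measured deviations meet the targets."""
        return (measured_repeat_mm <= self.repeatability_mm
                and measured_perp_mm <= self.perpendicularity_mm)

# Example: 0.08 mm repeatability and 0.4 mm perpendicularity pass the targets.
print(HandlingRequirements().check(0.08, 0.4))   # True
```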

    While carrying out the manufacturing task using industrial robots is the main purpose, the more critical aspect is that the precision of the sensors built into the robot gripper is directly related to the safety of the workforce. Sensors are one of the key smart-factory components adopted in this project. Conventional 2D vision systems have a serious limitation, since they require an operator to load parts on a flat surface using slip sheets or pallets. To cope with these challenges, a 3D vision camera has been used for locating and feeding the materials. The core of this technology is the bin-picking process, which uses 3D recognition to feed back the position information of the materials. The robotic system with 3D vision recognizes randomly positioned materials, selects the target part, picks up the part, and relocates it to the desired location. In other words, it is a system that gives eyes to the robots. Such eyes may show slower image recognition depending on the condition of the parts and the lighting situation. From a socio-technical perspective, the three aspects of organization, human, and technology are closely intertwined to maximize the efficiency of a smart factory[3].

    To generate a better result, the Sauvola binarization algorithm is used to calculate the local mean and standard deviation, which has improved real-time image processing. The information extracted from the image data is passed on to the robot, which then carries out the loading and unloading of the raw materials. With a projector and camera, the 3D vision system can scan large manufactured parts. Originally, engineers and workers had to manually insert materials into the automation lines, which was a very inefficient process in terms of productivity. Idle periods around lunch and break times also caused productivity drops. Man/machine function allocation shows that in highly automated systems, manual tasks are mainly embedded in human supervisory control tasks and maintenance operations[4]. Among the 3D vision functionalities, sensors are pivotal; these are devices that have the ability to self-organize, learn, and maintain environmental information to analyze behaviors and abilities[5].

    2.2 Impact of Lighting Condition on 3D Vision System Performance

    In this particular automation-line environment, the lighting is relatively bright compared to other manufacturing lines, so the image recognition rate of the 3D vision is noticeably low. To resolve the lighting issue, a 3D vision module for short distances is one option, but because of interference with the manipulator gripper movement and concern about reduced productivity, a long-distance 3D module was adopted. This module also has weaknesses, as it is exposed to even more extreme lighting conditions throughout the day and night. The standard projection intensity is rated at 400 lumen, which is only suitable for indoor lighting where exterior light is completely blocked and for short-range bin-picking. To obtain clear, usable images of the parts, a higher lighting intensity in the range of 700 to 2,000 lumen is desirable. In addition, to measure the intensity of the surrounding light during the day or at night, additional optical sensors are applied. Artificial-intelligence-based algorithms calculate the optimal gripping points and orientation to pick up the parts from the pallet. Through these measures, the engineering process becomes faster and the manufacturing process can be improved[6].
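    As an illustration only, the following sketch shows how a projector intensity within the 700 to 2,000 lumen range could be chosen from an ambient-light reading supplied by the optical sensor; the threshold values and the linear mapping are assumptions for the example, not the rule used on the line.

```python
def choose_projector_lumen(ambient_lux,
                           min_lumen=700.0, max_lumen=2000.0,
                           lux_low=200.0, lux_high=2000.0):
    """Map an ambient-light measurement to a projector intensity.
    Brighter surroundings call for a brighter projection so the
    structured-light pattern stays visible to the camera.  The
    breakpoints and mapping are illustrative assumptions."""
    if ambient_lux <= lux_low:           # dim shop floor, e.g. at night
        return min_lumen
    if ambient_lux >= lux_high:          # strong daylight through windows
        return max_lumen
    # Linear interpolation between the two operating points.
    frac = (ambient_lux - lux_low) / (lux_high - lux_low)
    return min_lumen + frac * (max_lumen - min_lumen)

# Example: a mid-afternoon reading of 1,100 lux
print(round(choose_projector_lumen(1100.0)))   # 1350 lumen
```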

    The first application of the 3D vision system in the automation lines is the U-oil pan line. This product is used in 1.6-liter diesel engines and is assembled to the lower part of the engine block to keep the engine oil from draining. Fig. 2 shows the oil pan assembled to the lower part of the engine. The 3D vision test results are shown in Table 1.

    One of the disadvantages and risk concerns of this installation is the lighting. The sensors on the 3D vision system are highly sensitive to sunlight, and some of the automation lines repeatedly report malfunctions caused by this issue: the vision fails to scan the products because of light coming in from outside the factory building. In response, the company set up awning shades to block the light, which has been partially effective so far. Also, from time to time, the vision fails to read the pattern and does not recognize the raw material image. This is because the quality and quantity of the captured data depend on the pose of the vision system relative to the target object. This is particularly true for shiny surfaces, which can scatter the illumination provided by the vision system such that it cannot be detected by the cameras in the system[7].

    With 30 manufactured parts for the trial, the testing went through several brightness-control adjustments to find the optimal lighting: 1,256 lumen from about 9:00 to 11:00 in the morning and 1,394 lumen from 13:00 to 15:00 in the afternoon. The experimental standard deviation was 55.5 lumen for the morning and 68.7 lumen for the afternoon. Fig. 3 shows the awning shades installed over the vision cameras at the carrier camshaft assembly line and the oil pan line. These shades are placed on top of the 3D vision cameras to block light coming in from outside the factory buildings, and they have helped the projectors and 3D vision produce the optimum imaging results.

    2.3 3D Vision System Algorithms and Test Results

    The X, Y, Z coordinates of the parts are determined from 3D scanning at a height of about 3 m for engine block bin-picking. Depending on the condition of the part and the lighting, reflection or shading hinders the determination of the part position, which slows down real-time image processing. To minimize the effect of reflections from the product surface, an image enhancement algorithm has been applied.

    To determine the part position and orientation precisely, the maximum axis passing through the product center point is extracted using central moments. Reflective materials such as aluminum tend to show irregular brightness levels inside the object area. The adaptive binarization method is used to detect object regions distorted by lighting effects: the image is divided into local regions, and a threshold is set according to the distribution of pixel brightness values in each local region. The binarization for object region detection is performed using an improved version of the Sauvola method, which has the best performance among adaptive binarization methods.

    t(x, y) = m(x, y) + k[1 - s(x, y)/R]    (1)

    In Eq. (1), the threshold value t(x, y) is calculated using the mean m(x, y) and standard deviation s(x, y) of the local region. By using the integral image of Viola and Jones, the mean and standard deviation can be calculated at high speed regardless of the size of the local area. As a result, under natural lighting conditions, the location detection time was shortened from an average of 2.5 s to 1.8 s. Fig. 4 shows the captured image, sample, and detection result: the 2D image is shown in picture (a), its decoded image in picture (b), and the 3D depth image in picture (c). For data transmission, the projector generates a Gray code, which is often used as a code for input/output devices or analog-to-digital converters, and the camera then captures the projected pattern. When the picture intake is done, the composite video signal is separated into component video signals through a decoding process. After this preliminary stage, the coordinates of the camera and projector are calibrated to produce the 2D, decoded, and 3D depth pictures shown in Fig. 4. To recognize its target, the program extracts the feature points from the captured images.
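    The following is a minimal sketch of Eq. (1) computed with integral images, as described above; it is written against the equation as printed here, and the window size, k, and R values are illustrative assumptions rather than the parameters used in the plant software.

```python
import numpy as np

def sauvola_threshold(gray, window=25, k=0.2, R=128.0):
    """Adaptive threshold map per Eq. (1).  Local mean m(x, y) and
    standard deviation s(x, y) come from integral images (summed-area
    tables), so the cost per pixel does not depend on the window size.
    A sketch, not the authors' production code."""
    h, w = gray.shape
    pad = window // 2
    img = np.pad(gray.astype(np.float64), pad, mode="edge")

    # Integral images of the intensities and of the squared intensities.
    s1 = np.pad(np.cumsum(np.cumsum(img, 0), 1), ((1, 0), (1, 0)))
    s2 = np.pad(np.cumsum(np.cumsum(img ** 2, 0), 1), ((1, 0), (1, 0)))

    def box_sum(s):
        # Window sum at every pixel via the summed-area-table identity.
        return (s[window:, window:] - s[:-window, window:]
                - s[window:, :-window] + s[:-window, :-window])

    n = float(window * window)
    mean = box_sum(s1)[:h, :w] / n                 # m(x, y)
    var = box_sum(s2)[:h, :w] / n - mean ** 2
    std = np.sqrt(np.clip(var, 0.0, None))         # s(x, y)

    # Eq. (1): t(x, y) = m(x, y) + k * (1 - s(x, y) / R)
    t = mean + k * (1.0 - std / R)
    return gray > t    # bright object regions above the local threshold
```

    Here R is the dynamic range of the standard deviation, usually taken as 128 for 8-bit images in Sauvola-style thresholding, and k controls how strongly the local contrast shifts the threshold.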

    Once the feature points are derived, the program repeats the deep learning cycle of learning, detection, and recognition. Because the learned objects and their relevant images have to be embedded in the program, the algorithm does not track novel objects thrown into the existing pallets. Fig. 5 shows the extraction of the feature points and the deep learning process: the model object (a) is configured for the feature point extraction (b), and after the process the images are recognized as shown in (c).

    After the feature extraction, to validate the image confirmation, the program undergoes the deep learning process. In Fig. 6, the program learns images for the extraction in order to recognize them.

    All of the above data feed the core algorithm, the Point Feature Histogram (PFH). PFH generalizes the mean curvature around a point to capture the characteristics or special features of the object. Within a radius R, every point is paired with its neighboring points, and a fixed coordinate frame is calculated for each pair using their normals.
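    As a minimal sketch, assuming the usual PFH pair-feature formulation (a fixed Darboux frame built from the normals gives an angle triplet, which is stored with the point-to-point distance and binned into a histogram), the following code is illustrative rather than the authors' implementation.

```python
import numpy as np

def pfh_pair_features(p_s, n_s, p_t, n_t):
    """Angle triplet (alpha, phi, theta) and distance d for one point pair,
    following the usual PFH formulation (illustrative sketch only)."""
    d_vec = p_t - p_s
    d = np.linalg.norm(d_vec)
    if d == 0.0:
        return 0.0, 0.0, 0.0, 0.0
    # Use as source the point whose normal makes the smaller angle
    # with the line connecting the pair (standard PFH convention).
    if np.dot(n_s, d_vec) < np.dot(n_t, -d_vec):
        p_s, p_t, n_s, n_t, d_vec = p_t, p_s, n_t, n_s, -d_vec

    # Fixed Darboux frame (u, v, w) built from the source normal.
    u = n_s
    v = np.cross(u, d_vec / d)
    nv = np.linalg.norm(v)
    if nv == 0.0:          # normals aligned with the line: degenerate pair
        return 0.0, 0.0, 0.0, d
    v /= nv
    w = np.cross(u, v)

    alpha = np.dot(v, n_t)                  # tilt of the target normal
    phi = np.dot(u, d_vec / d)              # source normal vs. the line
    theta = np.arctan2(np.dot(w, n_t), np.dot(u, n_t))
    return alpha, phi, theta, d

def pfh_descriptor(points, normals, idx, radius, bins=5):
    """Histogram of the pair features over all neighbour pairs within
    `radius` of point `idx` (cost is quadratic in the neighbourhood)."""
    nbrs = np.where(np.linalg.norm(points - points[idx], axis=1) < radius)[0]
    feats = [pfh_pair_features(points[i], normals[i], points[j], normals[j])
             for i in nbrs for j in nbrs if i < j]
    if not feats:
        return np.zeros(bins ** 3)
    a, p, t, _ = np.array(feats).T
    hist, _ = np.histogramdd(np.column_stack([a, p, t]), bins=bins,
                             range=[(-1, 1), (-1, 1), (-np.pi, np.pi)])
    return hist.ravel() / hist.sum()
```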

    The figures below show the feature point extractions and drawings that explain PFH. In Fig. 7, the normal values are drawn as black lines after the feature extraction. Fig. 8 illustrates the coordinate frames and the established point pairs that make up the point feature histogram.

    The fixed coordinate frame is then used to calculate the differences between the normals and set the three angular variables, which are stored along with the Euclidean distance between the points. After all of the above prerequisites, the vision camera recognizes each individual part loaded on the pallet and decides which part to grip next in sequence. It provides the 3D location information to the robot controller, and the robot clamps the products in regular sequence and passes them to the next manufacturing process; a minimal sketch of this hand-off follows below. Currently, 3D vision technology is primarily applied to small products of around 300 mm. For this project, however, the cameras and projectors are able to handle large workpieces of up to 1,000 mm. Table 2 below shows the specifications of the vision camera.
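    As an illustrative sketch only, the part-selection and hand-off step could look like the following; the selection rule, message fields, and function names are assumptions, not the actual controller interface.

```python
import json
import numpy as np

def next_pick(detections):
    """Choose the next part to grip.  Taking the highest part on the pallet
    first is an assumed heuristic; the paper only states that parts are
    picked in a regular sequence."""
    return max(detections, key=lambda part: part["xyz"][2])

def pick_message(part):
    """Build the 3D location message handed to the robot controller.
    The field names are hypothetical."""
    return json.dumps({
        "x_mm": float(part["xyz"][0]),
        "y_mm": float(part["xyz"][1]),
        "z_mm": float(part["xyz"][2]),
        "rotation_deg": float(part["yaw_deg"]),
    })

# Example with two detected oil pans on the pallet.
detections = [
    {"xyz": np.array([412.0, 180.5, 655.0]), "yaw_deg": 12.0},
    {"xyz": np.array([405.0, 420.0, 702.0]), "yaw_deg": -4.5},
]
print(pick_message(next_pick(detections)))   # the higher part (z = 702 mm)
```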

    Fig. 9 displays the vision camera (a) and the oil pan material pallet (b). Table 3 shows that the average daily production output improved by 16% after the 3D vision system implementation. Table 4 indicates that overall productivity increased as downtime fell from 57.6 hours to 17 hours, a 17% increase in operational efficiency. The defect rate dropped from 0.3% to 0.1%, as shown in Table 5, which implies that the 3D vision system also improves the overall quality of the products.
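    The headline percentages follow directly from the reported figures; the short check below restates the arithmetic (the daily output values of 275 and 320 units and the 80% and 94% operational rates are quoted from the results and conclusion).

```python
# Relative improvements implied by the reported figures.
output_gain = 320 / 275 - 1            # daily output: 275 -> 320 units
efficiency_gain = 94 / 80 - 1          # operational rate: 80% -> 94%
downtime_cut = 57.6 - 17.0             # hours of downtime removed

print(f"daily output   +{output_gain:.1%}")      # about +16%
print(f"efficiency     +{efficiency_gain:.1%}")  # about +17.5%
print(f"downtime       -{downtime_cut:.1f} h")   # 40.6 fewer hours
```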

    This fact emphasizes the important role of the 3D vision system in improved automation operation. Combining 3D vision with robot control of the material input not only makes the work easier but also fills the gap when workers are absent. Machine vision is used as a sensor whose primary function is to drive the robotic arm to the correct location of the desired object for pick-and-place, depending on the robot's degrees of freedom[8].

    3. Implementation of Novel Robotic Grippers

    3.1 Development of Grippers

    Conventional methods of material clamping limit how much the manufacturing workflow can be sped up. While upgrading the gripping capability, we tested the actual difference in time and production between manual clamping and robot-hand-guided material clamping. To hold and grip the part, a special jaw is attached to the tip of each air cylinder rod. A linear motion guide is attached so that the jaw grips securely while it moves. The cylinder rods are arranged to work against each other to prevent any slip, and tips matching the exterior shape of the products are attached to the end of each jaw. Conventional jaws clamp the products from the outside. Since the parts loaded on the pallets are densely and closely packed together, conventional jaws have difficulty picking them up properly; hence, a special customized jaw has been developed so that the gripper can hold the parts from the inside. A picture of the gripper is shown in Fig. 10.

    To hold the part, a gripper is attached to each end of the robot hand in the transverse direction. The gripper holder is removable, with the robot at the center, and can be replaced with a slab holder or other holders depending on the working conditions. When the grippers load the materials onto the machines, the robot uses the scale sensor to check the left-right deviation while maintaining perpendicularity and parallelism. When the scale sensors are not activated, a "V"-block function can be used to maintain perpendicularity by pressing the part closely against the right-angled corner of the machine jig with an air cylinder.

    The mechanism is designed to use two hands, with one gripper rotating 360° about the robot axis at the center. Once the robot reaches the part, the degree of perpendicularity is adjusted by the contact method: by measuring the difference between the scale cylinder readings x1 and x2, the material perpendicularity is confirmed to be within 0.5 mm, as illustrated in the short example below. Before the machining process, a few minor steps follow to move the material into the right position. Fig. 11 shows a rollover machine and gripping correction jigs that are used to place the materials in the right orientation and position. Fig. 12 illustrates the overall manufacturing layout of the robot hand grippers, 3D vision, and hydraulic jigs of the rear flange automation line.
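    A minimal sketch of that perpendicularity check, assuming the two scale cylinders measure displacements on opposite sides of the part (the cylinder spacing value is an illustrative assumption):

```python
import math

def perpendicularity_check(x1_mm, x2_mm, cylinder_spacing_mm=400.0,
                           tolerance_mm=0.5):
    """Compare the two scale-cylinder readings.  The difference is the
    out-of-square deviation; the tilt angle is derived from the assumed
    spacing between the measurement points."""
    deviation = abs(x1_mm - x2_mm)
    tilt_deg = math.degrees(math.atan2(deviation, cylinder_spacing_mm))
    return deviation <= tolerance_mm, deviation, tilt_deg

# Example: readings of 12.30 mm and 12.62 mm differ by 0.32 mm,
# within the 0.5 mm target (a tilt of roughly 0.05 degrees).
ok, dev, tilt = perpendicularity_check(12.30, 12.62)
print(ok, round(dev, 2), round(tilt, 3))
```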

    3.2 Test Results

    An automation line for rear flange production was selected as the testbed for this comparison. Manual clamping on the rear flange line took about 100 seconds per item to load on average; our target was 35 seconds or below. The time measurement starts with the 3D vision recognition, after which the robot gripper clamps the material, moves into the machine, and settles the product onto the jig. The test was repeated ten times, and the average loading time was measured at 33.2 seconds.

    4. Conclusion

    The introduction of new digital technology imposes challenges, and the entire workforce needs to evolve as the digital transformation process unfolds[9]. If the organizational changes caused by Industry 4.0 are not taken into account, this may lead to considerable problems, reduced potential, and delays in the implementation of Industry 4.0[10]. Nevertheless, industrial automation is intended to improve factory productivity and product quality and to reduce overall cost[11]. With 3D vision in place, the manufacturing lines will produce better products in both quality and quantity. The vision system will not only replace manual labor and cover the idle periods in between, but will also cut manufacturing hours and equipment downtime, resulting in faster turnover of the whole cycle.

    For improved efficiency, the proper lighting exposure for the 3D vision was studied. Through the adjustments, the research established an optimal lighting standard for day and night, allowing the vision camera to capture and read clear images of the target parts. As a result, overall productivity increased and the operating hours were extended. Before the development, production output was 275 units per day; after the implementation, the number climbed to 320 units per day. The defect rate dropped from 0.3% to 0.1% over a two-week manufacturing trial. Machine downtime dropped from 57.6 hours to 17 hours, raising the operational rate from 80% to 94%.

    Along with the main 3D vision development, supplementary features such as the robot hand grippers, the rollover machine, and the gripping correction jigs have also helped to increase productivity. In particular, the robot gripping technology is a strong replacement for the manual clamping operation: loading time fell from 100 seconds per item to an average of 33.2 seconds over 10 robot loading trials.

    The key is to combine real-time data from the 3D vision for material input with fully functional automation lines. Together with the sensors built into the vision camera and robots, accurate use of this cycle maximizes output and allows the company to stay in control of facility management. Field workers and engineers are also freed from manual material loading and from the lost time that occurs when they happen to be away from the site as the conveyors run out of material. Taking these circumstances into consideration, 3D vision is certainly an effective and attractive capability for generating better outcomes from automation lines associated with a smart factory.

    Figure

    Fig. 1 Manufactured Products
    Fig. 2 Automotive engine with the old type oil pan
    Fig. 3 Awning shades upon the vision cameras
    Fig. 4 3D vision scanning of engine blocks and its detected location
    Fig. 5 Feature extractions
    Fig. 6 Deep learning process
    Fig. 7 The black lines of normal values are shown after the extraction
    Fig. 8 Point feature histogram
    Fig. 9 Vision camera sensor and the oil pan material pallet
    Fig. 10 Customized design of the gripper
    Fig. 11 Rollover machine and grip correction jigs

    Table

    Table 1 Measurement results
    Table 2 Vision camera data
    Table 3 3D vision implementation results: comparison of daily production output
    Table 4 Operational downtime rate
    Table 5 Defect rate results

    Reference

    1. Bellandi, P., Docchio, F., and Sansoni, G., “Roboscan: A Combined 2D and 3D Vision System for Improved Speed and Flexibility in Pick-and-Place Operation,” The International Journal of Advanced Manufacturing Technology, Vol. 69, No. 5, 2013.
    2. Blanes, C., Mellado, M., Ortiz, C., and Valera, A., “Technologies for Robot Grippers in Pick and Place Operations for Fresh Fruits and Vegetables,” Spanish Journal of Agricultural Research, No. 4, pp. 1130-1141, 2011.
    3. Grunow, O., “Smart Factory and Industry 4.0. The Current State of Application Technologies,” Studylab, p. 71, 2016.
    4. Säfsten, K., Winroth, M., and Stahre, J., “The Content and Process of Automation Strategies,” International Journal of Production Economics, Vol. 110, No. 1-2, pp. 25-38, 2007.
    5. Kalsoom, T., Ramzan, N., Ahmed, S., and Ur-Rehman, M., “Advances in Sensor Technologies in the Era of Smart Factory and Industry 4.0,” Sensors (MDPI), pp. 1-22, 2020.
    6. Park, S., “Development of Innovative Strategies for the Korean Manufacturing Industry by Use of the Connected Smart Factory (CSF),” Procedia Computer Science, Vol. 91, pp. 744-750, 2016.
    7. Kinnell, P., Rymer, T., Hodgson, J., Justham, L., and Jackson, M., “Autonomous Metrology for Robot Mounted 3D Vision Systems,” CIRP Annals, Vol. 66, No. 1, pp. 483-486, 2017.
    8. Patil, G. and Chaudhari, D., “SIFT Based Approach: Object Recognition and Localization for Pick-and-Place System,” International Journal of Advanced Research in Computer Science and Software Engineering, Vol. 3, No. 3, pp. 196-201, 2013.
    9. Sjodin, D., Parida, V., Leksell, M., and Petrovic, A., “Smart Factory Implementation and Process Innovation,” Research-Technology Management.
    10. Herrmann, F., “The Smart Factory and its Risks,” Systems (MDPI), pp. 1-15, 2018.
    11. Thangaraj, J., and Wahab, N. A., “Automation of Pick and Place Operation in Contact Lens Manufacturing,” ELECTRIKA, Journal of Electrical Engineering, Vol. 17, No. 2, pp. 25-29, 2018.