1. Introduction
With the rapid development of virtual reality (VR) technology in recent years, VR has been applied in an increasingly wide variety of areas. VR visualization environments have most commonly been realized through head-mounted displays (HMDs). However, HMD-based visualization has drawbacks: it may cause dizziness, and it restricts users' behavior because the HMD covers their eyes [1,2]. This study aims to implement a VR visualization environment through a multi-surface, multi-screen environment that overcomes the drawbacks of the HMD method and increases the realism of the virtual environment using existing screens. To implement the multi-screen method as a realistic visualization environment, a technique is needed to match spatial locations on the multi-screen with those in the virtual environment. This requires an efficient method for measuring spatial locations precisely; accordingly, this study presents a methodology for spatial location correction that reflects real-space location measurements efficiently and precisely in the virtual environment.
2. Concept of synchronization of virtual and real spaces
Fig. 1 shows the theoretical concept of synchronization between virtual and real spaces. Fig. 2 defines the synchronization method: the spatial reference location and the multi-screen's location in the real-space multi-screen environment are measured precisely and matched with the reference location in the virtual environment.
As shown in Fig. 3, the user's location in the virtual space can be synchronized by reflecting the multi-screen's location in the virtual environment. The real and virtual spaces can be synchronized precisely by configuring the multi-screen environment, which is the real space, efficiently, and by reflecting the screen's location and orientation accurately in the virtual environment. To implement this, a stable multi-screen structure is required, as well as a measurement method that captures the screen location accurately.
3. Design of the simulator main frame structure
3.1 Finite element modeling and analysis conditions
The deformation due to the self-weight of the simulator main frame, a VR-technology-applied multi-screen structure, should be minimized for accurate mapping between real and virtual spaces. Fig. 4 shows the three-dimensional (3D) design shape of the main frame and the simplified model for static stiffness analysis. Small parts were omitted, and small grooves and edges that did not significantly affect the analysis were simplified, considering that the frame is assembled part by part using bolts and nuts [3-5]. To create the finite element model, the connection and support portions of each structure were meshed with tetrahedral solid elements in HyperMesh, and the profile was meshed with quad (shell) elements. The basic element size was set to 5 mm.
Table 1 presents the physical properties of the materials used in the analysis model. The profile was made from aluminum, and the connecting and supporting portions were made from steel. An integrated shape was assumed because the contact surfaces of the parts are assembled with bolts. For the boundary conditions of the static stiffness analysis, a three-degree-of-freedom constraint was applied to the bottom support surface, as shown in Fig. 5(a), and a load of 10.5 kg was applied for each of the five projectors mounted on the upper side of the simulator, as shown in Figs. 5(b) and 5(c). Under these conditions, the stress and deformation distributions due to self-weight were analyzed for the fully assembled frame using HyperWorks OptiStruct.
3.2 Results of static stiffness analysis
Fig. 6 shows the stress distribution of the frame under its self-weight and the projector loads, obtained from the static stiffness analysis. The maximum stress was 28.2 MPa, concentrated on the upper projector support frame. This is well below the yield strength of the aluminum (55 MPa), indicating no problem in static stiffness. In addition, the design safety factor was 1.7 or higher, demonstrating that structural safety was secured. Fig. 7 shows the corresponding deformation distribution. The maximum deformation was 1.17 mm, occurring at the end of the projector support portion. Since the motion recognition precision required for precise mapping of the virtual and real spaces is ±5 mm, the maximum frame deformation of 1.17 mm does not significantly affect the location correction between the virtual and real spaces.
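As a quick check of the margin reported above, the safety factor follows directly from the two stress values; this is a minimal sketch using only the figures quoted in this section.

```python
# Safety-margin check for the frame, using the values reported above.
yield_strength_mpa = 55.0   # yield strength of the aluminum profile
max_stress_mpa = 28.2       # maximum stress from the static analysis

safety_factor = yield_strength_mpa / max_stress_mpa
print(f"safety factor = {safety_factor:.2f}")  # prints 1.95, above the 1.7 design factor
```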
4. Measurement method of spatial location of the multi-screen
To visualize the virtual environment and synchronize the multi-screen system in real space with that in the virtual space, the location of the real space must be measured precisely and reflected in the virtual environment. 3D precision measurement devices such as laser trackers can be used to measure the multi-screen in real space, but these expensive devices have drawbacks in mobility and convenience of use. To overcome these drawbacks, this study developed a sensor-based measurement device for the multi-screen that allows easy movement and precise measurement, and investigated a corresponding measurement algorithm.
4.1 Development of spatial location measurement device
Fig. 8 shows the schematic design of the two-degree-of-freedom spatial location measurement device and the actually fabricated device. The location of the multi-screen is measured by projecting a laser beam from the rotation center of the two-degree-of-freedom mechanism onto a specific location on the screen surface. The rotation angle about each axis of the measurement device is read during measurement, and from these rotation values the relative position and orientation between the screen and the measurement device can be obtained.
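The angle-to-direction step can be sketched as follows. The axis convention (yaw about the vertical axis, pitch about the head's horizontal axis) is an assumption for illustration, since the device's actual kinematics are given only in Fig. 8.

```python
import math

def laser_direction(yaw_rad: float, pitch_rad: float):
    """Unit direction of the laser for a pan-tilt (two-degree-of-freedom) head.

    Assumes yaw is measured about the vertical z axis and pitch about the
    head's horizontal axis; the actual device convention may differ.
    """
    c = math.cos(pitch_rad)
    return (c * math.cos(yaw_rad), c * math.sin(yaw_rad), math.sin(pitch_rad))

# With both angles at zero the laser points along the x axis.
print(laser_direction(0.0, 0.0))  # (1.0, 0.0, 0.0)
```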
4.2 Development of spatial location measurement algorithm
Fig. 9 shows the coordinate definition used to measure the location and direction of the multi-screen in real space. The value defined in this coordinate system is _{S}P_{Pn}, an arbitrary location expressed in the screen reference coordinate, as presented in Eq. (1):

_{S}P_{Pn} = _{S}T_{W} · _{W}P_{Pn}    (1)
Here, _{S}T_{W} denotes the location and orientation of the world reference coordinate viewed from the screen reference coordinate, _{W}P_{Pn} denotes the location of a measurement point on the screen viewed from the world reference coordinate, and _{S}P_{Pn} denotes the location of that measurement point viewed from the screen reference coordinate. In addition, the following conditions are imposed on _{S}P_{Pn}. During measurement, the measurement points are defined according to the method shown in Fig. 10, and each point is measured from a fixed location.
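The relation in Eq. (1) is a standard homogeneous-coordinate transform. A minimal numeric sketch follows; all rotation and translation values are invented for illustration only.

```python
import numpy as np

# Illustrative example of the Eq. (1) relation: a point given in the world
# frame is mapped into the screen frame by the 4x4 homogeneous transform sTw.
# The rotation angle and translation below are invented values.
theta = np.deg2rad(30.0)                      # example yaw of the screen
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, -0.2, 1.0])                # example translation (m)

sTw = np.eye(4)                               # world frame seen from screen frame
sTw[:3, :3] = R
sTw[:3, 3] = t

wP = np.array([1.0, 0.0, 0.0, 1.0])           # a point in world coordinates
sP = sTw @ wP                                 # the same point in screen coordinates
print(sP[:3])
```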
Condition 1. During the measurement of P_{n} (P_{1}, P_{2}, ..., P_{n}), three points are measured, and the three points should lie on a straight line.
Condition 2. During the three-point measurement, the measurement reference location should remain fixed, and only the rotation angles should be varied.
Condition 3. P_{1}, P_{5}, and P_{3} should lie on a straight line.
Condition 4. P_{2}, P_{5}, and P_{4} should lie on a straight line.
Condition 5. The spatial reference coordinate (W) and the measurement reference location (J) are defined to be identical.
If the triangle formed by the measurement reference location J and the measurement points P_{1} and P_{3} on the screen in Fig. 11 is defined using the location values of each point obtained above, the vectors x1, x2, and x3 are first calculated to obtain the rotation angles (θ_{1}, θ_{2}) at J, and the included angle is then calculated using the vector inner product. Here, x1, x2, and x3 are directional values of the laser pointer, which is perpendicular to the plane at the measurement reference location J, obtained using the three measurement point values (P_{1}, P_{5}, P_{3}). Two vectors spanning the measurement plane are calculated from the three points, and the cross product of these two vectors gives the direction vector of the laser pointer. Assuming that the laser pointer's direction at _{W}T_{1} and _{W}T_{5} is the Z direction, the definitions in Eqs. (2) to (5) can be produced [6].
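The cross-product step above can be sketched as follows, with three illustrative non-collinear points on the screen plane (the actual points come from the device measurements):

```python
import numpy as np

# Two in-plane vectors are formed from three non-collinear measured points on
# the screen, and their cross product yields the screen-plane normal, i.e. the
# laser direction at normal incidence. The coordinates below are illustrative.
p_a = np.array([0.0, 0.0, 2.0])
p_b = np.array([1.0, 0.0, 2.0])
p_c = np.array([0.0, 1.0, 2.0])

v1 = p_b - p_a
v2 = p_c - p_a
normal = np.cross(v1, v2)
normal = normal / np.linalg.norm(normal)  # normalize to a unit vector
print(normal)  # unit normal of the plane through the three points
```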
θ_{2} can be defined in the same manner, through which θ_{1}, θ_{2}, L_{1}, and L_{2} are determined. From these, x1, x2, x3, α, and β can be calculated, and Eqs. (6) to (12) can be defined using trigonometric functions [7].
A = (L1 + L2) / sin(θ1 + θ2)
B = L1 / sin θ1
C = L2 / sin θ2
Using the above relations, the nonlinear system of seven equations (Eqs. (6) to (12)) in five unknowns (x1, x2, x3, α, β) is solved to obtain the distance and angle of each screen. Finally, the reference coordinate location and the orientation of the screen viewed from the reference coordinate can be acquired [8].
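Since Eqs. (6) to (12) are not reproduced in this excerpt, the sketch below solves a stand-in overdetermined system to show the approach: more equations than unknowns, solved iteratively in the least-squares sense (here with a simple Gauss-Newton loop).

```python
import numpy as np

# Stand-in overdetermined system (3 equations, 2 unknowns); the paper's actual
# system has 7 equations in 5 unknowns (x1, x2, x3, alpha, beta).
def residuals(u):
    x1, x2 = u
    return np.array([
        x1 + x2 - 3.0,        # stand-in equation
        x1 - x2 - 1.0,        # stand-in equation
        x1**2 + x2**2 - 5.0,  # redundant but consistent equation
    ])

def jacobian(u):
    x1, x2 = u
    return np.array([
        [1.0,      1.0],
        [1.0,     -1.0],
        [2.0 * x1, 2.0 * x2],
    ])

u = np.array([1.0, 1.0])                      # initial guess
for _ in range(20):                           # Gauss-Newton iterations
    step, *_ = np.linalg.lstsq(jacobian(u), -residuals(u), rcond=None)
    u = u + step

print(u)  # converges to (2.0, 1.0), which satisfies all three equations
```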
5. Measurement method of spatial location and verification of the device
To verify the reliability of the developed spatial location measurement device and method, a testbed was configured using a laser tracker (a 3D precision measurement device) and a screen frame, as shown in Fig. 12. Table 2 compares the values measured by the precision measurement device with those from the developed device. As presented in Table 2, the standard deviation of the measurements is within 0.294 mm, which satisfies the measurement error criterion.
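The verification statistic can be reproduced from paired readings. The sketch below uses invented difference values, since the actual data are in Table 2.

```python
import statistics

# Differences between the laser tracker readings and the developed device's
# readings, in mm. These values are illustrative only, not the Table 2 data.
differences_mm = [0.12, -0.25, 0.31, -0.08, 0.19, -0.27]

std_dev = statistics.stdev(differences_mm)  # sample standard deviation
print(f"standard deviation = {std_dev:.3f} mm")
# Compare against the 0.294 mm bound reported for the actual measurements.
```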
6. Conclusions
This study proposed a spatial location measurement device and method for synchronizing the virtual environment, together with a stable multi-screen frame structure, to implement a multi-screen-based visualization environment. The following conclusions were drawn.
(1) An efficient and safe multi-screen frame structure that is portable and easy to install was designed.

(2) The basis of the technology for a VR display environment using a multi-screen structure was obtained through reliability verification of the real-screen measurement method and of the algorithm for synchronization between real and virtual spaces.

(3) An efficient measurement device that obtains results comparable to those of expensive precision measurement devices was developed, and its reliability was verified.

(4) This study proposed an efficient method for synchronization between virtual and real spaces, providing an alternative to the existing HMD-based VR visualization environment and system. These results are expected to suggest new directions in VR application fields.