Digital Model in Close-Range Photogrammetry Using a Smartphone Camera

Smartphones have recently expanded the potential of low-cost close-range photogrammetry for 3D modeling. They enable the simultaneous collection of large amounts of data for a variety of requirements. Image orientation elements and triangulated coordinates can be computed in phases, as in relative and absolute image orientation. This study demonstrates a photogrammetric 3D reconstruction approach that also performs on tablets and smartphones. Images were taken with the camera of an iPhone 6 and then calibrated automatically on a PC using standard calibration models for photogrammetry and computer vision, based on the Agisoft Lens add-on embedded in the Agisoft software and on the MATLAB Camera Calibration Toolbox, using an oriented set of images of a chessboard pattern; a large point cloud was then generated from the images by matching. The camera calibration results indicate that the calibration processing routines completed without error, the accuracy of the estimated IOPs was acceptable compared with non-metric digital cameras, and the results of Agisoft Lens were more accurate in terms of standard error. For the 3D model, 435 images (cameras) were used, of which 428 were aligned in two photogrammetric software packages, Agisoft PhotoScan and LPS. In LPS, 10 tie points were used, along with 4 control points to estimate the EOPs; in Agisoft PhotoScan, 135,605 tie points were regenerated, a dense cloud of 3,716,912 points was generated, and the 3D model comprised 316,253 faces. After processing, a tiled model (6 levels, 1.25 cm/pix), a DEM (2136×1774 pix), and a high-resolution orthomosaic (5520×4494 pix, 4.47 cm/pix) were generated. For the accuracy assessment, Xerr. = 0.292 m, Yerr. = 0.38577 m, Zerr. = 0.2889 m, and the total RMS = 0.563 m in the estimated locations of the exterior orientation parameters.


Introduction
Image orientation has great importance because it precedes the process of reproducing three-dimensional coordinates from two-dimensional images, such as DTMs, DSMs, orthophotos, and inputs to GIS [1]. Orientation is a process in which the location and attitude of a camera, a photo, a model, or any such unit is determined in space, based on reference coordinates [2]. The interior orientation parameters (IOP) describe how the incoming light cone is received by the camera lens to form the original image; they are the focal length, the principal point, and the distortion elements [3]. The exterior orientation parameters (EOP) are the three-dimensional coordinates of the exposure station and three orientation angles. Relative orientation reconstructs the relative perspective relationship between a pair of images, while absolute orientation follows relative orientation and establishes the relationship between the model and the ground coordinate system [4]. Image-based modeling (using a digital camera) requires a mathematical formula to convert two-dimensional image coordinates into three-dimensional object coordinates [5].
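The mathematical relation between two-dimensional image coordinates and three-dimensional object coordinates referred to above is commonly expressed by the collinearity equations (a standard photogrammetric formulation, stated here for reference rather than taken from this paper):

```latex
x = x_0 - f \,
  \frac{m_{11}(X - X_L) + m_{12}(Y - Y_L) + m_{13}(Z - Z_L)}
       {m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}, \qquad
y = y_0 - f \,
  \frac{m_{21}(X - X_L) + m_{22}(Y - Y_L) + m_{23}(Z - Z_L)}
       {m_{31}(X - X_L) + m_{32}(Y - Y_L) + m_{33}(Z - Z_L)}
```

where $(x_0, y_0, f)$ are the IOPs, $(X_L, Y_L, Z_L)$ is the camera position, and $m_{ij}$ are the elements of the rotation matrix built from the three orientation angles $(\omega, \varphi, \kappa)$; together, the position and the angles are the six EOPs.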
The image contains much important information about geometrical shape for use in 3D modeling applications. However, the process of recreating accurate 3D models remains a difficult task, especially in large complex locations that require a set of separate or convergent images [6]. The process of reconstructing three-dimensional models is very important and has many applications [7]: 1) It allows dealing with a three-dimensional model without direct contact with the object. 2) It is used to determine landscape terrain from aerial photos and satellite images. 3) It can measure the movement of slopes over long distances. 4) It provides accurate measurements in industry and production. 5) It documents crime scenes and archaeological sites. 6) It can reconstruct a traffic accident. 7) It monitors structures to measure deformation. Using a non-metric camera for generating 3D models is not new, but using communication devices for generating 3D models remains largely unexplored.
Ebrahim [8] assessed the use and reliability of two mobile-phone cameras, the Nokia 3650 and Nokia 7650, in photogrammetric application fields. The work was divided into two phases: the first phase was a laboratory test of the accuracy of the mobile phones' digital cameras; the second phase was a practical application on a building of several floors. The tests demonstrated that the relative precision of the digital cameras embedded in the mobile phones is 1/400, which is sufficient for various digital photogrammetric uses. Satchet [9] used the Direct Linear Transformation (DLT) method to compute the interior and exterior orientation parameters for the digital cameras of two mobile phones, a NOKIA N82 and a CoolPAD 288; a TOPCON GPT-7501 total station was used to measure the control point targets. He found that the total RMSE of the NOKIA N82 camera was less than that of the CoolPAD 288 by 1.5 mm. Liba et al. [10] tested different photogrammetric software packages and digital cameras, among them the camera of the Sony Xperia Z1; they found that the worst result was obtained with the Sony Xperia Z1 camera (total error 17.9 mm). Kim et al. [11] evaluated the potential of using smartphones in photogrammetric UAV systems. Yilmazturk et al. [12] employed the digital camera of the Galaxy S4 smartphone and divided their work into two stages: first, camera calibration, and second, generating a 3D mesh model of a historical cylinder.
This research assesses the creation of a 3D model through three-dimensional reconstruction from close-range digital images captured with a smartphone and processed on a PC. It is divided into three phases. Phase one is the determination and evaluation of the Interior Orientation Parameters (IOP). The second phase is to determine and evaluate the EOPs through different methods and compare them, and the third phase is the generation of a 3D model of the building.

Materials and methods
This paper introduces the following workflow.

Digital camera. The digital camera used in this research is the embedded digital camera of the iPhone 6 smartphone. The phone has two cameras, a rear and a front (selfie) camera; the rear camera specifications useful in photogrammetry are given in Table 1. The 35 mm-equivalent value of the focal length is 29.89 mm.

MATLAB Camera Calibration Toolbox. The MATLAB Camera Calibration Toolbox offers a range of calibration procedures; its conventional calibration techniques, combined with an efficient and robust calibration target, deliver suitable conditions for camera calibration. The smartphone's digital camera can be calibrated with this toolbox, which easily and accurately estimates and adjusts the interior orientation parameters [13][14][15]. Ten digital images were captured with the iPhone 6 and uploaded to the toolbox, Figure 2. The side length of each imaged square at the time of capture was 20 mm. The toolbox then searches for and identifies the corners of the squares. The toolbox interface is shown in Figure 2, where the green circles mark the identified corners of the squares.

IMAGINE Photogrammetry Project Manager. The IMAGINE Photogrammetry Project Manager, commonly known as LPS, is one of the toolboxes of the well-known ERDAS IMAGINE. It is rigorous digital photogrammetry software offered in an easy-to-use environment to quickly and accurately triangulate and orthorectify images from different camera and satellite sensor types [16]. In this research, the IMAGINE Photogrammetry Project Manager is used to compute the exterior orientation parameters based on the bundle block adjustment with the collinearity equations [16].
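The 35 mm-equivalent focal length quoted above, together with the 4 mm physical focal length, implies the crop factor and sensor size of the smartphone camera. A short sketch of that calculation (the sensor diagonal is derived here, not reported in the paper):

```python
import math

# Values reported for the iPhone 6 rear camera
f_actual_mm = 4.0    # physical focal length
f_equiv_mm = 29.89   # 35 mm-equivalent focal length

# Crop factor: ratio of the full-frame diagonal to the sensor diagonal
crop_factor = f_equiv_mm / f_actual_mm

# Full-frame (36 x 24 mm) diagonal
full_frame_diag_mm = math.hypot(36.0, 24.0)

# Implied sensor diagonal of the smartphone camera
sensor_diag_mm = full_frame_diag_mm / crop_factor

print(f"crop factor:     {crop_factor:.2f}")
print(f"sensor diagonal: {sensor_diag_mm:.2f} mm")
```

The implied diagonal of roughly 5.8 mm is consistent with the small sensors typical of smartphone cameras, about an order of magnitude smaller than a full-frame sensor.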
Firstly, 10 tie points were used to match the digital images; these can be used later to compute the EOPs for all the images. For scaling, 4 control points were used to orient the model, Table 2. The coordinate system of the control points is an arbitrary local coordinate system based on and measured over tiles of the building.

Agisoft PhotoScan. Agisoft PhotoScan is commercial software that processes digital images into a precise 3D model in a 3-step process [17]. Being a commercial product, the algorithms performing the different operations in the background are not available to the public. Company representatives stated on the user forum that "we have favored algorithms with higher accuracy output over faster approaches with less accurate output" [18]. PhotoScan is used to orient the images and compute the camera EOPs, which are then compared with the results obtained from the IMAGINE Photogrammetry Project Manager, and then used to build the 3D model of the building.

Agisoft Lens. Agisoft comes with an add-on called Agisoft Lens [19], which is used for pre-calibration of the lens used to capture the images. With a smartphone's digital camera, the photographer cannot change the lens or the focal length, which is an advantage because the IOPs of the camera only need to be computed once. The distortions in the captured pictures are mainly caused by the optical properties of the glass used in the lens and by the accuracy of both the lens and camera components. The effect of these errors must be eliminated for accurate image coordinates to be obtained. The Agisoft Lens calibration process is straightforward and easy to perform: first, the software shows a black-and-white chessboard pattern on the PC screen, Fig. 3. After running the calibration in the toolbox, the camera intrinsics are given in Table 3; the camera extrinsics are neglected because they are computed relative to the control points on the chessboard. The camera calibration results of Agisoft Lens can be found in Table 4 and Figure 5.
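The distortion parameters estimated by such a calibration are typically the coefficients of the Brown model (radial terms k1, k2, k3 and tangential terms p1, p2). A minimal sketch of applying the model to normalized image coordinates follows; the p1/p2 ordering here is one common convention and is an assumption, not necessarily the exact form used by Agisoft Lens:

```python
def apply_brown_distortion(x, y, k1=0.0, k2=0.0, k3=0.0, p1=0.0, p2=0.0):
    """Map ideal normalized image coordinates (x, y) to distorted ones
    using the Brown radial/tangential distortion model."""
    r2 = x * x + y * y                                   # squared radius
    radial = 1.0 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    x_d = x * radial + p1 * (r2 + 2 * x * x) + 2 * p2 * x * y
    y_d = y * radial + p2 * (r2 + 2 * y * y) + 2 * p1 * x * y
    return x_d, y_d

# With all coefficients zero the mapping is the identity
print(apply_brown_distortion(0.5, 0.0))           # (0.5, 0.0)
# A positive k1 pushes the point outward (x moves to about 0.5125)
print(apply_brown_distortion(0.5, 0.0, k1=0.1))
```

Undistorting measured image coordinates inverts this mapping, usually iteratively, which is why eliminating these errors before triangulation matters for accurate image coordinates.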
The calibration values are later used as initial data for the 3D reconstruction applied in the photo alignment. The lens of the smartphone used in this research shows minimal distortion. The above calibration values were exported to an XML file compatible with Agisoft PhotoScan and imported as pre-calibration results. It has to be noted that the software does not give any indication of the quality of the calibration except the standard error. From Tables 3 and 4, the results of Agisoft Lens are more accurate when compared according to the estimated standard errors of the results.

Estimating the EOPs in IMAGINE Photogrammetry Project Manager. The IMAGINE Photogrammetry Project Manager is used to estimate the EOPs, which are later used in Agisoft PhotoScan to assess the results by computing the differences between the EOPs computed in the IMAGINE Photogrammetry Project Manager and in Agisoft PhotoScan. In this approach, 10 well-identified tie points are used to find 371 tie points across the entire model, and the control points in Table 2 are used together with the tie points to estimate the EOPs (the camera locations), Figure 6 and Table 5. Table 5 presents the estimated EOPs of 7 of the 428 camera locations.

Generating the 3D Model in Agisoft PhotoScan. The following steps summarize the generation of the 3D model.
Loading Photos. The first step in PhotoScan is to load and review all the images. In the software, all images are called cameras, and they are loaded into chunks. Chunks are used to separate digital images taken with different cameras/lenses or at different heights, or images for which different processing regimes need to be applied to parts of the photoset. Since none of the above applied to the images taken, they were all processed in the same chunk. PhotoScan has a built-in quality estimation feature (Figure 7), and all the photos were checked. According to the user manual [20], the pass criterion is a value higher than 0.5. The images were taken in cloudy weather with even lighting around the object.

Camera Alignment. In camera alignment, the first step is to organize and sequentially rearrange the images, or in other words, to estimate the EOPs. In this stage, in addition to finding the position of the camera for each image, the program searches for common points in the images and matches them, as in Figure 8. A series of camera locations represents the positions of the images; as a result of the processing, 428 camera locations from the 435 digital images were arranged in a sequence, and the results of this step were checked. Based on the control points in Table 2, the EOPs are estimated in Table 6.
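The quality check performed during photo loading reduces, in effect, to thresholding a per-image sharpness score against the manual's pass criterion of 0.5. A minimal sketch (file names and scores here are hypothetical):

```python
QUALITY_THRESHOLD = 0.5  # pass criterion from the PhotoScan user manual [20]

def filter_images(image_quality):
    """Split images into accepted and rejected sets by estimated quality."""
    accepted = {name: q for name, q in image_quality.items() if q > QUALITY_THRESHOLD}
    rejected = {name: q for name, q in image_quality.items() if q <= QUALITY_THRESHOLD}
    return accepted, rejected

# Hypothetical quality scores for a few frames
scores = {"IMG_0001.jpg": 0.82, "IMG_0002.jpg": 0.47, "IMG_0003.jpg": 0.66}
accepted, rejected = filter_images(scores)
print(sorted(accepted))  # ['IMG_0001.jpg', 'IMG_0003.jpg']
print(sorted(rejected))  # ['IMG_0002.jpg']
```

Blurred frames excluded this way do not contribute unreliable feature matches to the alignment; here all 435 photos passed.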
Tie and dense cloud points. The result of the rearrangement of the images is the tie point cloud in Figure 9, consisting of over 130 thousand points (135,605 points). In addition, Figure 10 shows the generated dense cloud of 3,716,912 points.

Results
The photos are analyzed, and feature points are detected and matched across overlapping photos. The alignment then computes a sparse point cloud based on the camera positions and valid feature points. The cloud, scene orientation, and camera locations are visible. A table of the estimated errors for all cameras, computed in Agisoft PhotoScan, is given in Figure 11 and Table 7. It is computed from the camera location information estimated in Agisoft PhotoScan and the imported EOPs estimated in the IMAGINE Photogrammetry Project Manager.

This research is based on using the camera of a smartphone as a non-metric digital camera for close-range photogrammetric applications. Tables 3 and 4 show that the camera calibration parameters are reasonable for this type of low-cost digital camera. The estimated errors for the two cases, Agisoft Lens and the MATLAB Camera Calibration Toolbox, are acceptable, and both solved without errors in the processing routines. Accordingly, the stability of the IOPs of the smartphone is considered good when compared with non-metric digital cameras. On the other hand, the EOPs are estimated by two photogrammetric software packages: Agisoft PhotoScan, considered fully automatic digital photogrammetric software, and the IMAGINE Photogrammetry Project Manager, considered semiautomatic digital photogrammetric software [21][22]; the resulting EOPs are given in Tables 5 and 6. The errors in Figure 11 and Table 7 indicate that the camera of the smartphone can deliver highly accurate EOP coordinates.
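The per-axis errors reported in this study combine into a single total RMS as the root of the sum of squares. The relation can be checked against the paper's own accuracy-assessment values:

```python
import math

def total_rms(x_err, y_err, z_err):
    """Combine per-axis camera-location errors into a total RMS error."""
    return math.sqrt(x_err ** 2 + y_err ** 2 + z_err ** 2)

# Per-axis errors from the accuracy assessment (metres)
total = total_rms(0.292, 0.38577, 0.2889)
print(f"total RMS = {total:.4f} m")  # about 0.5635 m, i.e. the reported 0.563 m
```

The computed 0.5635 m agrees with the reported total RMS of 0.563 m to within rounding.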

Conclusions
The accuracy of every photogrammetric project is based on the accuracy of the IOPs and EOPs. Thus, this research investigated the feasibility of creating a 3D model from smartphone digital images through the estimation of the IOPs and EOPs with different tools and methods. The smartphone used in this research was an iPhone 6 with a focal length of 4 mm. The estimated calibration parameters (IOPs) are similar to, or even better than, those of some commercial digital cameras, and the calibration results of Agisoft Lens appear more accurate in terms of the estimated standard error. To evaluate the EOPs and generate the 3D model, about 435 images were taken around the Civil department, of which 428 images were used. The EOPs were computed in two different software packages, and the difference between the two approaches was: Xerr. = 0.292 m, Yerr. = 0.38577 m, Zerr. = 0.2889 m, and total RMS = 0.563 m, which indicates accurate measurements and calculations; the difference between the methods used is very small.