Objective For urban street light pole coding projects, which obtain the coordinates of street lamps and number them in sequence, traditional surveying methods consume heavy manpower and material resources and have long working cycles; laser measurement is highly accurate but also costly. This paper proposes a method that combines deep-learning-based object detection with panoramic measurement to obtain street lamp coordinates automatically. Method A detection model is trained with Faster R-CNN to detect the bases of street light poles in panoramic images and output the detection-box coordinates, and the results are compared with those of HOG (histogram of oriented gradient) features combined with an SVM (support vector machine). The diagonal intersection of the detection box is then taken as the foot point of the pole; epipolar matching finds the homologous image points of one or more lamps in two images, and forward intersection yields the lamps' real space coordinates, laying the groundwork for lamp coding. Result The two methods were applied to the street lamps in 100 panoramic images, and the results show that Faster R-CNN has a clear advantage, so Faster R-CNN combined with panoramic measurement is adopted to obtain lamp coordinates automatically. Because the distance from the lamp base to the two imaging centers and the intersection angle formed by the three points strongly affect measurement accuracy, measurement results were compared at distances of about 7 m, 11 m, and 18 m over intersection angles from 0° to 180°. The experiments verified that, for intersection angles between 30° and 150°, the closer the distance, the smaller its influence on measurement accuracy. Following this rule, 102 of the 120 automatically measured lamp coordinates, with intersection angles between 30° and 150° and distances under 20 m, were selected for accuracy verification; the root-mean-square error of the measured space coordinates is below 0.3 m and the maximum deviation does not exceed 0.6 m, meeting the project requirement that lamp coordinates be accurate to within 1 m. Conclusion This paper proposes a method for automatically obtaining street lamp coordinates that applies deep-learning-based object detection to panoramic measurement, avoiding the manual selection of homologous image points for two-image measurement and saving substantial manpower and material resources; it therefore has practical value. The method suits road sections or periods with light urban traffic, where vehicle occlusion causes little interference; for street panoramas in which lamps are severely occluded, the method has limitations.
Objective With the development of urban management, more and more cities are implementing coding projects for street light poles, which involve obtaining the coordinates of street lamps and assigning them serial numbers. There are many ways to obtain the coordinates, such as RTK and laser measurement, but because a city has tens of thousands of street lamps, a quick and inexpensive approach is required, and mobile panoramic measurement is therefore preferred. However, most current panoramic measurement is conducted through human-computer interaction, with an operator selecting homologous image points and performing forward intersection to obtain the coordinates, which consumes substantial labor and time. In this paper, we therefore propose an automatic method for obtaining street lamp coordinates by combining object detection with panoramic measurement. Method The method combines deep learning and panoramic measurement to automatically obtain the coordinates of street light poles. The poles themselves offer no distinctive feature points because of their rod shape, and lamp tops vary with design; moreover, the distortion of panoramic images strongly affects detection of the lamp top, so the bottom of the pole is used as the detection target in this paper. Pole bottoms are detected by Faster R-CNN, and the coordinates of the upper-left and lower-right corners of each detection box are output and compared with the detection results obtained by combining the histogram of oriented gradient (HOG) with a support vector machine (SVM). The diagonal intersection of the detection box is then regarded as the foot point of the street light pole, and epipolar-line matching is used to associate the homologous image points of each pole across two panoramic images, since a single panorama may contain multiple poles.
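The two image-side steps of the method, taking the detection box's diagonal intersection as the pole's foot point and turning that pixel into a viewing ray from the panorama center, can be sketched as below. This is a minimal illustration, not the paper's implementation: it assumes an equirectangular panorama with longitude increasing left to right and latitude from +90° at the top, and the function names are our own.

```python
import math

def bbox_foot_point(x_min, y_min, x_max, y_max):
    """Intersection of the detection box's diagonals, taken as the
    image foot point of the street light pole."""
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def pixel_to_ray(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit viewing direction
    (x east, y north, z up) from the panorama center.
    Projection model is an assumption; the paper does not specify it."""
    lon = (u / width) * 2.0 * math.pi - math.pi    # longitude: -pi .. pi
    lat = math.pi / 2.0 - (v / height) * math.pi   # latitude: +pi/2 .. -pi/2
    return (math.cos(lat) * math.sin(lon),
            math.cos(lat) * math.cos(lon),
            math.sin(lat))
```

A pixel at the image center maps to the horizontal forward direction, and the foot-point ray from each of the two panoramas is what the forward intersection consumes.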
According to the matching results, the space coordinates of the street light poles are obtained by forward intersection of the panoramas, completing the preliminary work for the coding projects. Result The two methods above were used to detect the street lamps in 100 panoramic images containing 162 street lamps. The HOG-based detector produced 1826 detections, of which 142 were correct lamp bottoms; Faster R-CNN produced 149 detections, of which 137 were correct. Faster R-CNN thus has a clear advantage, so in this paper we combine Faster R-CNN with panoramic measurement to obtain the street lamp coordinates automatically. The distance from the lamp bottom to the two imaging centers and the intersection angle formed by the three points both strongly affect the accuracy of coordinate measurement. To filter out the coordinates least affected by these two factors, we compared measurement results at distances of about 7 m, 11 m, and 18 m over intersection angles from 0° to 180°. The experiments verified that, for intersection angles between 30° and 150°, the closer the distance, the smaller its influence on measurement accuracy. Based on this rule, we examined the statistical distribution of intersection angle and distance over 120 automatically measured lamp coordinates. Points with distances under 20 m and intersection angles greater than 30° and less than 150° were selected for coordinate error analysis, and 102 points met these requirements for accuracy verification. The root-mean-square error of the measured space coordinates is less than 0.3 m, with a maximum deviation of no more than 0.6 m, satisfying the requirement that coordinate accuracy be within 1 m. Conclusion This paper presents a method for automatically obtaining the coordinates of street lamps.
Target detection based on deep learning is applied to panoramic measurement, avoiding the manual selection of homologous image points for measurement and saving substantial manpower and material resources; the method therefore has practical value to some extent. It is suitable for road sections or periods with low urban traffic volume, which avoids excessive occlusion by vehicles; for street panoramas in which the lamps are severely occluded, the method has certain limitations.
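The forward intersection and the angle/distance quality filter described in the Result section can be sketched as follows. This is a sketch under our own assumptions, not the paper's code: the intersection is computed as the midpoint of the common perpendicular between the two unit-direction rays, and the filter is taken to require the distance to both imaging centers to be under 20 m (the abstract does not state whether one or both distances are bounded); all names are hypothetical.

```python
import math

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _norm(a):
    return math.sqrt(_dot(a, a))

def intersect_rays(c1, d1, c2, d2):
    """Least-squares forward intersection of two rays (center c, unit
    direction d): midpoint of their common perpendicular segment."""
    w = _sub(c1, c2)
    b = _dot(d1, d2)                 # cosine between the two directions
    d = _dot(d1, w)
    e = _dot(d2, w)
    denom = 1.0 - b * b              # unit directions, so |d1| = |d2| = 1
    t1 = (b * e - d) / denom
    t2 = (e - b * d) / denom
    p1 = tuple(c + t1 * u for c, u in zip(c1, d1))
    p2 = tuple(c + t2 * u for c, u in zip(c2, d2))
    return tuple((x + y) / 2.0 for x, y in zip(p1, p2))

def intersection_angle_deg(p, c1, c2):
    """Angle at the target point p subtended by the two imaging centers."""
    v1, v2 = _sub(c1, p), _sub(c2, p)
    cos_a = _dot(v1, v2) / (_norm(v1) * _norm(v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))

def accept_point(p, c1, c2, max_dist=20.0, min_ang=30.0, max_ang=150.0):
    """Quality filter from the Result section: distance under 20 m and
    intersection angle strictly between 30 and 150 degrees."""
    ang = intersection_angle_deg(p, c1, c2)
    return (_norm(_sub(p, c1)) < max_dist
            and _norm(_sub(p, c2)) < max_dist
            and min_ang < ang < max_ang)
```

Only points passing `accept_point` would enter the accuracy statistics, mirroring the selection of the 102 points from the 120 automatically measured coordinates.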