
1 Introduction

Multiple examples of systems for tracking objects using dynamic displacement have been previously reported [1, 2]. This is important because highly accurate techniques for capturing the detailed motion of objects are essential ingredients for the design of human-computer interaction systems involving image processing. However, methods for determining the displacement of objects, including optical flow techniques [3], typically yield results contaminated by large numbers of errors.

In this study, we investigate a highly accurate computational technique for optical flow in which object motions are identified by merging multiple color-adjusted captured images. To facilitate this, we adopt block matching as the computational technique for optical flow. The object motions we consider are restricted to translations within a plane.
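Although the paper does not list its implementation, the block-matching principle can be sketched in a few lines: the displacement of a block is taken to be the offset that minimizes the sum of absolute differences (SAD) within a search window. The function and parameter names below are illustrative, not from the original system.

```python
import numpy as np

def block_match(prev, curr, y, x, block=8, search=16):
    """Estimate the displacement of the block at (y, x) in `prev` by
    minimizing the sum of absolute differences (SAD) over candidate
    offsets within `curr`. Returns the best (dx, dy)."""
    ref = prev[y:y + block, x:x + block].astype(np.int32)
    h, w = curr.shape[:2]
    best, best_dxy = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                continue  # candidate block falls outside the image
            cand = curr[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_dxy = sad, (dx, dy)
    return best_dxy
```

In a full flow computation this search is repeated for every block of the first image, yielding one displacement vector per block.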

2 Optical Flow Analysis

We begin by analyzing an optical flow obtained via block-matching methods. The images used in this analysis are shown in Fig. 1. In this figure, the image on the right is shifted overall by 50 pixels to the right compared to the image on the left. In our analysis, we also constructed two modified versions of the figure at right: (1) an image in which +5 was added to the R values of all pixels, and (2) an image in which +5 was added to all RGB values of all pixels.

Fig. 1. Images of the moving object
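The two modified versions described above (+5 to the R values, and +5 to all RGB values) amount to a saturated addition on 8-bit images. A minimal sketch, assuming RGB channel order and NumPy arrays (the helper names are ours):

```python
import numpy as np

def adjust_r(img, delta=5):
    """Add `delta` to the R channel only, clipping to the 8-bit range."""
    out = img.astype(np.int16)
    out[..., 0] += delta  # channel order assumed to be RGB
    return np.clip(out, 0, 255).astype(np.uint8)

def adjust_rgb(img, delta=5):
    """Add `delta` to all three channels, clipping to the 8-bit range."""
    out = img.astype(np.int16) + delta
    return np.clip(out, 0, 255).astype(np.uint8)
```

The intermediate cast to a wider integer type avoids the wrap-around that a direct uint8 addition would produce near 255.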

Figure 2 shows optical flows computed using the images of Fig. 1 and their color-adjusted versions. The left image of Fig. 2 shows the results obtained for the unmodified images, while the center and right images show results obtained for the images with R values increased by 5 and with all RGB values increased by 5, respectively. The five most commonly occurring displacements determined from the optical flows of Fig. 2 are listed in Table 1, ranked in order of the frequency with which they occur in the images.

Fig. 2. Optical-flow results under color-balance changes

Table 1. Frequencies of occurrence of optical-flow displacements
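A ranking such as Table 1 can be produced by tallying the displacement vectors of a flow field. A minimal sketch, assuming the flow is given as a list of (dx, dy) tuples (the helper name is ours):

```python
from collections import Counter

def top_displacements(flow, k=5):
    """Return the k most frequent (dx, dy) displacement vectors in a
    flow field, given as a list of (dx, dy) tuples, with their counts."""
    return Counter(flow).most_common(k)
```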

The following features are evident in Fig. 2 and Table 1:

  • The optical flows contain numerous displacement errors in similarly colored regions.

  • Large numbers of correct flows are computed with high frequency, while erroneous flows are present with low frequency.

  • Color-adjusting the analyzed images yields different detection results.

3 Proposed Method

Next, we propose a computational method that corrects optical flows based on the three aforementioned features observed in our analysis. The correction methods used by our proposed method may be broadly classified into three types. In this section we discuss these three methods and describe their implementation in our proposed method. Figure 3 shows the images used in this section. These images consist of a color-chart board, as captured before and after a rightward displacement of 6.5 cm, together with the corresponding optical-flow image. The displacement here corresponds to approximately 105 pixels in the images.

Fig. 3. Images of the moving object and the corresponding optical flow

3.1 Correction Using the Most Frequent Vector

Based on the results of the analysis of Sect. 2, we conclude that correct optical flows are computed with high frequency, while erroneous optical flows are present at low frequencies. This observation suggests the possibility of correcting errors by replacing computed optical flows with the optical flows that appear with the greatest frequency. Moreover, even in cases involving multiple duplicate objects with different displacements, we expect the optical flow that appears with the greatest frequency for each duplicate to be correct. Motivated by these observations, we attempted a correction scheme in which we replace the optical flows within each region delineated by the edges present in an image, with similarly colored regions taken to delineate duplicate objects.
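A hedged sketch of this replacement step, assuming the region labels (one id per edge-delineated, similarly colored region) come from a separate segmentation step not shown here, and that flows and labels are stored per pixel in dictionaries:

```python
from collections import Counter

def correct_by_mode(flow, labels):
    """Replace each flow vector with the most frequent vector of its
    region. `flow` maps pixel -> (dx, dy); `labels` maps pixel -> region id."""
    by_region = {}
    for p, v in flow.items():
        by_region.setdefault(labels[p], []).append(v)
    # Most frequent vector per region
    mode = {r: Counter(vs).most_common(1)[0][0] for r, vs in by_region.items()}
    return {p: mode[labels[p]] for p in flow}
```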

3.2 Correction Using Perturbations of Image Color

Based on the results of the analysis of Sect. 2, we conclude that adjusting the coloration of analyzed images yields differing results. For this reason, it seems reasonable to expect that applying the correction scheme of Sect. 3.1 to coloration-adjusted images will also give rise to discrepancies. The left image of Fig. 4 shows the result of applying the correction scheme of Sect. 3.1 to an image before coloration adjustment. Similarly, the right image of Fig. 4 shows the result of applying the correction scheme of Sect. 3.1 to an image in which all R values have been increased by 5.

Fig. 4. Optical flow after the correction of Sect. 3.1

From Fig. 4, we see that intentionally adjusting the coloration of the image yields different results for optical-flow correction. These considerations suggest that errors may be corrected by preparing three versions of an image with intentionally adjusted colorations and applying the correction scheme of Sect. 3.1 to each. In this proposal, we use 14 coloration-adjusted images together with the unmodified image, and for each object duplicate we select as many as three of these versions. We then compute optical flows for the three selected versions and execute the correction scheme of Sect. 3.1.
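The paper does not specify how the three corrected flows are combined; one plausible rule, shown here only as an assumption, is a per-pixel majority vote over the corrected flow fields:

```python
from collections import Counter

def vote(flows):
    """Per-pixel majority vote over several corrected flow fields.
    `flows` is a list of dicts mapping pixel -> (dx, dy); all dicts
    are assumed to share the same pixel keys."""
    result = {}
    for p in flows[0]:
        result[p] = Counter(f[p] for f in flows).most_common(1)[0][0]
    return result
```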

3.3 Correction Using Consecutive Images

In this study, we use moving images to determine optical flows. In general, the motion of objects in moving images is continuous. Consequently, there is a high probability of finding the same vector in two consecutive images captured for optical-flow computations. For this reason, we obtain vectors using the three frames that follow the two consecutive images used for our optical-flow computations, and we use these vectors to assist in selecting images within the correction scheme of Sect. 3.2.
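The continuity assumption can be expressed as a simple consistency check: a candidate vector is accepted only if it reappears, within a tolerance, in the flows obtained from the following frames. A minimal sketch (the function name and the tolerance value are ours, not from the original system):

```python
def consistent(vector, following_flows, pixel, tol=1):
    """Return True if `vector` reappears (within `tol` per component)
    at `pixel` in every flow computed from the following frames."""
    dx, dy = vector
    for f in following_flows:
        fx, fy = f.get(pixel, (None, None))
        if fx is None or abs(fx - dx) > tol or abs(fy - dy) > tol:
            return False
    return True
```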

4 Comparative Experiments

To test whether or not the optical-flow correction scheme proposed in this study actually yields corrections in practice, we conducted experiments to compare corrected results to pre-correction results. In these experiments, we used moving images obtained from a single calibrated camera: six seconds of video at a frame rate of five frames per second (FPS). The content of these images consists of a color-chart board held in one hand and moved from left to right in front of the camera in a room with stable illumination. Figure 5 shows two consecutive images selected at random from the set of moving images used in these experiments. In this figure, we have selected portions of the images in which the color-chart board is moving down and to the left.

Fig. 5. Images of the moving object in the comparison experiment

Figure 6 shows the optical flow computed using Fig. 5. The left and right images in this figure correspond to the pre-correction and post-correction results, respectively. As we can see, our correction scheme successfully eliminates errors. However, there are many regions in which the optical flow has been replaced by a displacement of 0. One possible explanation is that our method uses the Canny edge-detection technique to distinguish duplicate objects; edges are not detected where the gradient varies only slightly, and such cases result in a merger with regions corresponding to other duplicate objects. For this reason, the values that occur with the greatest frequency differ from the ideal values, and the flows are thus replaced by non-ideal values.

Fig. 6. Optical-flow results for Fig. 5

In addition, in this study we restrict the motion of objects to translational motion in the plane. However, images in which a person moves the board from left to right by hand may include slight expansions or contractions and slight rotations. Consequently, the maximal-frequency values may differ from the ideal values.

5 Conclusion

In this study we investigated an optical-flow correction scheme designed to improve the accuracy of optical flows. We began by attempting to detect key features present in the results of existing optical-flow computations and succeeded in identifying three such features. Based on these findings, we proposed an optical-flow correction scheme whose operational steps may be broadly classified into three categories.

To verify the effectiveness of our proposed method, we conducted experiments to compare corrected and uncorrected results. Upon drawing optical flows computed before and after our correction process, we found that our correction scheme successfully eliminated large numbers of errors. However, some optical flows were overwritten by erroneous displacements, and thus lost. We attribute these losses to the low accuracy of our edge-selection technique, which caused the correction process to fail due to mergers with other regions, as well as to motions other than translational motion within a plane in the moving images we used.

Topics for future work include efforts to improve the accuracy of our method by increasing the accuracy of our edge-selection algorithm, and to detect motions of duplicate objects other than in-plane translations, including expansion, contraction, and rotation.