Proceeding Paper

Vision-Based Structural Identification Using an Enhanced Phase-Based Method †

1 Department of Civil and Environmental Engineering, Politecnico di Milano, 20133 Milano, Italy
2 Department of Civil Engineering, University of Kurdistan, Sanandaj P.O. Box 416, Iran
* Author to whom correspondence should be addressed.
Presented at the 10th International Electronic Conference on Sensors and Applications (ECSA-10), 15–30 November 2023; Available online: https://ecsa-10.sciforum.net/.
Eng. Proc. 2023, 58(1), 62; https://doi.org/10.3390/ecsa-10-16036
Published: 15 November 2023

Abstract

Operational modal analysis relies on data collected by a network of sensors installed on the monitored structure to measure its response to external stimuli. As the instrumentation can be costly, sensors are placed at a limited number of locations where damage-sensitive features can hopefully be sensed; the ability to detect a shift from the undamaged structural state in real time may therefore be detrimentally affected. Non-contact measurement methods relying on, e.g., digital video cameras, which have gained interest in recent years, can instead provide high-resolution, spatially dense measurements. In this study, shifts in the dynamic response of a vibrating structure were assessed starting from videos of its motion. In phase-based optical flow methodologies, a linear correlation is customarily assumed between the structural motion and the local phase, extracted with, e.g., a Gabor filter; since this correlation is not always linear, the regions where linearity fails must be detected in all the frames. Mode shapes and vibration frequencies were finally obtained through blind source separation. The performance of the proposed method was investigated to verify its accuracy in extracting the dynamic features of the considered structure.

1. Introduction

Modal analysis is a vital tool used to identify the dynamic behavior of structures in terms of natural frequencies, mode shapes, and damping ratios [1,2,3]. By understanding how structures respond to external forces, it can be ensured that they withstand such loads, as well as the relevant environmental conditions [4]. Modal analysis also serves as a crucial tool in model order reduction techniques [5,6] and structural health monitoring (SHM) [7]. In recent times, vision-based measurements have emerged as a highly effective approach for full-field identification [8,9], damage detection [10], model updating [11], and response measurement [12,13]. This approach exploits the data in images to gain insights into structural behavior. Besides providing high-resolution response measurements, it avoids the additional weight linked to the sensing system and reduces the cost of purchasing, installing, and maintaining sensors.
Optical flow estimation is a computer vision method that tracks the movement of pixels between consecutive frames of a video. Fleet and Jepson [14] demonstrated that the local phase of an image, obtained through quadrature filters [15], represents motion more robustly than the intensity. Phase-based motion magnification, introduced by Wadhwa et al. [16], amplifies subtle motions in a video sequence by focusing on the phase information. By means of this technique, Yang et al. [17] obtained mode shapes from videos of vibrating structures; these mode shapes were subsequently employed in [18] to identify and locate damage (also refer to [19]). Southwick et al. [20] expanded this approach to extract 3D volumetric motions. Luo et al. [21] introduced a novel image-processing technique that addresses the challenges vision sensors face in outdoor structural displacement monitoring, as the conventional approach is susceptible to noise and limited in its measurement range. To enhance accuracy, Cai et al. [22] addressed the limitations of phase-based estimation and proposed a novel multi-view measurement method. Miao et al. [23], by optimizing the Gabor filter parameters, introduced a robust phase-based displacement measurement technique to capture vibration responses.
In this paper, we explore the impact of a specific filter parameter on the filtering response. As shown in the results, not all the pixels in an image are suitable for motion detection or identification; the selection of the appropriate one(s) is a critical concern. Via a defined criterion, the regions where the linear correlation between phase and motion does not hold are identified; this detection enhances the accuracy of the motion estimation. The phase-based displacement measurement was applied to a real test case, and phase-based identification, incorporating Independent Component Analysis (ICA) for blind source separation, was performed on videos of a vibrating structure.

2. Motion and Phase Relation

In one-dimensional signal analysis, phase delay is related to how signals evolve over time; the same concept can be applied to two-dimensional signals like images. When a feature in an image undergoes spatial movement, like a translation or a rotation, this process leads to changes in the local phase of the pixels. This shift in phase is directly related to the extent and direction of the movement.
Fleet and Jepson [14] explored the connection between local phase differences and motion. By tracking contours of constant phase in successive frames, a motion field can be obtained. Let the intensity of the first frame, at time t0 and at a pixel with coordinates X = (x, y), be I(X); if the pattern is displaced by δ(X, t), the intensity profile of the next frame, at time t1, is I(X + δ(X, t)). To extract the local phase, it is necessary to use filters like the Gabor filter.
The response of the Gabor filter is a complex-valued function that can be expressed as

R(X, t) = ρ(X, t) e^{iφ(X, t)}     (1)

where ρ(X, t) and φ(X, t) represent its amplitude and phase components, given by

ρ(X, t) = |R(X, t)| = √(Re[R(X, t)]² + Im[R(X, t)]²),   φ(X, t) = arg[R(X, t)]     (2)
Tracking the continuous changes in the phase contours over time provides a reliable approximation of the motion field. In simple terms, the points X = (x, y) on these contours maintain a constant phase value, φ(X, t) = c. The displacement along a direction θ is then derived from the movement of the local phase contours.
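The phase-motion relation can be illustrated with a minimal 1-D sketch (not the paper's implementation; the filter tuning k, the width σ, and the Gaussian test feature are arbitrary choices for illustration). A feature is translated by a known number of pixels, and the displacement is recovered, in the spirit of Fleet and Jepson's approach, as the temporal phase difference divided by the spatial phase gradient:

```python
import numpy as np

def gabor_kernel(half_width, k, sigma):
    # Complex 1-D Gabor filter tuned to spatial frequency k (rad/pixel).
    x = np.arange(-half_width, half_width + 1)
    return np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(1j * k * x)

def local_phase(signal, k=0.5, sigma=8.0):
    # Local phase and amplitude of the complex filter response R(x).
    response = np.convolve(signal, gabor_kernel(32, k, sigma), mode="same")
    return np.angle(response), np.abs(response)

def wrap(angle):
    # Map a phase difference back into (-pi, pi].
    return np.angle(np.exp(1j * angle))

# Two frames of a Gaussian "feature", the second translated by 2 pixels.
x = np.arange(256)
frame0 = np.exp(-((x - 128) / 10.0) ** 2)
frame1 = np.exp(-((x - 130) / 10.0) ** 2)

phi0, amp0 = local_phase(frame0)
phi1, amp1 = local_phase(frame1)

# Displacement from phase: d = (phi0 - phi1) / phi_x, evaluated where the
# filter amplitude (and hence the phase reliability) is largest.
i = int(np.argmax(amp0))
dphi = wrap(phi0[i] - phi1[i])
phi_x = wrap(phi0[i + 1] - phi0[i])   # spatial gradient of the phase
d_est = dphi / phi_x
print(round(d_est, 2))                # close to the true 2-pixel shift
```

Note that the displacement follows from the measured phase gradient rather than from the nominal tuning frequency k, since the instantaneous frequency of the response generally deviates from k; this is precisely the issue addressed by the constraints of Section 3.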

3. Unreliable Phase Detection

Motion estimation from videos is characterized by a significant challenge: it is not feasible to extract genuine motion data from every pixel. This issue arises due to various factors, including fluctuations in lighting conditions, Gabor filter parameters, varying object scales within an image, and the presence of noise.
To provide an illustration of this problem, Figure 1a displays an image of a chimney located at Politecnico di Milano, as captured using a standard mobile phone camera. Images were then processed to create a simulated displacement scenario (ten pixels).
By applying the Gabor filter at two different scales, suitable pixels were selected for displacement measurement on the basis of the obtained phase and amplitude. When the phase varies between π and −π, jumps (phase wrapping) occur, but these should not be considered unstable regions. A simple way of unwrapping the phase was adopted in this study: when the phase difference between two frames at a point is greater than π in magnitude, 2π is added or subtracted accordingly.
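The unwrapping rule just described can be sketched as follows (a minimal version for a per-pixel phase time series; NumPy's `np.unwrap` implements the same idea):

```python
import numpy as np

def unwrap_over_time(phase_series):
    # Remove 2*pi jumps from a phase time series: whenever the difference
    # between consecutive frames exceeds pi in magnitude, 2*pi is added or
    # subtracted (propagated to all later frames).
    out = np.asarray(phase_series, dtype=float).copy()
    for t in range(1, len(out)):
        diff = out[t] - out[t - 1]
        out[t:] -= 2.0 * np.pi * np.round(diff / (2.0 * np.pi))
    return out

# A phase ramp that wraps from near +pi to near -pi between frames 2 and 3.
wrapped = np.array([2.0, 2.8, -2.9, -2.1])
print(unwrap_over_time(wrapped))   # approx. [2.0, 2.8, 3.38, 4.18]
```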
In Figure 1d, displacement measurements with three colors are shown: the green region stands for measurements characterized by an error smaller than 5%, while the red region features measurement errors larger than 5%; the cyan region is instead related to an unstable phase. As shown in the figure, the area representing accurate measurements increases in size at a higher scale, but finer details are lost; for instance, the upper-left part of the tower is not captured by the selected pixels at the higher scale.
The phase-based motion estimation methodology is based on the linear correlation between the phase and the structural motion. However, phase nonlinearity stands out as the primary cause of inaccurate motion estimation. By detecting the regions in which the phase contours are not likely to provide reliable information about the motion, the measurement error can be reduced. According to [24], two constraints can be adopted to detect the points showing such behavior. This approach was adopted here, with different bounds on the output of the Gabor filter tuned to one scale. The first is a frequency constraint: the instantaneous frequency of the filter output, namely the spatial gradient φ_x(X, t) of the phase, was constrained to be close to the frequency k at which the filter is tuned:

|φ_x(X, t) − k| < τ     (3)

The second constraint is instead based on the amplitude: its spatial derivative ρ_x(X, t) was constrained to be small relative to the amplitude itself, according to

|ρ_x(X, t)| / ρ(X, t) < τ     (4)
To determine the impact of these constraints, the displacement field between the two frames in Figure 1a was computed at four scales. Pixels were first selected using a simple threshold on the amplitude, and then according to Equations (3) and (4). Figure 2 shows that, by employing these thresholds with τ = 0.05, the cyan regions are completely avoided, and the red region is reduced in comparison with a pixel selection based solely on the amplitude.
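The two constraints can be turned into a pixel-selection mask; the sketch below assumes a 1-D row of pixels with the analysis direction along x, and the small `amp_floor` guard against division by zero is an addition of ours, not part of the paper:

```python
import numpy as np

def reliable_phase_mask(phase, amplitude, k, tau=0.05, amp_floor=1e-9):
    # Frequency constraint, Eq. (3): the instantaneous frequency (spatial
    # gradient of the unwrapped phase) must stay close to the tuning value k.
    phi_x = np.gradient(np.unwrap(phase, axis=-1), axis=-1)
    freq_ok = np.abs(phi_x - k) < tau
    # Amplitude constraint, Eq. (4): the relative variation of the
    # amplitude along x must be small.
    rho_x = np.gradient(amplitude, axis=-1)
    amp_ok = np.abs(rho_x) / np.maximum(amplitude, amp_floor) < tau
    return freq_ok & amp_ok

# Synthetic 1-D check: the phase slope equals k on the left part and
# drifts to 0.8 rad/pixel on the right part, which must be rejected.
x = np.arange(100, dtype=float)
k = 0.5
phase = np.where(x < 60, k * x, k * 60 + 0.8 * (x - 60))
amplitude = np.ones_like(x)
mask = reliable_phase_mask(phase, amplitude, k)
print(bool(mask[10]), bool(mask[80]))   # True False
```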

4. Full-Field Identification

An investigation of the performance of the pixel selection based on the proposed criteria is provided in this section, with the aim of identifying full-field mode shapes and vibration frequencies. A video of a model of a ruler subject to free vibrations was generated using MATLAB. Non-linearity in the phase was introduced by the lack of alignment between the filter orientation and the edge of the body. The identification process is depicted in Figure 3.
By defining the appropriate pixels on the basis of the discussion in Section 3, a matrix was created to collect the time series of the selected phases across all frames. By employing principal component analysis (PCA), the dimensionality of this matrix was reduced to the number of excited modes, which turned out to be three in the present case. In general, this number of modes is set on the basis of the eigenvalues of the original matrix: to avoid issues related to noise, only the principal components linked to eigenvalues larger than 1% of the maximum one were retained. Afterward, by way of ICA [25], the vibration frequencies and the corresponding mode shapes were obtained.
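The dimensionality-reduction step can be illustrated with a small synthetic sketch (not the paper's data; frequencies, mode shapes, and noise level are invented for the example). The phase matrix is built from two known modes, PCA is carried out via an SVD with the 1% threshold mentioned above, and the vibration frequency of each retained component is read from its FFT peak. The mode shapes used here are orthogonal, so PCA alone already separates the sources; in the general case, the retained components would then be passed to an ICA routine such as FastICA [25]:

```python
import numpy as np

rng = np.random.default_rng(0)
fs, T = 200.0, 10.0                     # sampling rate [Hz] and duration [s]
t = np.arange(0.0, T, 1.0 / fs)
x = np.linspace(0.0, 1.0, 120)          # 120 "pixel" locations along an edge

# Synthetic phase matrix: mode shapes (pixels x modes) times modal
# coordinates (modes x frames), plus measurement noise.
freqs_true = (3.0, 18.0)
shapes = np.column_stack([np.sin(np.pi * x), np.sin(2.0 * np.pi * x)])
coords = np.vstack([np.cos(2.0 * np.pi * freqs_true[0] * t),
                    0.4 * np.cos(2.0 * np.pi * freqs_true[1] * t)])
Phi = shapes @ coords + 0.01 * rng.standard_normal((x.size, t.size))

# PCA via SVD: retain components whose singular value exceeds 1% of the
# largest one, as done in the text.
Phi0 = Phi - Phi.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Phi0, full_matrices=False)
n_modes = int(np.sum(s > 0.01 * s[0]))
components = Vt[:n_modes]               # reduced time histories

# Vibration frequency of each component from the peak of its spectrum.
f_axis = np.fft.rfftfreq(t.size, 1.0 / fs)
found = sorted(float(f_axis[np.argmax(np.abs(np.fft.rfft(c)))])
               for c in components)
print(n_modes, found)                   # 2 modes, close to 3 Hz and 18 Hz
```

The columns of `U` corresponding to the retained components play the role of the full-field mode shapes in this reduced representation.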
In Figure 4a, the identified frequencies (2.99, 18.45, and 51.87) are compared with their corresponding real values (2.93, 18.63, and 51.77) to demonstrate the remarkable accuracy obtained, amounting to 98%, 99.04%, and 99.8%, respectively. Figure 4b shows the identified mode shapes, which are compared to the real ones using the modal assurance criterion, revealing similarities of 99.1%, 96.7%, and 98.2%, respectively. In Figure 5, it is further demonstrated that the selected regions all have consistent representations of the mode shapes.
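The mode-shape comparison above rests on the modal assurance criterion (MAC); a minimal implementation of the standard formula (not the authors' code) is:

```python
import numpy as np

def mac(phi_a, phi_b):
    # Modal Assurance Criterion between two mode-shape vectors: 1 for
    # identical shapes up to scaling, near 0 for unrelated shapes.
    num = np.abs(np.vdot(phi_a, phi_b)) ** 2
    return num / (np.vdot(phi_a, phi_a).real * np.vdot(phi_b, phi_b).real)

# A mode shape, a rescaled copy, and an unrelated shape.
x = np.linspace(0.0, 1.0, 50)
phi1 = np.sin(np.pi * x)
print(round(mac(phi1, -3.0 * phi1), 3))            # ~1: scaling is irrelevant
print(round(mac(phi1, np.sin(2.0 * np.pi * x)), 3))  # ~0: different shapes
```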

5. Conclusions

In this study, the impact of the bandwidth of the Gabor filter on the phase extracted from the frames of a video of structural vibrations was investigated. By enlarging the value of this parameter, the level of detail under examination is decreased; by decreasing it instead, the assumed linear relationship between phase and motion is violated. Choosing appropriate pixels for the analysis therefore poses challenges in image processing. By imposing specific conditions on the phase and amplitude of the filtered images, an artificially imposed displacement of a real structure was examined. It was observed that these conditions allow for the identification of unsuitable phase regions, in order to attain more accurate solutions. Moreover, the selected phases were exploited for blind modal identification through dimensionality reduction with PCA, followed by the application of ICA. The identified frequencies and mode shapes were shown to accurately match the real ones.
In this investigation, pixel selection was performed exclusively using the first frame of the video. However, considering the potential effects of non-linearity, this choice might differ across different frames. The objective in future research will be to identify these locations across all frames, remove them, and subsequently provide accurate estimations using learning-based methods.

Author Contributions

Conceptualization, S.A., S.M. and K.K.; methodology, S.A. and K.K.; validation, S.A., S.M. and K.K.; formal analysis, S.A.; resources, K.K.; data curation, S.A.; writing—original draft preparation, S.A.; writing—review and editing, S.M.; visualization, S.M.; supervision, S.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Rosafalco, L.; Eftekhar Azam, S.; Manzoni, A.; Corigliano, A.; Mariani, S. Unscented Kalman Filter Empowered by Bayesian Model Evidence for System Identification in Structural Dynamics. Comput. Sci. Math. Forum 2022, 2, 3.
2. Azizi, S.; Karami, K.; Nagarajaiah, S. Developing a semi-active adjustable stiffness device using integrated damage tracking and adaptive stiffness mechanism. Eng. Struct. 2021, 238, 112036.
3. Amini, F.; Karami, K. Damage detection algorithm based on identified system Markov parameters (DDA/ISMP) in building structures with limited sensors. Smart Mater. Struct. 2012, 21, 055010.
4. Gatti, F.; Rosafalco, L.; Colombera, G.; Mariani, S.; Corigliano, A. Multi-storey shear type buildings under earthquake loading: Adversarial learning-based prediction of the transient dynamics and damage classification. Soil Dyn. Earthq. Eng. 2023, 173, 108141.
5. Torzoni, M.; Rosafalco, L.; Manzoni, A.; Mariani, S.; Corigliano, A. SHM under varying environmental conditions: An approach based on model order reduction and deep learning. Comput. Struct. 2022, 266, 106790.
6. Rosafalco, L.; Manzoni, A.; Mariani, S.; Corigliano, A. Combined model order reduction techniques and artificial neural network for data assimilation and damage detection in structures. In Computational Sciences and Artificial Intelligence in Industry: New Digital Technologies for Solving Future Societal and Economical Challenges; Springer: Cham, Switzerland, 2022; pp. 247–259.
7. Torzoni, M.; Manzoni, A.; Mariani, S. Structural health monitoring of civil structures: A diagnostic framework powered by deep metric learning. Comput. Struct. 2022, 271, 106858.
8. Feng, D.; Feng, M.Q. Experimental validation of cost-effective vision-based structural health monitoring. Mech. Syst. Signal Process. 2017, 88, 199–211.
9. Yang, Y.; Dorn, C. Affinity propagation clustering of full-field, high-spatial-dimensional measurements for robust output-only modal identification: A proof-of-concept study. J. Sound Vib. 2020, 483, 115473.
10. Dworakowski, Z.; Kohut, P.; Gallina, A.; Holak, K.; Uhl, T. Vision-based algorithms for damage detection and localization in structural health monitoring. Struct. Control Health Monit. 2016, 23, 35–50.
11. Martini, A.; Tronci, E.M.; Feng, M.Q.; Leung, R.Y. A computer vision-based method for bridge model updating using displacement influence lines. Eng. Struct. 2022, 259, 114129.
12. Yang, Y.; Jung, H.; Dorn, C.; Park, G.; Farrar, C.; Mascareñas, D. Estimation of full-field, full-order experimental modal model of cable vibration from digital video measurements with physics-guided unsupervised machine learning and computer vision. Struct. Control Health Monit. 2019, 26, e2358.
13. Bhowmick, S.; Nagarajaiah, S.; Lai, Z. Measurement of full-field displacement time history of a vibrating continuous edge from video. Mech. Syst. Signal Process. 2020, 144, 106847.
14. Fleet, D.J.; Jepson, A.D. Computation of component image velocity from local phase information. Int. J. Comput. Vis. 1990, 5, 77–104.
15. Weldon, T.P.; Higgins, W.E.; Dunn, D.F. Efficient Gabor filter design for texture segmentation. Pattern Recognit. 1996, 29, 2005–2015.
16. Wadhwa, N.; Rubinstein, M.; Durand, F.; Freeman, W.T. Phase-based video motion processing. ACM Trans. Graph. 2013, 32, 1–10.
17. Yang, Y.; Dorn, C.; Mancini, T.; Talken, Z.; Kenyon, G.; Farrar, C.; Mascareñas, D. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification. Mech. Syst. Signal Process. 2017, 85, 567–590.
18. Yang, Y.; Dorn, C.; Mancini, T.; Talken, Z.; Theiler, J.; Kenyon, G.; Farrar, C.; Mascareñas, D. Reference-free detection of minute, non-visible, damage using full-field, high-resolution mode shapes output-only identified from digital videos of structures. Struct. Health Monit. 2018, 17, 514–531.
19. Yang, Y.; Jung, H.; Dorn, C.; Park, G.; Farrar, C.; Mascareñas, D. Estimation of full-field dynamic strains from digital video measurements of output-only beam structures by video motion processing and modal superposition. Struct. Control Health Monit. 2019, 26, e2408.
20. Southwick, M.; Mao, Z.; Niezrecki, C. Volumetric Motion Magnification: Subtle Motion Extraction from 4D Data. Measurement 2021, 176, 109211.
21. Luo, L.; Feng, M.Q.; Wu, Z.Y. Robust vision sensor for multi-point displacement monitoring of bridges in the field. Eng. Struct. 2018, 163, 255–266.
22. Cai, E.; Zhang, Y.; Ji, X.; Lu, X.; Xie, L.; Zhuang, Y.; Zhao, T.; Lin, G. Estimating small structural motions from multi-view video measurement. Eng. Struct. 2023, 275, 115259.
23. Miao, Y.; Jeon, J.Y.; Kong, Y.; Park, G. Phase-based displacement measurement on a straight edge using an optimal complex Gabor filter. Mech. Syst. Signal Process. 2021, 164, 108224.
24. Jepson, A.D.; Fleet, D.J. Phase singularities in scale-space. Image Vis. Comput. 1991, 9, 338–343.
25. Hyvärinen, A.; Oja, E. Independent component analysis: Algorithms and applications. Neural Netw. 2000, 13, 411–430.
Figure 1. (a) Pair of images with 10-pixel displacement between them; (b) contours of amplitude and phase; (c) selected pixel locations; (d) displacement measurement. Here, green regions are characterized by an error smaller than 5%, red regions by an error larger than 5%, and cyan regions by an unstable phase; (e) close-up area. Upper row with scale = 50; lower row with scale = 100.
Figure 2. Effect of different thresholds on the displacement measurement between two frames at different scales (colors are defined in the caption of Figure 1).
Figure 3. Full-field identification using PCA–ICA.
Figure 4. Identified (a) vibration frequencies and (b) mode shapes using PCA−ICA.
Figure 5. First, second, and third full-field mode shapes, with pixel intensity represented with different colors.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Azizi, S.; Karami, K.; Mariani, S. Vision-Based Structural Identification Using an Enhanced Phase-Based Method. Eng. Proc. 2023, 58, 62. https://doi.org/10.3390/ecsa-10-16036

