Article

Blind Robust 3D Mesh Watermarking Based on Mesh Saliency and Wavelet Transform for Copyright Protection

1 LRIT-CNRST URAC 29, Rabat IT Center, Faculty of Sciences, Mohammed V University in Rabat, Rabat, BP 1014, Morocco
2 PRISME Laboratory, University of Orleans, 45100 Orléans, France
3 Higher School of Technology, Sale, BP 227, Morocco
4 LRIT-CNRST URAC 29, Rabat IT Center, FLSH, Mohammed V University in Rabat, Rabat, BP 1014, Morocco
5 Laboratoire Electronique, Informatique et Image (Le2i) UMR 6306 CNRS, University of Burgundy, 21078 Dijon, France
* Author to whom correspondence should be addressed.
Information 2019, 10(2), 67; https://doi.org/10.3390/info10020067
Submission received: 1 January 2019 / Revised: 18 January 2019 / Accepted: 22 January 2019 / Published: 18 February 2019

Abstract:
Three-dimensional models have been extensively used in several applications, including computer-aided design (CAD), video games, and medical imaging, owing to improvements in computer processing capability and network bandwidth. Consequently, the need for 3D mesh watermarking schemes that protect copyright has increased considerably. In this paper, a blind robust 3D mesh watermarking method based on mesh saliency and wavelet transform for copyright protection is proposed. The watermark is inserted by quantizing the wavelet coefficients using quantization index modulation (QIM) according to the mesh saliency of the 3D semi-regular mesh. The synchronizing primitive is the distance between the mesh center and the salient points, in descending order. The experimental results show the high imperceptibility of the proposed scheme while ensuring good robustness against a wide range of attacks including smoothing, additive noise, element reordering, and similarity transformations.

1. Introduction

Due to advances in telecommunication networks and digital media processing, multimedia content including images, audio, video, and 3D objects can easily be copied and redistributed by unauthorized users. As a result, the need to protect such content has become crucial. Digital watermarking, which consists of inserting a watermark into the host data to protect copyright, is considered an efficient solution to this issue [1]. Here, we consider only 3D triangular meshes as cover media and address robust watermarking. Three-dimensional objects are widely used in several applications such as computer-aided design (CAD), virtual reality, and medical imaging. Unlike 2D images, several representations for 3D models exist, including NURBS, voxels, and meshes. However, the 3D mesh has become the standard representation because of its simplicity and usability [2]. A 3D mesh is a collection of polygonal facets that approximate a real 3D object. It has three primitives: vertices, facets, and edges. A 3D mesh can also be described by its geometry, which refers to the coordinates of the vertices, and its connectivity, which describes the adjacency relations between vertices and facets. The degree of a facet is the number of its component edges, while the valence of a vertex is the number of its incident edges. The majority of digital watermarking approaches have focused on images, video, and audio, while few works have been proposed for 3D meshes. This situation is mainly due to the difficulties encountered when manipulating the complex topology and irregular representation of 3D meshes, as well as the severity of the existing attacks. Unlike 2D images, in which pixels have an intrinsic order that can synchronize the watermark bits, 3D meshes have no obvious robust intrinsic ordering. Indeed, an intuitive order such as the order of vertices in the Cartesian coordinate system can easily be modified [3].
Each 3D watermarking technique should ensure a tradeoff between three main constraints [4]: imperceptibility, capacity, and robustness. Imperceptibility is the similarity between the original model and the stego model, while capacity refers to the maximum number of bits that can be embedded in the 3D model. Robustness is the ability to extract the watermark bits from the stego mesh even after applying manipulations called attacks. The latter can be divided into two main kinds. Geometric attacks include local deformations, similarity transformations (translation, rotation, and uniform scaling), and signal processing manipulations such as noise addition, smoothing, and compression. Connectivity attacks include subdivision, cropping, remeshing, and simplification. Three-dimensional mesh watermarking techniques have different applications such as authentication, content enhancement, and copyright protection, among others. It is worth noticing that the proposed method focuses on copyright protection.
There are several classification criteria for 3D watermarking methods. According to the embedding domain, existing methods can be divided into spatial [5] and transform-domain techniques [6]. In the spatial domain, the watermark is embedded by modifying the geometry or the connectivity of the 3D mesh, while in the transform domain the watermark is inserted by altering the coefficients obtained after a certain transformation such as the wavelet transform. Based on resistance to attacks, watermarking algorithms can be classified into robust, semi-fragile, and fragile, each kind being used in a specific application. It is well known that robust methods are used to protect copyright; such methods should ensure high robustness against common attacks, such as signal processing attacks and geometric manipulations, while maintaining good imperceptibility. The proposed method is based on the wavelet transform using the subdivision of Lounsbery et al. [7]. We note that such a transformation can be applied only to semi-regular meshes [3]. Several methods using the wavelet transform have been proposed [3,8,9].
Blind 3D watermarking schemes face a very challenging issue: the geometric distortions that damage the mesh appearance. This problem is even more critical in applications such as medical diagnosis or manufacturing, where a small modification can cause a significant difference between the original mesh and the watermarked one. The obvious way to overcome this issue is to use techniques that preserve the important regions of the 3D mesh. To the human eye, the importance of a vertex or a region varies across the 3D surface. To describe this importance, some techniques use mesh saliency to distinguish salient from non-salient regions [10,11,12]. Therefore, to meet the invisibility requirement, the embedding of the watermark should not have a visual impact on the geometry of perceptually important regions. Thus, mesh saliency can provide important information for adjusting the effects on the perceived quality of 3D models. In [3], Wang et al. proposed a hierarchical watermarking scheme for semi-regular meshes in which fragile, high-capacity, and robust watermarks are embedded at different resolution levels using the wavelet transform.
In fact, only a few saliency-based watermarking methods for 3D meshes have been proposed. In [10], a 3D mesh watermarking algorithm based on the mesh saliency of Lee et al. [13] is presented. First, perceptually conspicuous regions are identified using the mesh saliency. Second, the norm of each vertex is calculated and its histogram is constructed. Finally, the vertex norms associated with each bin are normalized. In [11], Son et al. presented a 3D watermarking technique that aims to preserve the appearance of the watermarked 3D mesh. The vertex norm histogram, already proposed as a watermarking primitive by Cho et al. [14], is used: the watermark is embedded by modifying the mean or the variance of the histogram. Recently, Medimegh et al. [12] proposed a robust and blind watermarking method for 3D meshes based on the auto diffusion function (ADF). The authors extract salient points using the ADF, segment the mesh according to these points, and insert the watermark statistically in each region.
In this context, the proposed method takes full advantage of mesh saliency and QIM quantization to design a watermarking method that is robust to a wide range of attacks while ensuring high imperceptibility. The proposed method is based on the visual saliency associated with the wavelet coefficient vectors. The watermark bits are inserted into the original 3D mesh by quantizing the wavelet coefficient vectors after one wavelet decomposition. The coefficients to be quantized are chosen based on the visual saliency of the 3D mesh, while the watermarking synchronization primitive is the distance between the mesh center and the salient points. This order has been found to be robust to several attacks such as similarity transformations and element reordering.
The rest of this paper is organized as follows. Section 2 describes the used terminologies. The proposed method is reported in Section 3 followed by a presentation of the experimental setup in Section 4. Section 5 discusses the experimental results. The paper is concluded in Section 6.

2. Background

2.1. Three-Dimensional Mesh Saliency

Generally, human visual attention is directed to the salient regions of a 3D mesh. The mesh saliency used in our method is that of Lee et al.'s scheme [13]. The saliency of each vertex is computed from the difference between Gaussian-weighted averages of the mean curvature at fine and coarse scales in its neighborhood. First, the surface curvature at each vertex $v$ is computed using Taubin's scheme [15]. Let $Curv(v)$ denote the mean curvature of the 3D model at a vertex $v$. The Gaussian-weighted average of the mean curvature is defined as
$$G(Curv(v), \sigma) = \frac{\sum_{x \in N(v, 2\sigma)} Curv(x)\, \exp\!\left(-\frac{\|x - v\|^2}{2\sigma^2}\right)}{\sum_{x \in N(v, 2\sigma)} \exp\!\left(-\frac{\|x - v\|^2}{2\sigma^2}\right)}$$
where $x$ is a mesh point and $N(v, \sigma)$ denotes the neighborhood of a vertex $v$, i.e., the set of points within a Euclidean distance $\sigma$:
$$N(v, \sigma) = \{\, x \;:\; \|x - v\| < \sigma \,\}$$
The saliency $S(v)$ of a vertex $v$ is the absolute difference between the Gaussian-weighted averages computed at the fine and coarse scales:
$$S(v) = \left| G(Curv(v), \sigma) - G(Curv(v), 2\sigma) \right|$$
Figure 1 exhibits an example of the mesh saliency of the Cat and Vase models using Lee et al.'s method [13].
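As a minimal sketch of the saliency computation above, the following assumes per-vertex mean curvatures `curv` have already been obtained (e.g., via Taubin's scheme, not reimplemented here) and uses a brute-force Euclidean neighborhood; function and variable names are illustrative, not from the paper:

```python
import numpy as np

def gaussian_weighted_curvature(points, curv, v_idx, sigma):
    """G(Curv(v), sigma): Gaussian-weighted mean curvature over N(v, 2*sigma)."""
    d2 = np.sum((points - points[v_idx]) ** 2, axis=1)
    mask = d2 < (2 * sigma) ** 2          # neighborhood N(v, 2*sigma)
    w = np.exp(-d2[mask] / (2 * sigma ** 2))
    return np.sum(curv[mask] * w) / np.sum(w)

def saliency(points, curv, sigma):
    """S(v) = |G(Curv(v), sigma) - G(Curv(v), 2*sigma)| for every vertex."""
    return np.array([
        abs(gaussian_weighted_curvature(points, curv, i, sigma)
            - gaussian_weighted_curvature(points, curv, i, 2 * sigma))
        for i in range(len(points))
    ])
```

Note that a constant-curvature surface yields zero saliency everywhere, since both Gaussian-weighted averages coincide.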

2.2. Quantization Index Modulation

Quantization index modulation (QIM) schemes form a family of non-linear data-hiding methods. Most previous research has focused on applying QIM to images, audio, and video; few QIM-based watermarking methods have been proposed for 3D data. QIM methods are simple to implement, have low complexity, and ensure a good tradeoff between capacity and robustness. To insert a binary message composed of 0s and 1s into the 3D mesh, two quantizers are needed.
Let $b \in \{0, 1\}$ be the watermark bit and $x$ the host signal sample to quantize; QIM operates on each sample independently. Two quantizers $Q_0$ and $Q_1$ are needed to insert a watermark bit $b$ [16]. These quantizers are defined as follows:
$$Q_b(x) = \Delta \left[ \frac{1}{\Delta} \left( x - (-1)^b \frac{\Delta}{4} \right) \right] + (-1)^b \frac{\Delta}{4}$$
where $[\cdot]$ denotes the rounding operation and $\Delta$ the quantization step. To extract the watermark bits, the two quantizers are recomputed as in the embedding process, and the extracted bits are obtained as follows:
$$\hat{b} = \underset{b \in \{0,1\}}{\arg\min} \; \left\| y - Q_b(y) \right\|$$
where $y = Q_b(x) + n$ and $n$ is the noise introduced by the channel.
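The embed/extract pair above can be sketched as follows; this is a hedged reading of the dithered-quantizer formula (the dither $(-1)^b \Delta/4$ is subtracted before rounding and added back afterwards), with $\Delta = 0.10$ taken from the paper's tuned value:

```python
import numpy as np

DELTA = 0.10  # quantization step (the paper's experimentally tuned value)

def qim_embed(x, b, delta=DELTA):
    """Quantize host value x with the quantizer Q_b associated with bit b."""
    d = ((-1) ** b) * delta / 4.0          # dither: +delta/4 for b=0, -delta/4 for b=1
    return delta * np.round((x - d) / delta) + d

def qim_extract(y, delta=DELTA):
    """Blind extraction: pick the bit whose quantizer lattice is closest to y."""
    return min((0, 1), key=lambda b: abs(y - qim_embed(y, b, delta)))
```

Extraction is correct as long as the channel noise stays below $\Delta/4$, which is the half-distance between the two quantizer lattices.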

2.3. Multiresolution Wavelet Decomposition

Multiresolution analysis of 3D meshes has been widely used in the literature since it guarantees a good tradeoff between mesh complexity and the available processing resources [17]. It produces different representations of the 3D mesh, from the low frequencies (coarse mesh) to a set of medium and high frequencies representing detailed information at different resolution levels. Each level represents the same 3D mesh but with a different complexity. The advantage of using multiresolution analysis in 3D watermarking is that it offers several embedding locations and supports both the robustness and imperceptibility requirements. In addition, such analysis makes the watermarking method useful for several applications: the coarsest level can carry a robust watermark for copyright protection, a fragile watermark can be embedded into the dense mesh to ensure authentication, etc. The wavelet transform is a common tool for performing mesh multiresolution analysis; the mathematical formulation of wavelet analysis and synthesis for 3D models was introduced by Lounsbery et al. [7]. The principle of lazy wavelet decomposition for semi-regular triangular meshes is sketched in Figure 2. One iteration of the lazy wavelet transform merges each group of four triangles into one triangle at the lower resolution level $j+1$; i.e., three of the six initial vertices are kept at the lower resolution, and so on. The prediction errors of the deleted vertices ($v_4^j$, $v_5^j$, $v_6^j$) are the wavelet coefficients ($W_1^{j+1}$, $W_2^{j+1}$, $W_3^{j+1}$). Note that such analysis can only be applied to a dense mesh with semi-regular connectivity. Figure 3 illustrates the wavelet decomposition of the Bunny mesh. The multiresolution representation of the six vertices ($v_1^j, v_2^j, \ldots, v_6^j$) can be expressed as follows:
$$V^{j+1} = A^{j+1} V^{j}$$
$$W^{j+1} = B^{j+1} V^{j}$$
where $V^j = (v_1^j, v_2^j, \ldots, v_{k_j}^j)^T$ contains the vertex coordinates at resolution level $j$, $k_j$ is the number of vertices at level $j$, and $W^{j+1} = (w_1^{j+1}, w_2^{j+1}, \ldots, w_{t_{j+1}}^{j+1})^T$ is the wavelet coefficient vector at resolution level $j+1$, with $t_{j+1} = k_j - k_{j+1}$ wavelet coefficient vectors. $A^{j+1}$ is a non-square matrix that represents the triangle reduction obtained by joining four triangles into one. The non-square matrix $B^{j+1}$ produces the wavelet vectors, each of which starts from the midpoint of an edge at the lower resolution $j+1$ and ends at the vertex deleted at that level.
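Concretely, each wavelet coefficient is a midpoint prediction error. The sketch below assumes the analysis step has already identified, for each deleted (odd) vertex, the coarse edge it subdivides (`parent_edges`, a hypothetical precomputed index array); it is an illustration of the lazy-wavelet idea, not the matrix formulation above:

```python
import numpy as np

def lazy_wavelet_coeffs(kept, deleted, parent_edges):
    """One lazy-wavelet analysis step: the coefficient of each deleted vertex is
    its prediction error w.r.t. the midpoint of the coarse edge it subdivides."""
    mids = (kept[parent_edges[:, 0]] + kept[parent_edges[:, 1]]) / 2.0
    return deleted - mids

def reconstruct(kept, coeffs, parent_edges):
    """Inverse (synthesis) step: add the coefficients back to the edge midpoints."""
    return (kept[parent_edges[:, 0]] + kept[parent_edges[:, 1]]) / 2.0 + coeffs
```

Analysis followed by synthesis recovers the deleted vertices exactly, which is why the decomposition is lossless.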

3. The Proposed Method

In this paper, a blind robust 3D mesh watermarking technique based on mesh saliency and wavelet coefficients for copyright protection is proposed. The main contribution of this work is the combined use of QIM quantization of the wavelet coefficients and mesh saliency in order to ensure both high imperceptibility and robustness to a wide range of attacks. The imperceptibility requirement is achieved by exploiting mesh saliency to select the wavelet vectors to be quantized, while robustness is ensured by QIM quantization of the selected wavelet vectors, using the distance between the vertices and the mesh center as the synchronizing primitive. First, multiresolution analysis is applied to the original 3D semi-regular mesh, giving a series of approximation meshes and a sequence of wavelet coefficients. Next, the mesh saliency is calculated for the mesh obtained after one wavelet decomposition and the salient points are extracted. The intermediate level $I$ (Figure 3b) is used because of the good capacity-invisibility tradeoff it offers. Afterwards, the norms of the wavelet coefficients of these points are quantized using QIM. The principal role of the mesh saliency is to define the candidate wavelet coefficient norms to be quantized: the wavelet coefficients corresponding to the vertices with the largest saliency values are chosen and their norms are quantized. A threshold $T_r$, set to 70% of the maximum value of the saliency vector, was adopted for all the 3D meshes. The embedding and extraction schemes are described in Figure 4 and Figure 5, respectively.

3.1. Watermark Embedding

The first step is to perform wavelet analysis on the original mesh until an intermediate level $I$ is reached. The watermarking primitives are the wavelet coefficients at level $I$. The watermark bits are inserted by modifying the norms of the wavelet coefficients associated with the vertices sorted according to a predefined order, namely the Euclidean distance between the vertices and the mesh center. First, the vertices are sorted in descending order of their distance to the mesh center of gravity. Next, the salient points at level $I$ are extracted by calculating the mesh saliency using Lee et al.'s method [13]. A threshold $T_r$, chosen as 70% of the maximum value of the saliency vector and adopted for all 3D objects, determines which points are salient and hence which wavelet coefficients are candidates for modification. The wavelet coefficients corresponding to the salient vertices at level $I$ are then identified, and their Euclidean norms are calculated and quantized using the QIM scheme. Finally, the watermarked 3D mesh is reconstructed from the modified wavelet coefficient norms using Equation (8) [19].
$$V'(x, y, z) = \frac{\|WC'\|}{\|WC\|}\, V(x, y, z)$$
where $WC'$ and $WC$ refer to the modified wavelet coefficient after embedding and the wavelet coefficient calculated before the watermark embedding, respectively. $V'(x, y, z)$ is the new vertex coordinate of the watermarked mesh, and $V(x, y, z)$ is the vertex coordinate of the original mesh. Figure 4 sketches the watermark insertion process, which is described in detail in Algorithm 1.
Algorithm 1 Watermark embedding.
Require: Original 3D mesh, Watermark, key_1, key_2.
Ensure: Watermarked 3D mesh.
  • Do wavelet analysis until an intermediate level I is reached.
  • Extract salient points of the 3D mesh in scale I using mesh saliency.
  • Find out the wavelet coefficients corresponding to the salient vertices according to the threshold T r and calculate their norms.
  • Quantify the norms of wavelet coefficients by the QIM scheme using Equation (4).
  • Reconstruct the mesh starting from the modified wavelet coefficients norms to obtain the watermarked 3D mesh.
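The selection and synchronization steps of Algorithm 1 can be sketched as below, assuming $T_r$ is interpreted as 70% of the maximum saliency value and that vertex-to-coefficient correspondence is handled elsewhere; names are illustrative:

```python
import numpy as np

def select_and_order(vertices, sal, tr_ratio=0.70):
    """Salient-vertex selection and watermark synchronization.

    A vertex is salient when its saliency exceeds tr_ratio * max(saliency)
    (the paper's T_r = 70% threshold, as read here); the selected indices are
    then sorted by descending distance to the mesh center, which is the
    synchronizing order used to assign watermark bits to coefficients.
    """
    center = vertices.mean(axis=0)                 # mesh center (of gravity)
    tr = tr_ratio * sal.max()
    salient = np.flatnonzero(sal > tr)
    dist = np.linalg.norm(vertices[salient] - center, axis=1)
    return salient[np.argsort(-dist)]              # most distant salient vertex first
```

Because the ordering depends only on distances to the mesh center, it is unchanged by vertex/facet reordering and by similarity transformations, which is what makes the synchronization robust to those attacks.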

3.2. Watermark Extraction

The watermark extraction is simple and blind: neither the original mesh nor the watermark is needed; only the secret keys (key_1, key_2), which represent the watermark generator and the quantization step, respectively, are required. First, multiresolution analysis is applied to the watermarked 3D mesh by performing wavelet decomposition until level $I$ is reached. Afterwards, the mesh saliency is computed on the watermarked mesh and the modified wavelet coefficients are identified according to the threshold $T_r$. Next, the norms of the modified wavelet coefficients are re-quantized using the same quantizers as in the embedding. Finally, the watermark bits are extracted using Equation (5). Figure 5 illustrates the watermark extraction process, which is presented in detail in Algorithm 2.
Algorithm 2 Watermark extracting.
Require: Watermarked 3D mesh, key_1, key_2.
Ensure: Watermark.
  • Do wavelet analysis starting from the watermarked mesh until level I is reached.
  • Calculate mesh saliency and extract the modified wavelet coefficients according to the threshold  T r .
  • Calculate the norms of the extracted wavelet coefficients and apply QIM quantization.
  • Extract the watermark bits using Equation (5).

4. Experimental Setup

The performance of the proposed watermarking method in terms of imperceptibility and robustness is tested on several 3D objects: Bunny (34,835 vertices, 69,666 facets), Horse (112,642 vertices, 225,280 facets), Venus (100,759 vertices, 201,514 facets), Armadillo (26,002 vertices, 52,000 facets), Rabbit (70,658 vertices, 141,312 facets), Flower (2523 vertices, 4895 facets), Vase (2527 vertices, 5004 facets), Cup (9076 vertices, 18,152 facets), Ant (7654 vertices, 15,304 facets), Bimba (8857 vertices, 17,710 facets), and Cat (3534 vertices, 6975 facets). The LIRIS/EPFL General-Purpose database [20], created in Switzerland, contains 88 models in total, derived from 4 reference models: Armadillo, Dyno, Venus, and RockerArm. Its subjective evaluation was performed by 12 observers. The distortions applied to the reference meshes are smoothing and noise addition, and the scores range from 0 (good quality) to 10 (bad quality). For each mesh, the scores given by the 12 observers are averaged to obtain a normalized mean opinion score (MOS). The LIRIS Masking database, created at the University of Lyon in France, contains 26 models, and its subjective evaluation was performed by 11 observers [21]. Only some objects are taken from these two databases. The model sizes are provided in Table 1.
Figure 6a,c,e show three original 3D objects. The quantization step is tuned experimentally to ensure the best tradeoff between imperceptibility and robustness. To do so, extensive experiments were conducted (see Figure 7, Figure 8 and Figure 9) using several empirically chosen values of $\Delta$. These experiments were carried out for all the 3D meshes; for brevity, only the results for the Bimba, Horse, and Bunny meshes are given. According to these results, the best value found is $\Delta = 0.10$. The parameter $T_r$, the threshold used to choose the candidate wavelet coefficients, is kept at 70% of the maximum value of the saliency vector. The size of the watermark used in the simulations is 64 bits.
Subjective evaluation is performed by human observers, each of whom provides a quality score; the MOS is the average of all the scores and reflects the observers' opinion of the visual difference between the original mesh and the distorted one. Objective evaluation is performed using metrics. The MOS values presented in Table 2 are obtained by averaging the scores given by the observers. For example, the MOS value of Armadillo taken from the LIRIS/EPFL General-Purpose database [21] is 2.5, obtained by averaging the scores given by the 12 observers. The MOS values of Armadillo, Venus, and Bimba are shown in Table 2.

4.1. Evaluation Metrics

Before applying the attacks to the 3D models, different experiments were conducted to evaluate the effectiveness of the proposed method in terms of imperceptibility and robustness. The distortion introduced by the watermark insertion is evaluated both objectively and visually. The robustness of the proposed scheme is evaluated using the normalized correlation (NC).

4.1.1. Imperceptibility

In order to evaluate the imperceptibility of the proposed method, several metrics were used to measure the amount of distortion introduced by the embedding process. This distortion can be measured geometrically or perceptually. The maximum root mean square error (MRMS) proposed in [22] was used to calculate the objective distortion between the original mesh and the watermarked one.
The MRMS, which refers to the maximum between the two root mean square error (RMS) distances, is calculated by
$$d_{MRMS} = \max\left( d_{RMS}(M, M_w),\; d_{RMS}(M_w, M) \right)$$
$$d_{RMS}(M, M_w) = \sqrt{ \frac{1}{|M|} \int_{p \in M} d(p, M_w)^2 \, dM }$$
where $p$ is a point on the surface $M$, $|M|$ is the area of $M$, and $d(p, M_w)$ is the point-to-surface distance between $p$ and $M_w$. It is worth noticing that a surface-to-surface distance such as the MRMS does not capture the visual distance between two meshes [21]; therefore, a perceptual metric is also needed to measure the distortion caused by the watermark insertion.
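As a rough illustration of the MRMS definition, the following replaces the exact point-to-surface distance with a nearest-point distance between (densely sampled) vertex sets; this is a coarse stand-in for tools such as Metro, not the paper's exact computation:

```python
import numpy as np

def rms_nearest(A, B):
    """Approximate d_RMS(A, B): for each point of A, take the distance to its
    nearest point of B (a vertex-sampling stand-in for point-to-surface distance)."""
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
    return np.sqrt(np.mean(d2.min(axis=1)))

def mrms(A, B):
    """MRMS = max of the two asymmetric RMS distances."""
    return max(rms_nearest(A, B), rms_nearest(B, A))
```

The max over both directions matters because the RMS distance is asymmetric: each mesh may have regions far from the other.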
The mesh structural distortion measure (MSDM) is chosen to measure the visual degradation of the watermarked meshes [21]. The MSDM value is 0 when the original and watermarked 3D objects are identical, and tends toward 1 when the objects are visually very different.
The global MSDM distance between the original mesh $M$ and the watermarked mesh $M_w$, each having $n$ vertices, is defined by
$$d_{MSDM}(M, M_w) = \left( \frac{1}{n} \sum_{i=1}^{n} d_{LMSDM}(a_i, b_i)^3 \right)^{\frac{1}{3}} \in [0, 1)$$
where $d_{LMSDM}$ is the local MSDM distance between two mesh local windows $a$ and $b$ (in meshes $M$ and $M_w$, respectively), defined by
$$d_{LMSDM}(a, b) = \left( 0.4 \times Curv(a, b)^3 + 0.4 \times Cont(a, b)^3 + 0.2 \times Surf(a, b)^3 \right)^{\frac{1}{3}}$$
where $Curv$, $Cont$, and $Surf$ refer to the curvature, contrast, and structure comparison functions, respectively.
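The two MSDM combination formulas above translate directly into code; the local comparison terms themselves (curvature, contrast, structure) are assumed to be computed elsewhere:

```python
def local_msdm(curv, cont, surf):
    """Weighted L3 (Minkowski) combination of the three local comparison terms."""
    return (0.4 * curv ** 3 + 0.4 * cont ** 3 + 0.2 * surf ** 3) ** (1 / 3)

def global_msdm(local_distances):
    """L3 average of the local window distances d_LMSDM(a_i, b_i)."""
    n = len(local_distances)
    return (sum(d ** 3 for d in local_distances) / n) ** (1 / 3)
```

Since the weights sum to 1, identical windows give 0 and maximally different windows (all terms equal to 1) give 1, matching the stated range of the metric.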

4.1.2. Robustness

The robustness is measured using the normalized correlation (NC) between the inserted watermark and the extracted one as given by the following equation:
$$NC = \frac{ \sum_{i=1}^{M} (w_i^* - \bar{w}^*)(w_i - \bar{w}) }{ \sqrt{ \sum_{i=1}^{M} (w_i^* - \bar{w}^*)^2 \cdot \sum_{i=1}^{M} (w_i - \bar{w})^2 } }$$
where $i \in \{1, 2, \ldots, M\}$, $w_i^*$ and $w_i$ are the extracted and inserted watermark bits, and $\bar{w}^*$ and $\bar{w}$ are their respective averages.
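The NC is simply the Pearson correlation between the two bit sequences and can be computed as:

```python
import numpy as np

def normalized_correlation(w_ext, w_orig):
    """NC between extracted and inserted watermark bit sequences.

    Assumes neither sequence is constant (otherwise the denominator is zero).
    """
    a = np.asarray(w_ext, float) - np.mean(w_ext)
    b = np.asarray(w_orig, float) - np.mean(w_orig)
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))
```

A perfectly extracted watermark gives NC = 1, a fully inverted one gives NC = -1, and uncorrelated bits give values near 0.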

5. Experimental Results

5.1. Imperceptibility

Figure 6 shows the original objects and the watermarked ones. The distortion is negligible and cannot be noticed by the human eye, thanks to the use of mesh saliency in the embedding process; the imperceptibility of the proposed method makes it hard for a viewer to distinguish the original mesh from the watermarked one. Figure 10 sketches the visual impact of the watermark embedding for the Bunny, Bimba, and Cat meshes: no perceptible distortion caused by the embedding is visible, especially in salient regions. Moreover, according to Table 3, the proposed method achieves high imperceptibility in terms of MRMS, HD, and MSDM. We believe this is achieved thanks to the exploitation of mesh saliency to minimize distortion after watermark insertion; indeed, only the wavelet coefficients corresponding to salient vertices are altered. It can also be observed that the imperceptibility results in terms of MRMS, HD, and MSDM differ across the test meshes, which we attribute to the different curvature characteristics of the 3D objects. To evaluate the importance of using mesh saliency in the proposed work, we compared the imperceptibility performance in terms of MRMS, HD, and MSDM with and without saliency. The results obtained for six 3D meshes are listed in Table 4. According to this table, the scheme based on mesh saliency gives higher imperceptibility scores than the scheme without saliency, which illustrates the importance of saliency in improving the imperceptibility of the proposed scheme, especially for the MSDM, which is more correlated with human perception.

5.2. Robustness

The resistance of the proposed watermarking method is tested under several attacks including element reordering, noise addition, smoothing, quantization, similarity transformations (translation, rotation, and uniform scaling), and cropping. To this end, Wang et al.'s benchmarking system has been used [23]. Figure 11 shows the Bimba object after several attacks. The robustness is measured using the normalized correlation (NC) between the extracted watermark bits and the original ones.
The noise addition attack adds pseudo-random noise to the vertex coordinates. Robustness against this attack is essential since it simulates the artifacts induced during mesh transmission. We added random noise to each vertex of the original 3D meshes with several intensities: 0.05%, 0.10%, 0.30%, and 0.50%. Table 5 presents the robustness results in terms of NC for six 3D meshes; the proposed method is robust against noise addition for all the test meshes.
The robustness of the proposed method was also tested against the smoothing attack, a common operation aiming to remove the noise introduced during mesh generation. Laplacian smoothing [15] was applied to the watermarked 3D meshes with a fixed deformation factor ($\lambda = 0.1$) and several numbers of iterations (5, 10, 30, and 50). Table 6 reports the robustness of the proposed method in terms of NC; our method achieves high performance against this attack, and even with 50 iterations, the NC values obtained for the six objects are above 0.88.
The robustness was also investigated for the element reordering attack, in which the vertices/facets are reordered. Table 7 clearly shows that the method is robust against element reordering for the three variants of this attack included in Wang et al.'s benchmark [23].
Quantization is a common operation used mainly in mesh compression. The watermarked 3D meshes underwent this attack with several parameters (11, 10, 9, 8, and 7 bits), and the robustness in terms of normalized correlation is presented in Table 8. According to these results, our method is able to withstand the quantization attack. Table 9 presents the robustness comparison between Son et al.'s method [11] and the proposed method in terms of bit error rate (BER); the proposed method outperforms the scheme in [11].
Similarity transformation is a common attack that every robust technique should resist. It includes rotation, translation, uniform scaling, and their combinations, all of which keep the mesh shape intact. Table 10 reports the NC values after carrying out three types of similarity transformations; the watermark is robust against these attacks regardless of the 3D mesh used.
Subdivision is one of the most common geometric attacks a mesh can undergo. It consists of adding vertices and edges to the 3D mesh in order to obtain a modified version of higher visual quality containing more details. The 3D test meshes were subjected to three typical subdivision schemes using one iteration: the Loop scheme, the midpoint scheme, and the $\sqrt{3}$ scheme. Table 11 presents the results in terms of NC after applying these schemes; our method is quite robust to subdivision for all three schemes and all the 3D meshes used in the experiments, with all NC values above 0.9.
Cropping is considered one of the most severe attacks a 3D mesh can suffer. It consists of cutting off one or several parts of the 3D mesh. This attack was applied to the 3D objects using different ratios (10, 30, and 50). As depicted in Table 12, the resistance of the proposed method against this attack is relatively weak. We believe this is because the cropped regions may contain salient points that were used to choose the wavelet coefficients to be quantized.

5.3. Comparison with Alternative Methods

To further evaluate the performance of the proposed method, we compare it with the methods of [3,11,12,14,24] in terms of imperceptibility and robustness. Table 13 shows that the proposed method outperforms the schemes [14,24] and [11] in terms of MRMS and MSDM. In addition, the imperceptibility performance in terms of HD of the proposed technique is compared to the scheme of [12].
From Table 14, it can be seen that the proposed method achieves good imperceptibility: the reported MRMS and HD values illustrate the invisibility of the embedded watermark. Moreover, our scheme outperforms Wang et al.'s [3] scheme in terms of MRMS and HD for the three objects Venus, Horse, and Rabbit.
Table 15 presents the imperceptibility comparison with the schemes of [14] and [12] in terms of the Hausdorff distance for the Bunny, Venus, and Horse models. As depicted in Table 15, the proposed technique achieves good results, since all the obtained HD values are at most 1.71 × 10⁻³, and it outperforms both Cho et al.'s [14] and Medimegh et al.'s [12] schemes. In addition, as depicted in Table 16, the proposed method shows high robustness to different attacks, including noise addition and smoothing, and outperforms the schemes of [14] and [12], except for the cropping attack, against which Medimegh et al.'s scheme [12] is more robust than ours. We believe this is due to its redundant insertion of the watermark in several patches.
It can be seen in Figure 12 that the proposed scheme is highly robust to several attacks, including noise addition, quantization, and smoothing, and that it achieves higher robustness than Cho's method [14] in terms of normalized correlation.
The robustness of our method in terms of normalized correlation (NC) was compared to Wang et al.'s [3] method for noise addition, quantization, and smoothing attacks with different parameters. For noise addition, three amplitudes were used. Quantization was applied with 9, 8, and 7 bits. For the smoothing attack, the watermarked Rabbit underwent Laplacian smoothing with a fixed deformation factor λ = 0.10 and different iteration numbers. Figure 13 highlights the obtained results for the Rabbit object: the proposed method shows higher robustness than Wang's method in terms of NC for noise addition, quantization, and smoothing. Moreover, as depicted in Table 17, the proposed method withstands the smoothing attack, and the obtained BER values show the superiority of our method over Son's method [11]. Table 18, Table 19 and Table 20 show the quality and robustness comparison with Wang et al.'s method [3] against noise addition, quantization, and smoothing, respectively, for the Venus and Horse models. The quality evaluation metrics used in the comparison are MRMS, HD, and MSDM, while the robustness is evaluated using NC.
Regarding noise addition, Horse and Venus were chosen as comparison meshes, and three amplitudes (0.05%, 0.25%, and 0.5%) were used. According to Table 18, our method is robust against noise and outperforms Wang's method in terms of both robustness (NC) and quality (MRMS, HD, and MSDM).
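For reproducibility, the noise attack and the NC measure can be sketched as follows (the noise convention, with amplitude taken relative to the mean distance from the mesh center, and the zero-mean NC definition are assumptions, since both vary across benchmarks):

```python
import numpy as np

def add_noise(vertices, amplitude=0.005, seed=0):
    """Additive noise attack: perturb every vertex by uniform random noise
    whose magnitude is a fraction (e.g. 0.5% -> amplitude=0.005) of the
    average distance from the mesh center."""
    rng = np.random.default_rng(seed)
    center = vertices.mean(axis=0)
    mean_dist = np.linalg.norm(vertices - center, axis=1).mean()
    noise = rng.uniform(-1.0, 1.0, vertices.shape) * amplitude * mean_dist
    return vertices + noise

def normalized_correlation(w, w_ext):
    """Zero-mean normalized correlation between the embedded and extracted
    watermark bit sequences (1.0 = perfect extraction)."""
    w = np.asarray(w, float) - np.mean(w)
    w_ext = np.asarray(w_ext, float) - np.mean(w_ext)
    return float(np.dot(w, w_ext) / (np.linalg.norm(w) * np.linalg.norm(w_ext)))
```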
For the quantization attack, 9, 8, and 7 bits were used in the comparison. Table 19 shows that the proposed method withstands quantization for the Venus and Horse meshes and achieves higher robustness and imperceptibility than Wang et al.'s [3] scheme. Moreover, the robustness comparison with Son et al.'s method reported in Table 9 demonstrates the superiority of the proposed method.
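A coordinate quantization attack of the kind used in these comparisons can be sketched as follows (an illustrative sketch under the assumption that quantization is applied per axis over the mesh bounding box):

```python
import numpy as np

def quantize_coordinates(vertices, bits=9):
    """Coordinate quantization attack: snap each coordinate onto a uniform
    grid of 2**bits levels spanning the bounding box of the mesh. Fewer bits
    mean a coarser grid and therefore a stronger attack."""
    vmin, vmax = vertices.min(axis=0), vertices.max(axis=0)
    span = np.where(vmax > vmin, vmax - vmin, 1.0)  # guard flat axes
    levels = 2 ** bits - 1
    q = np.round((vertices - vmin) / span * levels) / levels
    return vmin + q * span
```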
For comparison purposes, the Venus and Horse models underwent Laplacian smoothing with λ = 0.10 and 10, 30, and 50 iterations. The robustness and imperceptibility performances were evaluated using NC, MRMS, HD, and MSDM. Table 20 shows that the proposed technique is more robust than Wang's method and also yields better quality in terms of MRMS, HD, and MSDM. Table 21 presents the robustness comparison with Cho's method [14] in terms of NC for the Bunny object: our method shows good robustness against noise, quantization, smoothing, and simplification, and outperforms Cho et al.'s method for all these attacks. Finally, Table 22 shows that the proposed method outperforms Nakazawa et al.'s method [10] against a wide range of attacks, including noise addition, quantization, smoothing, and simplification.
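The Laplacian smoothing attack used throughout these comparisons can be sketched as follows (a minimal uniform-weight umbrella-operator version; `lam` corresponds to the deformation factor λ, and the one-ring neighbourhood is taken from the face list):

```python
import numpy as np

def laplacian_smooth(vertices, faces, lam=0.1, iterations=10):
    """Laplacian smoothing attack: each iteration moves every vertex a
    fraction lam toward the average of its one-ring neighbours, removing
    high-frequency detail (and, potentially, watermark energy)."""
    n = len(vertices)
    neighbours = [set() for _ in range(n)]
    for a, b, c in faces:
        neighbours[a] |= {b, c}
        neighbours[b] |= {a, c}
        neighbours[c] |= {a, b}
    v = vertices.astype(float).copy()
    for _ in range(iterations):
        avg = np.array([v[list(nb)].mean(axis=0) if nb else v[i]
                        for i, nb in enumerate(neighbours)])
        v += lam * (avg - v)
    return v
```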
In sum, most previously proposed saliency-based 3D watermarking schemes achieve good imperceptibility thanks to this perceptual characteristic. However, they are generally weak against several attacks because they embed the watermark in the spatial domain. The novelty of the proposed method is that it achieves good results for both imperceptibility and robustness by exploiting the advantages of QIM quantization of wavelet coefficients together with mesh saliency.
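The QIM principle at the heart of the embedding can be illustrated on a single scalar (a generic, dither-free QIM sketch, not the paper's exact implementation, which applies the quantization to wavelet coefficient norms with step Δ):

```python
import numpy as np

def qim_embed(value, bit, delta):
    """Quantization index modulation: quantize a scalar (e.g. the norm of a
    wavelet coefficient vector) onto one of two interleaved lattices of step
    delta, the lattice being selected by the watermark bit."""
    offset = delta / 2.0 if bit else 0.0
    return float(np.round((value - offset) / delta) * delta + offset)

def qim_extract(value, delta):
    """Blind extraction: the embedded bit is read off as the index of the
    nearer lattice, with no need for the original mesh."""
    d0 = abs(value - qim_embed(value, 0, delta))
    d1 = abs(value - qim_embed(value, 1, delta))
    return 0 if d0 <= d1 else 1
```

Extraction stays correct as long as an attack perturbs the quantized value by less than Δ/4, which is the mechanism behind the robustness/imperceptibility tradeoff controlled by the quantization step.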

6. Conclusions

In this paper, a blind robust 3D mesh watermarking method based on visual saliency and wavelet coefficient vectors for copyright protection was proposed. The method takes full advantage of jointly using mesh saliency and QIM quantization of wavelet coefficients to ensure both high imperceptibility and robustness. Robustness is achieved by quantizing the wavelet coefficients using the QIM scheme, while imperceptibility is ensured by adjusting the embedding process according to the visual saliency. The experimental results demonstrate that the proposed scheme yields a good tradeoff between the imperceptibility and robustness requirements. Moreover, experimental simulations show that the proposed method outperforms the existing methods against the majority of attacks. Future work will focus on improving the robustness against severe attacks such as cropping and remeshing by using the mesh saliency weights in order to embed more data.

Author Contributions

Conceptualization, A.C.; Methodology, H.C.; Software, M.H.; Supervision, M.E.H. (Mohamed El Haziti), M.E.H. (Mohammed El Hassouni) and H.C.; Writing—original draft, M.H.; Validation, A.C.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Praun, E.; Hoppe, H.; Finkelstein, A. Robust mesh watermarking. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA, 8–13 August 1999; ACM Press/Addison-Wesley: New York, NY, USA, 1999; pp. 49–56.
  2. Wang, K.; Lavoué, G.; Denis, F.; Baskurt, A. Three-dimensional meshes watermarking: Review and attack-centric investigation. In International Workshop on Information Hiding; Springer: Berlin/Heidelberg, Germany, 2007; pp. 50–64.
  3. Wang, K.; Lavoué, G.; Denis, F.; Baskurt, A. Hierarchical watermarking of semiregular meshes based on wavelet transform. IEEE Trans. Inf. Forensics Secur. 2008, 3, 620–634.
  4. Hou, J.U.; Kim, D.G.; Lee, H.K. Blind 3D mesh watermarking for 3D printed model by analyzing layering artifact. IEEE Trans. Inf. Forensics Secur. 2017, 12, 2712–2725.
  5. Bors, A.G. Watermarking mesh-based representations of 3D objects using local moments. IEEE Trans. Image Process. 2006, 15, 687–701.
  6. Valette, S.; Prost, R. Wavelet-based multiresolution analysis of irregular surface meshes. IEEE Trans. Vis. Comput. Graph. 2004, 10, 113–122.
  7. Lounsbery, M.; DeRose, T.D.; Warren, J. Multiresolution analysis for surfaces of arbitrary topological type. ACM Trans. Graph. 1997, 16, 34–73.
  8. Kim, M.S.; Valette, S.; Jung, H.Y.; Prost, R. Watermarking of 3D irregular meshes based on wavelet multiresolution analysis. In International Workshop on Digital Watermarking; Springer: Berlin/Heidelberg, Germany, 2005; pp. 313–324.
  9. Hamidi, M.; El Haziti, M.; Cherifi, H.; Aboutajdine, D. A robust blind 3-D mesh watermarking based on wavelet transform for copyright protection. In Proceedings of the 2017 IEEE International Conference on Advanced Technologies for Signal and Image Processing (ATSIP'2017), Fez, Morocco, 22–24 May 2017.
  10. Nakazawa, S.; Kasahara, S.; Takahashi, S. A visually enhanced approach to watermarking 3D models. In Proceedings of the 2010 Sixth International Conference on Intelligent Information Hiding and Multimedia Signal Processing (IIH-MSP), Darmstadt, Germany, 15–17 October 2010; pp. 110–113.
  11. Son, J.; Kim, D.; Choi, H.Y.; Jang, H.U.; Choi, S. Perceptual 3D watermarking using mesh saliency. In Proceedings of the 2017 International Conference on Information Science and Applications, Macau, China, 20–23 March 2017; Springer: Singapore, 2017; pp. 315–322.
  12. Medimegh, N.; Samir, B.; Mohamed, A.; Naoufel, W. 3D mesh watermarking using salient points. Multimed. Tools Appl. 2018, 1–23.
  13. Lee, C.H.; Varshney, A.; Jacobs, D.W. Mesh saliency. ACM Trans. Graph. 2005, 24, 659–666.
  14. Cho, J.-W.; Prost, R.; Jung, H.-Y. An oblivious watermarking for 3D polygonal meshes using distribution of vertex norms. IEEE Trans. Signal Process. 2007, 55, 142–155.
  15. Taubin, G. Estimating the tensor of curvature of a surface from a polyhedral approximation. In Proceedings of ICCV 1995, Cambridge, MA, USA, 20–23 June 1995; p. 902.
  16. Vasic, B.; Vasic, B. Simplification resilient LDPC-coded sparse-QIM watermarking for 3D-meshes. IEEE Trans. Multimed. 2013, 15, 1532–1542.
  17. Dodgson, N.; Floater, M.S.; Sabin, M. (Eds.) Advances in Multiresolution for Geometric Modelling; Springer: Berlin/Heidelberg, Germany, 2006.
  18. Kanai, S.; Date, H.; Kishinami, T. Digital watermarking for 3D polygons using multiresolution wavelet decomposition. In Proceedings of the Sixth IFIP WG, Helsinki, Finland, 10–13 December 1998; Volume 5, pp. 296–307.
  19. Wang, K.; Luo, M.; Bors, A.G.; Denis, F. Blind and robust mesh watermarking using manifold harmonics. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 3657–3660.
  20. Lavoué, G. A local roughness measure for 3D meshes and its application to visual masking. ACM Trans. Appl. Percept. 2009, 5, 21.
  21. Lavoué, G.; Gelasca, E.D.; Dupont, F.; Baskurt, A.; Ebrahimi, T. Perceptually driven 3D distance metrics with application to watermarking. In Applications of Digital Image Processing XXIX; SPIE: Bellingham, WA, USA, 2006; Volume 6312, p. 63120L.
  22. Cignoni, P.; Rocchini, C.; Scopigno, R. Metro: Measuring error on simplified surfaces. Comput. Graph. Forum 1998, 17, 167–174.
  23. Wang, K.; Lavoué, G.; Denis, F.; Baskurt, A.; He, X. A benchmark for 3D mesh watermarking. In Proceedings of the 2010 Shape Modeling International Conference (SMI), Aix-en-Provence, France, 21–23 June 2010; pp. 231–235.
  24. Rolland-Nevière, X.; Doërr, G.; Alliez, P. Triangle surface mesh watermarking based on a constrained optimization framework. IEEE Trans. Inf. Forensics Secur. 2014, 9, 1491–1501.
Figure 1. Three-dimensional meshes and their corresponding mesh saliency: (left) original 3D meshes; (right) 3D mesh saliency.
Figure 2. Illustration of the lazy wavelet mechanism applied to a triangular semiregular mesh [18].
Figure 3. Wavelet decomposition of the semi-regular Bunny mesh: (a) dense mesh; (b) after one wavelet decomposition; (c) after two wavelet decompositions; (d) coarsest representation of Bunny.
Figure 4. The proposed embedding scheme.
Figure 5. The proposed extraction scheme.
Figure 6. (a) Flower; (b) Watermarked Flower; (c) Vase; (d) Watermarked Vase; (e) Cup; (f) Watermarked Cup; (g) Ant; (h) Watermarked Ant; (i) Cat; (j) Watermarked Cat.
Figure 7. The robustness performance in terms of correlation using several quantization steps for Bimba, Horse, and Bunny models.
Figure 8. The imperceptibility performance in terms of MSDM using several quantization steps for Bimba, Horse, and Bunny models.
Figure 9. The imperceptibility performance in terms of MRMS using several quantization steps for Bimba, Horse, and Bunny models.
Figure 10. Closeups of the original and watermarked meshes: (a) Original Bunny; (b) Watermarked Bunny; (c) Original Bimba; (d) Watermarked Bimba; (e) Original Cat; (f) Watermarked Cat.
Figure 11. The original Bimba and five attacked versions: (a) the original Bimba; (b) noise addition of 0.50%; (c) smoothing with λ = 0.1 and 5 iterations; (d) quantization to 9 bits; (e) similarity transformation; (f) a cropping ratio of 10%.
Figure 12. The robustness comparison of Cho’s method [14] with the proposed method in terms of NC for the Bunny model against several attacks: A: noise 0.1 %; B: noise 0.3 %; C: noise 0.5 %; D: quantization of 9 bits; E: quantization of 8 bits; F: quantization of 7 bits; G: smoothing ( 0.3 , 10 iterations); H: smoothing ( 0.3 , 30 iterations); I: smoothing ( 0.3 , 50 iterations).
Figure 13. The results of the robustness comparison of Wang's method [3] with the proposed method in terms of NC for the Rabbit model against several attacks: A: noise 0.05%; B: noise 0.25%; C: noise 0.5%; D: quantization of 9 bits; E: quantization of 8 bits; F: quantization of 7 bits; G: smoothing (0.1, 10 iterations); H: smoothing (0.1, 30 iterations); I: smoothing (0.1, 50 iterations).
Table 1. Size of the 3D models used in the experiments.
Model   Bunny   Horse   Venus   Armadillo   Rabbit   Flower   Vase   Cup   Ant   Cat   Bimba
Size (Ko)24488168718228285039154155576482219577
Table 2. Mean opinion score (MOS) for the Armadillo, Venus, and Bimba objects, taken from the LIRIS-Mask [20] and LIRIS-EPFL-Gen-Purpose [21] databases.
Object   Mean Opinion Score (MOS)
Armadillo 2.5
Venus 2.917
Bimba 3.091
Table 3. Watermark imperceptibility measured in terms of the maximum root mean square error (MRMS), Hausdorff distance (HD), and mesh structural distortion measure (MSDM).
Model   MRMS (×10⁻³)   HD (×10⁻³)   MSDM
Flower 0.43 3.23 0.24
Vase 0.32 2.86 0.34
Cup 0.78 2.58 0.31
Ant 0.41 3.87 0.38
Cat 0.49 0.78 0.18
Bimba 0.31 1.41 0.12
Table 4. Watermark imperceptibility without and with saliency measured in terms of MRMS, HD, and MSDM.
Model   MRMS (×10⁻³)   HD (×10⁻³)   MSDM
Flower 0.84 / 0.43 4.88 / 3.23 0.78 / 0.24
Vase 0.55 / 0.32 4.67 / 2.86 0.75 / 0.34
Cup 0.09 / 0.78 3.22 / 2.58 0.81 / 0.31
Ant 0.77 / 0.41 3.36 / 3.87 0.89 / 0.38
Cat 0.95 / 0.49 1.8 / 0.78 0.28 / 0.18
Bimba 0.74 / 0.31 2.88 / 1.41 0.61 / 0.12
Table 5. Watermark robustness against additive noise measured in terms of correlation.
Noise Intensity   Flower   Vase   Cup   Ant   Cat   Bimba
0.05% 1.0 1.0 0.98 0.99 0.98 1.0
0.10% 0.98 0.96 0.96 0.95 0.93 0.95
0.30% 0.86 0.88 0.86 0.92 0.91 0.90
0.50% 0.83 0.77 0.79 0.81 0.83 0.84
Table 6. Watermark robustness against Laplacian smoothing ( λ = 0.1 ) measured in terms of correlation.
Number of Iterations   Flower   Vase   Cup   Ant   Cat   Bimba
5 1.0 0.99 1.0 0.99 1.0 1.0
10 1.09 0.99 0.98 1.0 0.98 0.98
30 0.99 0.97 0.93 0.95 0.94 0.94
50 0.93 0.92 0.91 0.88 0.90 0.95
Table 7. Watermark robustness against element reordering measured in terms of correlation.
Element Reordering   Flower   Vase   Cup   Ant   Cat   Bimba
Element Reordering 1 1.0 1.0 1.0 1.0 1.0 1.0
Element Reordering 2 0.99 0.98 1.0 0.97 1.0 0.99
Element Reordering 3 1.0 1.0 0.97 1.0 1.0 0.99
Table 8. Watermark robustness against quantization measured in terms of correlation.
Quantization   Flower   Vase   Cup   Ant   Cat   Bimba
11 bits 1.0 1.0 1.0 1.0 1.0 1.0
10 bits 1.0 1.0 0.99 1.0 0.97 1.0
9 bits 0.99 0.99 0.97 0.99 1.0 0.98
8 bits 0.95 0.94 0.93 0.94 0.96 0.96
7 bits 0.83 0.83 0.81 0.82 0.82 0.91
Table 9. Robustness comparison with Son et al.'s [11] method against quantization, measured in terms of BER.
Quantization   [11]   Our Method
7 bit 0.23 0.14
8 bit 0.15 0.09
9 bit 0.14 0.06
10 bit 0.03 0.02
11 bit 0.01 0.01
Table 10. Watermark robustness against similarity transformations measured in terms of correlation.
Similarity Transformation   Flower   Vase   Cup   Ant   Cat   Bimba
Similarity Transformation 1 1.0 0.97 1.0 0.98 1.0 0.99
Similarity Transformation 2 0.98 0.96 1.0 0.98 1.0 1.0
Similarity Transformation 3 0.99 0.99 1.0 1.0 0.98 0.94
Table 11. Watermark robustness against subdivision measured in terms of correlation.
Subdivision   Flower   Vase   Cup   Ant   Cat   Bimba
Loop iter 1 1.0 1.0 1.0 0.98 0.98 0.99
Midpoint iter 1 0.94 0.90 0.91 0.92 0.95 0.96
Sqrt3 iter 1 0.99 0.97 0.96 0.99 1.0 0.98
Table 12. Watermark robustness against cropping measured in terms of correlation.
Cropping Ratio (%)   Flower   Vase   Cup   Ant   Cat   Bimba
10 0.55 0.59 0.64 0.61 0.68 0.52
30 0.45 0.32 0.37 0.41 0.38 0.34
50 0.32 0.15 0.17 0.22 0.24 0.19
Table 13. Imperceptibility comparison with Cho’s [14], Rolland-Neviere’s [24] and Jeongho Son’s [11] schemes measured in terms of MRMS and MSDM for the Horse model.
Method   MRMS (×10⁻³)   MSDM
[14] 3.17 0.3197
[24] 1.48 0.2992
[11] 2.90 0.3197
Proposed method 0.38 0.2254
Table 14. Imperceptibility comparison with Wang et al.'s [3] method measured in terms of MRMS and HD for the Venus, Horse, and Rabbit models.
Object   Venus   Horse   Rabbit
MRMS (×10⁻³)   1.21/0.72   0.64/0.62   1.12/0.54
HD (×10⁻³)   5.87/2.18   2.65/1.86   3.99/3.63
Table 15. Imperceptibility comparison with Cho’s [14], and Medimegh’s [12] schemes measured in terms of HD.
Object   Method   Hausdorff Distance (×10⁻³)
Bunny   [14]   0.73
Bunny   [12]   0.41
Bunny   Our method   0.36
Venus   [14]   2.50
Venus   [12]   3.90
Venus   Our method   1.71
Horse   [14]   3.21
Horse   [12]   2.90
Horse   Our method   1.57
Table 16. Robustness comparison with Cho’s [14], and Medimegh’s [12] schemes measured in terms of NC.
Attack   [14]   [12]   Our Method
No attacks 1.0 1.0 1.0
Noise 0.10 % 1.0 1.0 1.0
Smoothing 0.88 0.87 1.0
Cropping 10% 0.2 1.0 0.66
Table 17. Results of robustness comparison with Son et al.’s [11] method measured in terms of BER against smoothing.
Smoothing Iterations   [11]   Our Method
0 0.15 0.10
10 0.24 0.16
30 0.45 0.28
50 0.58 0.41
Table 18. Quality and robustness comparison with Wang et al.’s scheme [3] against noise addition measured in terms of MRMS, HD, MSDM, and NC.
Model   Noise   NC   MRMS (×10⁻³)   HD (×10⁻³)   MSDM
Venus   0.05%   0.97/0.85   0.16/0.17   0.58/0.62   0.12/0.28
Venus   0.25%   0.88/0.59   0.36/0.84   0.81/3.15   0.26/0.70
Venus   0.50%   0.78/0.31   0.41/1.67   2.23/6.25   0.38/0.83
Horse   0.05%   0.98/0.96   0.18/0.11   0.37/0.41   0.17/0.23
Horse   0.25%   0.92/0.50   0.35/0.55   1.3/2.03   0.32/0.64
Horse   0.5%   0.77/0.08   0.79/1.10   2.98/4.07   0.39/0.78
Table 19. Results of quality and robustness comparison with Wang et al.’s scheme [3] against quantization measured in terms of MRMS, HD, MSDM, and NC.
Model   Parameter   NC   MRMS (×10⁻³)   HD (×10⁻³)   MSDM
Venus   9 bits   0.96/0.93   0.18/0.93   0.31/1.95   0.17/0.49
Venus   8 bits   0.93/0.70   0.76/1.85   2.4/3.90   0.23/0.66
Venus   7 bits   0.81/0.63   1.65/3.70   4.1/7.80   0.28/0.79
Horse   9 bits   0.97/0.61   0.31/0.68   1.87/1.95   0.19/0.44
Horse   8 bits   0.89/0.25   0.51/1.35   1.02/3.90   0.24/0.60
Horse   7 bits   0.78/0.17   1.1/2.70   3.21/7.80   0.13/0.73
Table 20. Results of quality and robustness comparison with Wang et al.’s scheme [3] against Laplacian smoothing ( λ = 0.1 ) measured in terms of MRMS, HD, MSDM, and NC.
Model   Iterations   NC   MRMS (×10⁻³)   HD (×10⁻³)   MSDM
Venus   10   1.0/0.74   0.31/0.27   1.87/5.65   0.14/0.15
Venus   30   0.98/0.71   0.66/0.68   3.95/9.75   0.23/0.27
Venus   50   0.93/0.62   0.73/1.01   5.54/12.20   0.31/0.34
Horse   10   0.99/0.95   0.19/0.21   3.98/5.67   0.14/0.15
Horse   30   0.96/0.50   0.33/0.54   4.16/9.97   0.16/0.23
Horse   50   0.92/0.35   0.68/0.80   8.87/12.95   0.24/0.28
Table 21. Results of robustness comparison with Cho et al.’s [14] method measured in terms of NC for the Bunny model.
Attack   Parameters   [14]   Our Method
Noise   0.1%   0.87   0.99
Noise   0.3%   0.51   0.90
Noise   0.5%   0.18   0.84
Quantization   9-bit   0.94   0.99
Quantization   8-bit   0.88   0.95
Quantization   7-bit   0.39   0.89
Smoothing   (10, 0.03)   0.75   0.92
Smoothing   (30, 0.03)   0.57   0.88
Smoothing   (50, 0.03)   0.46   0.71
Simplification   32.11%   0.94   0.98
Simplification   51.44%   0.77   0.85
Simplification   70.79%   0.58   0.71
Simplification   90.65%   0.38   0.46
Table 22. Results of robustness comparison with Nakazawa et al.’s [10] method measured in terms of NC for the Bunny model.
Attack   Parameters   [10]   Our Method
Noise   0.1%   0.75   0.99
Noise   0.3%   0.5148   0.90
Noise   0.5%   0.111   0.84
Quantization   9-bit   0.77   0.99
Quantization   8-bit   0.61   0.95
Quantization   7-bit   0.32   0.89
Smoothing   (10, 0.03)   0.73   0.92
Smoothing   (30, 0.03)   0.55   0.88
Smoothing   (50, 0.03)   0.39   0.71
Simplification   32.11%   0.80   0.98
Simplification   51.44%   0.61   0.85
Simplification   70.79%   0.33   0.71
Simplification   90.65%   0.17   0.46

Share and Cite

MDPI and ACS Style

Hamidi, M.; Chetouani, A.; El Haziti, M.; El Hassouni, M.; Cherifi, H. Blind Robust 3D Mesh Watermarking Based on Mesh Saliency and Wavelet Transform for Copyright Protection. Information 2019, 10, 67. https://doi.org/10.3390/info10020067
