
1 Introduction

Convolutional neural networks are powerful tools for analyzing data that can naturally be represented as signals on regular grids, such as audio and images [10]. Thanks to the translation invariance of lattices in \(\mathbb {R}^n\), the number of parameters in a convolutional layer is independent of the input size. Composing convolutional layers and activation functions results in a multi-scale hierarchical learning pattern, which has proven very effective for learning deep representations in practice.

With the recent proliferation of applications employing 3D depth sensors [23], such as autonomous navigation, robotics and virtual reality, there is an increasing demand for algorithms that efficiently analyze point clouds. However, point clouds are distributed irregularly in \(\mathbb {R}^3\), lacking a canonical order and translation invariance, which prohibits using CNNs directly. One may circumvent this problem by converting point clouds to 3D voxels and applying 3D convolutions [13]. However, volumetric methods are computationally inefficient because point clouds are sparse in 3D, as they usually represent 2D surfaces. Although there are studies that improve the computational complexity, this may come with a performance trade-off [2, 18]. Various studies are devoted to making convolutional neural networks applicable to learning on non-Euclidean domains such as graphs or manifolds by generalizing the definition of convolution to functions on manifolds or graphs, enriching the emerging field of geometric deep learning [3]. However, this is theoretically challenging because convolution cannot be naturally defined when the space does not carry a group action, and when the input data consists of different shapes or graphs, it is difficult to make a consistent choice of convolutional filters.

Fig. 1.

The integral formula for convolution between a signal f and a filter g is \( f *g (p) = \int _{q \in \mathbb {R}^n} f(q)g(p-q) dq\). Discretizing the integral formula on a set of points P in \(\mathbb {R}^n\) gives \( f *g (p) = \sum _{q \in P, \Vert p-q \Vert \le r} f(q) g (p-q)\) if g is supported in a ball of radius r. (a) When P can be represented by regular grids, only 9 values of a filter g are needed to compute the convolution, due to the translation invariance of the domain. (b) When the signal is on point clouds, we choose the filter g from a parameterized family of functions on \(\mathbb {R}^3\).

In light of the above challenges, we propose an alternative convolutional architecture, SpiderCNN, which is designed to directly extract features from point clouds. We validate its effectiveness on classification and segmentation benchmarks. By discretizing the integral formula of convolution as shown in Fig. 1, and using a special family of parametrized non-linear functions on \(\mathbb {R}^3\) as filters, we introduce a novel convolutional layer, SpiderConv, for point clouds.

The family of filters is designed to be expressive while remaining feasible to optimize. We combine simple step functions, which capture the coarse local geometry described by local geodesic distance, with order-3 Taylor expansions, which ensure the filters are complex enough to capture intricate local geometric variations. Experiments in Sect. 4 show that SpiderCNN, with a relatively simple network architecture, achieves state-of-the-art performance for classification on ModelNet40 [4] and competitive performance for segmentation on ShapeNet-Part [4].

2 Related Work

First we discuss deep neural network based approaches that target point cloud data. Second, we give a partial overview of geometric deep learning.

Point Clouds as Input: PointNet [15] is a pioneering work in using deep networks to directly process point sets. A spatial encoding of each point is learned through a shared MLP, and then all individual point features are aggregated into a global signature through max-pooling, a symmetric operation that does not depend on the order of the input point sequence.

While PointNet works well for extracting global features, its design limits its efficacy at encoding local structures. Various studies addressing this issue propose different strategies for grouping local features in order to mimic the hierarchical learning procedure at the core of classical convolutional neural networks. PointNet++ [17] uses iterative farthest point sampling to select centroids of local regions, and PointNet to learn the local patterns. Kd-Network [9] subdivides the space using K-d trees, whose hierarchical structure serves as guidance for aggregating local features at different scales. In SpiderCNN, no additional choice for grouping or sampling is needed, since our filters handle this automatically.

The idea of using permutation-invariant functions for learning on unordered sets is further explored by DeepSet [22]. We note that the output of SpiderCNN does not depend on the input order by design.

Voxels as Input: VoxNet [13] and Voxception-ResNet [2] apply 3D convolution to a voxelization of point clouds. However, there is a high computational and memory cost associated with 3D convolutions. A variety of work [6, 7, 18] has aimed at exploiting the sparsity of voxelized point clouds to improve the computational and memory efficiency. OctNet [18] modified and implemented convolution operations to suit a hybrid grid-octree data structure. Vote3Deep [6] uses a feature-centric voting scheme so that the computational cost is proportional to the number of points with non-zero features. Sparse Submanifold CNN [7] computes the convolution only at activated points whose number does not increase when the convolution layers are stacked. In comparison, SpiderCNN can use point clouds as input directly and can handle very sparse input.

Convolution on Non-Euclidean Domains: There are two main, philosophically different approaches to defining convolution on non-Euclidean domains: one is spatial, the other spectral. The recent work ECC [20] defines convolution-like operations on graphs where filter weights are conditioned on edge labels. Viewing point clouds as graphs and taking the filters to be MLPs, SpiderCNN and ECC [20] result in similar convolutions. However, we show that our proposed family of filters outperforms MLPs.

Spatial Methods: GeodesicCNN [12] is an early attempt at applying neural networks to shape analysis. The philosophy behind GeodesicCNN is that, for a Riemannian manifold, the exponential map identifies a local neighborhood of a point with a ball in the tangent space centered at the origin. The tangent plane is isomorphic to \(\mathbb {R}^d\), where we know how to define convolution.

Let M be a mesh surface and let \(F: M \rightarrow \mathbb {R}\) be a function. GeodesicCNN first uses the patch operator D to map a point p and its neighbors N(p) to the lattice \(\mathbb {Z}^2 \subseteq \mathbb {R}^2\), and then applies Eq. 2. Explicitly, \(F *g (p) = \sum _{j \in J} g_j \big (\sum _{q \in N(p)} w_j(u(p, q)) F(q)\big )\), where u(p, q) represents the local polar coordinate system around p, and \(w_j(u)\) is a function modeling the effect of the patch operator \(D = \{D_j\}_{j \in J}\); by definition \(D_j(F)(p) = \sum _{q \in N(p)} w_j(u(p, q)) F(q)\). Later, AnisotropicCNN [1] and MoNet [14] further explore this framework by improving the choices for u and \(w_j\). MoNet [14] can be understood as using mixtures of Gaussians as convolutional filters. We offer an alternative viewpoint: instead of finding local parametrizations of the manifold, we view it as an embedded submanifold in \(\mathbb {R}^n\) and design filters, which are more efficient for point cloud processing, in the ambient Euclidean space.

Spectral Methods: We know that the Fourier transform takes convolutions to multiplications. Explicitly, if \(f, g: \mathbb {R}^n \rightarrow \mathbb {C}\), then \(\widehat{f *g} = \hat{f} \cdot \hat{g}\). Therefore, formally we have \(f *g = {(\hat{f} \cdot \hat{g})}^{\vee }\), which can be used as a definition for convolution on non-Euclidean domains where we know how to take the Fourier transform.

Although we do not have a Fourier theory on a general space without any equivariant structure, on Riemannian manifolds or graphs there are generalized notions of the Laplacian operator. Taking the Fourier transform in \(\mathbb {R}^n\) can be formally viewed as finding the coefficients in the expansion in eigenfunctions of the Laplacian operator. To be more precise, recall that

$$\begin{aligned} \hat{f} (\xi ) = \int _{\mathbb {R}^n} f(x) \exp {(- 2 \pi i x \cdot \xi )} \, dx, \end{aligned}$$
(1)

and \(\{ \exp {(- 2 \pi i x \cdot \xi }) \}_{\xi \in \mathbb {R}^n}\) are eigenfunctions of the Laplacian operator \(\varDelta = \sum _{i = 1}^n \frac{\partial ^2}{\partial x_i^2}\). Therefore, if U is the matrix whose columns are eigenvectors of the graph Laplacian matrix and \(\varLambda \) is the vector of corresponding eigenvalues, then for F, g two functions on the vertices of the graph, \(F *g = U (U^{T}F \odot U^{T}g)\), where \(U^{T}\) is the transpose of U and \(\odot \) is the Hadamard product of two matrices. Since being compactly supported in the spatial domain translates into being smooth in the spectral domain, it is natural to choose \(U^Tg\) to be a smooth function in \(\varLambda \). For instance, ChebNet [5] uses Chebyshev polynomials, which reduce the complexity of filtering, and CayleyNet [11] uses Cayley polynomials, which allow efficient computation of localized filters in restricted frequency bands of interest.
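
To make this formula concrete, here is a minimal NumPy sketch of spectral graph convolution on a small made-up graph; the path graph and the signals are illustrative choices, not taken from any of the cited works:

```python
import numpy as np

# Adjacency matrix of a 4-vertex path graph (an assumed toy example).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(A.sum(axis=1))           # degree matrix
L = D - A                            # (unnormalized) graph Laplacian

# Eigendecomposition: the columns of U form the graph Fourier basis.
eigvals, U = np.linalg.eigh(L)

F = np.array([1.0, 2.0, 0.5, -1.0])  # a signal on the vertices
g = np.array([0.5, 0.1, 0.0, 0.0])   # a filter on the vertices

# Convolution theorem on the graph: transform, multiply pointwise, invert.
F_conv_g = U @ (U.T @ F * (U.T @ g))
print(F_conv_g)
```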

When analyzing different graphs or shapes, spectral methods lack abstract motivation, because different spectral domains cannot be canonically identified. SyncSpecCNN [21] proposes a weight sharing scheme to align spectral domains using functional maps. Viewing point clouds as data embedded in \(\mathbb {R}^3\), SpiderCNN can learn representations that are robust to spatial rigid transformations with the aid of data augmentation.

3 SpiderConv

In this section, we describe SpiderConv, the fundamental building block of SpiderCNN. First, we discuss how to define a convolutional layer in a neural network when the inputs are features on point sets in \(\mathbb {R}^n\). Next, we introduce a special family of convolutional filters. Finally, we give details of the implementation of SpiderConv with multiple channels, and the approximations used for computational speedup.

3.1 Convolution on Point Sets in \(\mathbb {R}^n\)

An image is a function on regular grids, \(F: \mathbb {Z}^2 \rightarrow \mathbb {R}\). Let W be a \((2m+1)\times (2m+1)\) filter matrix, where m is a positive integer. The convolution in classical CNNs is

$$\begin{aligned} F *W (i, j) = \sum _{s = -m}^m \sum _{t = -m}^{m} F(i - s, j - t)W(s, t), \end{aligned}$$
(2)

which is the discretization of the following integral

$$\begin{aligned} f *g (p) = \int _{\mathbb {R}^2} f(q) g (p - q) d q, \end{aligned}$$
(3)

if \(f, g: \mathbb {R}^2 \rightarrow \mathbb {R}\) are such that \(f(i, j) = F(i, j)\) for \((i, j) \in \mathbb {Z}^2\), \(g(s,t) = W(s, t)\) for \(s, t \in \{-m,-m+1, ... , m-1, m \} \), and g is supported in \([-m, m] \times [-m, m]\).
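
For concreteness, a direct (unoptimized) NumPy transcription of Eq. 2 might look as follows, assuming zero padding at the image boundary:

```python
import numpy as np

def conv2d(F, W):
    """Discrete convolution of Eq. 2 with a (2m+1)x(2m+1) filter W.
    Zero padding at the image boundary is assumed."""
    m = W.shape[0] // 2
    H, Wd = F.shape
    Fp = np.pad(F, m)                  # zero-pad so shifted indices stay valid
    out = np.zeros((H, Wd))
    for i in range(H):
        for j in range(Wd):
            acc = 0.0
            for s in range(-m, m + 1):
                for t in range(-m, m + 1):
                    # F(i - s, j - t) * W(s, t), shifted into padded coordinates
                    acc += Fp[i + m - s, j + m - t] * W[s + m, t + m]
            out[i, j] = acc
    return out
```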

Now suppose that F is a function on a set of points P in \(\mathbb {R}^n\). Let \(g: \mathbb {R}^n \rightarrow \mathbb {R}\) be a filter supported in a ball centered at the origin of radius r. It is natural to define SpiderConv with input F and filter g to be the following:

$$\begin{aligned} F *g (p) = \sum _{q \in P, \Vert q - p\Vert \le r} F(q) g(p-q). \end{aligned}$$
(4)

Note that when \(P = \mathbb {Z}^2\) is a regular grid, Eq. 4 reduces to Eq. 2. Thus classical convolution can be seen as a special case of SpiderConv. See Fig. 1 for an intuitive illustration.

In SpiderConv, the filters are chosen from a parametrized family \(\{ g_w \}\) (see Fig. 2 for a concrete example) which is piecewise differentiable in w. During the training of SpiderCNN, the parameters \(w \in \mathbb {R}^d\) are optimized via SGD, and the gradients are computed through the formula \(\frac{\partial }{\partial w_i} F *g_w (p) = \sum _{q \in P, \Vert q - p\Vert \le r} F(q) \frac{\partial }{\partial w_i} g_w(p-q)\), where \(w_i\) is the i-th component of w.
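
As an illustration, the following NumPy sketch evaluates Eq. 4 at a single point. The Gaussian filter used here is a made-up stand-in for \(g_w\); in practice \(g_w\) is the parameterized family of Sect. 3.2 and its gradients are obtained by automatic differentiation:

```python
import numpy as np

def spider_conv_point(p, P, F, g_w, r):
    """Eq. 4 at a single point p: sum of F(q) * g_w(p - q) over all q in P
    with ||q - p|| <= r.  P: (N, 3) points, F: (N,) feature values."""
    out = 0.0
    for q, f in zip(P, F):
        if np.linalg.norm(q - p) <= r:
            out += f * g_w(p - q)
    return out

# Example with an assumed toy filter g_w(d) = exp(-||d||^2):
P = np.random.randn(100, 3)
F = np.random.randn(100)
g_w = lambda d: np.exp(-np.dot(d, d))
print(spider_conv_point(P[0], P, F, g_w, r=0.5))
```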

3.2 A Special Family of Filters \(\{ g_w \}\)

A natural choice is to take \(g_w\) to be a multilayer perceptron (MLP) network, because theoretically an MLP with one hidden layer can approximate an arbitrary continuous function [8]. However, in practice we find that MLPs do not work well. One possible reason is that an MLP fails to account for the geometric prior of 3D point clouds. Another possible reason is that, to ensure sufficient expressiveness, the number of parameters in an MLP needs to be sufficiently large, which makes the optimization problem difficult.

Fig. 2.

Visualization of a filter in the family \(\{ g_w\}\). (a) is the scatter plot (color represents the value of the function) of \(g^{Taylor}(x, y, z) = 1 + x + y + z + xy + xz + yz + xyz\). (b) is the scatter plot of \(g^{step}(x, y, z) = \frac{i+1}{8}\) if \(\frac{i}{8} \le \sqrt{x^2 + y^2 + z^2} < \frac{i+1}{8}\), when \(i = 0, 1, ... , 7\). (c) is the scatter plot of the product \(g =g^{Taylor} \cdot g^{step}\). In the second row, (d) (e) (f) are the graphs of \(g^{Taylor}\), \(g^{step}\) and g respectively when restricting their domain to the plane \(z = 0\) (the Z-axis represents the value of the function). (Color figure online)

To address the above issues, we propose the following family of filters \(\{ g_w \}\):

$$\begin{aligned} g_{w}(x, y, z) = g^{Step}_{w^S} (x, y, z) \cdot g^{Taylor}_{w^T} (x, y, z), \end{aligned}$$
(5)

where \(w = (w^S, w^T)\) is the concatenation of the two vectors \(w^S = (w^S_i)\) and \(w^T = (w^T_i)\), with

$$\begin{aligned} g^{Step}_{w^S} (x, y, z) = w^S_i \text { if } r_i \le \sqrt{x^2 + y^2 + z^2} < r_{i +1}, \end{aligned}$$
(6)

where \(r_0 = 0< r_1< r_2< ... < r_N\), and

$$\begin{aligned} \begin{aligned} g^{Taylor}_{w^T} (x, y, z)&= w^T_0 + w^T_1 x + w^T_2 y + w^T_3 z + w^T_4 xy + w^T_5 yz + w^T_6 xz + w^T_7 x^2 \\&+\, w^T_8 y^2 + w^T_9 z^2 + w^T_{10} xy^2 + w^T_{11} x^2y + w^T_{12} y^2z + w^T_{13} yz^2 \\&+ \,w^T_{14} x^2z + w^T_{15} xz^2 + w^T_{16} xyz + w^T_{17} x^3 + w^T_{18} y^3 + w^T_{19} z^3. \end{aligned} \end{aligned}$$
(7)

The first component \(g^{Step}_{w^S}\) is a step function of the radius variable in the local polar coordinates around a point. It encodes local geodesic information, a critical quantity for describing the coarse local shape. Moreover, step functions are relatively easy to optimize using SGD.

The order-3 Taylor term \(g^{Taylor}_{w^T}\) further enriches the complexity of the filters, complementing \(g^{Step}_{w^S}\) since it also captures variations in the angular component. Let us be more precise about the reason for choosing Taylor expansions here, from the perspective of interpolation. We can think of classical 2D convolutional filters as a family of functions interpolating given values at the 9 points \(\{ (i, j) \}_{i, j \in \{ -1, 0, 1\} }\), with the 9 values serving as the parametrization of the family. Analogously, in 3D consider the vertices of a cube \(\{ (i, j, k) \}_{i, j, k = 0, 1}\), and assume that at the vertex (i, j, k) the value \(a_{i, j, k}\) is assigned. The trilinear interpolation algorithm gives us a function of the form

$$\begin{aligned} f_{w^T}(x, y, z) = w^T_0 + w^T_1 x + w^T_2 y + w^T_3 z + w^T_4 xy + w^T_5 yz + w^T_6 xz + w^T_{16} xyz, \end{aligned}$$
(8)

where the \(w^T_i\)'s are linear functions of the \(a_{i, j, k}\)'s. Therefore \(f_{w^T}\) is a special form of \(g^{Taylor}_{w^T}\), and by varying \(w^T\), the family \(\{ g^{Taylor}_{w^T}\}\) can interpolate arbitrary values at the vertices of a cube and capture rich spatial information.
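
To summarize Eqs. 5–7 in code, the following NumPy sketch evaluates the proposed filter family at a single point. Clamping points beyond \(r_N\) to the last bin is our own simplification; strictly, the filter is supported in the ball of radius \(r_N\), so such points never enter the sum in Eq. 4:

```python
import numpy as np

def g_step(xyz, wS, radii):
    """Eq. 6: piecewise constant in the radius. radii = [r_0=0, r_1, ..., r_N],
    wS holds one value per radial bin [r_i, r_{i+1})."""
    r = np.linalg.norm(xyz)
    i = np.searchsorted(radii, r, side='right') - 1
    i = min(i, len(wS) - 1)          # clamp points beyond r_N to the last bin
    return wS[i]

def g_taylor(xyz, wT):
    """Eq. 7: order-3 Taylor polynomial with 20 coefficients wT, in the same
    order as the terms of Eq. 7."""
    x, y, z = xyz
    monomials = np.array([1, x, y, z, x*y, y*z, x*z, x*x, y*y, z*z,
                          x*y*y, x*x*y, y*y*z, y*z*z, x*x*z, x*z*z,
                          x*y*z, x**3, y**3, z**3])
    return np.dot(wT, monomials)

def g(xyz, wS, wT, radii):
    """Eq. 5: the product filter g = g^Step * g^Taylor."""
    return g_step(xyz, wS, radii) * g_taylor(xyz, wT)
```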

3.3 Implementation

The following approximations are used based on the uniform sampling process constructing the point clouds:

1. K-nearest neighbors are used to measure locality instead of the radius, so the summation in Eq. 4 is over the K-nearest neighbors of p.

2. The step function \(g^{Step}_{w^S}\) is approximated by a permutation: since the points are sampled roughly uniformly, the j-th nearest neighbor of p is assumed to fall in the j-th radial bin. Explicitly, let X be the \(1 \times K\) matrix indexed by the K-nearest neighbors of p (including p itself), where X(1, i) is the feature at the i-th nearest neighbor of p. Then \(F *g^{Step}_{w^S} (p)\) is approximated by Xw, where w is a \(K \times 1\) matrix with w(i, 1) corresponding to \(w^S_i\) in Eq. 6 (see the sketch after this list).

Later in the article, we omit the parameters w, \(w^S\) and \(w^T\), and simply write \(g = g^{Step} \cdot g^{Taylor}\) to simplify notation.

The input to SpiderConv is a \(c_1\)-dimensional feature on a point cloud P, represented as \(F = (F_1, F_2, ... , F_{c_1})\) where \(F_v : P \rightarrow \mathbb {R}\). The output of a SpiderConv is a \(c_2\)-dimensional feature on the point cloud, \(\tilde{F} = (\tilde{F}_1, \tilde{F}_2, ... , \tilde{F}_{c_2})\) where \(\tilde{F}_i : P \rightarrow \mathbb {R}\). Let p be a point in the point cloud, and let \(q_1, q_2, ... , q_K\) be its K-nearest neighbors, in order of increasing distance. Following the approximation above, assume \(g^{Step}_{i, v, t} (p - q_j) = w^{(i, v, t)}_{j}\), where \(t = 1, 2, ... , b\), \(v = 1, 2, ... , c_1\) and \(i = 1, 2, ... , c_2\). Then a SpiderConv with \(c_1\) in-channels, \(c_2\) out-channels and b Taylor terms is defined via the formula \(\tilde{F}_i(p) =\sum _{v = 1}^{c_1} \sum _{j = 1}^K g_{i, v}(p-q_j)F_{v}(q_j)\), where \(g_{i, v}(p - q_j) = \sum _{t = 1}^b g_t^{Taylor} (p - q_j) w_{j}^{(i, v, t)}\), and \(g_t^{Taylor}\) belongs to the parameterized family \(\{ g^{Taylor}_{w^T} \}\) for \(t = 1, 2, ..., b\).
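
The following NumPy sketch illustrates this multi-channel formula. The function signature and array shapes are our own choices (not the paper's TensorFlow implementation), and g_taylor refers to the order-3 polynomial sketched in Sect. 3.2:

```python
import numpy as np

def spider_conv(P, F, knn, wS, wT_coeffs, g_taylor):
    """A sketch of a SpiderConv layer with c1 in-channels, c2 out-channels
    and b Taylor terms.

    P:         (N, 3) point coordinates
    F:         (N, c1) input features
    knn:       (N, K) indices of each point's K nearest neighbors, ordered
    wS:        (c2, c1, b, K) learned step weights w_j^{(i, v, t)}
    wT_coeffs: (b, 20) Taylor coefficients, one order-3 polynomial per term t
    g_taylor:  function (xyz, wT) -> value of the polynomial of Eq. 7
    """
    N, c1 = F.shape
    c2, _, b, K = wS.shape
    out = np.zeros((N, c2))
    for n in range(N):
        for j, q in enumerate(knn[n]):
            d = P[n] - P[q]                       # p - q_j
            taylor = np.array([g_taylor(d, wT_coeffs[t]) for t in range(b)])
            for i in range(c2):
                for v in range(c1):
                    # g_{i,v}(p - q_j) = sum_t g_t^Taylor(p - q_j) w_j^{(i,v,t)}
                    g_val = np.dot(taylor, wS[i, v, :, j])
                    out[n, i] += g_val * F[q, v]
    return out
```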

4 Experiments

We analyze and evaluate SpiderCNN on 3D point cloud classification and segmentation tasks. We empirically examine the key hyperparameters of a 3-layer SpiderCNN, and compare our models with state-of-the-art methods.

Implementation Details: All models are prototyped with TensorFlow 1.3 and trained on a GTX 1080Ti GPU using the Adam optimizer with a learning rate of \(10^{-3}\). A dropout rate of 0.5 is used in the fully connected layers. Batch normalization is applied at the end of each SpiderConv with decay set to 0.5. The forward-pass time of a SpiderConv layer (batch size 8) with 64 in-channels and 64 out-channels is 7.50 ms; for the 4-layer SpiderCNN (batch size 8), the total forward-pass time is 71.68 ms.

4.1 Classification on ModelNet40

ModelNet40 [4] contains 12,311 CAD models belonging to 40 categories, with 9,843 used for training and 2,468 for testing. We use the source code of PointNet [15] to sample 1,024 points uniformly and compute normal vectors from the mesh models. The same data augmentation strategy as [15] is applied: the point cloud is randomly rotated along the up-axis, and the position of each point is jittered by Gaussian noise with zero mean and 0.02 standard deviation. The batch size is 32 for all experiments in Sect. 4.1. Unless otherwise specified, we use the (x, y, z)-coordinates and normal vectors of the 1,024 points as the input to SpiderCNN for the experiments on ModelNet40.

Fig. 3.

The architecture of a 3-layer SpiderCNN used in ModelNet40 classification.

3-Layer SpiderCNN: Figure 3 illustrates a SpiderCNN with 3 layers of SpiderConvs, each with 3 Taylor terms, and with out-channels 32, 64 and 128, respectively. The ReLU activation function is used. The output features of the three SpiderConvs are concatenated at the end, and top-k pooling over all points is used to extract global features.
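
As an illustration, top-k pooling over the points can be sketched as follows; note that with \(k = 1\) it reduces to ordinary max-pooling:

```python
import numpy as np

def top_k_pool(features, k=2):
    """Top-k pooling over all points: for each feature channel, keep the k
    largest responses and concatenate them into the global descriptor.
    features: (N, C) per-point features -> returns a (k * C,) vector."""
    top = -np.sort(-features, axis=0)[:k]   # (k, C): each channel's k largest
    return top.reshape(-1)
```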

Fig. 4.

On ModelNet40, (a) shows the effect of the number of pooled features on the accuracy of a 3-layer SpiderCNN with 20-nearest neighbors, and (b) shows the effect of the number of nearest neighbors in SpiderConv on the accuracy of a 3-layer SpiderCNN with top-2 pooling.

Two important hyperparameters of SpiderCNN are studied: the number of nearest neighbors K used in SpiderConv, and the number k of pooled features after the concatenation. The results are summarized in Fig. 4. The number of nearest neighbors K is analogous to the filter size in usual convolutions. We find that 20 is the optimal choice among 12, 16, 20 and 24 nearest neighbors. In Fig. 5 we visualize top-2 pooling by plotting the points that contribute to the top-2 pooled features. We see that, similar to PointNet, SpiderCNN picks up representative critical points.

Fig. 5.

Visualization of the effect of top-k pooling. Edge points and points with non-zero curvature are preserved after pooling. (a), (b), (c), (d) are the original input point clouds. (e), (f), (g), (h) are points contributing to features extracted via top-2 pooling.

SpiderCNN + PointNet: We train a 3-layer SpiderCNN (top-2 pooling and 20-nearest neighbors) and PointNet, each with only the (x, y, z)-coordinates as input, to predict the classical robust local geometric descriptor FPFH [19] on point clouds in ModelNet40. The training loss of SpiderCNN is only \(\frac{1}{4}\) of PointNet's. We therefore believe that a 3-layer SpiderCNN and PointNet are complementary: SpiderCNN is good at learning local geometric features, while PointNet is good at capturing global features. By concatenating the 128-dimensional features from PointNet with the 128-dimensional features from SpiderCNN, we improve the classification accuracy to \(92.2 \%\).

4-Layer SpiderCNN: Experiments show that a 1-layer SpiderCNN with a 32-channel SpiderConv can achieve a classification accuracy of \(85.5\%\), and that the performance of SpiderCNN improves as more SpiderConv layers are added. A 4-layer SpiderCNN consists of SpiderConvs with out-channels 32, 64, 128, and 258. Feature concatenation, 20-nearest neighbors and top-2 pooling are used. To prevent overfitting, we apply the data augmentation method DP (random input dropout) introduced in [17] during training. Table 1 shows a comparison between SpiderCNN and other models. The 4-layer SpiderCNN achieves an accuracy of \(92.4\%\), improving over the best reported result among models with 1,024 points and normals as input. Over 5 runs, the mean accuracy of the 4-layer SpiderCNN is \(92.0 \%\).

Table 1. Classification accuracy of SpiderCNN and other models on ModelNet40.

Ablative Study: Compared to max-pooling, top-2 pooling enables the model to learn richer geometric information. For example, in Fig. 6 we see that top-2 pooling preserves more points where the curvature is non-zero. With max-pooling, the classification accuracy is \(92.0\%\) for a 4-layer SpiderCNN and \(90.4\%\) for a 3-layer SpiderCNN; with top-2 pooling, the accuracy is \(92.4\%\) and \(91.5\%\), respectively.

Fig. 6.

Top-2 pooling learns rich features and fine geometric details.

MLP filters do not perform as well in our setting: the accuracy of a 3-layer SpiderCNN is \(71.3 \%\) with \(g_w = \text {MLP}(16, 1)\), and \(72.8 \%\) with \(g_w = \text {MLP}(16, 32, 1)\).

Without normals, the accuracy of a 4-layer SpiderCNN using only the 1,024 points is \(90.5\%\). Using normals extracted from the 1,024 input points via orthogonal distance regression, the accuracy of a 4-layer SpiderCNN is \(91.8\%\).

Fig. 7.

(b) and (c) are shapes in SHREC15. (d) is a shape in ModelNet40. (a) is the point cloud sampled from (b).

4.2 Classification on SHREC15

SHREC15 is a dataset for non-rigid 3D shape retrieval. It consists of 1,200 watertight triangle meshes divided into 50 categories, with on average 10,000 vertices per mesh model. Compared to ModelNet40, SHREC15 contains more complicated local geometry and non-rigid deformations of objects; see Fig. 7 for a comparison. 1,192 meshes are used, with 895 for training and 297 for testing. We compute three intrinsic shape descriptors (Heat Kernel Signature, Wave Kernel Signature and Fast Point Feature Histograms) for deformable shape analysis from the mesh models. 1,024 points are sampled uniformly at random from the vertices of each mesh model, and their (x, y, z)-coordinates are used as the input for SpiderCNN, PointNet and PointNet++. We use an SVM with linear kernel when the inputs are the classical shape descriptors. Table 2 summarizes the results: SpiderCNN outperforms the other methods.

Table 2. Classification accuracy on SHREC15.

4.3 Segmentation on ShapeNet Parts

ShapeNet Parts consists of 16,880 models from 16 shape categories, with 50 different parts in total and a 14,006 training and 2,874 testing split. Each shape category is annotated with 2 to 6 parts. The mIoU, computed by averaging over all part classes, is used as the evaluation metric. A 4-layer SpiderCNN, whose architecture is shown in Fig. 8, is trained with a batch size of 16. We use points with their normal vectors as the input and assume that the category labels are known. The results are summarized in Table 3. Over 4 runs, the mean of the mean IoU of SpiderCNN is 85.24. SpiderCNN achieves competitive results despite a relatively simple network architecture (Fig. 9).

Fig. 8.

The SpiderCNN architecture used in the ShapeNet Part segmentation task.

Table 3. Segmentation results on the ShapeNet Part dataset. Mean IoU and IoU for each category are reported.
Fig. 9.

Some examples of the segmentation results of SpiderCNN on ShapeNet Part.

5 Analysis

In this section, we conduct additional analysis and evaluations on the robustness of SpiderCNN, and provide visualization for some of the typical learned filters from the first layer of SpiderCNN.

Fig. 10.

Classification accuracy of SpiderCNN and PointNet++ with different numbers of input points on ModelNet40.

Robustness: We study the effect of missing points on SpiderCNN. Following the settings of the experiments in Sect. 4.1, we train a 4-layer SpiderCNN and PointNet++ with 512, 256, 128, 64 and 32 points and their normals as input. The results are summarized in Fig. 10. Even with only 32 points, SpiderCNN obtains \(87.7\%\) accuracy.

Fig. 11.

Visualization of the convolutional filters learned in the first layer of SpiderCNN.

Visualization: In Fig. 11, we show scatter plots of the convolutional filters \(g_w(x, y, z)\) learned in the first layer of SpiderCNN; the color of a point represents the value of \(g_w\) at that point.

Fig. 12.

Visualization of the convolutional filters learned in the first layer of SpiderCNN. The 3D filters are shown as scatter plots projected onto the planes \(x = 0\), \(y = 0\) or \(z = 0\).

In Fig. 12 we choose a plane passing through the origin, and project the points of the scatter plot that lie on one side of it onto the plane. We see patterns similar to those found in 2D image filters. The visualization gives some hints about the geometric features that the convolutional filters in SpiderCNN learn; for example, the first row in Fig. 12 resembles 2D image filters that capture boundary information.

6 Conclusions

We proposed SpiderCNN, a new convolutional neural network that directly processes 3D point clouds with parameterized convolutional filters. More complex network architectures and further applications of SpiderCNN can be explored in future work.