Article

Joint Classification of Hyperspectral Images and LiDAR Data Based on Dual-Branch Transformer

by Qingyan Wang, Binbin Zhou, Junping Zhang, Jinbao Xie and Yujing Wang
1 School of Measurement-Control and Communication Engineering, Harbin University of Science and Technology, Harbin 150080, China
2 School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin 150001, China
3 College of Physics and Electronic Engineering, Hainan Normal University, Haikou 571158, China
* Author to whom correspondence should be addressed.
Sensors 2024, 24(3), 867; https://doi.org/10.3390/s24030867
Submission received: 13 December 2023 / Revised: 24 January 2024 / Accepted: 24 January 2024 / Published: 29 January 2024

Abstract

In complex scenarios, classification driven by a single modality often lacks sufficient information, creating a bottleneck in classification performance. The joint use of multimodal remote sensing data for surface observation tasks has therefore attracted widespread attention. However, issues such as sample differences between modalities and the lack of correlation between their physical features have limited classification performance, and establishing effective interaction between multimodal data remains a significant challenge. To fully integrate heterogeneous information from multiple modalities and enhance classification performance, this paper proposes a dual-branch cross-Transformer feature fusion network for joint land cover classification of hyperspectral imagery (HSI) and Light Detection and Ranging (LiDAR) data. The core idea is to leverage the potential of convolutional operators to represent spatial features, combined with the advantages of the Transformer architecture in learning long-range dependencies. The framework employs an improved self-attention mechanism to aggregate features within each modality, highlighting the spectral information of HSI and the spatial (elevation) information of LiDAR. A feature fusion module based on cross-attention integrates deep features from the two modalities, achieving complementary information through cross-modal attention. The classification task is then performed on the jointly obtained spectral and spatial features. Experiments on three multi-source remote sensing classification datasets demonstrate the effectiveness of the proposed model compared with existing methods.

1. Introduction

Remote sensing technology plays an increasingly important role in Earth observation. By analyzing the spectral characteristics of objects in different bands, it is possible to identify land features, detect changes, and perform quantitative analysis [1,2]. It has significant applications in fields such as agricultural monitoring, urban planning, and military reconnaissance, among others. However, because hyperspectral image classification assigns a label to each pixel in the image, the impact of cloud cover or shadows during data collection is inevitable [3]. This can result in blurred spectral information and inaccurate classification. Additionally, the low spatial resolution of hyperspectral imagery limits the overall classification accuracy to some extent.
The rapid development of remote sensing sensor technology has made it possible to combine data from multiple sensors to describe land information comprehensively. Data from different sensors provide various types of information about the same geographic area. For instance, hyperspectral imagery effectively captures the spectral and spatial information of observed targets [4], while LiDAR uses laser pulses to measure the elevation of the Earth's surface; the resulting Digital Surface Model (DSM) contains elevation information for each point on the surface [5,6,7]. Synthetic Aperture Radar (SAR) transmits microwave signals, records the returning echoes, and uses them to create high-resolution images, providing geometric information about surface objects such as their shape, size, and orientation [8]. Therefore, by combining data from different modalities, it is possible to address issues present in a single modality. For instance, combining LiDAR, which is less affected by atmospheric interference and contains rich elevation information, with hyperspectral imagery provides complementary information [9]: supplementing the spatial information of hyperspectral imagery in this way alleviates the problem of spectral similarity among different materials. Multiple modalities of data can therefore be used to analyze land cover information [10,11]. However, it is essential to address the challenges posed by the disparate information dimensions and unrelated physical features of the two modalities.
In previous research, fusion classification methods for Hyperspectral Imaging (HSI) and Light Detection and Ranging (LiDAR) have often relied on reducing data dimensionality and manually designing feature fusion based on the intrinsic properties of the data [12,13,14,15]. For instance, in [14], Liao et al. proposed a method that integrates Morphological Profiles (MPs) of Hyperspectral (HS) and LiDAR data on a manifold using graph-based subspace learning, resulting in improved classification outcomes. In [15], the fusion of HS and LiDAR data was enhanced by combining Extinction Profiles (EPs) with Total Variation Component Analysis. Additionally, the use of multiple fusion strategies has been shown to further enhance classification performance. For instance, in [16], both feature-level and decision-level fusion were employed: Gabor features extracted from HSI and LiDAR data, along with their amplitude and phase features, were concatenated and input into the classifier. By normalizing the results of three classifiers from two superpixel segmentation algorithms and adopting a weighted majority voting decision fusion strategy, the efficiency of utilizing multiple features was effectively improved. However, these approaches rely heavily on manually designed features and therefore incorporate subjective choices, making it difficult to adaptively generalize the intrinsic features of multimodal data. In addition, such traditional methods do not fully exploit spatial information, which limits their classification performance. Moreover, because a relatively large number of features are extracted from the different remote sensing data, the "curse of dimensionality" may arise, where the high dimensionality of the features makes processing and analysis complex and challenging. Therefore, while traditional methods have achieved some success in land cover classification accuracy, their applicability and adaptability still need further expansion and improvement.
Algorithms based on deep learning demonstrate significant potential in the joint classification of multi-source remote sensing data [17,18,19]. Chen et al. [20] independently extracted features from multimodal data using a dual-branch CNN and fused the heterogeneous features of each branch through a fully connected DNN. Building upon a dual-branch deep CNN structure, Xu [21] supplemented spatial information from other modalities in a cascading manner. However, the model does not place sufficient emphasis on spectral features, leading to incomplete feature fusion. Hang et al. [22] proposed a coupled CNN network that optimizes the fusion of multimodal features by combining feature-level and decision-level fusion strategies, resulting in improved classification performance. CNNs excel at handling spatial features; however, for HSI data containing a large number of spectral sequence attributes, CNNs struggle to identify subtle spectral differences between pixels, especially mid- and long-range dependencies between spectral bands [23]. While Recurrent Neural Networks (RNNs) can establish sequence models, their limited ability to process samples in parallel during training constrains classification performance.
In order to effectively highlight the key features of each modality and suppress irrelevant information during analysis, researchers have incorporated attention mechanisms within the CNN framework. This approach is particularly suitable for handling spatial and spectral data, allowing critical components of both types of data to be analyzed simultaneously. Through attention mechanisms, a CNN can focus on the important features in the data while disregarding information that is unimportant or irrelevant to the current task. The Squeeze-and-Excitation (SE) module adjusts channel feature responses to enhance the network's representational capability [24]: it models interdependencies between channels and adaptively recalibrates channel feature responses, leading to a notable improvement in the performance of existing deep learning architectures. Building upon this, Xu et al. proposed a novel multi-scale feature extraction module, SE-Res2Net, which uses channel grouping to extract multi-scale features from hyperspectral images, obtaining receptive fields of different granularities; this is combined with a channel optimization module that assesses the importance of each channel in the feature map [25]. Roy et al. designed an attention-based adaptive spectral-spatial kernel improved residual network, using spectral attention to capture distinctive spectral-spatial features [26]. Gradually, CNNs that extract both spectral and spatial features have been employed for the joint classification of hyperspectral images and LiDAR data. Wang et al. introduced non-local operations as a universal basic building block for capturing long-range dependencies, weighting features from all positions and summing them up [27]. Haut et al. proposed a spectral-spatial attention network based on a residual network: by selecting features at both shallow and deep levels, the network obtains more representative and significant features for classifying hyperspectral image data, with spectral and spatial attention highlighting prominent bands and spatial information, respectively [28].
The Transformer model has garnered attention from researchers due to its excellent ability to capture global relationships [29]. Initially proposed for natural language processing, it has since found applications in image processing [30]. Qing et al. [31], leveraging a multi-head attention mechanism, successfully captured spectral relationships in sequences, enhancing the classification performance of HSI. Hong et al. [32] introduced a spectral transformer model that captures spectral features from neighboring bands. However, these works did not utilize spatial information. Roy et al. [33] introduced a multimodal fusion transformer (MFT), which initializes the learnable class token with LiDAR data. However, this operation does not fully integrate the effective information from both data sources, limiting classification accuracy.
A Transformer encoder based on self-attention can learn sequential information from its own data, while cross-attention mechanisms tailored to multimodal data can jointly consider the relationships between two distinct sequences and thus better capture their correlations. In contrast to the MFT proposed by Roy et al. [33], Zhao et al. [34] introduced a cross-modal attention network that combines the learnable tokens from the hyperspectral image branch with LiDAR data and computes internal attention to achieve complementary information integration. Similarly, Zhang et al. [35] achieved information fusion between the two modalities by exchanging CLS (class) tokens and introduced a learnable feature fusion method for modality integration. While these methods effectively leverage cross-attention for complementary information integration, the random initialization of the CLS tokens significantly affects the subsequent attention calculations. In summary, fusion networks that combine CNNs with Transformers for cross-modal feature interaction may overlook crucial shared high-level features when processing multimodal data, thereby affecting the comprehensiveness and accuracy of the analysis. Additionally, because specific features in each modality have different discriminative capabilities, a significant imbalance among features may arise.
To better integrate features from hyperspectral imagery and LiDAR data and improve classification accuracy, we propose a dual-branch Transformer feature fusion network. This network focuses on the global information of hyperspectral imagery while considering local neighborhood information. Simultaneously, utilizing a cross-attention mechanism highlights features in hyperspectral images using the attention from LiDAR, achieving complementarity between hyperspectral image and LiDAR data features. Features from both modalities are fused for the classification task. The contributions of this paper are summarized as follows:
(1)
The proposed dual-branch Transformer feature fusion network can capture features from shallow layers and integrate them into deep features, thereby achieving complementary information between different modalities.
(2)
In response to the relatively weak spatial information of hyperspectral images, a Group Embedding Module is proposed to enhance the local information aggregation between different neighborhoods. This module addresses the issue of neglecting the correlation between adjacent keys in the multi-head attention module.
(3)
Considering the physical feature differences between modalities, we utilize mutual mapping of features between modalities to achieve global interaction and improve the performance of joint classification.

2. Materials and Methods

2.1. Dataset Description

This study conducts classification tasks on three publicly available multimodal remote sensing datasets, namely, the Houston2013 dataset [36], MUUFL Gulfport Hyperspectral and LiDAR (MUUFL) [37,38], and the Trento dataset. The following provides detailed introductions to each dataset along with information on the respective classes.
The Houston2013 dataset is supplied by the 2013 IEEE GRSS Data Fusion Challenge. Gathered in 2012 by the National Center for Airborne Laser Mapping, this dataset comprises topographical details of both the University of Houston campus and the neighboring city. The HSI data consists of 144 spectral bands, while the LiDAR data provides a single band recording elevation information. The image size is 349 × 1905 pixels, with a spectral resolution ranging from 0.38 to 1.05 μm and a spatial resolution of 2.5 m. The dataset comprises 15 land cover categories. Figure 1 displays the pseudo-colored composite image of the HSI data, the grayscale image of the LiDAR data, and the corresponding ground truth map.
The MUUFL dataset was acquired in November 2010 within the Gulf Park campus of the University of Southern Mississippi using the Reflective Optics System Imaging Spectrometer. In the MUUFL dataset, the HSI data comprises 72 spectral bands ranging from 0.38 to 1.05 μm, and the LiDAR data provides two rasters acquired at a wavelength of 1.06 μm. Due to excessive noise, the first 8 and last 8 bands were removed. The dataset consists of 325 × 220 pixels and includes a total of 11 different land cover categories. Pseudo-colored composite images of the HSI data, grayscale images of the LiDAR data, and the ground truth map are shown in Figure 2.
The Trento dataset was collected in southern Trento, Italy, and includes both HSI (Hyperspectral Imaging) and LiDAR DSM (Digital Surface Model) data. The spatial dimensions are 166 × 600, with a spatial resolution of 1 m. The HSI data comprises 63 available spectral bands. The dataset encompasses six object categories, totaling 30,214 sample pixels. Figure 3 displays the pseudo-colored HSI image and LiDAR DSM image of the dataset.
The land cover categories for the three datasets, along with the configuration of training and testing samples, are presented in Table 1.

2.2. Methods

The proposed Dual-branch Transformer feature fusion network is illustrated in Figure 4. The network adopts different processing methods for the information differences between different modalities. It emphasizes spectral features for hyperspectral images and spatial information for LiDAR data. Finally, the information from both modalities is fused for classification.
CNNs have an outstanding ability to model contextual features and therefore perform well in classification tasks. We first use CNNs to extract shallow features from the data of the two modalities and to control the depth of the output feature maps. Subsequently, we perform feature embedding, an indispensable step before entering the Transformer encoding layers.
Each modality then undergoes a distinct serialization process, and the self-attention in the corresponding Transformer branch is enhanced according to the characteristics of that modality to extract deep features.
Let the HSI be denoted as $X_H \in \mathbb{R}^{m \times n \times l}$ and the LiDAR data of the same geographical area as $X_L \in \mathbb{R}^{m \times n}$, where $m$ and $n$ represent the spatial dimensions and $l$ corresponds to the number of spectral bands in the HSI. From the normalized data, we construct spectral-spatial cubes $X_P^H \in \mathbb{R}^{s \times s \times l}$ and $X_P^L \in \mathbb{R}^{s \times s}$ for each pixel, where $s \times s$ represents the patch size.
To handle pixels at the image boundaries, padding is applied, and the central pixel of each patch serves as the sample label, forming pairs of samples for the two modalities.
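As an illustration of this patch-pair construction, the following NumPy sketch shows one possible implementation. The array names (hsi, lidar, labels), the reflect-padding mode, and the convention that label 0 marks unlabeled background are assumptions made for illustration rather than the authors' exact preprocessing code.

```python
# A minimal sketch of the spectral-spatial patch-pair construction described above.
import numpy as np

def extract_patch_pairs(hsi, lidar, labels, s=11):
    """Build s x s spectral-spatial cubes for HSI and LiDAR around each labeled pixel."""
    m, n, l = hsi.shape
    pad = s // 2
    # Pad spatially so border pixels also receive full s x s neighborhoods (padding mode assumed).
    hsi_p = np.pad(hsi, ((pad, pad), (pad, pad), (0, 0)), mode="reflect")
    lid_p = np.pad(lidar, ((pad, pad), (pad, pad)), mode="reflect")

    hsi_patches, lidar_patches, y = [], [], []
    for i in range(m):
        for j in range(n):
            if labels[i, j] == 0:          # 0 = unlabeled background (assumption)
                continue
            hsi_patches.append(hsi_p[i:i + s, j:j + s, :])   # (s, s, l)
            lidar_patches.append(lid_p[i:i + s, j:j + s])    # (s, s)
            y.append(labels[i, j] - 1)     # the central pixel provides the sample label
    return np.stack(hsi_patches), np.stack(lidar_patches), np.array(y)
```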

2.2.1. Feature Extraction from Hyperspectral Image

For hyperspectral images, we employ convolutional layers to locally model the high-dimensional spectral information of the HSI, reducing the dimensionality of the spectral information while maintaining the consistency of the sequence length. Here, we set the sequence length to 64, resulting in an output of size $(s, s, 64)$.
When using one-dimensional positional encoding, the Transformer encoder may lose some spatial information, making it difficult to directly capture the positional relationships of the data in two-dimensional space. Moreover, during self-attention computation, the rich contextual information between neighboring keys is not fully utilized. Therefore, for hyperspectral images, we introduce a Group Embedding Module (GEM); its computational diagram is shown in Figure 5. This module leverages neighborhood information among the input keys to guide self-attention learning. First, GEM captures the static spatial contextual relevance among adjacent keys, focusing on the layout or feature distribution of nearby keys in the input. Weight coefficients are then generated through convolution with the queries to explore dynamic spatial contextual relevance. The specific computational process is outlined below.
We first transform the input into a Query ($Q_H$) and a Value ($V_H$) through learnable embedding matrices.
$Q_H, V_H = \mathrm{Conv}(X_H W_q),\ \mathrm{Conv}(X_H W_v)$
where $W_q$ and $W_v$ are learnable embedding matrices. Unlike the $1 \times 1$ convolution used in standard self-attention to generate the Key ($K$), GEM employs a $k \times k$ channel-wise convolution to extract spatial neighborhood information, obtaining $K^* \in \mathbb{R}^{s \times s \times 64}$, which reflects the contextual information between neighborhoods. Subsequently, $K^*$ is concatenated with $Q$, and the attention matrix is computed through two $1 \times 1$ convolutions.
$K_H = [K^*, Q]\, W_\theta W_\delta$
The resulting attention matrix $K_H$ contains rich contextual information, unlike traditional attention mechanisms in which attention is restricted to isolated Query-Key pairs. Self-attention computation is then carried out.
$\mathrm{Attention}(Q_H, K_H, V_H) = \mathrm{Softmax}\!\left(\dfrac{Q_H K_H^{T}}{\sqrt{d_k}}\right) V_H$
By introducing GEM, we incorporate local correlations, with the depthwise convolution capturing local spatial information. Combined with the global modeling capability of the Transformer, this strengthens the model's capacity to effectively represent HSI data.
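The following PyTorch sketch illustrates one way to realize the GEM-style attention described above: a depthwise $k \times k$ convolution producing the neighborhood-aware keys $K^*$, concatenation with the query, two $1 \times 1$ convolutions, and scaled dot-product attention. Layer sizes, the kernel size k, and the reshaping details are assumptions; the authors' implementation may differ.

```python
# A minimal sketch of a Group Embedding Module (GEM) style attention block, under the
# assumptions stated above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GroupEmbeddingAttention(nn.Module):
    def __init__(self, dim=64, k=3):
        super().__init__()
        self.q_proj = nn.Conv2d(dim, dim, kernel_size=1)          # W_q
        self.v_proj = nn.Conv2d(dim, dim, kernel_size=1)          # W_v
        # k x k depthwise ("channel") convolution: static neighborhood context K*
        self.k_local = nn.Conv2d(dim, dim, kernel_size=k, padding=k // 2, groups=dim)
        # Two 1x1 convolutions (W_theta, W_delta) applied to the concatenation [K*, Q]
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * dim, dim, kernel_size=1),
            nn.Conv2d(dim, dim, kernel_size=1),
        )
        self.scale = dim ** -0.5

    def forward(self, x):                     # x: (B, 64, s, s)
        b, c, h, w = x.shape
        q = self.q_proj(x)
        v = self.v_proj(x)
        k_star = self.k_local(x)              # neighborhood-aware keys K*
        k = self.fuse(torch.cat([k_star, q], dim=1))   # K_H = [K*, Q] W_theta W_delta

        # Flatten spatial positions and run standard scaled dot-product attention.
        q, k, v = (t.reshape(b, c, h * w).transpose(1, 2) for t in (q, k, v))
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return out
```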

2.2.2. Feature Extraction from LiDAR Images

Regarding LiDAR data, we use two 2D convolutional layers to extract its elevation information. The input LiDAR patch of size $(s \times s \times 1)$ undergoes convolutional operations with 32 and 64 filters, each of size $3 \times 3$. With padding, the convolutional layers produce an output of size $(s \times s \times 64)$. Similar to the hyperspectral image branch, the LiDAR branch thus also generates 64 two-dimensional feature maps after the convolutional layers. Additionally, for regularization and to expedite the training process, batch normalization and ReLU activation layers are applied after the convolutional layers.
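A minimal sketch of this LiDAR convolutional stem is shown below (two 3 × 3 convolutions with 32 and 64 filters, each followed by batch normalization and ReLU); the exact layer ordering and padding choices are assumptions.

```python
# A sketch of the LiDAR shallow feature extractor described above.
import torch.nn as nn

lidar_stem = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),   # (B, 1, s, s) -> (B, 32, s, s)
    nn.BatchNorm2d(32),
    nn.ReLU(inplace=True),
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # -> (B, 64, s, s), matching the HSI branch depth
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
)
```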
Next, the features are input into a Transformer encoder based on Spatial Attention (SA). As shown in Figure 6, this attention module is designed to learn representative spatial features by capturing short- and long-range pixel interactions from the input feature maps. An input feature map of dimensions $(s \times s \times 64)$ is transformed into a Query ($Q$), Key ($K$), and Value ($V$) through learnable embedding matrices.
$Q_L, K_L, V_L = \mathrm{Conv}(X_L W_q),\ \mathrm{Conv}(X_L W_k),\ \mathrm{Conv}(X_L W_v)$
Through a $1 \times 1$ convolutional layer, the channels of $K_L$ and $Q_L$ are down-sampled by a factor of 8, reducing their channel count to 1/8 of the original. This is done to better capture spatial relationships: by decreasing the channel count, the model focuses more on learning the important spatial features. Subsequently, the down-sampled $K_L$ and $Q_L$ undergo matrix multiplication to form an attention mask of size $s^2 \times s^2$, which is passed through a softmax activation function. The resulting attention mask is multiplied with $V_L$ and added in a residual manner, producing a spatially attentive output feature map of dimensions $(s \times s \times 64)$.
Finally, following the same procedure as the HSI processing, attention computation is conducted to complete the aggregation of spatial information.
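The spatial-attention block for the LiDAR branch could be sketched as follows, following the description above (channel reduction by a factor of 8, an $s^2 \times s^2$ attention mask, and a residual connection); the learnable residual scale `gamma` is an assumption added for illustration.

```python
# A minimal PyTorch sketch of the spatial-attention block, under the assumptions above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.q_proj = nn.Conv2d(dim, dim // 8, kernel_size=1)   # channels reduced to 1/8
        self.k_proj = nn.Conv2d(dim, dim // 8, kernel_size=1)
        self.v_proj = nn.Conv2d(dim, dim, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))                # residual weighting (assumed)

    def forward(self, x):                      # x: (B, 64, s, s)
        b, c, h, w = x.shape
        q = self.q_proj(x).reshape(b, -1, h * w).transpose(1, 2)  # (B, s^2, c/8)
        k = self.k_proj(x).reshape(b, -1, h * w)                  # (B, c/8, s^2)
        v = self.v_proj(x).reshape(b, c, h * w)                   # (B, c, s^2)

        mask = F.softmax(q @ k, dim=-1)                           # (B, s^2, s^2) attention mask
        out = (v @ mask.transpose(1, 2)).reshape(b, c, h, w)      # attend over spatial positions
        return self.gamma * out + x                               # residual connection
```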

2.2.3. Feature Fusion of Two Modalities

The extraction of features and the interaction of information in multimodal data are crucial for joint classification tasks. We employ a cross-attention module, allowing the model to weight the features of one modality based on the feature representation of another modality, achieved by exchanging keys between two branches of Transformer layers. By computing attention weights to determine the degree of focus between the two modalities, these weights are then applied to the value vectors of the data, achieving feature fusion and interaction. Leveraging the correlations between different modal data enhances the overall feature representation capability.
$F = W_\lambda\, \mathrm{MHA}(Q_H, K_L, V_H) + (1 - W_\lambda)\, \mathrm{MHA}(Q_L, K_H, V_L)$
where $Q_H$, $K_H$, and $V_H$ represent the feature embeddings of the HSI, and $Q_L$, $K_L$, and $V_L$ represent the feature embeddings of the LiDAR data. $W_\lambda$ denotes the weight coefficients, which are obtained through operations such as linear transformations applied to the shallow features of the two modalities, as shown in Figure 7. These weights determine the fusion weights for the HSI and LiDAR data and can be learned and adjusted through parameter updates during training. $F$ represents the fused features that enter the classification layer.
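A hedged sketch of the weighted cross-attention fusion in the equation above is given below. It uses PyTorch's built-in nn.MultiheadAttention as a stand-in for the paper's MHA blocks, and the small MLP that predicts $W_\lambda$ from pooled shallow features is an assumption; the sketch only illustrates the fusion pattern, not the authors' exact implementation.

```python
# A sketch of cross-attention fusion with a learnable weight coefficient W_lambda.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=64, heads=8):
        super().__init__()
        self.mha_h = nn.MultiheadAttention(dim, heads, batch_first=True)  # MHA(Q_H, K_L, V_H)
        self.mha_l = nn.MultiheadAttention(dim, heads, batch_first=True)  # MHA(Q_L, K_H, V_L)
        # W_lambda predicted from pooled shallow features of both modalities (assumed design).
        self.weight_net = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                        nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, tok_h, tok_l, shallow_h, shallow_l):
        # tok_h, tok_l: (B, N, dim) token sequences from the two Transformer branches.
        f_h, _ = self.mha_h(query=tok_h, key=tok_l, value=tok_h)   # HSI queries, LiDAR keys
        f_l, _ = self.mha_l(query=tok_l, key=tok_h, value=tok_l)   # LiDAR queries, HSI keys

        w = self.weight_net(torch.cat([shallow_h.mean(dim=1),
                                       shallow_l.mean(dim=1)], dim=-1))  # (B, 1)
        w = w.unsqueeze(-1)                                        # broadcast over tokens
        return w * f_h + (1.0 - w) * f_l                           # F = W_lambda*MHA_H + (1-W_lambda)*MHA_L
```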
The introduction of weight coefficients is due to the unequal importance of hyperspectral and LiDAR data. Hyperspectral imagery occupies the primary features, while LiDAR serves as a supplementary source for spatial information and provides elevation details. After the interaction of information from both modalities, the data proceeds to the classification layer to accomplish the classification task. The following presents the entire algorithmic process of the model (Algorithm 1).
Algorithm 1 
Input: The raw HSI data XH, LiDAR data XL, and ground truth XR
Output: The predicted class of each pixel and the overall classification map.
1: Conduct shallow feature extraction on HSI to reduce dimensionality. LiDAR is then mapped to the same dimension as HSI through two-dimensional convolution.
2: Trim datasets for two modalities, dividing them into training sample pairs, validation sample pairs, and test sample pairs.
3: Apply the GEM module to the hyperspectral data to highlight its spectral information.
4: Apply spatial attention to the LiDAR data to emphasize its spatial information.
5: Integrate and aggregate the information from the two modalities through cross-attention.
6: Fuse the features using adaptive weight allocation coefficients.
7: Classify the fused features using fully connected layers.
8: Use the trained model to classify the test set and generate a classification map.

3. Experimental Results and Analyses

3.1. Experimental Setup and Evaluation Metrics

For the experimental setup, both our method and the comparative methods were executed on the PyTorch 1.10.0 framework under the Ubuntu 20.04 system. The hardware configuration includes an RTX 2080 Ti (11 GB) GPU, a CPU with 12 vCPUs (Intel(R) Xeon(R) Platinum 8255C CPU @ 2.50 GHz), and 40 GB of RAM.
For the network hyperparameters, we set the number of attention heads to 8 and initialized the learning rate to 1.0 × 10⁻⁴, utilizing weight decay for optimization during training. The batch size during the training phase was set to 64, and the model was trained for a total of 150 epochs. We employed the Adam optimizer for network optimization.
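For concreteness, the stated training configuration (Adam, initial learning rate 1.0 × 10⁻⁴ with weight decay, batch size 64, 150 epochs) could be wired up as in the sketch below; `model`, `train_loader`, and the specific weight-decay value are placeholders, not values taken from the paper.

```python
# A sketch of the training loop under the hyperparameters stated above.
import torch
import torch.nn as nn

def train(model, train_loader, device="cuda", epochs=150):
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=5e-3)  # decay value assumed
    model.to(device).train()
    for epoch in range(epochs):
        for hsi_patch, lidar_patch, label in train_loader:   # batches of size 64
            hsi_patch, lidar_patch, label = (t.to(device) for t in (hsi_patch, lidar_patch, label))
            optimizer.zero_grad()
            logits = model(hsi_patch, lidar_patch)            # dual-branch forward pass
            loss = criterion(logits, label)
            loss.backward()
            optimizer.step()
```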
To assess the classification performance of the proposed framework and other existing frameworks, three widely used quantitative analysis metrics were employed: Overall Accuracy (OA), Average Accuracy (AA), and Kappa coefficient (Kappa).
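The three metrics can be computed from the confusion matrix of the test pixels as in the following sketch; `y_true` and `y_pred` are assumed to be integer class labels.

```python
# A sketch of OA, AA, and Kappa computed from the confusion matrix.
import numpy as np

def classification_metrics(y_true, y_pred, num_classes):
    cm = np.zeros((num_classes, num_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1

    total = cm.sum()
    oa = np.trace(cm) / total                                # Overall Accuracy
    per_class = np.diag(cm) / np.maximum(cm.sum(axis=1), 1)  # per-class recall
    aa = per_class.mean()                                    # Average Accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / total**2  # expected agreement by chance
    kappa = (oa - pe) / (1 - pe)                             # Kappa coefficient
    return oa, aa, kappa
```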

3.2. Experimental Comparison

To validate the effectiveness of the proposed method, experiments were conducted by comparing it with five other multimodal data fusion classification methods using the same training and testing datasets: EndNet [39], MFT [33], MGA [40], Coupled CNN [22], and HCT [34]. Table 2, Table 3 and Table 4 show the Overall Accuracy (OA), Average Accuracy (AA), Kappa, and class accuracies obtained using different methods on the Houston2013, MUUFL, and Trento datasets.
EndNet adopts an encoder–decoder network architecture, employing a mandatory fusion functionality to sequentially reconstruct multimodal inputs, thereby enhancing cross-modality neuron activation. MFT changes the Transformer’s CLS by incorporating features from one modality, leveraging additional information sources for better generalization, and learning unique representations in a simplified and stratified feature space. MGA utilizes a triple-branch architecture to learn the spectral features, spatial features of hyperspectral images, and elevation information from LiDAR data, respectively. It strengthens the feature interaction of each branch through multi-level feature fusion. Coupled CNN consists of two convolutional neural networks, which are coupled together through a shared parameter strategy. It employs both feature-level and decision-level fusion methods to fully integrate these heterogeneous features. HCT also adopts a dual-branch architecture similar to MFT, fusing multisource heterogeneous information through a cross-token attention fusion encoder.
During the experiment, we randomly selected 50 samples from each land cover type as training samples, with the remaining samples used for testing. Subsequently, training and testing were carried out across various methods, ultimately yielding the classification results for each method. This process was repeated five times, and the final results were obtained by calculating the average.
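The per-class sampling described above (50 training samples per land cover class, with the remainder used for testing) corresponds to a stratified split such as the sketch below; the `labels` vector and the random-seed handling are assumptions.

```python
# A sketch of the 50-samples-per-class train/test split.
import numpy as np

def split_per_class(labels, n_train=50, seed=0):
    rng = np.random.default_rng(seed)
    train_idx, test_idx = [], []
    for c in np.unique(labels):
        idx = rng.permutation(np.flatnonzero(labels == c))
        train_idx.extend(idx[:n_train])      # 50 training samples per land cover class
        test_idx.extend(idx[n_train:])       # remaining samples used for testing
    return np.array(train_idx), np.array(test_idx)
```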

3.2.1. Setting the Size of Image Patches

The patch size will affect the range of the neighborhood that the network attends to around the central pixel. The setting of this parameter is crucial. To find the optimal patch size for our experiments, we conducted trials using five different sizes. As shown in Figure 8, the classification performance on three datasets indicates that, for the proposed network, the best-performing patch size is 11 × 11. Consequently, all subsequent experiments were conducted based on this patch size.

3.2.2. Experimental Analysis of the Houston2013 Dataset

Table 2 presents the experimental results of the Houston2013 dataset using our method and various comparative methods, including the classification accuracy for each land cover type, the Average Accuracy (AA), as well as the Overall Accuracy (OA), and Kappa coefficient under different classification methods. The results indicate that the final classification accuracy OA increased to 96.55% using the proposed method, and the Kappa coefficient improved to 96.27. Compared to CCNN and HCT, which also employ a dual-branch architecture, the overall accuracy increased by 1.45% and 0.94%, respectively. In fifteen land cover classes, eight classes achieved optimal performance. Figure 9 shows the classification maps of each method, where it is noticeable that Healthy Grass on the right side of the classification map is easily misclassified as Stressed Grass. Due to the dispersed nature of the samples in the Houston2013 dataset and the presence of a lot of background, it is difficult to discern the misclassification in other areas of the classification map. However, in terms of the three performance indicators, the model proposed here outperforms the others.

3.2.3. Experimental Analysis of the MUUFL Dataset

Table 3 presents the experimental results on the MUUFL dataset using our method and various comparative approaches. As shown in the table, the proposed method achieved a final classification accuracy (OA) of 90.51% and a Kappa coefficient of 87.57 on the MUUFL dataset. Among the eleven land cover categories, six categories reached optimal performance. The average accuracy across all categories also reached 91.10%, which is a significant improvement compared to other methods. Figure 10 shows the classification maps for each method, revealing that in the top-right section of the map, despite the presence of numerous region categories, the proposed method still exhibits commendable classification performance, with fewer misclassifications for Mixed Ground Surface. However, the Buildings Shadow category is prone to being misclassified as Mixed Ground Surface. This could be due to the network’s slightly weaker capability to differentiate features between these two land cover types.

3.2.4. Experimental Analysis of the Trento Dataset

Table 4 presents the experimental results on the Trento dataset using our method and various comparative approaches. The Trento dataset is overall very orderly, with a regular distribution of land cover types, hence the overall classification performance is generally good. As shown in the table, the proposed method achieved a final classification accuracy (OA) of 99.46% and a Kappa coefficient of 97.67 on the Trento dataset. Among the six land cover categories, three categories reached optimal performance. The average accuracy across all categories also reached 98.94%. From the classification maps (Figure 11), we can roughly observe that the comparative methods often misclassify at the edges of different land cover types, such as Ground being misclassified as Apple Tree in the central part of the map, which is especially evident in the EndNet method. However, the method proposed in this paper shows slightly reduced misclassification at the edges.
Based on the overall analysis of the three datasets, it is observed that the proposed model demonstrates superior performance in terms of Overall Accuracy (OA), Average Accuracy (AA), and Kappa coefficient. Additionally, it is noted that models with a dual-branch processing approach, such as CCNN and HCT, tend to perform better. The lower classification performance of the comparative models can be attributed to the limited number of training samples chosen, lack of utilization of spatial information, or relatively simple fusion strategies.
On the other hand, our proposed model takes into account neighborhood information at each stage and integrates features from both modalities comprehensively. Therefore, even with scattered sample distributions, this model can better differentiate various land cover categories.
Figure 9, Figure 10 and Figure 11 represent the classification results of each model on the test set. Due to the scattered nature of Houston’s test samples, specific differences are not discernible. However, it can be observed from the MUUFL classification map that the proposed models exhibit better performance at the edges of terrain features.

4. Discussion

To investigate the advantages of multi-modal joint classification and the contributions of different modules to performance, a discussion will be conducted for the following scenarios.

4.1. Impact of Multimodal Data and GEM Modules

To further assess the performance of GEM and the complementary effects between modalities, we conducted comparative experiments using a baseline network that combines CNN with a Transformer encoder. We initially evaluated the classification performance of single-modal data with both a baseline model based on ViT and the currently proposed method. Subsequently, we performed classification experiments using a dual-branch network that fused LiDAR data. Finally, the GEM module was integrated into the dual-branch network for experimentation. The classification performance obtained on three datasets is shown in Table 5.
According to the data presented in Table 5, it is evident that using only LiDAR data for the classification task results in poor performance. This is understandable, since LiDAR data only record the elevation of objects, making it challenging to differentiate between object types based solely on elevation and edge features. This is particularly evident for the Houston2013 and MUUFL datasets, where the overall accuracies are 60.34% and 45.35%, respectively. In contrast, for the Trento dataset, with its simpler distribution of objects and concentrated samples, the classification task can be accomplished reasonably well using LiDAR data alone, achieving an overall accuracy of 81.67%.
When classification using only hyperspectral images is compared with the proposed network that integrates multimodal features, the overall accuracy is higher by 0.07% and 0.96% for the Houston2013 and MUUFL datasets, respectively. Therefore, although hyperspectral images, with their rich spectral information, can distinguish object categories on their own, collaborative classification using multimodal remote sensing data yields a further improvement, especially in complex scenarios.
Furthermore, by integrating GEM to emphasize spatial relationships within neighborhoods, the proposed network framework’s classification performance is further enhanced. The accuracy on the Houston2013, MUUFL, and Trento datasets reaches 96.55%, 90.51%, and 99.46%, respectively. Simultaneously, both AA (Average Accuracy) and Kappa values also experience significant improvements, confirming the effectiveness of the GEM module.

4.2. Impact of Fusion Weight Coefficients

To comprehensively assess the performance of the feature weighting module, comparative experiments were conducted by varying the fusion coefficients. Five sets of manually set hyperspectral weighting coefficients (W) were established as 0.6, 0.7, 0.8, 0.9, and 1 (using only hyperspectral image data). Additionally, a classification task was performed using a learnable fusion coefficient weighting scheme. The detailed classification performance results for each set are provided in Table 6.
Observing the results, it can be noted that with the increase in the weight of the hyperspectral branch, the performance initially shows an upward trend across the three datasets. However, when only hyperspectral images are used, i.e., in the case of single-modal classification, the performance slightly decreases. This phenomenon is more pronounced for the Houston2013 and MUUFL datasets, while the classification performance for the Trento dataset shows less fluctuation. This is because hyperspectral images, due to their rich spectral information, dominate in the classification task, achieving satisfactory accuracy levels. When hyperspectral imaging is combined with LiDAR data for classification, the spatial and elevation information provided by LiDAR complements hyperspectral images, leading to a slight improvement in classification performance. The use of weight coefficients based on shallow features for feature fusion results in optimal performance. Therefore, employing learnable weight coefficients enhances the rationality of feature fusion.

5. Conclusions

In this paper, for the joint classification task of hyperspectral imaging (HSI) and Light Detection and Ranging (LiDAR) data, we propose a dual-branch transformer feature fusion extraction network to extract and fuse features from both modalities. This network combines the feature learning methods of Transformers with Convolutional Neural Networks (CNN), fully leveraging their respective strengths.
For data from different modalities, we propose a shallow feature mapping mechanism that reduces the spectral dimension of HSI and allows for better expression of spatial features in LiDAR data.
For HSI, we introduce an improved self-attention method called GEM, which uses the aggregative abilities of convolutional networks to address the loss of positional information caused by Transformer serialization. For LiDAR, we employ a spatial attention mechanism to enhance the expression of its spatial information.
Finally, in contrast to traditional linear fusion methods, we employ cross-attention fusion strategies and dynamic fusion strategies to enhance the complementarity of information from the two modalities. Experimental validation on three multimodal remote sensing datasets confirms the feasibility and effectiveness of the proposed model.

Author Contributions

Conceptualization, Q.W. and B.Z.; methodology, Q.W. and B.Z.; software and experiments, Q.W. and B.Z.; validation, Q.W. and B.Z.; writing—original draft preparation, Q.W. and B.Z.; writing—review and editing, Q.W., B.Z., J.Z. and Y.W.; funding acquisition, Q.W., J.Z., J.X. and Y.W. All authors contributed to the results analysis and reviewed the manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work is partly supported by National Natural Science Foundation of China (62271171) and Heilongjiang Province Outstanding Young Teacher Basic Research Support Program (YQJH2023262).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Zhu, L.; Chen, Y.; Ghamisi, P.; Benediktsson, J.A. Generative adversarial networks for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2018, 56, 5046–5063. [Google Scholar] [CrossRef]
  2. Chen, Y.; Lu, X. A deep hashing technique for remote sensing image-sound retrieval. Remote Sens. 2019, 12, 84. [Google Scholar] [CrossRef]
  3. Ghamisi, P.; Yokoya, N.; Li, J.; Liao, W.; Liu, S.; Plaza, J.; Rasti, B.; Plaza, A. Advances in hyperspectral image and signal processing: A comprehensive overview of the state of the art. IEEE Geosci. Remote Sens. Mag. 2017, 5, 37–78. [Google Scholar] [CrossRef]
  4. Hong, D.; He, W.; Yokoya, N.; Yao, J. Interpretable hyperspectral artificial intelligence: When nonconvex modeling meets hyperspectral remote sensing. IEEE Geosci. Remote Sens. Mag. 2021, 9, 52–87. [Google Scholar] [CrossRef]
  5. Wang, Z.; Menenti, M. Challenges and opportunities in Lidar remote sensing. Front. Remote Sens. 2021, 2, 641723. [Google Scholar] [CrossRef]
  6. Dalponte, M.; Ørka, H.O.; Gobakken, T.; Gianelle, D.; Næsset, E. Tree species classification in boreal forests with hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2012, 51, 2632–2645. [Google Scholar] [CrossRef]
  7. Matsuki, T.; Yokoya, N.; Iwasaki, A. Hyperspectral tree species classification of Japanese complex mixed forest with the aid of LiDAR data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 2177–2187. [Google Scholar] [CrossRef]
  8. Li, W.; Gao, Y.; Zhang, M.; Tao, R.; Du, Q. Asymmetric feature fusion network for hyperspectral and SAR image classification. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 8057–8070. [Google Scholar] [CrossRef]
  9. Heiden, U.; Heldens, W.; Roessner, S.; Segl, K.; Esch, T.; Mueller, A. Urban structure type characterization using hyperspectral remote sensing and height information. Landsc. Urban Plan. 2012, 105, 361–375. [Google Scholar] [CrossRef]
  10. Ghamisi, P.; Höfle, B.; Zhu, X.X. Hyperspectral and LiDAR data fusion using extinction profiles and deep convolutional neural network. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 10, 3011–3024. [Google Scholar] [CrossRef]
  11. Cao, Q.; Zhong, Y.; Ma, A.; Zhang, L. Urban land use/land cover classification based on feature fusion fusing hyperspectral image and LiDAR data. In Proceedings of the IGARSS 2018—2018 IEEE International Geoscience and Remote Sensing Symposium, Valencia, Spain, 22–27 July 2018; IEEE: New York, NY, USA, 2018; pp. 8869–8872. [Google Scholar]
  12. Pedergnana, M.; Marpu, P.R.; Dalla Mura, M.; Benediktsson, J.A.; Bruzzone, L. Classification of remote sensing optical and LiDAR data using extended attribute profiles. IEEE J. Sel. Top. Signal Process. 2012, 6, 856–865. [Google Scholar] [CrossRef]
  13. Ghamisi, P.; Souza, R.; Benediktsson, J.A.; Zhu, X.X.; Rittner, L.; Lotufo, R.A. Extinction profiles for the classification of remote sensing data. IEEE Trans. Geosci. Remote Sens. 2016, 54, 5631–5645. [Google Scholar] [CrossRef]
  14. Liao, W.; Pižurica, A.; Bellens, R.; Gautama, S.; Philips, W. Generalized graph-based fusion of hyperspectral and LiDAR data using morphological features. IEEE Geosci. Remote Sens. Lett. 2014, 12, 552–556. [Google Scholar] [CrossRef]
  15. Rasti, B.; Ghamisi, P.; Gloaguen, R. Hyperspectral and LiDAR fusion using extinction profiles and total variation component analysis. IEEE Trans. Geosci. Remote Sens. 2017, 55, 3997–4007. [Google Scholar] [CrossRef]
  16. Jia, S.; Zhan, Z.; Zhang, M.; Xu, M.; Huang, Q.; Zhou, J.; Jia, X. Multiple feature-based superpixel-level decision fusion for hyperspectral and LiDAR data classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 1437–1452. [Google Scholar] [CrossRef]
  17. Deng, X.; Dragotti, P.L. Deep convolutional neural network for multi-modal image restoration and fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 43, 3333–3348. [Google Scholar] [CrossRef]
  18. Gómez-Chova, L.; Tuia, D.; Moser, G.; Camps-Valls, G. Multimodal classification of remote sensing images: A review and future directions. Proc. IEEE 2015, 103, 1560–1584. [Google Scholar] [CrossRef]
  19. Rao, D.; De Deuge, M.; Nourani-Vatani, N.; Williams, S.B.; Pizarro, O. Multimodal learning and inference from visual and remotely sensed data. Int. J. Robot. Res. 2017, 36, 24–43. [Google Scholar] [CrossRef]
  20. Chen, Y.; Li, C.; Ghamisi, P.; Jia, X.; Gu, Y. Deep fusion of remote sensing data for accurate classification. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1253–1257. [Google Scholar] [CrossRef]
  21. Xu, X.; Li, W.; Ran, Q.; Du, Q.; Gao, L.; Zhang, B. Multisource remote sensing data classification based on convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2017, 56, 937–949. [Google Scholar] [CrossRef]
  22. Hang, R.; Li, Z.; Ghamisi, P.; Hong, D.; Xia, G.; Liu, Q. Classification of hyperspectral and LiDAR data using coupled CNNs. IEEE Trans. Geosci. Remote Sens. 2020, 58, 4939–4950. [Google Scholar] [CrossRef]
  23. He, X.; Chen, Y.; Lin, Z. Spatial-spectral Transformer for Hyperspectral Image Classification. Remote Sens. 2021, 13, 498. [Google Scholar] [CrossRef]
  24. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7132–7141. [Google Scholar]
  25. Xu, Q.; Liang, Y.; Wang, D.; Luo, J.B. Hyperspectral image classification based on SE-Res2Net and multi-scale spatial-spectral fusion attention mechanism. J. Comput.-Aided Des. Comput. Graph 2021, 33, 1726–1734. [Google Scholar] [CrossRef]
  26. Roy, S.K.; Manna, S.; Song, T.; Bruzzone, L. Attention-based adaptive spectral–spatial kernel ResNet for hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2020, 59, 7831–7843. [Google Scholar] [CrossRef]
  27. Wang, X.; Girshick, R.; Gupta, A.; Kaiming, H. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 7794–7803. [Google Scholar]
  28. Haut, J.M.; Paoletti, M.E.; Plaza, J.; Plaza, A.; Li, J. Visual attention-driven hyperspectral image classification. IEEE Trans. Geosci. Remote Sens. 2019, 57, 8065–8080. [Google Scholar] [CrossRef]
  29. Khan, S.; Naseer, M.; Hayat, M.; Zamir, S.W.; Khan, F.S.; Shah, M. Transformers in vision: A survey. ACM Comput. Surv. (CSUR) 2022, 54, 1–41. [Google Scholar] [CrossRef]
  30. Du, X.; Zheng, X.; Lu, X.; Doudkin, A. Multisource remote sensing data classification with graph fusion network. IEEE Trans. Geosci. Remote Sens. 2021, 59, 10062–10072. [Google Scholar] [CrossRef]
  31. Qing, Y.; Liu, W.; Feng, L.; Gao, W. Improved transformer net for hyperspectral image classification. Remote Sens. 2021, 13, 2216. [Google Scholar] [CrossRef]
  32. Hong, D.; Han, Z.; Yao, J.; Gao, L.; Zhang, B.; Plaza, A.; Chanussot, J. SpectralFormer: Rethinking hyperspectral image classification with transformers. IEEE Trans. Geosci. Remote Sens. 2021, 60, 5518615. [Google Scholar] [CrossRef]
  33. Roy, S.K.; Deria, A.; Hong, D.; Rasti, B.; Plaza, A.; Chanussot, J. Multimodal Fusion Transformer for Remote Sensing Image Classification. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1560–1578. [Google Scholar] [CrossRef]
  34. Zhao, G.; Ye, Q.; Sun, L.; Wu, Z.; Pan, C.; Jeon, B. Joint classification of hyperspectral and lidar data using a hierarchical cnn and transformer. IEEE Trans. Geosci. Remote Sens. 2022, 61, 5500716. [Google Scholar] [CrossRef]
  35. Zhang, Y.; Peng, Y.; Tu, B.; Liu, Y. Local Information Interaction Transformer for Hyperspectral and LiDAR Data Classification. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2022, 16, 1130–1143. [Google Scholar] [CrossRef]
  36. Debes, C.; Merentitis, A.; Heremans, R.; Hahn, J.; Frangiadakis, N.; Kasteren, T.; Liao, W.; Bellens, R.; Pižurica, A.; Gautama, S.; et al. Hyperspectral and LiDAR data fusion: Outcome of the 2013 GRSS data fusion contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 2405–2418. [Google Scholar] [CrossRef]
  37. Gader, P.; Zare, A.; Close, R.; Aitken, J.; Tuell, G. Muufl Gulfport Hyperspectral and Lidar Airborne Data Set; Tech. Report 2013, REP-2013-570; University of Florid: Gainesville, FL, USA, 2013. [Google Scholar]
  38. Du, X.; Zare, A. Technical Report: Scene Label Ground Truth Map for MUUFL Gulfport Data Set; University of Florida: Gainesville, FL, USA, 2017. [Google Scholar]
  39. Hong, D.; Gao, L.; Hang, R.; Zhang, B.; Chanussot, J. Deep encoder–decoder networks for classification of hyperspectral and LiDAR data. IEEE Geosci. Remote Sens. Lett. 2020, 19, 5500205. [Google Scholar] [CrossRef]
  40. Zhang, T.; Xiao, S.; Dong, W.; Qu, J.; Yang, Y. A mutual guidance attention-based multi-level fusion network for hyperspectral and LiDAR classification. IEEE Geosci. Remote Sens. Lett. 2021, 19, 5509105. [Google Scholar] [CrossRef]
Figure 1. Houston2013 dataset. (a) Hyperspectral image. (b) LiDAR image. (c) Ground truth land cover map.
Figure 2. MUUFL dataset. (a) Hyperspectral image. (b) LiDAR image. (c) Ground truth land cover map.
Figure 3. Trento dataset. (a) Hyperspectral image. (b) LiDAR image. (c) Ground truth land cover map.
Figure 4. The proposed dual-branch Transformer feature fusion network.
Figure 5. Improvements and differences between enhanced GEM and self-attention: (a) Self-attention module computation flow, (b) Calculation process of the Group Embedding Module incorporating neighborhood information.
Figure 6. Calculation process of spatial attention. Down-sampling the channels helps to capture the spatial distribution patterns of geographical features more effectively.
Figure 7. Fusion weight coefficients based on shallow features are used to allocate feature weights for the dual branches.
Figure 8. The impact of different spatial patch sizes as network inputs on OA and AA across three datasets.
Figure 9. Classification maps by different methods on the Houston2013 dataset: (a) EndNet, (b) MFT, (c) MGA, (d) CCNN, (e) HCT, (f) our proposed method.
Figure 10. Classification maps by different methods on the MUUFL dataset: (a) EndNet, (b) MFT, (c) MGA, (d) CCNN, (e) HCT, (f) our proposed method.
Figure 11. Classification maps by different methods on the Trento dataset: (a) EndNet, (b) MFT, (c) MGA, (d) CCNN, (e) HCT, (f) our proposed method.
Table 1. Land cover categories of the three datasets and the number of training and test samples.
No. | Houston2013 Class | Train | Test | MUUFL Class | Train | Test | Trento Class | Train | Test
1 | Healthy Grass | 50 | 1201 | Trees | 50 | 23,196 | Apple Tree | 50 | 3984
2 | Stressed Grass | 50 | 1204 | Mostly Grass | 50 | 4220 | Buildings | 50 | 2853
3 | Synthetic Grass | 50 | 647 | Mixed Ground Surface | 50 | 6832 | Ground | 50 | 429
4 | Trees | 50 | 1194 | Dirt and Sand | 50 | 1776 | Wood | 50 | 9073
5 | Soil | 50 | 1192 | Road | 50 | 6637 | Vineyard | 50 | 10,451
6 | Water | 50 | 275 | Water | 50 | 416 | Roads | 50 | 3124
7 | Residential | 50 | 1218 | Buildings Shadow | 50 | 2183 | – | – | –
8 | Commercial | 50 | 1194 | Buildings | 50 | 6190 | – | – | –
9 | Road | 50 | 1202 | Sidewalk | 50 | 1335 | – | – | –
10 | Highway | 50 | 1177 | Yellow Curb | 50 | 133 | – | – | –
11 | Railway | 50 | 1185 | Cloth Panels | 50 | 219 | – | – | –
12 | Parking Lot 1 | 50 | 1183 | – | – | – | – | – | –
13 | Parking Lot 2 | 50 | 419 | – | – | – | – | – | –
14 | Tennis Court | 50 | 378 | – | – | – | – | – | –
15 | Running Track | 50 | 610 | – | – | – | – | – | –
Total | | 750 | 14,279 | | 550 | 53,137 | | 300 | 29,914
Table 2. Classification results of different methods for land cover classes in the Houston2013 dataset (best results are bolded).
No. | Class | EndNet | MFT | MGA | CCNN | HCT | Proposed
1 | Healthy Grass | 96.84 | 86.09 | 97.58 | 92.64 | 93.92 | 97.33
2 | Stressed Grass | 95.18 | 91.36 | 85.79 | 95.87 | 94.36 | 98.92
3 | Synthetic Grass | 99.85 | 99.84 | 100.00 | 99.39 | 98.57 | 99.84
4 | Trees | 94.55 | 94.13 | 99.83 | 96.58 | 98.26 | 94.30
5 | Soil | 100.00 | 95.97 | 100.00 | 99.30 | 99.30 | 100.00
6 | Water | 98.91 | 88.36 | 97.81 | 90.36 | 90.25 | 97.09
7 | Residential | 95.48 | 94.90 | 90.80 | 94.86 | 95.78 | 96.14
8 | Commercial | 97.06 | 91.12 | 87.77 | 92.68 | 94.27 | 97.48
9 | Road | 91.84 | 94.09 | 79.28 | 90.45 | 91.76 | 92.67
10 | Highway | 76.46 | 85.98 | 91.07 | 95.17 | 94.37 | 95.41
11 | Railway | 95.52 | 89.28 | 96.96 | 98.36 | 97.23 | 98.56
12 | Parking Lot 1 | 81.48 | 95.94 | 88.33 | 92.01 | 92.04 | 91.71
13 | Parking Lot 2 | 100.00 | 97.61 | 95.70 | 91.86 | 98.67 | 91.40
14 | Tennis Court | 100.00 | 100.00 | 100.00 | 99.68 | 99.74 | 100.00
15 | Running Track | 100.00 | 99.50 | 100.00 | 98.26 | 99.89 | 100.00
– | OA (%) | 93.68 | 92.89 | 92.91 | 95.10 | 95.61 | 96.55
– | AA (%) | 94.88 | 93.61 | 94.06 | 95.16 | 95.89 | 96.72
– | K × 100 | 93.16 | 92.31 | 92.33 | 95.11 | 95.24 | 96.27
Table 3. Classification results of different methods for land cover classes in the MUUFL dataset (best results are bolded).
No. | Class | EndNet | MFT | MGA | CCNN | HCT | Proposed
1 | Trees | 91.05 | 87.65 | 93.47 | 87.15 | 89.63 | 93.59
2 | Mostly Grass | 89.90 | 72.96 | 74.79 | 86.58 | 87.20 | 79.14
3 | Mixed Ground Surface | 63.18 | 68.41 | 77.38 | 78.96 | 79.96 | 83.79
4 | Dirt and Sand | 97.35 | 92.00 | 97.07 | 93.41 | 94.71 | 97.74
5 | Road | 88.53 | 86.40 | 88.83 | 89.76 | 82.15 | 93.29
6 | Water | 100.00 | 100.00 | 100.00 | 99.05 | 99.65 | 98.55
7 | Buildings Shadow | 89.69 | 91.43 | 88.68 | 90.28 | 89.12 | 88.13
8 | Buildings | 89.70 | 89.11 | 90.90 | 90.21 | 90.35 | 90.64
9 | Sidewalk | 76.32 | 76.77 | 70.03 | 78.96 | 80.27 | 82.69
10 | Yellow Curb | 96.24 | 83.45 | 93.98 | 94.18 | 94.55 | 95.48
11 | Cloth Panels | 99.08 | 99.54 | 99.54 | 98.46 | 97.98 | 99.08
– | OA (%) | 86.81 | 84.19 | 88.45 | 87.02 | 87.38 | 90.51
– | AA (%) | 89.19 | 86.16 | 88.61 | 89.72 | 89.59 | 91.10
– | K × 100 | 82.77 | 79.64 | 84.91 | 84.65 | 85.67 | 87.57
Table 4. Classification results of different methods for land cover classes in the Trento dataset (best results are bolded).
No. | Class | EndNet | MFT | MGA | CCNN | HCT | Proposed
1 | Apple Tree | 88.56 | 91.26 | 97.69 | 99.27 | 98.26 | 99.10
2 | Buildings | 87.90 | 96.59 | 98.54 | 96.65 | 97.61 | 98.95
3 | Ground | 97.18 | 95.28 | 100.00 | 98.26 | 98.34 | 98.23
4 | Wood | 98.35 | 97.84 | 98.86 | 100.00 | 100.00 | 100.00
5 | Vineyard | 92.53 | 98.65 | 99.24 | 99.86 | 99.15 | 99.96
6 | Roads | 86.89 | 90.96 | 92.74 | 96.52 | 97.69 | 97.40
– | OA (%) | 92.80 | 96.37 | 98.18 | 98.42 | 99.14 | 99.46
– | AA (%) | 90.23 | 94.43 | 96.44 | 96.56 | 98.51 | 98.94
– | K × 100 | 90.53 | 92.53 | 93.56 | 94.28 | 96.47 | 97.67
Table 5. Classification performance of the three datasets under different cases.
Cases | Houston2013 (OA / AA / Kappa) | MUUFL (OA / AA / Kappa) | Trento (OA / AA / Kappa)
Only HSI | 95.06 / 94.56 / 94.10 | 87.65 / 87.73 / 85.12 | 96.82 / 97.04 / 96.28
Only LiDAR | 60.34 / 62.59 / 60.52 | 45.35 / 47.29 / 45.63 | 81.67 / 80.36 / 80.94
HSI + LiDAR (No GEM) | 95.13 / 95.56 / 95.42 | 88.61 / 88.29 / 85.92 | 96.61 / 97.16 / 96.53
HSI + LiDAR (GEM) | 96.55 / 96.72 / 96.27 | 90.51 / 91.10 / 87.57 | 99.46 / 98.94 / 97.67
Table 6. Classification performance under different weighting coefficients.
W | Houston2013 (OA / AA / Kappa) | MUUFL (OA / AA / Kappa) | Trento (OA / AA / Kappa)
0.6 | 89.72 / 90.43 / 90.25 | 79.46 / 80.41 / 80.27 | 96.24 / 96.57 / 94.34
0.7 | 91.96 / 91.68 / 91.26 | 83.24 / 83.67 / 83.26 | 98.63 / 98.82 / 96.24
0.8 | 94.43 / 93.87 / 93.96 | 86.95 / 87.43 / 86.87 | 98.96 / 99.02 / 96.85
0.9 | 96.41 / 95.24 / 95.87 | 87.42 / 88.21 / 87.15 | 99.27 / 98.56 / 97.46
1 | 95.06 / 94.56 / 94.10 | 87.65 / 87.73 / 85.12 | 96.82 / 97.04 / 96.28
W_λ (learnable) | 96.55 / 96.72 / 96.27 | 90.51 / 91.10 / 87.57 | 99.46 / 98.94 / 97.67