Notes: Few-shot Learning for Point Cloud Semantic Segmentation
(Few-shot learning for tackling open-set generalization)
-
Applications of point cloud semantic segmentation: scene understanding, assigning a specific semantic label to every point in the point cloud (e.g., autonomous driving).
-
Why few-shot learning matters: it removes the heavy dependence on large amounts of labeled data and reduces annotation cost; it also improves generalization, allowing recognition of previously unseen categories.
-
Paper 1: Few-shot 3D Point Cloud Semantic Segmentation
-
Problems raised:
- Existing methods rely on large amounts of labeled training data, which is time-consuming and expensive to collect.
- They follow the closed-set assumption (training and test sets are drawn from the same label space), so they generalize poorly to unseen classes.
-
Solution:
-
A multi-prototype transductive inference method.
- Transductive inference: predicts specific test samples by directly observing specific labeled samples, i.e., reasoning from the particular to the particular, which suits few-shot settings. It differs from inductive inference, which first learns a rule from the training samples and then applies that rule to the test samples.
-
architecture:
-
embedding network:
- Three desired properties: (1) local geometric features; (2) global semantic features; (3) adaptation to different few-shot tasks.
- DGCNN: the backbone of the feature extractor (local features).
- SAN (self-attention network): generates semantic features (global).
- MLP: adapts the embedding to different few-shot tasks. (A sketch of how the three parts fit together follows below.)
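A minimal sketch of how the three components could be wired together, assuming a single simplified EdgeConv block in place of the full DGCNN backbone and a generic multi-head self-attention layer for the SAN; all layer widths, k, and module names here are illustrative, not the paper's exact configuration:

```python
# Sketch only: DGCNN-style EdgeConv for local geometry, self-attention for global
# context, and an MLP head; widths, k and names are assumptions, not the paper's.
import torch
import torch.nn as nn


def knn_indices(x, k):
    """x: (B, N, C) features; returns indices of the k nearest neighbours, (B, N, k)."""
    dist = torch.cdist(x, x)                                  # pairwise distances
    return dist.topk(k + 1, largest=False).indices[..., 1:]   # drop the point itself


class EdgeConvBlock(nn.Module):
    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU())

    def forward(self, x):                                     # x: (B, N, C)
        idx = knn_indices(x, self.k)                          # (B, N, k)
        nbrs = torch.gather(
            x.unsqueeze(1).expand(-1, x.size(1), -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, x.size(-1)))         # (B, N, k, C)
        edge = torch.cat([x.unsqueeze(2).expand_as(nbrs), nbrs - x.unsqueeze(2)], -1)
        return self.mlp(edge).max(dim=2).values               # max over neighbours


class EmbeddingNet(nn.Module):
    def __init__(self, k=20, feat_dim=64, emb_dim=128):
        super().__init__()
        self.local = EdgeConvBlock(3, feat_dim, k)            # 1. local geometric features
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.head = nn.Sequential(nn.Linear(2 * feat_dim, emb_dim), nn.ReLU(),
                                  nn.Linear(emb_dim, emb_dim))  # 3. task-adaptive MLP

    def forward(self, xyz):                                   # xyz: (B, N, 3)
        local = self.local(xyz)
        glob, _ = self.attn(local, local, local)              # 2. global semantic context
        return self.head(torch.cat([local, glob], dim=-1))    # per-point embeddings


print(EmbeddingNet()(torch.rand(2, 1024, 3)).shape)           # torch.Size([2, 1024, 128])
```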
-
multi-prototype generation:
- It samples a subset of n seed points from the support points of each class using farthest point sampling in the embedding space (for every class in the support set, farthest point sampling extracts n seed points).
- The farthest points represent different perspectives of one class (farthest point sampling guarantees sufficiently broad coverage of the class), as sketched below.
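A minimal sketch of this step, assuming NumPy arrays of per-point embeddings for a single class; the function names and the choice of averaging the points around each seed are illustrative:

```python
# Sketch only: farthest point sampling in the embedding space picks n seed points,
# each support point joins its nearest seed, and each cluster mean is one prototype.
import numpy as np


def farthest_point_sampling(feats, n_seeds):
    """feats: (M, D) embeddings of the support points of one class."""
    seeds = [np.random.randint(len(feats))]                  # start from a random point
    dist = np.linalg.norm(feats - feats[seeds[0]], axis=1)
    for _ in range(n_seeds - 1):
        seeds.append(int(dist.argmax()))                     # farthest from chosen seeds
        dist = np.minimum(dist, np.linalg.norm(feats - feats[seeds[-1]], axis=1))
    return np.array(seeds)


def multi_prototypes(feats, n_seeds=3):
    seed_idx = farthest_point_sampling(feats, n_seeds)
    # assign every support point to its nearest seed in the embedding space
    d = np.linalg.norm(feats[:, None] - feats[seed_idx][None], axis=-1)   # (M, n)
    assign = d.argmin(axis=1)
    # each prototype is the mean embedding of the points assigned to that seed
    return np.stack([feats[assign == j].mean(axis=0) for j in range(n_seeds)])


print(multi_prototypes(np.random.rand(200, 128), n_seeds=3).shape)        # (3, 128)
```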
-
transductive inference:
-
use transductive label propagation: construct a graph over the labeled multi-prototypes and the unlabeled query points (a k-NN graph connecting related points).
-
label propagation: propagate the prototype labels to the unlabeled query points along the graph edges.
-
cross-entropy loss function:
- Compute the cross-entropy loss between the propagated query predictions and the ground-truth labels (a sketch of the graph construction, propagation, and loss follows below).
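A minimal sketch of the whole transductive step under assumed values for k, alpha, and the Gaussian bandwidth sigma: build a k-NN affinity graph over the prototypes and query points, solve the standard closed-form propagation Z = (I - alpha*S)^(-1) Y, and apply a cross-entropy loss to the propagated query scores:

```python
# Sketch only: k-NN affinity graph over labelled prototypes and unlabelled queries,
# closed-form label propagation, and a cross-entropy loss on the query predictions.
# k, alpha and sigma are assumed hyper-parameters, not the paper's exact values.
import torch
import torch.nn.functional as F


def label_propagation(protos, proto_labels, queries, n_way, k=10, alpha=0.99, sigma=1.0):
    feats = torch.cat([protos, queries], dim=0)              # (P + Q, D)
    w = torch.exp(-torch.cdist(feats, feats) ** 2 / (2 * sigma ** 2))  # Gaussian affinity
    keep = torch.zeros_like(w).scatter_(1, w.topk(k + 1, dim=1).indices, 1.0)
    w = (w * keep + (w * keep).T) / 2                        # symmetric k-NN graph
    w.fill_diagonal_(0)                                      # no self-loops
    d = w.sum(dim=1)
    s = w / torch.sqrt(d[:, None] * d[None, :])              # normalised affinity S
    y = torch.zeros(len(feats), n_way)
    y[torch.arange(len(protos)), proto_labels] = 1.0         # one-hot prototype labels
    z = torch.linalg.solve(torch.eye(len(feats)) - alpha * s, y)   # (I - aS)^-1 Y
    return z[len(protos):]                                   # propagated query scores


protos = torch.rand(6, 128)                                  # 2 classes x 3 prototypes
proto_labels = torch.tensor([0, 0, 0, 1, 1, 1])
queries, query_gt = torch.rand(100, 128), torch.randint(0, 2, (100,))
scores = label_propagation(protos, proto_labels, queries, n_way=2)
print(F.cross_entropy(scores, query_gt).item())              # loss vs. ground truth
```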
-
Paper 2: What Makes for Effective Few-shot Point Cloud Classification?
-
Problems raised:
- Existing methods require extensive data collection and retraining when dealing with novel classes never seen before.
- Existing 2D few-shot methods are hard to transfer directly to the 3D domain.
- Point clouds are more complex and have an unordered structure in Euclidean space.
-
3D point cloud classification
- Projection-based: first converts the irregular points into a regular representation such as voxels or pillars, then applies a typical 2D or 3D CNN to extract features.
- Point-based: learns point-wise features with a multilayer perceptron (MLP) and aggregates a global feature with a symmetric function implemented by a max-pooling layer, as sketched below.
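A minimal PointNet-style sketch of the point-based idea; the layer sizes and the 40-class head are illustrative assumptions:

```python
# Sketch only: a shared MLP extracts point-wise features, and a max-pooling
# (symmetric) layer aggregates a global feature invariant to point ordering.
import torch
import torch.nn as nn


class PointClassifier(nn.Module):
    def __init__(self, n_classes=40):
        super().__init__()
        self.point_mlp = nn.Sequential(               # shared across all points
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU())
        self.cls_head = nn.Linear(256, n_classes)

    def forward(self, xyz):                            # xyz: (B, N, 3)
        feat = self.point_mlp(xyz)                     # (B, N, 256) point-wise features
        global_feat = feat.max(dim=1).values           # symmetric max-pool over points
        return self.cls_head(global_feat)              # (B, n_classes)


print(PointClassifier()(torch.rand(4, 1024, 3)).shape)  # torch.Size([4, 40])
```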
-
2D few-shot learning
- Metric-based: focuses on learning an embedding space in which similar sample pairs lie closer together, or on designing a metric function to compare the feature similarity of samples.
- Optimization-based: regards meta-learning as an optimization process (a MAML-style sketch follows below).
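A minimal MAML-style sketch of the optimization-based view, using a plain linear classifier and made-up episode sizes purely for illustration: the inner loop adapts a copy of the parameters on the support set, and the outer loop updates the shared initialization from the query loss:

```python
# Sketch only: MAML-style inner adaptation on the support set and outer meta-update
# from the query loss; the linear model and all hyper-parameters are assumptions.
import torch
import torch.nn.functional as F


def inner_adapt(params, support_x, support_y, lr=0.01, steps=3):
    for _ in range(steps):
        loss = F.cross_entropy(support_x @ params, support_y)
        grad, = torch.autograd.grad(loss, params, create_graph=True)
        params = params - lr * grad                     # differentiable inner update
    return params


meta_params = torch.zeros(128, 5, requires_grad=True)   # 5-way linear classifier
meta_opt = torch.optim.SGD([meta_params], lr=0.001)

for episode in range(10):                               # each episode is one few-shot task
    sx, sy = torch.randn(25, 128), torch.arange(5).repeat(5)     # 5-way 5-shot support
    qx, qy = torch.randn(50, 128), torch.arange(5).repeat(10)    # query set
    adapted = inner_adapt(meta_params, sx, sy)
    meta_opt.zero_grad()
    F.cross_entropy(qx @ adapted, qy).backward()        # outer-loop meta-update
    meta_opt.step()
```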
-
State-of-the-art 2D FSL on Point Cloud
- The paper compares metric-based and optimization-based methods, and concludes that metric-based methods outperform optimization-based methods in the point cloud scenario.
-
Influence of Backbone Architecture on FSL
- The study selects three types of state-of-the-art 3D point-based networks: pointwise-based, convolution-based, and graph-based (DGCNN). The graph-based network DGCNN achieves higher classification accuracy than the other networks on the two benchmark datasets.
-
Cross Instance Adaption (CIA) module
-
CIA can be inserted into existing backbones and learning frameworks to learn more discriminative representations for the support set and query set.
The embedding module takes the support set and the query set as input and extracts their features to obtain the prototypes; the CIA module then updates the support and query features; the Euclidean distance between each class prototype and the query examples is computed in the feature space, which finally yields the loss function to be optimized (see the sketch below).
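A minimal sketch of that pipeline with pre-computed embeddings and an optional placeholder for the CIA refinement; the embedding size and the 5-way 5-shot episode layout are assumptions:

```python
# Sketch only: class prototypes from the support embeddings, an optional CIA hook,
# and query classification by negative Euclidean distance to each prototype.
import torch
import torch.nn.functional as F


def episode_loss(support_emb, support_y, query_emb, query_y, n_way, cia=None):
    # class prototypes: mean embedding of each class's support samples
    protos = torch.stack([support_emb[support_y == c].mean(dim=0) for c in range(n_way)])
    if cia is not None:                               # placeholder for the CIA update
        protos, query_emb = cia(protos, query_emb)
    logits = -torch.cdist(query_emb, protos)          # closer prototype -> higher score
    return F.cross_entropy(logits, query_y)


support_emb = torch.randn(25, 256)                    # assumed 5-way 5-shot embeddings
support_y = torch.arange(5).repeat_interleave(5)
query_emb, query_y = torch.randn(50, 256), torch.arange(5).repeat_interleave(10)
print(episode_loss(support_emb, support_y, query_emb, query_y, n_way=5).item())
```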
-
Self-Channel Interaction (SCI) Module: addresses the issue of subtle inter-class differences.
- Two linear projections φ and γ first map the embedding to the q and k vectors; a bilinear transformation then produces a channel-wise relation score map R; a softmax over R gives the weight matrix R'; the updated vector v is the weighted sum of R' and the original feature vector. A larger v_i means that channel carries more information, which helps distinguish the subtle differences between classes (see the sketch below).
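A minimal sketch of this step as described above, assuming the instance feature is arranged as (C, d) per-channel descriptors and adding a residual-style update; the reduced projection width is also an assumption:

```python
# Sketch only: phi/gamma projections give q and k, a bilinear product gives the
# channel-wise relation map R, a softmax gives R', and R' re-weights the feature.
import torch
import torch.nn as nn


class SelfChannelInteraction(nn.Module):
    def __init__(self, d, reduced=32):
        super().__init__()
        self.phi = nn.Linear(d, reduced, bias=False)    # q projection
        self.gamma = nn.Linear(d, reduced, bias=False)  # k projection

    def forward(self, x):                     # x: (B, C, d) per-channel descriptors
        q, k = self.phi(x), self.gamma(x)     # (B, C, reduced)
        r = torch.bmm(q, k.transpose(1, 2))   # channel-wise relation map R: (B, C, C)
        r_prime = torch.softmax(r, dim=-1)    # weight matrix R'
        v = torch.bmm(r_prime, x)             # weighted sum over channels
        return x + v                          # channels with large weights stand out


sci = SelfChannelInteraction(d=64)
print(sci(torch.randn(8, 256, 64)).shape)     # torch.Size([8, 256, 64])
```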
-
Cross-Instance Fusion (CIF) Module: addresses the issue of high intra-class variance.
- The support features and query features are first combined to obtain Z; two convolution layers then decode the combined features into W; a softmax over W gives a weight matrix, which is multiplied element-wise with Z to update the support and query features (see the sketch below).
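A minimal sketch of the fusion idea as described above, shown only for updating the query features (the support side would be updated symmetrically); the pairwise concatenation layout, convolution widths, and residual update are assumptions rather than the paper's exact design:

```python
# Sketch only: concatenate every (query, support) feature pair into Z, decode Z with
# two conv layers into fusion weights W, softmax over the support axis, and use the
# weights to mix support information into each query feature.
import torch
import torch.nn as nn


class CrossInstanceFusion(nn.Module):
    def __init__(self, dim=256, hidden=64):
        super().__init__()
        self.decode = nn.Sequential(                 # two conv layers decoding Z into W
            nn.Conv2d(2 * dim, hidden, kernel_size=1), nn.ReLU(),
            nn.Conv2d(hidden, dim, kernel_size=1))

    def forward(self, query, support):               # query: (Q, dim), support: (S, dim)
        q = query.unsqueeze(1).expand(-1, support.size(0), -1)     # (Q, S, dim)
        s = support.unsqueeze(0).expand(query.size(0), -1, -1)     # (Q, S, dim)
        z = torch.cat([q, s], dim=-1)                # Z holds every (query, support) pair
        w = self.decode(z.permute(2, 0, 1).unsqueeze(0))           # W: (1, dim, Q, S)
        w = torch.softmax(w.squeeze(0).permute(1, 2, 0), dim=1)    # weights over support
        return query + (w * s).sum(dim=1)            # fused, updated query features


cif = CrossInstanceFusion(dim=256)
print(cif(torch.randn(50, 256), torch.randn(5, 256)).shape)        # torch.Size([50, 256])
```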
-
The paper also contributes two benchmark datasets for 3D FSL: ModelNet40-FS and ShapeNet70-FS.
-
Summary: Paper 1 addresses few-shot 3D point cloud semantic segmentation with an embedding network (DGCNN + self-attention + MLP), multi-prototype generation, and transductive label propagation; Paper 2 studies what makes few-shot point cloud classification effective, proposes the CIA module (SCI + CIF), and contributes the ModelNet40-FS and ShapeNet70-FS benchmarks.