Feature representation using deep autoencoder for lung nodule image classification (Q1649497)
From MaRDI portal
scientific article; zbMATH DE number 6899109
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Feature representation using deep autoencoder for lung nodule image classification | scientific article; zbMATH DE number 6899109 | |
Statements
Feature representation using deep autoencoder for lung nodule image classification (English)
0 references
6 July 2018
0 references
Summary: This paper focuses on the problem of lung nodule image classification, which plays a key role in the early diagnosis of lung cancer. In this work, we propose a novel model for lung nodule image feature representation that incorporates both local and global characteristics. First, lung nodule images are divided into local patches using superpixel segmentation. These patches are then transformed into fixed-length local feature vectors by an unsupervised deep autoencoder (DAE). A visual vocabulary is constructed from the local features, and a bag of visual words (BOVW) model describes the global feature representation of a lung nodule image. Finally, a softmax classifier is employed for lung nodule type classification, which assembles the whole training process in an end-to-end manner. Comprehensive evaluations are conducted on the widely used, publicly available ELCAP lung image database. Experimental results with respect to different parameter settings, data augmentation, model sparsity, classifier algorithms, and model ensembles validate the effectiveness of the proposed approach.
0 references
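The pipeline in the summary (local patches → DAE feature vectors → BOVW vocabulary → softmax classification) can be sketched in a few dozen lines of NumPy. This is a minimal illustration under stated simplifications, not the authors' implementation: a fixed grid of patches stands in for superpixel segmentation, a single-hidden-layer tied-weight autoencoder stands in for the paper's DAE, and the data are synthetic; all names and shapes are hypothetical.

```python
# Sketch of: grid patches (superpixel stand-in) -> tiny autoencoder features
# -> k-means visual vocabulary -> BOVW histograms -> softmax classifier.
# Synthetic data and hypothetical shapes; not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)

def extract_patches(img, size=8):
    """Split an image into non-overlapping size x size patches."""
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

def train_autoencoder(X, hidden=16, epochs=200, lr=0.01):
    """One-hidden-layer tied-weight autoencoder, plain gradient descent."""
    W = rng.normal(0, 0.1, (X.shape[1], hidden))
    for _ in range(epochs):
        H = np.tanh(X @ W)            # encode
        E = H @ W.T - X               # reconstruction error (tied decoder)
        dH = (E @ W) * (1 - H ** 2)   # backprop through tanh
        W -= lr * (X.T @ dH + E.T @ H) / len(X)
    return W

def encode(X, W):
    return np.tanh(X @ W)

def kmeans(X, k=8, iters=20):
    """Plain k-means to build the visual vocabulary."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        a = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(a == j):
                C[j] = X[a == j].mean(0)
    return C

def bovw(codes, C):
    """Normalized histogram of nearest visual words (BOVW descriptor)."""
    a = np.argmin(((codes[:, None] - C[None]) ** 2).sum(-1), axis=1)
    h = np.bincount(a, minlength=len(C)).astype(float)
    return h / h.sum()

# Synthetic stand-in images: two "nodule types" differing in intensity stats.
imgs = [rng.normal(c, 1.0, (32, 32)) for c in (0.0, 1.0) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)

patches = np.vstack([extract_patches(im) for im in imgs])
W = train_autoencoder(patches)
C = kmeans(encode(patches, W))
X = np.array([bovw(encode(extract_patches(im), W), C) for im in imgs])

# Softmax classifier on the global BOVW descriptors.
Wc = np.zeros((X.shape[1], 2))
Y = np.eye(2)[labels]
for _ in range(500):
    Z = X @ Wc
    P = np.exp(Z - Z.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    Wc -= 0.5 * (X.T @ (P - Y)) / len(X)

acc = (np.argmax(X @ Wc, 1) == labels).mean()
print(f"training accuracy: {acc:.2f}")
```

In the paper the softmax stage ties the whole pipeline together end-to-end; here the stages are trained sequentially only to keep the sketch short.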
lung nodule image classification
0 references
lung cancer early diagnosis
0 references
bag of visual words
0 references
softmax algorithm
0 references
0.8102352619171143
0 references
0.7502330541610718
0 references
0.7245469689369202
0 references