An improved Segformer Method for Polyp Segmentation in Digestive Endoscopy

Xue Li*, Lianliang Li, Xingguang Duan, Changsheng Li

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Accurate, real-time segmentation of polyps in digestive endoscopy is critical for improving the diagnosis and treatment of gastrointestinal diseases. Although Transformer-based models such as SegFormer achieve competitive accuracy and computational efficiency, they often fall short in capturing fine-grained polyp boundaries and handling complex morphological variations. This paper presents an enhanced SegFormer framework that integrates a cross-stage attention mechanism and a UPerHead-based decoder. These improvements facilitate robust multi-scale feature fusion and refined edge localization. Extensive experiments on the Kvasir-SEG dataset demonstrate that the proposed model outperforms the baseline SegFormer in segmentation accuracy. The method also shows strong adaptability on unseen datasets such as CVC-300 and CVC-ClinicDB, indicating its potential for real-world clinical application.

Original language: English
Title of host publication: RCAR 2025 - IEEE International Conference on Real-Time Computing and Robotics
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 340-344
Number of pages: 5
ISBN (Electronic): 9798331502058
DOIs
Publication status: Published - 2025
Event: 2025 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2025 - Toyama, Japan
Duration: 1 Jun 2025 - 6 Jun 2025

Publication series

Name: RCAR 2025 - IEEE International Conference on Real-Time Computing and Robotics

Conference

Conference: 2025 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2025
Country/Territory: Japan
City: Toyama
Period: 1/06/25 - 6/06/25
