Browsing by Author "Yang B"
Now showing 1 - 2 of 2
- DiffusionDCI: A Novel Diffusion-Based Unified Framework for Dynamic Full-Field OCT Image Generation and Segmentation (IEEE Access, 2024). Yang B; Li J; Wang J; Li R; Gu K; Liu B; Militello C.
  Rapid and accurate identification of cancerous areas during surgery is crucial for guiding surgical procedures and reducing postoperative recurrence rates. Dynamic Cell Imaging (DCI) has emerged as a promising alternative to traditional frozen section pathology, offering high-resolution displays of tissue structures and cellular characteristics. However, challenges persist in segmenting DCI images with deep learning methods, including color variation and artifacts between patches in whole-slide DCI images and the difficulty of obtaining precisely annotated data. In this paper, we introduce a novel two-stage framework for DCI image generation and segmentation. First, the Dual Semantic Diffusion Model (DSDM) is specifically designed to generate high-quality and semantically relevant DCI images. These images not only serve as an effective means of data augmentation for downstream segmentation tasks but also reduce the reliance on expensive and hard-to-obtain annotated medical image datasets. Furthermore, we reuse the pretrained DSDM to extract diffusion features, which are then infused into the segmentation network via a cross-attention alignment module. This approach enables the network to capture and exploit the characteristics of DCI images more effectively, thereby significantly improving segmentation results. Our method was validated on the DCI dataset and compared with other image generation and segmentation methods. Experimental results demonstrate that our method achieves superior performance in both tasks, confirming the effectiveness of the proposed model.
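The cross-attention alignment step described in the abstract can be sketched in plain NumPy. All names and shapes here are illustrative assumptions, not the authors' implementation: segmentation-network features act as queries, and features extracted by the pretrained diffusion model act as keys and values, with a residual connection fusing the attended context back in.

```python
import numpy as np

def cross_attention_fuse(seg_feats, diff_feats):
    """Fuse diffusion features into segmentation features via cross-attention.

    seg_feats:  (N, d) query features from the segmentation network
    diff_feats: (M, d) key/value features from the pretrained diffusion model
    Single-head, no learned projections -- an illustrative sketch only.
    """
    d = seg_feats.shape[-1]
    scores = seg_feats @ diff_feats.T / np.sqrt(d)   # (N, M) scaled similarity
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over diffusion tokens
    attended = weights @ diff_feats                  # (N, d) aligned diffusion context
    return seg_feats + attended                      # residual fusion

# Toy shapes: 16 spatial tokens from the segmenter, 32 diffusion tokens, 64-dim features
rng = np.random.default_rng(0)
out = cross_attention_fuse(rng.normal(size=(16, 64)), rng.normal(size=(32, 64)))
print(out.shape)  # (16, 64)
```

In practice the published module would add learned query/key/value projections and operate on multi-scale feature maps; the sketch only shows the alignment mechanism itself.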
- Potential rapid intraoperative cancer diagnosis using dynamic full-field optical coherence tomography and deep learning: A prospective cohort study in breast cancer patients (Elsevier B.V. on behalf of the Science China Press, 2024-06-15). Zhang S; Yang B; Yang H; Zhao J; Zhang Y; Gao Y; Monteiro O; Zhang K; Liu B; Wang S.
  An intraoperative diagnosis is critical for precise cancer surgery. However, traditional intraoperative assessments based on hematoxylin and eosin (H&E) histology, such as frozen section, are time-, resource-, and labor-intensive, and raise specimen-consumption concerns. Here, we report a near-real-time automated cancer diagnosis workflow for breast cancer that combines dynamic full-field optical coherence tomography (D-FFOCT), a label-free optical imaging method, with deep learning for bedside tumor diagnosis during surgery. To classify benign and malignant breast tissues, we conducted a prospective cohort trial. In the modeling group (n = 182), D-FFOCT images were captured from April 26 to June 20, 2018, encompassing 48 benign lesions, 114 invasive ductal carcinomas (IDC), 10 invasive lobular carcinomas, 4 ductal carcinomas in situ (DCIS), and 6 rare tumors. A deep learning model was built and fine-tuned on 10,357 D-FFOCT patches. Subsequently, from June 22 to August 17, 2018, independent tests (n = 42) were conducted on 10 benign lesions, 29 IDC, 1 DCIS, and 2 rare tumors. The model yielded excellent performance, with an accuracy of 97.62%, sensitivity of 96.88%, and specificity of 100%; only one IDC was misclassified. Meanwhile, acquisition of the D-FFOCT images was non-destructive and required no tissue preparation or staining. In the simulated intraoperative margin evaluation procedure, the time required for our novel workflow (approximately 3 min) was significantly shorter than that required for traditional procedures (approximately 30 min).
These findings indicate that the combination of D-FFOCT and deep learning algorithms can streamline intraoperative cancer diagnosis independently of traditional pathology laboratory procedures.
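The reported test-set metrics follow directly from the counts stated in the abstract: 42 independent cases (10 benign, 32 malignant) with exactly one malignant case (an IDC) misclassified. A minimal check, assuming a standard confusion-matrix definition of each metric:

```python
# Confusion counts implied by the abstract's independent test set (n = 42):
tp = 31  # malignant cases correctly classified (32 malignant, 1 missed)
fn = 1   # the single misclassified IDC
tn = 10  # all 10 benign lesions correctly classified
fp = 0   # no benign lesion called malignant

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print(f"accuracy    {accuracy:.2%}")     # 97.62%
print(f"sensitivity {sensitivity:.2%}")  # 96.88%
print(f"specificity {specificity:.2%}")  # 100.00%
```

These reproduce the abstract's figures exactly, which confirms the stated case breakdown is internally consistent.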