Paper 5
Oyelade, O. N., & Ezugwu, A. E. (2022). A novel wavelet decomposition and transformation convolutional neural network with data augmentation for breast cancer detection using digital mammogram. Scientific Reports, 12(1). https://doi.org/10.1038/s41598-022-09905-3
Oyelade and Ezugwu (2022) introduce a fusion of wavelet decomposition and Convolutional Neural Network (CNN) techniques for breast cancer detection in digital mammograms. Their study draws on the Mammographic Image Analysis Society (MIAS) dataset and the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM). The method comprises several key components: data augmentation with a Generative Adversarial Network (GAN) to address the limited data diversity common in medical imaging, and wavelet-based image pre-processing to enhance features critical for precise cancer detection. At the heart of their approach lies a CNN adapted to work with wavelet-transformed inputs, strengthening the network’s ability to extract and classify relevant features from mammogram images. The study underscores the potency of wavelet transforms, combined with GAN-based data augmentation, in advancing CNN architectures for medical image analysis, reporting accuracy, recall, precision, specificity, and F1-score all at or near 0.99. This research holds significant potential for shaping the development of more accurate breast cancer detection methods, particularly by improving feature extraction and classification within CNN models.
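To make the wavelet pre-processing step concrete, the following is a minimal sketch (not the authors’ exact pipeline) of decomposing a mammogram patch with a 2-D discrete wavelet transform and stacking the sub-bands as CNN input channels; the PyWavelets library, the ‘haar’ wavelet, and the four-channel stacking are illustrative assumptions.

```python
# Minimal sketch: 2-D DWT of a mammogram patch, sub-bands stacked as CNN channels.
# Wavelet family and channel layout are illustrative, not taken from the paper.
import numpy as np
import pywt

def wavelet_channels(image: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Return a (4, H/2, W/2) array of DWT sub-bands for a 2-D grayscale image."""
    cA, (cH, cV, cD) = pywt.dwt2(image.astype(np.float32), wavelet)
    # Approximation (cA) plus horizontal, vertical, and diagonal detail coefficients.
    return np.stack([cA, cH, cV, cD], axis=0)

# Example: a dummy 256x256 mammogram patch becomes a 4-channel 128x128 CNN input.
patch = np.random.rand(256, 256).astype(np.float32)
print(wavelet_channels(patch).shape)  # (4, 128, 128)
```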
Paper 6
Samee, N. A., Alhussan, A. A., Ghoneim, V. F., Atteia, G., Alkanhel, R., Kim, T.-S., & Kadah, Y. M. (2022). A Hybrid Deep Transfer Learning of CNN-Based LR-PCA for Breast Lesion Diagnosis via Medical Breast Mammograms. Sensors, 22(13), 4938. https://doi.org/10.3390/s22134938
Samee et al. (2022) introduce a hybrid deep learning methodology that combines Convolutional Neural Networks (CNN) with Logistic Regression-Principal Component Analysis (LR-PCA) for breast cancer detection. Their research leverages the INbreast and mini-Mammographic Image Analysis Society (mini-MIAS) datasets, using data augmentation techniques, including flipping and rotation, to enhance data diversity and model generalizability. Notably, they introduce pseudo-coloured images to enrich the information content available to the deep learning models. The study also harnesses transfer learning from the well-established VGG16 CNN architecture, renowned for its effectiveness in image recognition tasks. By capitalizing on the feature representations VGG16 acquires through pre-training, the research achieves heightened accuracy and efficiency in diagnosing breast lesions from mammogram images. To optimize feature selection and reduce dimensionality, LR-PCA is employed, retaining the information most pertinent to breast lesion diagnosis. The study yields impressive accuracy rates of 98.60% and 98.80% on the INbreast and mini-MIAS datasets, respectively, underscoring the strength of the hybrid approach in mitigating multicollinearity and improving feature selection. This approach holds substantial promise for enhancing the accuracy of lesion classification in mammograms, making it a valuable asset for my project.
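A minimal sketch of the deep-feature-plus-LR-PCA idea, assuming a Keras VGG16 backbone and scikit-learn for PCA and logistic regression; the pooling layer, component count, and dummy data below are illustrative choices, not the authors’ reported configuration.

```python
# Minimal sketch: pretrained VGG16 features -> PCA -> logistic regression.
# Layer choice, component count, and synthetic data are illustrative assumptions.
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")  # 512-d features

def deep_features(images: np.ndarray) -> np.ndarray:
    """images: (N, 224, 224, 3) RGB patches scaled to [0, 255]."""
    return backbone.predict(preprocess_input(images), verbose=0)

# Dummy data stands in for pseudo-coloured mammogram patches and their labels.
X = deep_features(np.random.rand(8, 224, 224, 3) * 255.0)
y = np.array([0, 1, 0, 1, 0, 1, 0, 1])

clf = make_pipeline(PCA(n_components=4), LogisticRegression(max_iter=1000))
clf.fit(X, y)
print(clf.predict(X[:2]))
```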
Paper 7
Bouzar-Benlabiod, L., Harrar, K., Yamoun, L., Khodja, M. Y., & Akhloufi, M. A. (2023). A novel breast cancer detection architecture based on a CNN-CBR system for mammogram classification. Computers in Biology and Medicine, 163, 107133. https://doi.org/10.1016/j.compbiomed.2023.107133
Bouzar-Benlabiod et al. (2023) introduce a novel method that combines Convolutional Neural Networks (CNN) and Case-Based Reasoning (CBR) for the classification of mammograms. Their research leverages the Curated Breast Imaging Subset of the Digital Database for Screening Mammography (CBIS-DDSM) dataset and incorporates a comprehensive set of data pre-processing techniques. During the initial pre-processing stage, the images are resized and enhanced with Contrast Limited Adaptive Histogram Equalization (CLAHE), which improves contrast by redistributing pixel intensities, and a median filter is applied to reduce noise by replacing outlier pixels with nearby values. Dataset quality is further improved through manual re-segmentation of the images, and image rotation is used to augment dataset diversity. The CNN extracts intricate features from the mammograms by processing them through multiple layers designed to identify characteristics such as edges, textures, and patterns. Following feature extraction, the CBR system performs the classification: it compares new mammogram cases against a library of historical cases with known outcomes, enabling classification based on similarity to past cases and offering a transparent reasoning process for each diagnosis. The CNN-CBR hybrid achieves an accuracy of 86.71% and a recall of 91.34%. This integration of CNN with CBR demonstrates strengths in both accuracy and explainability. Such an approach can significantly benefit my project by providing a model that combines the efficiency of deep learning with the interpretability of case-based reasoning, which is essential for medical diagnostics.
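A minimal sketch of the described pre-processing chain (resizing, CLAHE, median filtering) using OpenCV; the clip limit, tile grid, target resolution, and kernel size are assumed values for illustration rather than the parameters reported in the paper.

```python
# Minimal sketch of resize -> CLAHE -> median filter for an 8-bit mammogram.
# All parameter values below are illustrative assumptions.
import cv2
import numpy as np

def preprocess_mammogram(img: np.ndarray, size: tuple = (224, 224)) -> np.ndarray:
    """img: 8-bit single-channel mammogram; returns the enhanced, denoised image."""
    img = cv2.resize(img, size, interpolation=cv2.INTER_AREA)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    img = clahe.apply(img)        # redistribute intensities locally to boost contrast
    img = cv2.medianBlur(img, 3)  # replace outlier pixels with the local median
    return img

# Example on a dummy 8-bit image.
dummy = (np.random.rand(1024, 1024) * 255).astype(np.uint8)
print(preprocess_mammogram(dummy).shape)  # (224, 224)
```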
Paper 8
Lou, Q., Li, Y., Qian, Y., Lu, F., & Ma, J. (2022). Mammogram classification based on a novel convolutional neural network with efficient channel attention. Computers in Biology and Medicine, 150, 106082. https://doi.org/10.1016/j.compbiomed.2022.106082
Lou et al. (2022) present an approach aimed at improving mammogram classification accuracy by integrating the Efficient Channel Attention (ECA) module into the ResNet50 architecture. The research leverages the INbreast dataset, which undergoes pre-processing with the Breast Database Pre-process (BDP) method. The pre-processing pipeline includes converting DICOM images to PNG format, denoising, cropping, and applying Contrast Limited Adaptive Histogram Equalization (CLAHE) to enhance image contrast. The heart of their method is an enhanced CNN model based on ResNet50 and enriched with the ECA module. ECA models interdependencies between channels and dynamically recalibrates channel-wise feature responses, which improves performance in image classification tasks. ResNet50, with its 50-layer depth and residual connections, mitigates the vanishing-gradient problem that affects deep networks and excels at image recognition. To address class imbalance and hard-to-classify samples, the study employs focal loss. Furthermore, the model incorporates pretrained weights from ImageNet, further tailoring it to the specific demands of mammography image classification. Together, these techniques yield an area under the curve (AUC) of 0.960 and an accuracy of 92.9%. This holistic approach not only exemplifies the effective use of channel attention but also addresses data category imbalance, serving as a significant advancement in mammogram classification. The findings hold substantial potential to benefit my project by offering sophisticated feature extraction techniques.
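A minimal sketch of an Efficient Channel Attention block and a binary focal loss in PyTorch, in the spirit of the components described above; the kernel size, the gamma value, and how the block would be wired into ResNet50 are illustrative assumptions rather than the paper’s exact settings.

```python
# Minimal sketch: ECA-style channel attention plus a binary focal loss.
# Kernel size, gamma, and the example tensor shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ECA(nn.Module):
    """Channel attention via a 1-D convolution over globally pooled channel descriptors."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:                  # x: (N, C, H, W)
        w = self.pool(x).squeeze(-1).transpose(-1, -2)                    # (N, 1, C)
        w = self.sigmoid(self.conv(w)).transpose(-1, -2).unsqueeze(-1)    # (N, C, 1, 1)
        return x * w                                                      # recalibrate channels

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Binary focal loss: down-weights easy examples to handle class imbalance."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = torch.exp(-bce)                      # probability assigned to the true class
    return ((1 - p_t) ** gamma * bce).mean()

# Example: apply ECA to a dummy feature map such as one produced inside ResNet50.
feat = torch.randn(2, 256, 14, 14)
print(ECA()(feat).shape)                       # torch.Size([2, 256, 14, 14])
```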