A preliminary literature review is essential for understanding the methodologies and challenges involved in developing Convolutional Neural Network (CNN) frameworks for breast cancer detection. It establishes a foundational knowledge base covering existing approaches, common pitfalls, and the intricacies of this domain. This review draws on papers published in Q1 and Q2 journals to inform the subsequent stages of constructing an accurate and robust CNN for breast cancer detection.
Challenges and Methodologies
As Azour and Boukerche (2022) suggest, developing customized CNN models for breast cancer detection offers advantages, particularly the ability to tailor architectures to a dataset's size. Limited datasets, however, complicate training: deep models require extensive data and are prone to overfitting. Data augmentation techniques such as flipping and rotation mitigate this by increasing sample diversity, and cross-validation helps ensure robust performance estimates. Even so, selecting hyperparameters such as the number of layers, dropout rates, filter sizes, learning rates, regularization parameters, and activation functions remains demanding in both time and GPU resources. This study navigates these challenges in pursuit of an optimized CNN model for effective breast cancer detection.
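To make these choices concrete, the sketch below illustrates flip-and-rotation augmentation together with the kind of hyperparameters (layer count, filter sizes, dropout rate, learning rate, regularization strength) described as costly to tune. It assumes a PyTorch/torchvision toolchain, and all numeric values, the 64x64 input size, and the two-class output are illustrative assumptions rather than settings reported by Azour and Boukerche (2022).

```python
import torch
import torch.nn as nn
from torchvision import transforms

# Flipping and rotation enlarge a small mammogram dataset and reduce overfitting.
augment = transforms.Compose([
    transforms.Grayscale(num_output_channels=1),
    transforms.Resize((64, 64)),                  # assumed working resolution
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=15),
    transforms.ToTensor(),
])

# A small CNN whose depth, filter sizes, dropout rate, and activation function are
# exactly the kind of hyperparameters the text describes as demanding to select.
class SmallMammogramCNN(nn.Module):
    def __init__(self, dropout=0.5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(dropout),
            nn.Linear(32 * 16 * 16, 2),           # 64x64 input halved twice -> 16x16
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = SmallMammogramCNN()
# Learning rate and weight decay (regularization) are further tuning knobs.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)
```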
Methods Proposed by Key Researchers
1. Tavakoli et al. (2019): Comprehensive Breast Cancer Detection Approach
Tavakoli et al. introduced a three-stage method for breast cancer detection. The preprocessing stage removed irrelevant areas and enhanced contrast through breast region extraction, pectoral muscle suppression, mask creation, and contrast enhancement. The CNN stage employed an architecture with four convolutional layers, with specific filter sizes, activation functions, and dropout mechanisms contributing to the model's robustness. The decision stage labelled each Region of Interest (ROI) as normal or abnormal by comparing its score against a threshold value. The method performed strongly on the MIAS dataset, achieving an accuracy of 94.68% and an Area Under the Curve (AUC) of 0.95, demonstrating its efficacy in breast cancer classification.
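The following sketch outlines a four-convolutional-layer ROI classifier and the threshold-based labelling step described above. It is written in PyTorch; the filter counts, kernel sizes, dropout rate, 64x64 ROI size, and 0.5 threshold are illustrative assumptions and not the exact configuration of Tavakoli et al. (2019).

```python
import torch
import torch.nn as nn

# Four convolutional layers feeding a dropout-regularized scoring head.
class FourConvROIClassifier(nn.Module):
    def __init__(self, dropout=0.5):
        super().__init__()
        layers, channels = [], [1, 32, 32, 64, 64]
        for c_in, c_out in zip(channels[:-1], channels[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(), nn.MaxPool2d(2)]
        self.features = nn.Sequential(*layers)
        # A 64x64 ROI halved four times leaves a 4x4 map with 64 channels.
        self.head = nn.Sequential(nn.Flatten(), nn.Dropout(dropout),
                                  nn.Linear(64 * 4 * 4, 1))

    def forward(self, roi):
        return torch.sigmoid(self.head(self.features(roi)))  # abnormality score in [0, 1]

def label_rois(model, rois, threshold=0.5):
    """Label each ROI abnormal (1) or normal (0) by thresholding its score."""
    with torch.no_grad():
        scores = model(rois).squeeze(1)
    return (scores > threshold).long()
```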
2. Razali et al. (2023): Textural Information Representation in Breast Cancer Detection
Razali et al. presented a three-stage approach using the INbreast dataset. The preprocessing stage included DICOM-to-PNG conversion, morphological operations, pectoral muscle cropping, resizing, enhancement, normalization, YOLO-based mass detection, and patching of breast tissue. The second stage extracted features using a CNN, Wavelet Scattering (WS), and the Gray Level Co-occurrence Matrix (GLCM); notably, WS features represented the textural information in breast tissue images more effectively than GLCM and CNN features. The final stage applied a parallel feature extraction scheme to image patches, producing binary classifications of background tissue versus mass tissue. The methodology improved classification accuracy, demonstrating its efficiency in breast cancer detection.
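As a rough illustration of the texture descriptors compared in this study, the sketch below extracts GLCM statistics with scikit-image and wavelet scattering coefficients with the kymatio library from a single tissue patch, then concatenates them. The patch size, GLCM distances and angles, and the scattering scale J are assumptions for illustration, not the settings used by Razali et al. (2023).

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from kymatio.numpy import Scattering2D

def glcm_features(patch_uint8):
    """Contrast, homogeneity, energy, and correlation from a grey-level co-occurrence matrix."""
    glcm = graycomatrix(patch_uint8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return np.hstack([graycoprops(glcm, prop).ravel()
                      for prop in ("contrast", "homogeneity", "energy", "correlation")])

def scattering_features(patch_float, J=2):
    """Wavelet scattering coefficients, averaged spatially into a feature vector."""
    scattering = Scattering2D(J=J, shape=patch_float.shape)
    coeffs = scattering(patch_float.astype(np.float32))   # (n_coeffs, H/2^J, W/2^J)
    return coeffs.mean(axis=(-2, -1))

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)   # stand-in breast-tissue patch
fused = np.hstack([glcm_features(patch), scattering_features(patch / 255.0)])
```

In practice such vectors would be computed for every patch and passed, alongside CNN features, to a downstream classifier.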
3. Ranjbarzadeh et al. (2022): Multi-Route CNN Architecture for Breast Cancer Detection
Ranjbarzadeh et al. proposed a multi-route CNN architecture for breast cancer detection. Evaluated on the Mini-MIAS and DDSM datasets, the method outperformed state-of-the-art techniques. The architecture combined three distinct convolutional layers with fully-connected layers, designed to capture intricate breast and tumor features while remaining resilient to variations in tumor size and shape. Training used stochastic gradient descent with a cross-entropy loss function, and measures were taken to address class imbalance. Despite strong results, accurately discerning malignant tumors located near the pectoral muscle remained a significant challenge, motivating refined strategies in future investigations.
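A minimal sketch of the multi-route idea follows: parallel convolutional routes, each with three convolutional layers, whose pooled features are concatenated before the fully-connected layers, trained with stochastic gradient descent and a cross-entropy loss. The number of routes, kernel sizes, and channel widths are illustrative assumptions and do not reproduce the published MRFE-CNN configuration.

```python
import torch
import torch.nn as nn

def conv_route(kernel_size):
    """One route: three conv layers capturing features at a given receptive-field scale."""
    pad = kernel_size // 2
    return nn.Sequential(
        nn.Conv2d(1, 16, kernel_size, padding=pad), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size, padding=pad), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size, padding=pad), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    )

class MultiRouteCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Routes with different kernel sizes tolerate variation in tumor size and shape.
        self.routes = nn.ModuleList([conv_route(k) for k in (3, 5, 7)])
        self.fc = nn.Sequential(nn.Linear(3 * 64, 128), nn.ReLU(),
                                nn.Linear(128, num_classes))

    def forward(self, x):
        feats = [route(x).flatten(1) for route in self.routes]
        return self.fc(torch.cat(feats, dim=1))

model = MultiRouteCNN()
criterion = nn.CrossEntropyLoss()                                        # cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)   # SGD training
```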
4. Raaj (2023): Hybrid CNN Architecture for Mammogram Image Classification
Raaj introduced a methodology for classifying mammogram images into normal, benign, and malignant categories using a hybrid CNN architecture. The pipeline integrated a Radon transform for generating time-frequency variation images, a data augmentation module for diversifying the dataset, and a hybrid CNN module comprising convolutional layers, rectified linear unit (ReLU) layers, and pooling layers. Evaluations on the MIAS and DDSM datasets yielded high Mammogram Detection Index (MDI) percentages, indicating the method's potential to advance breast cancer detection. The rotation, flip, and scaling operations used for augmentation substantially enriched the dataset and contributed to the improved detection performance.
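The sketch below illustrates, under stated assumptions, the two supporting modules described above: a Radon transform that projects a mammogram over a range of angles, and rotation, flip, and scaling augmentation. It relies on scikit-image; the projection angles and augmentation parameters are illustrative rather than those used by Raaj (2023).

```python
import numpy as np
from skimage.transform import radon, rotate, rescale

def radon_image(mammogram, n_angles=180):
    """Project the mammogram along evenly spaced angles, exposing directional variation
    for the downstream hybrid CNN module."""
    theta = np.linspace(0.0, 180.0, n_angles, endpoint=False)
    return radon(mammogram, theta=theta, circle=False)

def augment(mammogram, angle=10.0, scale=1.1):
    """Rotation, horizontal flip, and rescaling used to diversify the dataset."""
    return [
        rotate(mammogram, angle, preserve_range=True),
        np.fliplr(mammogram),
        rescale(mammogram, scale, anti_aliasing=True),
    ]

image = np.random.rand(128, 128)       # stand-in mammogram
sinogram = radon_image(image)          # Radon-domain input for the hybrid CNN
variants = augment(image)
```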
Synthesis and Future Directions
Collectively, these methodologies underscore the potential of CNNs in breast cancer detection. Each approach addresses specific challenges, from preprocessing techniques to innovative architectures, and demonstrates effectiveness on diverse datasets. Despite these successes, challenges such as dataset limitations and hyperparameter selection persist, emphasizing the need for ongoing investigation. Future research should focus on refining strategies for accurately discerning malignant tumors, especially those near the pectoral muscle, and on novel approaches to improve model training efficiency. There is also a growing need for standardized datasets and evaluation metrics to enable fair comparisons across methodologies. This synthesis characterizes the current landscape and informs future directions in this vital area of research.
References
Azour, F., & Boukerche, A. (2022). Design Guidelines for Mammogram-Based Computer-Aided Systems Using Deep Learning Techniques. IEEE Access, 10, 21701–21726. https://doi.org/10.1109/access.2022.3151830
Tavakoli, N., Karimi, M., Norouzi, A., Karimi, N., Samavi, S., & Soroushmehr, S. M. R. (2019). Detection of abnormalities in mammograms using deep features. Journal of Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-019-01639-x
Raaj, R. S. (2023). Breast cancer detection and diagnosis using hybrid deep learning architecture. Biomedical Signal Processing and Control, 82, 104558. https://doi.org/10.1016/j.bspc.2022.104558
Ranjbarzadeh, R., Tataei Sarshar, N., Jafarzadeh Ghoushchi, S., Saleh Esfahani, M., Parhizkar, M., Pourasad, Y., Anari, S., & Bendechache, M. (2022). MRFE-CNN: multi-route feature extraction model for breast tumor segmentation in Mammograms using a convolutional neural network. Annals of Operations Research. https://doi.org/10.1007/s10479-022-04755-8
Razali, N. F., Isa, I. S., Sulaiman, S. N., A. Karim, N. K., & Osman, M. K. (2023). CNN-Wavelet scattering textural feature fusion for classifying breast tissue in mammograms. Biomedical Signal Processing and Control, 83, 104683. https://doi.org/10.1016/j.bspc.2023.104683