Annotated Bibliography and Reflection (Paper 1 – Paper 4)

Paper 1

Tavakoli, N., Karimi, M., Norouzi, A., Karimi, N., Samavi, S., & Soroushmehr, S. M. R. (2019). Detection of abnormalities in mammograms using deep features. Journal of Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-019-01639-x

Tavakoli et al. (2019) present a Convolutional Neural Network (CNN) model for analysing mammographic images, aiming to improve breast cancer detection accuracy with deep learning. Using a dataset of 322 mammograms from the Mammographic Image Analysis Society (MIAS), the study emphasizes the importance of pre-processing in image analysis, incorporating steps such as breast region extraction and contrast enhancement to improve model performance. The CNN architecture is tailored to the characteristics of mammogram images and the requirements of their analysis. The results are noteworthy: the model achieves an accuracy of 82.67% and an Area Under the Curve (AUC) of 0.91, demonstrating its efficacy in distinguishing normal from abnormal breast tissue. The study stands out for its application of deep learning to medical imaging, particularly breast cancer detection. However, a potential limitation is its dependence on a single dataset from the MIAS database, which raises questions about the model's generalizability to broader datasets and varying image qualities. For my study, this paper can guide the development of effective pre-processing techniques and the design of a specialized CNN architecture, and its methodology and results provide a valuable benchmark for assessing the performance of my proposed model.
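To make the pre-processing stage concrete, the sketch below illustrates the two steps the paper highlights — breast region extraction and contrast enhancement — using simple NumPy stand-ins (bounding-box cropping and global histogram equalization). This is not the authors' implementation; the function names, the threshold, and the synthetic image are all illustrative assumptions.

```python
import numpy as np

def extract_breast_region(img, thresh=0.1):
    """Crop to the bounding box of pixels above a background threshold.
    A simple stand-in for the paper's breast-region extraction step."""
    mask = img > thresh
    rows = np.any(mask, axis=1)
    cols = np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return img[r0:r1 + 1, c0:c1 + 1]

def equalize_contrast(img, bins=256):
    """Global histogram equalization as a basic contrast-enhancement step."""
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(img.ravel(), edges[:-1], cdf).reshape(img.shape)

# Synthetic 64x64 "mammogram": dark background plus a brighter region.
img = np.zeros((64, 64))
img[10:50, 20:60] = np.linspace(0.2, 0.8, 40)  # breast-like intensity ramp
roi = extract_breast_region(img)
enhanced = equalize_contrast(roi)
print(roi.shape)  # background rows/columns are cropped away
```

In practice, published pipelines often use adaptive methods such as CLAHE rather than global equalization; the point here is only the order of operations (crop first, then enhance contrast over the retained region).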

Paper 2

Razali, N. F., Isa, I. S., Sulaiman, S. N., A. Karim, N. K., & Osman, M. K. (2023). CNN-Wavelet scattering textural feature fusion for classifying breast tissue in mammograms. Biomedical Signal Processing and Control, 83, 104683. https://doi.org/10.1016/j.bspc.2023.104683

Razali et al. (2023) present an approach that fuses Convolutional Neural Network (CNN) features with wavelet scattering textural features to classify breast tissue types in mammograms. Using 112 images from the INbreast dataset, the study distinguishes fatty and fibroglandular tissues as well as benign and malignant masses. The methodology first uses CNNs to extract deep features from the mammogram images, then enriches them with wavelet scattering, which captures fine-grained textural detail for a more nuanced analysis of the image data. The two feature sets are concatenated into a single, more robust representation, and an ensemble k-nearest neighbour classifier uses the combined features to categorize breast tissues. The model achieves 99.3% accuracy under 10-fold cross-validation, underscoring the efficacy of feature fusion in medical image analysis. The study demonstrates a successful convergence of CNN and wavelet scattering in mammogram analysis, with high accuracy and robustness across tissue types, although its applicability may be constrained by reliance on a single dataset. Nonetheless, this research offers valuable insight into combining diverse analytical techniques for enhanced mammogram classification, which can aid my project in developing a CNN model for cancer detection.
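The fusion-then-classify pipeline can be sketched in miniature. The two feature extractors below are crude stand-ins (intensity statistics for the CNN branch, gradient statistics for the wavelet-scattering branch), and the plain k-nearest-neighbour vote simplifies the paper's ensemble kNN; everything here, including the synthetic "smooth" vs "textured" patches, is an illustrative assumption, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(42)

def cnn_like_features(img):
    """Stand-in for deep CNN features: simple intensity statistics."""
    return np.array([img.mean(), img.std()])

def wavelet_like_features(img):
    """Stand-in for wavelet-scattering features: gradient-based texture stats."""
    gx = np.abs(np.diff(img, axis=1))
    gy = np.abs(np.diff(img, axis=0))
    return np.array([gx.mean(), gy.mean()])

def fused(img):
    """Feature fusion: concatenate both descriptors into one vector."""
    return np.concatenate([cnn_like_features(img), wavelet_like_features(img)])

def knn_predict(train_X, train_y, x, k=3):
    """Plain k-nearest-neighbour vote (simplifying the paper's ensemble kNN)."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return int(np.bincount(votes).argmax())

# Synthetic "smooth" (class 0) vs "textured" (class 1) patches.
smooth = [rng.normal(0.5, 0.01, (16, 16)) for _ in range(10)]
textured = [rng.normal(0.5, 0.3, (16, 16)) for _ in range(10)]
X = np.array([fused(p) for p in smooth + textured])
y = np.array([0] * 10 + [1] * 10)

probe = rng.normal(0.5, 0.3, (16, 16))  # a textured probe patch
print(knn_predict(X, y, fused(probe)))  # expected to vote for class 1
```

The design point this mirrors is that the classifier never sees raw pixels: it sees a concatenation of complementary descriptors, so either branch can compensate where the other is weak.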

Paper 3

Ranjbarzadeh, R., Tataei Sarshar, N., Jafarzadeh Ghoushchi, S., Saleh Esfahani, M., Parhizkar, M., Pourasad, Y., Anari, S., & Bendechache, M. (2022). MRFE-CNN: multi-route feature extraction model for breast tumor segmentation in Mammograms using a convolutional neural network. Annals of Operations Research. https://doi.org/10.1007/s10479-022-04755-8

Ranjbarzadeh et al. (2022) present a Convolutional Neural Network (CNN)-based approach for precise breast tumour segmentation in mammograms, drawing on the Mammographic Image Analysis Society (MIAS) and Digital Database for Screening Mammography (DDSM) datasets. Their method, Multi-Route Feature Extraction (MRFE) within a CNN, combines feature extraction from multiple CNN routes with contrast enhancement, improving the capacity to differentiate normal from abnormal tissue regions. This integration yields strong accuracy: the model achieves 93.6%, 89.0%, and 87.1% on the DDSM dataset, and 94.4%, 91.5%, and 89.2% on the Mini-MIAS dataset, for normal, benign, and malignant regions respectively. These results demonstrate the model's ability to categorize diverse breast tissue types accurately, and the work also offers broader insight into the design of CNN architectures and feature extraction techniques for deep learning applications beyond medical imaging. For my project, the multi-route extraction strategy could help enhance the accuracy of breast tumour detection.
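The multi-route idea — running the same input through several parallel convolutional branches and concatenating the results — can be sketched as follows. The three hand-picked kernels, the ReLU-plus-global-pooling summary, and the synthetic gradient image are all illustrative assumptions; the paper's actual routes are learned CNN layers, not fixed filters.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Minimal 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def multi_route_features(img, kernels):
    """Pass the image through parallel convolution 'routes', apply ReLU and
    global pooling per route, then concatenate — echoing the MRFE idea."""
    feats = []
    for k in kernels:
        fmap = np.maximum(conv2d_valid(img, k), 0.0)  # ReLU activation
        feats.append([fmap.mean(), fmap.max()])       # global pooled summary
    return np.concatenate(feats)

# Three illustrative routes: smoothing, horizontal-edge, vertical-edge filters.
routes = [
    np.full((3, 3), 1 / 9),
    np.array([[1, 0, -1]] * 3, dtype=float),
    np.array([[1], [0], [-1]], dtype=float).repeat(3, axis=1),
]
img = np.outer(np.linspace(0, 1, 16), np.ones(16))  # vertical intensity ramp
print(multi_route_features(img, routes).shape)      # one vector, 2 per route
```

Because each route responds to a different image property, the concatenated vector carries complementary evidence, which is the rationale the paper gives for improved discrimination between tissue regions.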

Paper 4

Raaj, R. S. (2023). Breast cancer detection and diagnosis using hybrid deep learning architecture. Biomedical Signal Processing and Control, 82, 104558. https://doi.org/10.1016/j.bspc.2022.104558

Raaj (2023) introduces a hybrid Convolutional Neural Network (CNN) architecture for classifying mammogram images as normal, benign, or malignant. Using the Mammographic Image Analysis Society (MIAS) and Digital Database for Screening Mammography (DDSM) datasets, the study applies the Radon transform for data augmentation, addressing the limited data diversity typical of medical imaging. What sets this approach apart is the combination of several CNN models, each harnessed for its individual strengths in feature extraction and classification. A morphological segmentation algorithm is also employed to identify cancerous pixels, isolating the relevant regions for in-depth analysis and classification. The architecture attains strong performance: 97.91% sensitivity, 97.83% specificity, 98.44% accuracy, and a 98.57% Jaccard index on the DDSM dataset, and 98% sensitivity, 98.66% specificity, 99.17% accuracy, and a 98.07% Jaccard index on the MIAS dataset. While the study excels in its innovative architecture and performance, its predominant focus on interior cancer pixels may constrain detection accuracy. Nevertheless, this research presents an opportunity to advance automated breast cancer detection and offers valuable insights for further exploration in my study.
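The morphological segmentation step — thresholding an image and then using morphological opening (erosion followed by dilation) to discard isolated noise pixels while keeping solid regions — can be sketched with plain NumPy. The 3x3 structuring element, the threshold, and the synthetic "lesion" image are illustrative assumptions; the paper's exact algorithm is not reproduced here.

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 structuring element (zero-padded borders)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    out = np.ones_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out &= p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return out

def dilate(mask):
    """Binary dilation with a 3x3 structuring element (zero-padded borders)."""
    p = np.pad(mask, 1)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out |= p[1 + di:1 + di + h, 1 + dj:1 + dj + w]
    return out

def segment(img, thresh=0.5):
    """Threshold, then morphologically open to keep only solid regions."""
    return dilate(erode(img > thresh))

img = np.zeros((20, 20))
img[5:12, 5:12] = 1.0  # solid bright "lesion" block
img[2, 17] = 1.0       # isolated bright noise pixel
mask = segment(img)
print(int(mask.sum()), bool(mask[2, 17]))  # noise pixel is removed by opening
```

Production pipelines would typically use a library routine such as SciPy's `ndimage.binary_opening` for this; the explicit loops above just make the neighbourhood logic visible.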
