Analysis and Classification of Bone Fractures Using Machine Learning Techniques

Abstract

Human bones are hard organs that protect vital organs such as the heart and lungs. Bone fractures are a prevalent problem among humans and may develop from an accident or any other circumstance that places great pressure on a bone. Determining the site of a fracture in a patient who is suffering discomfort can be difficult and time-consuming, and the manual examination of fractures during radiological interpretation is a slow, error-prone process. This may result in erroneous detection, poor fracture healing, and a lengthy procedure. This research therefore proposes an effective approach to detecting bone fractures with the inclusion of the latest technologies. The solution employs a deep learning model, and a classification stage is also incorporated. First, the MURA dataset was collected from Stanford. Second, the proposed model used a Deep Convolutional Neural Network (DCNN) based on the AlexNet architecture. Bones are classified as fractured or non-fractured through this classification approach. The proposed model was built in Google Colab and trained over several repeated experiments. Performance was evaluated in terms of accuracy, and the results were compared with baseline algorithms as well. Consequently, the findings of this work will be useful for the medical industry.

Keywords: Deep convolutional neural network


Introduction
Fractures are either complete or partial breaks in a bone. The primary cause of a fracture is a force applied to a bone that is greater than what it is structurally capable of supporting. Bone fractures in humans are frequently caused by trauma and stress. Stress fractures are common among athletes (such as acrobats, dancers, and long-distance runners) and military personnel, and are caused by repetitive load-carrying strain on a healthy bone. Traumatic fractures are caused by car accidents, serious falls, or deliberate causes such as physical abuse. Fractures can also happen for several other reasons, such as osteoporosis (a disease that weakens bones), cancer, or the brittle bone condition known as osteogenesis imperfecta [1].
The human skeleton serves many functions, including weight support and protection. Different types of bones have different shapes suited to their particular purposes. The skeleton includes five different bone types: sesamoid, irregular, flat, long, and short, shown in figure 1. Bone fractures are categorized into six kinds [2].
• Transverse Fracture: The simplest kind of fracture, in which the bone breaks in a straight line.
• Oblique Fracture: The break extends diagonally across the bone and is brought on by an external force or rotation.
• Spiral Fracture: Seen in twisting injuries; the break wraps around the bone.
• Comminuted Fracture: The bone splits into multiple fragments.
• Greenstick Fracture: A partial fracture in which the cracked bone has not fully split.
• Impacted Fracture: The bone breaks, but the two shattered ends are forced together.

Errors in fracture diagnosis were associated with incorrect fracture identification in 41% to 80% of cases [3]. According to studies, an examiner's reduced capacity to spot abnormalities may be brought on by the fatigue of interpreting many images of the musculoskeletal system alone. Computer vision systems, for instance, might be able to swiftly recognize suspected fracture cases and offer a reliable second opinion. Traditionally, low-level techniques like noise reduction, segmentation, and feature extraction have been used to predict human bone fractures. Classifiers such as decision trees and k-nearest neighbors are then used to detect and classify leg fractures after a breakpoint is located in the image [4].
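As a toy illustration of the classical pipeline just described, a k-nearest-neighbors classifier over extracted features might look like the following minimal numpy sketch. The feature values, labels, and the `knn_predict` helper are hypothetical, not taken from the cited works:

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Classify a feature vector by majority vote of its k nearest
    training samples, using Euclidean distance."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]          # indices of the k closest samples
    return np.bincount(train_y[nearest]).argmax()

# Hypothetical 2-D features (e.g. edge density, gap width); 1 = fracture
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.85, 0.7],
              [0.1, 0.2], [0.2, 0.1], [0.15, 0.25]])
y = np.array([1, 1, 1, 0, 0, 0])
pred = knn_predict(X, y, np.array([0.82, 0.75]))   # query near the fracture cluster
```

The query lies close to the three fracture examples, so the majority vote among its three nearest neighbors labels it as fractured.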
Because of the advancements in technology and innovations in software, the discipline of medical image processing is gaining widespread recognition in the healthcare industry. When it comes to determining treatment choices, it aids doctors in the diagnosis of illness and the betterment of patient care. Human organs may now be imaged digitally using a variety of cutting-edge machines [5]. Computed Tomography (CT), Magnetic Resonance Imaging (MRI), Ultrasound (US), Positron Emission Tomography (PET), and Single Photon Emission Computed Tomography (SPECT) are some examples of these technologies.
The use of automatic detection systems in the field of medicine is crucial for the management of illnesses, and numerous illness identification and classification methods are available nowadays. Machine learning, a subset of artificial intelligence (AI), uses various sets of rules to automatically detect, classify, and diagnose illness [6].
Deep learning, computer vision, and general imaging have recently emerged as the top AI tools in the medical industry [7]. Artificial neural networks with numerous layers are the foundation of deep learning techniques, which improve performance. X-ray, MRI, CT, and ultrasound scanners are just a few of the medical imaging tools that can be used to capture images of abnormalities, but due to their accessibility and low cost, X-rays are the most often used method for diagnosing bone fractures [8].
In this research, we offer a deep learning model based on AlexNet. Our suggested method for finding and categorizing bone fractures consists of four steps. In the first phase, we automatically preprocess the data before applying the AlexNet deep learning model and retraining the model's top layer. Finally, the suggested model's performance in identification and classification is assessed. The proposed method uses a bounding box to identify the fractured part of the bone and determine whether or not there is a fracture. The fracture is accurately detected and classified dynamically by the model.

Motivation
The main reason for selecting this topic for our thesis is that we live in a developing country with few resources to solve such problems. Artificial intelligence (AI) is the process of reproducing human intelligence in machines that have been trained to think and behave like people. Modern applications of AI within the healthcare industry include image processing, disease analysis and diagnosis, drug development, patient monitoring, and surgery. These applications enable quick, cheap, and reliable diagnosis and treatment, resulting in improved quality of life. We believe this trend must continue, given the success of artificial intelligence in prolonging human life and supporting physicians in better understanding complicated conditions.
Radiology is a vital diagnostic tool, giving essential information for routine injury and disease prevention and evaluation. It employs a range of imaging techniques, each of which has its own set of physical principles and degrees of complexity.
Hence, doctors recommend advanced imaging scans such as CT or MRI to further analyze the problem. CT scans are more expensive than X-rays and are not easily accessible in all hospital facilities. Overall, X-Ray is beneficial in human healthcare in terms of financial cost and diagnostic performance. Therefore, the dataset used in this work consists of X-ray images.

Problem Statement
Increased demand in radiology work caused workload, errors in diagnosis, and delay in results. To minimize it and to make the process cost-effective, there is a need to enhance workflow management. This can be done by providing clinicians with a quick, reliable second opinion in radiograph analysis.

Review of Literature
Johari and Singh (2018) proposed the Canny edge detection method for bone fracture detection. According to the results, Canny's algorithm is the best method for identifying edges, with suitable thresholds and low error rates. Thanks to this framework, doctors were able to get more accurate results in less time and with less effort. Real-world data was used to test the system's capabilities.
Basha et al. (2018) described a fracture detection pipeline whose steps include noise reduction, adaptive histogram equalization, statistical feature extraction, and classification using an artificial neural network. Classifying radiographs using probabilistic neural networks and backpropagation neural networks is an important part of interpreting X-ray images. The X-ray image classification system described there achieved a classification accuracy of 92.3%, proving its usefulness for X-ray image classification. A 2019 study proposed a convolutional neural network for object detection that is capable of detecting and locating fractures on wrist radiographs. Wrist radiography scans from 7356 patients were obtained using an image archiving and transmission system from a hospital. The bounding boxes of all radius and ulna fractures were marked by radiologists. The dataset was separated into training (90%) and validation (10%) sets to create fracture localization models for frontal and lateral images. The study used a Faster R-CNN deep learning model with an Inception-ResNet backbone. Sensitivity and specificity were determined for each fracture, image (or view), and study.
Abbas et al. (2020) applied an R-CNN deep learning model to locate lower leg bone fractures. Traditional methods of fracture detection have struggled to locate such fractures; with the help of the R-CNN model and its region proposal network (RPN), these issues can be addressed more quickly. Using 50 X-ray images, the model's top layer was retrained on an Inception v2 network architecture. The model was complete after 40k steps, when the loss settled at just 0.0005. The proposed model was then tested on its ability to identify and classify anomalies, classifying X-ray images of bone fractures into two categories: fracture and non-fracture. This method achieved a 94% overall accuracy for classification and detection.
Li et al. (2021) noted that artificial intelligence can already be used to diagnose osteoporotic fractures, such as those of the hip, distal radius, and proximal humerus, but that it had not yet been determined whether artificial intelligence could also find vertebral fractures on plain lateral spine radiographs. Their artificial intelligence model diagnosed vertebral fractures with good accuracy, sensitivity, and specificity for osteoporotic fractures of the lumbar vertebrae.

Dataset
Stanford Hospital provided the MURA dataset, a sizable public collection of musculoskeletal radiographs, which was the source of this research's data. The dataset covers seven different skeletal regions: the elbow, finger, forearm, hand, humerus, shoulder, and wrist. Each image carries a binary label that indicates whether or not it shows a fractured bone. The dataset includes 40,000 images in total and is divided into training and test sets.

Experimental Setup
We implemented this research in Python on Google Colab. Google Colab is the simplest way to process images using Python code, and it was used to create the machine learning model; its interface is user-friendly. Even if your computer satisfies the minimum system requirements and specifications, installing packages and fixing installation issues locally can be a hassle. On Colab, you can utilize TPUs and GPUs free of charge from Google, in a Jupyter-notebook-style environment.

Splitting Dataset
Once our model has full access to the dataset, it starts a procedure that splits the data. This procedure is user dependent: the user chooses what portion of the dataset is used to train the model, while the rest is used to test it. In our model, we use a ratio of 8:2 on the given dataset, meaning that 80% of the images are used to train the model and 20% to test it.
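The 8:2 split described above can be sketched as follows; this is a minimal numpy version, assuming the samples are held in a Python list (the `split_dataset` helper and file names are illustrative, not from the paper's code):

```python
import numpy as np

def split_dataset(samples, train_ratio=0.8, seed=42):
    """Shuffle a list of samples and split it into train/test subsets.

    train_ratio=0.8 gives the 8:2 split used in this work.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(samples))      # random order, reproducible via seed
    cut = int(len(samples) * train_ratio)
    train = [samples[i] for i in idx[:cut]]
    test = [samples[i] for i in idx[cut:]]
    return train, test

# Example: 40,000 images -> 32,000 training / 8,000 test
images = [f"img_{i}.png" for i in range(40_000)]
train_set, test_set = split_dataset(images)
```

Shuffling before the cut matters here: without it, a class-ordered dataset would put all fractured images in one split.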

Proposed model
Here we describe our automated system's approach to identifying bone fractures. First, we take input images from the dataset and apply preprocessing techniques to remove noise, including color transformation, noise removal, and image enhancement. Then we apply a data augmentation technique to the dataset images to enlarge the dataset. Finally, we apply a deep neural network for detection and classification. A DCNN model has been created in the suggested work, containing convolution, pooling, flatten, and dense layers [9]. The CNN uses a fully connected layer to identify the features automatically extracted from the input image as either fractured or non-fractured bone. Features are extracted from the image by the convolution layers (CL) and pooling layers; a reasonable filter size of 3x3 is used in each convolution layer to reduce noise. After that, the dense layer performs classification.
(1) Image preprocessing: The goal of preprocessing is to enhance the critical content of the clinical image while minimizing distortion brought on by noise interference. The key steps are color transformation techniques such as RGB to grayscale, extracting data from the red layer, and improving image contrast [10].
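A minimal sketch of the grayscale conversion and contrast improvement steps, using only numpy; the luminosity weights and min-max stretching are standard choices, not details confirmed by the paper:

```python
import numpy as np

def preprocess(rgb):
    """Grayscale conversion followed by min-max contrast stretching.

    rgb: uint8 array of shape (H, W, 3); returns float32 values in [0, 1].
    """
    rgb = rgb.astype(np.float32)
    # Luminosity method for RGB -> grayscale
    gray = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Stretch intensities to the full [0, 1] range to improve contrast
    lo, hi = gray.min(), gray.max()
    return (gray - lo) / (hi - lo + 1e-8)

img = np.random.randint(0, 256, (400, 400, 3), dtype=np.uint8)
out = preprocess(img)
```

The 400x400 shape matches the input resolution quoted for the proposed model below; real radiographs would be loaded from disk rather than generated randomly.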
(2) Proposed AlexNet model design: A deep learning AlexNet model has been created for the research presented here. Convolutional, max pooling, flatten, and fully connected layers are all present. The AlexNet model has a total of eight layers, including five convolutional layers and three fully connected layers, shown in figure 2. The fully connected layers categorize the bones into fractured and non-fractured, while the convolutional and pooling layers automatically extract features from the input images. The input images used in our model have a resolution of 400x400. We implement the AlexNet model by setting the dropout and regularization values: dropout p=0.8 and L2 regularization lambda=1e-4.
Convolution layers (CL): In this work, we have applied five convolution layers: a convolutional layer of 16 feature maps with filter size 3x3, a CL of 32 feature maps with filter size 3x3, a CL of 64 feature maps with filter size 3x3, a CL of 128 feature maps with filter size 3x3, and a CL of 256 feature maps with filter size 3x3. Using these filters, the convolutional layers extract features from the input image.
Max-Pooling Layer: Applied after each convolution layer to decrease the size of the filtered image, this layer concentrates on the image's most important and salient elements. Max-pooling layers of size 2x2 have been applied after each convolution layer in the suggested work.
Flatten layer: The 2-dimensional feature maps are condensed by this layer into a 1-dimensional array that is then fed to a fully connected layer.
Fully Connected Layer: Also referred to as the dense layer, this is where the suggested model predicts whether the bone is fractured or not. The ReLU activation function has been applied in each preceding layer, while the softmax activation function has been applied in the final dense layer.
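The spatial dimensions implied by this architecture can be traced with simple arithmetic. The sketch below assumes 'same' padding for the 3x3 convolutions (so only the 2x2 pooling halves the resolution, with floor division); the paper does not state the padding, so these sizes are illustrative:

```python
def feature_map_sizes(input_size=400, conv_filters=(16, 32, 64, 128, 256)):
    """Trace spatial size through five conv + 2x2 max-pool stages.

    With 'same'-padded 3x3 convolutions, each stage's spatial size is
    just the previous size floor-divided by 2 (the pooling step).
    """
    size, trace = input_size, []
    for f in conv_filters:
        size //= 2                     # 2x2 max pooling after each conv layer
        trace.append((size, f))        # (spatial size, number of feature maps)
    return trace

stages = feature_map_sizes()
final_size, final_filters = stages[-1]
flattened = final_size * final_size * final_filters  # length of the flatten output
```

Under these assumptions, a 400x400 input shrinks to 200, 100, 50, 25, and finally 12 pixels per side, so the flatten layer feeds a 12*12*256 = 36,864-element vector into the dense layers.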

Training Environment

The model was trained on a laptop with 8GB of RAM, an i7 processor, and the Windows 10 operating system. It took approximately 5 minutes to train on our dataset using this configuration. We used Google Colab, along with a CNN framework that gave us access to many helpful libraries.

Experiment
In this experiment, the AlexNet model has been trained using 80% of the samples, and testing has been done using the remaining 20%. The softmax activation function has been utilized, and the model has been trained for 100 epochs. We implement our model using AlexNet and then check its accuracy. The resulting model accuracy is shown in figure 3: the blue line shows the training accuracy and the red line shows the validation accuracy.

Comparative analysis
Yang and Cheng employed contour-based feature selection and an ANN to categorize long bones; the features are chosen using PCA by establishing clusters. The method's accuracy was 82.98%. The Gray-Level Co-occurrence Matrix (GLCM) was utilized by Chai et al. to extract textural features for the identification of long bone fractures, with an accuracy of 86.67%. An SVM was utilized by Tripathi et al. to categorize human bone into fractured and non-fractured, with an accuracy of 84.7%. Our proposed model improves considerably on these approaches [11].
Our approach is also contrasted with Speeded Up Robust Features (SURF) combined with a backpropagation neural network (BPNN) and with a multi-layer perceptron (MLP)-based BPNN. The MLP-based BPNN provides an accuracy of 85%, a sensitivity of 87%, and a specificity of 86%, while SURF combined with BPNN provides an accuracy of 85%, a sensitivity of 82%, and a specificity of 80% [12].
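For reference, the sensitivity and specificity figures quoted in this comparison are defined from the binary confusion matrix. A minimal sketch, with hypothetical labels where 1 = fractured (positive) and 0 = non-fractured (negative):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # fractures correctly found
    fn = np.sum((y_true == 1) & (y_pred == 0))   # fractures missed
    tn = np.sum((y_true == 0) & (y_pred == 0))   # healthy correctly cleared
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false alarms
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

In a fracture-screening context, sensitivity (how many real fractures are caught) is usually the more safety-critical of the two metrics.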

Conclusion
In this research, a deep learning-based system for bone fracture identification and classification has been created. The experiment was conducted using X-ray images of fractured and non-fractured human bones from the MURA dataset. The dataset was enlarged to address the problem of deep learning overfitting on a small sample size. The model achieves a 95% classification accuracy for both healthy and injured bones. We evaluated a variety of deep learning architectures, including VGG-16, ResNet, DenseNet, and AlexNet; compared to the other approaches, AlexNet's accuracy is significantly higher. By choosing a multi-modal approach, the model's accuracy could be further increased.