Maize Disease Detection Using Convolutional Neural Network

The necessity for accurate and early identification of crop diseases is one of the primary difficulties facing the agricultural industry. Diseases affect crop quality and can destroy hectares of yield, resulting in significant losses for farmers. Current diagnostic approaches are time-intensive and require highly skilled professionals to study the damaged plants, interpret the symptoms, identify the disease, and recommend appropriate treatments. Maize diseases can cause a significant reduction in both the quality and quantity of agricultural products. Visual inspection is the main approach adopted in practice for the detection and identification of maize diseases; however, it requires continuous oversight by experts, which can be very expensive. The limitations of such techniques have created the need for alternative techniques that can detect and classify diseases at an early stage. In this study, models were trained on an open-source library of around 5000 images, including healthy plant samples. The convolutional neural network (CNN) outperformed the other established models, achieving an overall accuracy of 97% and satisfying the need for a reliable and effective classification model. These findings were then turned into a complete maize disease identification mobile application that is ready for real-world deployment. This application has the potential to provide the agricultural community with the means to promptly diagnose and address issues, reducing the reliance on professional expertise.


Introduction
Maize is an important food crop, feed crop, and industrial raw material crop in most countries. The maize plant possesses a simple stem of nodes and internodes. A pair of large leaves extend off each internode, and the leaves total 8-21 per plant. The leaves are linear or lanceolate (lance-like) with an obvious midrib (primary vein) and can grow from 30 to 100 cm (11.8-39.4 in) in length. The male and female inflorescences (the flower-bearing regions of the plant) are positioned separately on the plant. The male inflorescence is known as the 'tassel', while the female inflorescence is the 'ear'. The ear of the maize is a modified spike, and there may be 1-3 per plant. The maize grains, or 'kernels', are encased in husks and total 30-1000 per ear. The kernels can be white, yellow, red, purple, or black. Maize is an annual plant, surviving for only one growing season, and can reach 2-3 m (7-10 ft) in height. Maize may also be referred to as corn or Indian corn and is believed to originate from Mexico and Central America [1].
With the development of maize production, many kinds of maize diseases have emerged, most of which are caused by fungi, bacteria, and viruses. How to diagnose maize diseases quickly and accurately and take corresponding control measures is of great significance to maize production [2]. Relying only on human visual observation and experience is prone to misdiagnosis and is time-consuming and laborious, so maize disease cannot be diagnosed and treated in time, resulting in low maize production efficiency. Many factors influence the development of disease in maize, including the genetics of hybrids/varieties, the age of the plant at the time of infection, the environment (e.g. soil, climate), weather conditions (e.g. heat, temperature, rain, wind, hail), mixed infections, and genetically diverse pathogen populations. Due to the inherent variability of these factors, the diagnosis of maize diseases can be as difficult in the early stages of individual plant disease as in the early stages of an epidemic. However, the symptoms become diagnostic at some stage in disease development, and a reasonable degree of confidence can be placed in a diagnosis based on these symptoms.
According to statistics, there are more than 80 maize diseases worldwide. At present, some diseases such as sheath blight, rust, northern leaf blight, Curvularia leaf spot, stem base rot, and head smut occur widely and cause serious consequences. Among these diseases, the lesions of sheath blight, rust, and northern leaf blight are found on maize leaves, and their characteristics are apparent. For these diseases, rapid and accurate detection is critical to improve yields, as it can help monitor the crop and enable timely action to treat the diseases [3]. With the continuous development of computer technology, using image recognition algorithms has become an important research direction in crop disease diagnosis and detection. Deep learning methods have achieved good results in research on maize disease identification. However, in order to implement spraying operations in the field in real time, the recognition time for a single picture must be guaranteed in addition to the recognition accuracy.
Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to a lack of the necessary infrastructure. Maize diseases are not only a threat to food security but can also have disastrous consequences for smallholder farmers whose livelihoods depend on healthy crops. The necessity for accurate and early identification of crop diseases is one of the primary difficulties facing agricultural businesses. Diseases have an impact on crop quality and have the potential to wipe out hectares of crop yield, resulting in significant losses for farmers.
Current diagnostic approaches are time-intensive and necessitate the presence of highly skilled professionals to study the damaged plants, comprehend the symptoms, identify the disease, and offer appropriate treatments. The limits of such procedures have made it necessary to seek new methods for detecting and classifying diseases at an early stage.

2 Literature Review

Challenges faced by farmers in maize disease detection
In recent times, drastic climate change and a lack of crop immunity have led to a significant increase in the development of crop diseases. This causes large-scale destruction of crops and reduced agricultural production, and ultimately leads to financial losses for farmers. Due to the rapid development of many diseases and farmers' incomplete knowledge of them, the identification and treatment of diseases have become a major challenge.
Maize diseases can cause a significant reduction in both the quality and quantity of agricultural products. Visual inspection is the main approach adopted in practice for the detection and identification of plant diseases. However, this requires constant supervision by professionals, which can be very expensive.

Leaf Doctor
Leaf Doctor performs quantitative assessments of plant diseases on plant organs such as leaves. Users collect or submit photographs of diseased plant organs and calculate the percentage of diseased tissue. Through the user's touching of the device screen, the algorithm employs user-specified values for up to eight colours of healthy tissue in the photograph. The colour of each pixel is then evaluated for its distance from the healthy colours and assigned a status of either healthy or diseased. Users may slide a threshold bar until satisfied that diseased tissues are represented accurately before the percentage calculation. The assessment data and photographs may be sent by e-mail to any recipient [4].
Testing of this app revealed some inaccuracies. In one experiment, the app was presented with a picture of a human, and surprisingly it reported that the 'crop' was not diseased, despite the fact that the picture was not of a plant. Leaf Doctor is also not limited to specific crops; the application was designed to test all kinds of crops, which is a major challenge for its accuracy.

Plantix
Plantix turns an Android phone into a mobile crop doctor with which users can accurately detect pests and diseases on crops within seconds. Plantix serves as a complete solution for crop production and management [5].
A strength of this application is that it recognises every object captured, tests only crops, and appears to give accurate results. In addition, the application provides a description of the crop and its diseases. Users are not required to log in; they can conduct their tests without being registered in the application. The main issue with this application is that it does not give the user further help when the captured image is not clear. Plantix also detects only a specific number of crops. The app performs very well but still needs some improvements.

Gaps in the existing solutions
After analysing these existing solutions, it was discovered that all of them may exhibit some inaccuracies when images are blurry. All of these solutions require the user to have an internet connection. Most farmers may not have constant access to the internet, which may make it difficult for them to use the applications. Instead, an application should operate without requiring an internet connection, allowing farmers to test images whenever they want, whether they are connected to the internet or not.

3 Development Methodology

Introduction
This section discusses the software methodology used in the execution of the project, examines the technologies and frameworks used in the implementation, and highlights the analysis and design concepts applied.

Requirements Elicitation
The basic principle is to develop a minimum version of the application containing all the basic features and then integrate the additional features through an iterative process.This iterative process consists of a series of instructions to repeat as many times as needed.For this project, these repeated instructions include the development of the application, unit testing, code improvement, and code integration.

Convolutional Neural Network
Convolutional neural networks are distinguished from other neural networks by their superior performance with image, speech, or audio signal inputs. They have three main types of layers: the convolutional layer, the pooling layer, and the fully connected layer.
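To make the roles of the first two layer types concrete, here is a minimal pure-Python sketch of convolution and max pooling on a toy single-channel image. This is an illustration only; the paper does not disclose its implementation code, and the image and kernel values below are made up.

```python
def conv2d(image, kernel):
    """Valid (no-padding) 2-D convolution of a single-channel image."""
    H, W = len(image), len(image[0])
    kH, kW = len(kernel), len(kernel[0])
    out = []
    for i in range(H - kH + 1):
        row = []
        for j in range(W - kW + 1):
            # Sum of elementwise products over the kH x kW window.
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kH) for b in range(kW)))
        out.append(row)
    return out

def max_pool(fmap, size=2, stride=2):
    """Max pooling: keep the largest value in each window."""
    H, W = len(fmap), len(fmap[0])
    return [[max(fmap[i + a][j + b] for a in range(size) for b in range(size))
             for j in range(0, W - size + 1, stride)]
            for i in range(0, H - size + 1, stride)]

img = [[1, 2, 0, 1],
       [3, 1, 1, 0],
       [0, 2, 2, 1],
       [1, 0, 1, 3]]
edge = [[1, 0], [0, -1]]      # toy 2x2 filter
fmap = conv2d(img, edge)      # 4x4 input, 2x2 filter -> 3x3 feature map
pooled = max_pool(fmap)       # 2x2 pooling shrinks the map further
print(fmap, pooled)
```

In a real CNN the pooled feature maps are eventually flattened and passed to the fully connected layers, which perform the final classification.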

Convolutional Neural Network Architecture
A deep CNN model is made up of a finite number of processing layers that can learn various features of the input data (such as images) at various levels of abstraction. The initial layers learn and extract low-level features (with less abstraction), while the deeper layers learn and extract high-level features (with greater abstraction). Figure 1 depicts the basic conceptual model of a CNN, with the different types of layers that make up the CNN: the convolutional layers, pooling layers, fully-connected (FC) layers, and the output layer. Here the input has l = 32 feature maps, there are k = 64 feature maps as output, and the filter size is n = 3 and m = 3. It is important to understand that we do not simply have a 3 * 3 filter but in fact a 3 * 3 * 32 filter, since our input has 32 channels.
As the output of the first convolutional layer, we learn 64 different 3 * 3 * 32 filters, whose total number of weights is n * m * k * l. In addition, there is a bias term for each output feature map. So the total number of parameters is (n * m * l + 1) * k.
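As a quick check of this formula, the parameter count for the layer described above (l = 32 input maps, k = 64 output maps, 3 * 3 filters) can be computed directly. The helper function is illustrative, not from the paper:

```python
def conv_params(n, m, l, k):
    """Parameters of a conv layer with n x m filters, l input feature
    maps, and k output feature maps: n*m*l weights per output map,
    plus one bias per output map."""
    return (n * m * l + 1) * k

# Values from the text: l = 32, k = 64, 3 x 3 filters.
print(conv_params(3, 3, 32, 64))  # (3*3*32 + 1) * 64 = 18496
```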
If we have an input of size W x W x D and K kernels with a spatial size of F, stride S, and amount of zero padding P, then the spatial size of the output volume can be determined by equation (1):

Wout = (W - F + 2P) / S + 1 (1)

The depth of the output volume equals the number of kernels K.
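Equation (1) can be evaluated with a small helper function (hypothetical code, shown only for illustration):

```python
def conv_output_size(W, F, S, P):
    """Equation (1): spatial output size of a convolutional layer."""
    assert (W - F + 2 * P) % S == 0, "hyperparameters do not tile the input"
    return (W - F + 2 * P) // S + 1

# A 32 x 32 input with a 3 x 3 filter, stride 1 and padding 1
# keeps its spatial size: (32 - 3 + 2) / 1 + 1 = 32.
print(conv_output_size(32, 3, 1, 1))  # -> 32
```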

Pooling layer
The pooling layer has no learnable parameters; it is used only to reduce the spatial dimensions of the feature maps.
If we have an activation map of size W x W x D, a pooling kernel of spatial size F, and stride S, then the size of the output volume can be determined by equation (2):

Wout = (W - F) / S + 1 (2)
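Equation (2) can be checked the same way (an illustrative helper, not from the paper):

```python
def pool_output_size(W, F, S):
    """Equation (2): output size of a pooling layer (no padding)."""
    return (W - F) // S + 1

# 2 x 2 max pooling with stride 2 halves a 32 x 32 feature map:
print(pool_output_size(32, 2, 2))  # -> 16
```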

Fully-connected layer
In this layer, each input unit has a separate weight to each output unit. For n inputs and m outputs, the number of weights is n * m. Additionally, this layer has a bias for each output node, giving (n + 1) * m parameters in total.
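As a worked example of the (n + 1) * m count (the layer sizes below are hypothetical, chosen only to illustrate the formula):

```python
def fc_params(n, m):
    """Parameters of a fully connected layer: n*m weights plus
    one bias per output node, i.e. (n + 1) * m."""
    return (n + 1) * m

# e.g. 512 flattened inputs feeding 4 output classes:
print(fc_params(512, 4))  # (512 + 1) * 4 = 2052
```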

Output layer
This layer is also a fully connected layer, so it has (n + 1) * m parameters, where n is the number of inputs and m is the number of outputs.

4 System Implementation and Testing

Hardware and software specifications
For the developed solution, the hardware environment is as follows: an Intel(R) Core(TM) i5-4005U CPU @ 1.90 GHz; an NVIDIA GeForce 940MX GPU; 4 GB of video memory; and a Windows 10 64-bit operating system. The software specifications required the use of Android devices with high-performing cameras (more than 5 megapixels).

Model Evaluation Metrics
This study adopted a well-established classification metric framework based on four fundamental elements used for measuring performance, namely accuracy, precision, recall, and F1-score.
The first metric is accuracy, a de-facto standard for evaluating the efficiency of classification-based models.It represents the number of accurately predicted samples by the model over the total number of samples, indicating the percentage of test samples correctly classified.
The second metric is precision, which expresses the ratio of accurately predicted test data to the sum of correct predictions and those that the model falsely assumes to be correct.It is mathematically calculated as shown in equation (3).
Next is recall, which denotes the percentage of correctly predicted cases over the sum of correct predictions and incorrectly classified cases, as shown in equation (4).
Finally, the F1-score, representing the harmonic mean of precision and recall values, assesses the model's overall performance and provides a robust comparison between different models.Mathematically, it is calculated as shown in equation (5).

Recall = True Positive / (True Positive + False Negative) (4)
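The four metrics can be computed from the confusion-matrix counts as follows. This is a sketch with hypothetical counts, not the study's actual results:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision (eq. 3), recall (eq. 4) and F1-score (eq. 5)
    from true/false positive and negative counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return accuracy, precision, recall, f1

# Hypothetical counts for one disease class:
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, tn=85, fn=15)
print(f"acc={acc:.3f} prec={prec:.2f} rec={rec:.2f} f1={f1:.2f}")
```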

Welcome module
In this module the user views a brief description of maize and some types of diseases that the crop suffers from.Figure 3 represents the user interface.

Disease detection module
This module is the core of the project. From this module, the user can either take a picture using the camera or load a picture from the gallery; after that, the user is able to test the leaf in the selected or captured image. This covers the main functions of the application, from choosing or taking pictures of the plant to the disease detection functionality. First, the camera is used, as shown in Figure 4:

Disease detection module -Gray leaf spot
Gray leaf spot is one of the diseases that maize suffers from; it was analysed in one of the instances, which shows the disease description, prevention, and treatment. Figure 7 is the representation.
Our proposed approach is capable of detecting and classifying a variety of maize illnesses across four classes, the fourth representing typical, healthy leaves. We used an Android application environment to implement the solution, which was the CNN model, and we ran various experiments to test and assess its performance. The accuracy of our developed solution was 98.7 percent.

Recommendations
According to our findings, the produced solution has various flaws, such as the model detecting non-maize crops and categorising them as one of the maize illnesses with poor accuracy. The model should be modified to detect other objects, or the user should receive an error message stating that the program can only detect maize illnesses. The disease treatment module of the solution should also be upgraded so that the user can be confident in the solution obtained from the application.

Fig. 1: Conceptual model of a CNN. Consider a convolutional layer which takes l feature maps as input and has k feature maps as output; the filter size is n * m.

E3S Web of Conferences 469, 00015 (2023) ICEGC'2023 https://doi.org/10.1051/e3sconf/202346900015

Model performance
As the research is based on classification, the metrics in section 4.2 are used to evaluate model performance. Additionally, Figure 2 shows the training and validation accuracy and the training and validation loss obtained from the use of the model.

Fig. 4: Camera image

The image can also be accessed from the phone photo gallery, as shown in Figure 5:

Fig. 5: Photo gallery

Finally, the specific underlying diseases are detected, as shown in Figure 6: