Elsevier

Biosystems Engineering

Volume 100, Issue 4, August 2008, Pages 484-497

Research Paper: AE—Automation and Emerging Technologies
Classification of crops and weeds extracted by active shape models

https://doi.org/10.1016/j.biosystemseng.2008.05.003

Active shape models (ASMs) for the extraction and classification of crops using real field images were investigated. Three sets of images of crop rows with sugar beet plants around the first true leaf stage were used. The data sets contained 276, 322 and 534 samples, equally distributed over crops and weeds. The weed populations varied between the data sets, resulting in 19–53% of the crops being occluded. Three ASMs were constructed using different training images and different description levels. The models managed to correctly extract up to 83% of the crop pixels and to remove up to 83% of the occluding weed pixels. Classification features were calculated from the shapes of the extracted crops and weeds and presented to a k-NN classifier. The classification results for the ASM-extracted plants were compared with those for manually extracted plants. It was judged that 81–87% of all plants extracted by ASM were classified correctly, compared with 85–92% for manually extracted plants.

Introduction

Conventional farming today uses herbicides to control weeds, while organic farming depends on alternative methods such as flaming, steaming and cultivation. Most of these alternative methods treat only the area between the crop rows, leaving the area within the rows requiring hand weeding. To reduce chemical use and the time and cost of hand weeding, alternative approaches to control weeds must be developed.

One approach is to use computer-based signal processing and sensor technology such as computer vision to recognise and localise crops and weeds (Åstrand, 2005). The system can either control precision sprayers to apply herbicide only to the weeds, or guide mechanical weeding tools to physically remove them. In an on-line system, such as one used on an autonomous robot, the vision system needs to determine automatically where the crops and the weeds are. A classifier can then determine the plant type by examining shape and colour features extracted from each plant.
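The shape-and-colour classification step can be as simple as a k-nearest-neighbour vote, the classifier type used later in the paper. The sketch below uses hypothetical feature values (compactness, elongation) purely for illustration:

```python
import numpy as np

def knn_classify(train_feats, train_labels, sample, k=3):
    """Label a plant's feature vector by majority vote among its
    k nearest training samples (Euclidean distance)."""
    d = np.linalg.norm(train_feats - sample, axis=1)
    nearest = train_labels[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

# Hypothetical feature vectors: [compactness, elongation]
train = np.array([[0.80, 1.1], [0.75, 1.2], [0.30, 3.0], [0.25, 2.8]])
labels = np.array(["crop", "crop", "weed", "weed"])
print(knn_classify(train, labels, np.array([0.78, 1.15])))  # crop
```

In practice the feature set and the value of k would be tuned on labelled field data rather than fixed as here.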

The major problems that can occur when using computer vision are occlusion and variation in plant appearance. Occlusion occurs when leaves from crops and weeds partially overlap, so that two or more plants in the image are interpreted as one plant (Fig. 1).

The variations in plant appearance are variations between different samples of the same plant species. The leaves do not face the same direction and are different in size. Furthermore, variations in the soil conditions and the amount of sunlight can result in colour variations and varying differences in size between crops and weeds.

Moreover, when the image is segmented into plants and soil, the resulting plant objects can be single leaf objects or multiple leaf objects (Fig. 2). The multi-leaf objects can contain leaves from the same plant, but it is also very likely that leaves from different plants are included (occlusion). When the objects are merged into plants it can be difficult to discern which objects contain leaves belonging to the same plant.

Extraction of the plants can be carried out in different ways, and the choice of extraction method depends on the application and the plant species. Woebbecke et al. (1995) used morphological shape features to identify plants from different weed species. Onyango and Marchant (2003) used colour and the planting grid to extract cauliflower plants from weeds and background. They found that for cauliflower 82–96% of pixels were classified correctly, and for weeds this range was 68–92%. The planting grid was also used in Åstrand (2005), where the plant material in colour images of sugar beets was extracted from the soil using a linear discriminant in the normalised RGB (red, green, blue) colour space. To distinguish between sugar beets and weeds, features including colour, area and shape were calculated from the extracted plants and combined with a model of the distance between the sown crops. The total classification rate was 90–98% when occluded plants were excluded; the tested data sets contained 5–24% occluded plants. Alchanatis et al. (2005) extracted plant material from the soil using the difference between multi-spectral images at 660 nm and 800 nm. The cotton plants were extracted from the weeds using the local inhomogeneity of the pixel values. For cotton, 86% of the pixels were classified correctly, and the same was true for the weed pixels. Tang et al. (2000) used a genetic algorithm to find colour boundaries for extracting plant material from soil in the HSI (hue, saturation, intensity) colour space. The algorithm did not separate crop pixels from weed pixels. Approximately 90% of the plant and background pixels were classified correctly. These methods rely on colour or spectral information, but the difficulty with such information for plant extraction is that conditions may change from one field to another depending on crop health and soil conditions.

Instead of looking at its colour, the size-independent shape of the plant can be used. Franz et al. (1991) used Fourier descriptors to describe the leaf contours of soybean, velvet leaf, ivy-leaf morning glory and giant foxtail. Depending on the amount of occlusion, 74.5–100% of the plants were correctly identified. Manh et al. (2001) used a deformable template based on the parametric formula of a skeleton and envelope functions. The image was searched for pointed leaf tips; for each leaf tip, the model was placed over it and deformed to match the whole leaf. Out of 600 green foxtail leaves the model extracted 84% correctly. These methods analysed only one leaf at a time; if a plant consisted of several leaves, the leaves had to be combined afterwards. Instead of analysing the leaves separately, Søgaard (2005) considered the whole plant by using active shape models (ASMs) (Cootes and Taylor, 1992; Cootes et al., 1994; Cootes et al., 1995) to extract and distinguish between different weed species. Three weed species were tested using three different models, one model per species. Each plant was tested with all three models, and the model that had the best fit was assumed to be the correct one. The classification rates for the three species were 77%, 65% and 93%. The plants in the test were seedlings with up to two true leaves and no mutual overlapping.
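Fourier descriptors of the kind used by Franz et al. (1991) can be illustrated with a short sketch. The normalisation choices below (dropping the DC term for translation invariance, dividing by the first coefficient for scale invariance, taking magnitudes for rotation invariance) are one common convention, not necessarily the exact one used in that study:

```python
import numpy as np

def fourier_descriptors(contour, n_coeffs=10):
    """Size- and rotation-invariant descriptors of a closed leaf contour.
    contour: (N, 2) array of boundary points, treated as complex numbers."""
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    # Skip F[0] (translation), normalise by |F[1]| (scale), keep
    # magnitudes only (rotation / start-point invariance).
    return np.abs(F[1:n_coeffs + 1]) / np.abs(F[1])

# A circular contour concentrates all energy in the first coefficient,
# so its descriptor vector is approximately [1, 0, 0, ...].
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.column_stack([np.cos(t), np.sin(t)])
print(fourier_descriptors(circle))
```

Leaf contours with lobes or serrations would spread energy into the higher coefficients, which is what makes the descriptors discriminative between species.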

Objective

The objective of the work reported in this paper was to investigate whether ASMs could be used for extraction of plants. The objective was also to investigate whether the ASM-extracted plants could be classified as crops or weeds by comparing the classification results for the ASM-extracted plants to classifiers based on manually extracted plants. ASMs were chosen to study the size-independent shape information of plants instead of the colour or size information, since colour and size can vary

Active shape models

ASMs are deformable templates that can only deform according to criteria defined by a set of training images. The method was presented by Cootes and Taylor (1992) and is suitable for matching objects that can vary in shape, such as hands, insects, plants and medical organs (Cootes and Taylor, 1992; Cootes et al., 1994, Cootes et al., 1995). The method is based on building a statistical model from training images and then using the model to search for similar objects.
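A minimal point-distribution model of the kind underlying ASMs can be sketched as follows. The variance threshold and the ±3√λ clamp on the shape parameters are conventional choices following Cootes et al.; the flat (x, y) array layout is an assumption for illustration:

```python
import numpy as np

def build_shape_model(shapes, var_kept=0.98):
    """Build a point-distribution model from aligned landmark sets.
    shapes: (n_samples, 2*n_landmarks) array of interleaved x,y coords."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1]          # largest eigenvalues first
    evals, evecs = evals[order], evecs[:, order]
    # Keep the fewest modes explaining var_kept of the total variance.
    t = int(np.searchsorted(np.cumsum(evals) / evals.sum(), var_kept)) + 1
    return mean, evecs[:, :t], evals[:t]

def generate_shape(mean, P, evals, b):
    """Deform the mean shape; clamping b to +-3*sqrt(lambda) keeps the
    generated shape within the range seen in training."""
    b = np.clip(b, -3 * np.sqrt(evals), 3 * np.sqrt(evals))
    return mean + P @ b

rng = np.random.default_rng(0)
toy_shapes = rng.normal(size=(20, 10))        # 20 samples, 5 landmarks
mean, P, evals = build_shape_model(toy_shapes)
print(generate_shape(mean, P, evals, np.zeros(len(evals))))  # = mean shape
```

During an ASM search, the shape parameters b are updated iteratively from image evidence (e.g. edge profiles at each landmark) and re-clamped, so the model can only deform in ways seen in the training set.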

Data

Three sets of colour images were sampled using the autonomous robot described in Åstrand and Baerveldt (2002). The image size was 480 by 640 pixels and the resolution was 0.39 mm pixel−1. The images covered an area of 190 by 250 mm and contained both sugar beets and weeds. The colour images were converted to grey scale by extracting the excessive green component, I3 = 2×G − R − B (Ohta et al., 1980), where G, R and B are the green, red and blue components of each pixel. The excessive green component has been
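Assuming 8-bit RGB input, the excessive-green conversion can be sketched as below; the integer cast is an implementation detail added here to avoid uint8 wrap-around, not something stated in the paper:

```python
import numpy as np

def excess_green(rgb):
    """Convert an RGB image (H, W, 3) to the excessive green channel
    I3 = 2*G - R - B, which emphasises plant material over soil."""
    rgb = rgb.astype(np.int32)  # avoid uint8 overflow/underflow
    return 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]

# A green (plant) pixel scores high; a grey (soil) pixel stays near zero.
img = np.array([[[40, 180, 30], [90, 95, 88]]], dtype=np.uint8)
print(excess_green(img))  # [[290  12]]
```

Thresholding this channel then separates plant pixels from soil before any crop/weed discrimination takes place.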

Pixel classification

For a good pixel classification result, both the crop ratio and the weed ratio should be high. Table 5 shows the mean crop and weed ratio per plant for all three data sets and all three models. The crop ratio was calculated using all sugar beet plants in each data set, while the weed ratio was calculated using only the occluded sugar beet plants in each data set. Fig. 14 shows some examples of the ASM search results for occluded sugar beets.

Model 1 gave the highest crop ratio, 77–83%, for
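The crop and weed ratios can be read as per-pixel recall measures. The exact formulas are not given in this excerpt, so the following sketch encodes one plausible interpretation (crop ratio: fraction of true crop pixels retained by the extraction; weed ratio: fraction of occluding weed pixels removed), using hypothetical boolean masks:

```python
import numpy as np

def pixel_ratios(extracted, true_crop, true_weed):
    """extracted, true_crop, true_weed: boolean pixel masks.
    Returns (crop_ratio, weed_ratio) as interpreted above."""
    crop_ratio = (extracted & true_crop).sum() / true_crop.sum()
    weed_ratio = (~extracted & true_weed).sum() / true_weed.sum()
    return crop_ratio, weed_ratio

# Toy 6-pixel example: 4 crop pixels (3 kept), 2 weed pixels (1 removed).
crop = np.array([1, 1, 1, 1, 0, 0], dtype=bool)
weed = np.array([0, 0, 0, 0, 1, 1], dtype=bool)
ext  = np.array([1, 1, 1, 0, 1, 0], dtype=bool)
print(pixel_ratios(ext, crop, weed))  # (0.75, 0.5)
```

Under this reading, a perfect extraction would score 1.0 on both ratios; the trade-off between them is what Table 5 summarises per model.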

Conclusions

The method of using active shape models (ASMs) for extraction and classification of plants was presented. One ASM was used to extract both crops and weeds, and the extracted plants were used in a classifier to distinguish crops from weeds. Three different ASMs of a sugar beet plant were tested on three sets of images including both occluded and non-occluded plants. The differences between the models were the set of training images and the description level. The three data sets contained 276,

Acknowledgements

This work, which owes much to Albert-Jan Baerveldt, was carried out as part of the Mech-Weed project. The project was sponsored by the Swedish Board of Agriculture (SJV) and the Swedish Farmers' Foundation for Agricultural Research (SLF, Stiftelsen Lantbruksforskning).

