In modern poultry production, thousands of chickens are raised in confined housing systems, which are prone to disease and animal well-being issues. Early disease detection is essential to limit spread and economic loss. Poultry diseases threaten animal welfare and productivity, especially in cage-free systems, where communal environments increase disease transmission risks. Traditional diagnostic methods, though accurate, are often labor-intensive, time-consuming, and unsuitable for continuous monitoring. Fecal characteristics analysis is a visual inspection method used to identify gastrointestinal diseases. According to WIXBIO (2025; https://www.wixbio.com/articles/poultry/chicken-manure-health-guide-five-colors-reveal-flock-health-crises/), healthy chicken manure indicates a flock’s good physical condition (Figure 1).

Figure 1. Manure of Cage-free Hens (photo credit: WIXBIO).

Normal manure can be identified from five characteristics: shape, texture, color, composition, and odor. Shape, texture, and color can be quickly assessed with modern imaging technologies such as machine vision. For example, coccidiosis often causes blood-stained, yellow, watery, or dark brown droppings, while healthy birds’ feces are generally brown with white areas. While manual identification is possible, automated systems provide scalable and consistent monitoring for large-scale operations. Because deep learning systems use layers of neural networks to learn complex patterns from large, unstructured datasets such as images, videos, or audio, fecal images can serve as input for disease detection systems.

Researchers at the University of Georgia developed a disease screening tool (Figure 2) based on machine vision technology to analyze poultry fecal images for fast disease detection.

Figure 2. User interface and the components of the web-based application.

The dataset used in this study was obtained from a published chicken image database consisting of poultry fecal images annotated with their corresponding polymerase chain reaction (PCR)-verified disease status classifications. This widely used dataset was collected from poultry farms in Tanzania between September 2020 and February 2021. The dataset includes four categories based on disease status: Coccidiosis (2,103 images), Healthy (2,057 images), NCD (376 images), and Salmonella (2,276 images). Fecal matter within each image was labeled with bounding box annotations in YOLO format. The aspect ratios of images (width/height) ranged from 0.45 to 2.22, representing good variation in the dataset. The dataset captures a wide range of real-world conditions, including variations in lighting, flooring types, and feces accumulation, reflecting the challenging environments typical of poultry farms. However, the absence of complex farm conditions such as occlusions, and the fact that the images come from a single region, may limit the models’ ability to perform well across different farm conditions. Augmentation was applied to the training dataset for both the detector and classifier to improve the models’ ability to generalize to unseen manure images while maintaining realistic evaluation during validation and testing. This approach ensures that the validation and test sets remain representative of real-world conditions and prevents model overfitting to augmented patterns. The resulting training datasets after augmentation included 3,186 images per class for the classifier and 6,372 total images for the detector, reflecting the larger overall sample size required for robust bounding box localization in the detection task. Some results of manure image augmentation are shown in Figure 3.
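As noted above, the fecal regions are annotated in YOLO format, where each line of a label file stores a class index followed by a box center and size normalized to the range [0, 1]. A minimal sketch of converting one such line to pixel corner coordinates (the example line and 640 × 640 image size are illustrative, not taken from the dataset):

```python
def yolo_to_corners(line, img_w, img_h):
    """Convert one YOLO-format annotation line ('class cx cy w h',
    all values normalized to [0, 1]) into
    (class_id, x_min, y_min, x_max, y_max) in pixel coordinates."""
    cls, cx, cy, w, h = line.split()
    cx, w = float(cx) * img_w, float(w) * img_w
    cy, h = float(cy) * img_h, float(h) * img_h
    return int(cls), cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2

# Example: a single feces box centered in a 640 x 640 image
print(yolo_to_corners("0 0.5 0.5 0.25 0.25", 640, 640))
# → (0, 240.0, 240.0, 400.0, 400.0)
```

Keeping coordinates normalized in the label files, as YOLO does, lets the same annotation apply unchanged when images are resized for training.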

Figure 3. Results of image augmentation. A variety of augmentation techniques were applied to the training dataset using AutoAugment and ImageNet policy.

For fecal region detection, recent advancements in YOLO11 have introduced new convolutional blocks, C3k2 and C2PSA, which improve feature extraction and multi-scale performance while reducing computational complexity, resulting in enhanced detection accuracy and efficiency. Five YOLO11 variants (YOLO11-n, -s, -m, -l, and -x) were trained for 150 epochs on a training dataset with a single class label (feces). Prior to final training, a preliminary run of 100 epochs was conducted to monitor signs of overfitting and guide model selection. If the model shows no signs of overfitting or increasing validation loss, the number of epochs can be raised with an early stopping parameter. Hence, each model was trained for 150 epochs with the early stopping patience set to 50, so that training would stop if performance did not improve for 50 consecutive epochs. The resulting models were evaluated based on their ability to accurately detect poultry feces under diverse real-world conditions. For each training run, images were resized to 640 × 640 pixels, with a batch size of 16 and a learning rate of 0.01. For both detector and classifier training, an NVIDIA RTX 4000 Ada Generation graphical processing unit (GPU) (NVIDIA, Santa Clara, CA, USA) with 20 gigabytes of memory was used. In this study, both the YOLO11n and YOLO11x models achieved an mAP@0.5 of 0.881 (Figure 4) for the feces class, indicating well-balanced detection at a 0.5 IoU threshold.
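The mAP@0.5 metric counts a predicted box as a true positive when its intersection over union (IoU) with a ground-truth box is at least 0.5. A minimal pure-Python sketch of the IoU computation (the box coordinates in the example are hypothetical):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes
    given as (x_min, y_min, x_max, y_max)."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction shifted 50 px from a 100 x 100 ground-truth box
# overlaps by half, giving IoU = 5000 / 15000 ≈ 0.33 — below the
# 0.5 threshold, so it would count as a miss under mAP@0.5.
print(round(iou((0, 0, 100, 100), (50, 0, 150, 100)), 3))
```

Averaging precision over recall levels for detections ranked by confidence, with this IoU rule deciding matches, yields the mAP@0.5 values reported above.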

Figure 4. Feces detection on the test dataset by YOLO11n. The blue detection area shows the class name (feces) of the detected fecal region followed by the confidence of detection.

The integrated web application was built and deployed using Streamlit (Version 1.41.0), an open-source Python-based framework that does not require complex web development skills. This allowed us to focus on implementing the study results in a user-friendly manner. In summary, this study used a publicly available dataset of 6,812 PCR-verified images from commercial farms in Tanzania, categorized into Coccidiosis, Newcastle Disease (NCD), Salmonella, and Healthy. Augmentation was used to address the imbalance present in the dataset, with NCD underrepresented (376 images) compared to the other classes (>2,000 images each). Five YOLO11 detection models were trained, with YOLO11n selected for its high mean average precision (mAP@0.5 = 0.881). For classification, EfficientNet-B0 was chosen over the EfficientNet-B1 variant because of its higher accuracy (99.12% vs. 98.54% for B1). Despite the high class imbalance, B0 also achieved higher precision than B1 on the underrepresented NCD class (1.00 for B0 vs. 0.88 for B1). The system achieved an average total inference time of 25.8 milliseconds, demonstrating real-time capability. Field testing, expanding datasets across different regions, and incorporating additional diseases are required to further validate and enhance the robustness of the system.
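Per-class precision, used above to compare the two classifiers on the underrepresented NCD class, is the fraction of a class's predictions that are actually correct. A minimal sketch with hypothetical counts (the numbers below are illustrative, not taken from the study):

```python
def precision(true_positives, false_positives):
    """Precision = TP / (TP + FP): of all images predicted as a
    given class, the fraction that truly belong to that class."""
    return true_positives / (true_positives + false_positives)

# Hypothetical example: a classifier predicts NCD for 50 images,
# of which 44 really are NCD → precision of 0.88 for that class.
print(precision(44, 6))
```

Because precision is computed per class, it exposes weaknesses on minority classes (such as NCD here) that an overall accuracy figure can mask.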

Further reading:

Dhungana A, Yang X, Paneru B, Dahal S, Lu G, Chai L. An Integrated Deep Learning Approach for Poultry Disease Detection and Classification Based on Analysis of Chicken Manure Images. AgriEngineering. 2025; 7(9):278.

https://doi.org/10.3390/agriengineering7090278