Last updated on December 11, 2019. This conference program is tentative and subject to change.
Technical Program for Friday, December 6, 2019
FT7T1 Regular Session, Heritage Ballroom
Harvesting and Sensing
09:00-09:20, Paper FT7T1.1
Investigation of Optimal Network Architecture for Asparagus Spear Detection in Robotic Harvesting |
Peebles, Matthew Christopher Scott (University of Waikato), Lim, Shen Hin (The University of Waikato), Duke, Mike (University of Waikato), McGuinness, Benjamin (University of Waikato) |
Keywords: Machine Vision and Robotics for Crop Harvesting, Automation and Robotics in Agriculture
Abstract: The University of Waikato, in collaboration with Robotics Plus Limited, has developed a robotic asparagus harvester that utilises a convolutional neural network for spear detection. This paper serves as a starting point for selecting an optimal network architecture for this purpose. Specifically, it compares the performance of Faster RCNN (FRCNN) and the Single Shot Multibox Detector (SSD) on a dataset collected by the harvester's camera systems during field trials in California. Additionally, the effect of labelling the dataset under both single-class and multi-class paradigms was evaluated. FRCNN, trained using a single-class paradigm, had the best performance of the tested networks, with an F1 score of 0.73, approximately 38% higher than the other networks tested. Multi-class labelling was found to reduce the F1 score by approximately 27% relative to single-class labelling for both FRCNN and SSD. Based on these results we conclude that FRCNN-based detectors are better suited to asparagus detection than SSD-based detectors.
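For readers unfamiliar with the reported metric, the Python sketch below (not from the paper) shows one common way an F1 score for a spear detector could be computed by greedily matching predicted boxes to ground-truth boxes; the 0.5 IoU threshold and the matching rule are illustrative assumptions.

# Illustrative F1 computation for box detections at an assumed IoU threshold.
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def f1_score(predictions, ground_truth, iou_thresh=0.5):
    """Greedy one-to-one matching of predicted boxes to ground-truth boxes."""
    matched, tp = set(), 0
    for p in predictions:
        best, best_iou = None, 0.0
        for i, g in enumerate(ground_truth):
            if i not in matched and iou(p, g) > best_iou:
                best, best_iou = i, iou(p, g)
        if best is not None and best_iou >= iou_thresh:
            matched.add(best)
            tp += 1
    fp = len(predictions) - tp
    fn = len(ground_truth) - tp
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    return 2 * precision * recall / (precision + recall + 1e-9)

# Example: two predicted spears, one matching a ground-truth spear.
print(f1_score([(10, 10, 50, 200), (300, 20, 340, 210)],
               [(12, 8, 52, 205), (100, 15, 140, 190)]))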
09:20-09:40, Paper FT7T1.2
A Perception Pipeline for Robotic Harvesting of Green Asparagus |
Kennedy, Gerard (Australian National University), Ila, Viorela (Brno University of Technology), Mahony, Robert (Australian National University) |
Keywords: Machine Vision and Robotics for Crop Harvesting
Abstract: The global population is expected to pass 9 billion by 2050 requiring ongoing improvements in food production methods. Robotic harvesting offers part of the solution to this challenge and has spurred research in the use of agricultural robots (AgBots) for harvesting horticultural crops over the past several decades. While there has been significant progress in automation of harvest of many crops, robotic systems for crops that require selective harvesting remain far from mature. This paper presents the first steps toward a perception pipeline for a selective green asparagus harvesting robot. We show that a novel single-view representation of information from a multi-camera system can be combined with simple temporal filtering to reliably localise asparagus spears in real-time in lab experiments and difficult outdoor conditions.
09:40-10:00, Paper FT7T1.3
Instance Segmentation and Localization of Strawberries in Farm Conditions for Automatic Fruit Harvesting |
Ge, Yuanyue (Norwegian University of Life Sciences), Xiong, Ya (Norwegian University of Life Sciences), From, Pål Johan (Norwegian University of Life Sciences) |
Keywords: Machine Vision and Robotics for Crop Harvesting, Automation and Robotics in Agriculture
Abstract: Accurate detection and localization of fruits is essential for strawberry harvesting robots. However, segmentation of strawberries in clusters and determination of ripeness remain challenging, and occlusions can result in inaccurate localization of fruits. This paper presents a method for detection, instance segmentation and improved localization of strawberries based on a deep convolutional neural network (DCNN). Four classes were defined in the DCNN model: three for different ripeness levels of strawberries and one for deformed strawberries. Results show that ripe strawberries are the easiest of the four classes to identify. A bounding box refinement method was then proposed to improve localization accuracy by detecting occluded fruits and recovering the actual fruit sizes using bounding boxes. The width-to-height ratio (WHR) of the output masks was used to detect occlusions, and a refinement method based on the solidity of the mask shape was proposed to find the occluded side of the fruit. In the final refinement step, the mean WHR of unoccluded strawberries was used to compensate for the occluded part. The refinement method was assessed on the strawberry variety `Lusa' and shown to estimate and recover the actual fruit sizes. Comparison experiments show that the bounding box overlap between the refined boxes and ground truth is 0.87, while the overlap between the raw detections and ground truth is 0.68. The result indicates that the refinement method can locate fruits more accurately.
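A rough Python sketch of the kind of WHR-based refinement the abstract describes is given below; the mean WHR value, tolerance, and the left/right solidity heuristic are illustrative assumptions rather than the authors' exact rules.

import numpy as np

MEAN_WHR = 0.85   # assumed mean width-to-height ratio of unoccluded berries

def refine_box(mask, mean_whr=MEAN_WHR, tol=0.15):
    """mask: binary 2D array for one detected strawberry instance.
    Returns a possibly widened (x1, y1, x2, y2) bounding box."""
    ys, xs = np.nonzero(mask)
    x1, x2, y1, y2 = xs.min(), xs.max() + 1, ys.min(), ys.max() + 1
    w, h = x2 - x1, y2 - y1
    if w / h >= mean_whr - tol:        # ratio looks normal: assume unoccluded
        return x1, y1, x2, y2
    # Occlusion suspected: compare the filled fraction ("solidity") of the
    # left and right halves of the mask to decide which side is cut off.
    mid = (x1 + x2) // 2
    left_solidity = mask[y1:y2, x1:mid].mean()
    right_solidity = mask[y1:y2, mid:x2].mean()
    recovered_w = int(round(mean_whr * h))   # width expected for this height
    if left_solidity < right_solidity:       # left side looks truncated
        x1 = x2 - recovered_w
    else:                                    # right side looks truncated
        x2 = x1 + recovered_w
    return x1, y1, x2, y2

# Toy example: a narrow mask whose hidden side is recovered from the mean WHR.
m = np.zeros((60, 60), dtype=np.uint8)
m[10:50, 5:25] = 1
print(refine_box(m))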
10:00-10:20, Paper FT7T1.4
Canopy Segmentation Using ResNet for Mechanical Harvesting of Apples |
Zhang, Xin (Washington State University), Fu, Longsheng (Northwest A&F University, Washington State University), Karkee, Manoj (Washington State University), Whiting, Matthew (Washington State University), Zhang, Qin (Washington State University) |
Keywords: Machine Vision and Robotics for Crop Harvesting, Automation and Robotics in Agriculture, Crop Systems/Canopy Architectures, Breeding and Genetics for Precision and Automated Agriculture
Abstract: Fresh market apple is the number one premium fruit crop in Washington state, accounting for more than 60% of U.S. national production every year. With the rapid increase in agricultural labor cost and the decrease in labor availability, mechanical apple harvesting is considered an alternative solution. To further improve the efficiency of the harvester by automatically locating the target tree trunk and/or branch, the tree canopy under full-foliage conditions needs to be analyzed. In this study, a convolutional neural network (CNN)-based semantic segmentation method was adopted to segment the tree canopy using a pre-trained and modified ResNet-18 implementation. In total, 253 images were acquired using a Kinect V2 camera in a commercial “Fuji” apple orchard (trained in the formal architecture with V-axis) during the 2018 harvest season near Prosser, Washington. Of those images, 152 (60%), 51 (20%), and 50 (20%) were used for network training, validation, and testing, respectively. Three classes of pixels were defined for each image: ‘trunk/branch’, ‘apples’, and ‘leaves’ (background). Three commonly adopted evaluation measures were then employed to examine the performance of the model: i) normalized confusion matrix (per-class accuracy), ii) intersection over union (IoU), and iii) boundary-F1 score (BFScore), computed per image. Test results showed that all three classes achieved reasonably high per-class accuracies of 94.8% (trunk/branch), 97.5% (apples), and 94.5% (leaves). IoUs for each class in the same order were 0.408, 0.717, and 0.944, whereas BFScores were 0.761, 0.915, and 0.887, respectively. Among the three classes, the poorest result was achieved for ‘trunk/branch’, primarily due to the proportionally smaller number of such pixels in each image. The results indicated that the efficacy of mechanical apple harvesting could potentially be improved by automatically locating and shaking the trunk/branch under full-foliage canopy during the harvest season.
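For reference, the per-class accuracy and IoU figures quoted above can be derived from a pixel-level confusion matrix as in the following Python sketch; the class order and pixel counts are hypothetical.

import numpy as np

def per_class_metrics(conf):
    """conf[i, j] = number of pixels of true class i predicted as class j."""
    tp = np.diag(conf).astype(float)
    fn = conf.sum(axis=1) - tp      # pixels of each class that were missed
    fp = conf.sum(axis=0) - tp      # pixels wrongly assigned to each class
    accuracy = tp / (tp + fn)       # diagonal of the row-normalized matrix
    iou = tp / (tp + fn + fp)
    return accuracy, iou

# Hypothetical pixel counts for classes (trunk/branch, apples, leaves).
conf = np.array([[ 9500,   200,    300],
                 [  150, 19500,    350],
                 [12000,  6000, 940000]])
acc, iou = per_class_metrics(conf)
print("per-class accuracy:", acc.round(3))
print("per-class IoU:", iou.round(3))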
10:20-10:40, Paper FT7T1.5
Effect of Shaking Amplitude and Capturing Height on Mechanical Harvesting of Fresh Market Apples |
Fu, Han (South China Agricultural University), Duan, Jieli (South China Agricultural University), Karkee, Manoj (Washington State University), He, Long (Pennsylvania State University), Xia, Hongmei (South China Agricultural University), Li, Jun (South China Agricultural University), Zhang, Qin (Washington State University) |
Keywords: Machine Vision and Robotics for Crop Harvesting, Automation and Robotics in Agriculture
Abstract: This study evaluated the effect of shaking amplitude and capturing height on mechanical harvesting of fresh market apples from trellis-trained trees. A linear-forced limb shaker with adjustable shaking amplitude and frequency was designed and fabricated, providing shaking amplitudes of 20, 25, 30, 35 and 40 mm. A catcher filled with a 50 mm layer of peanut foam underneath a piece of cotton was developed. The shaker and catcher, mounted on a movable lifting platform, were integrated into a shake-and-catch harvesting system. The approximate middle of a targeted limb was selected as the shaking point and detached fruits were captured underneath the targeted section. All combinations of the five shaking amplitudes and two capturing heights were tested on ‘Pink Lady’ apple trees trellis-trained in a vertical fruiting wall architecture. A shaking frequency of 20 Hz and a duration of 5 s were used in all tests. Fruit removal efficiency and fruit quality (USDA standard) were adopted to evaluate the harvesting system. Statistical analysis shows that fruit removal efficiency was significantly improved with increasing amplitude within a certain range, and that capturing height significantly affected the percentage of Extra Fancy grade fruit. The results indicate that a shaking amplitude of ~30 mm is sufficient to remove the majority of fruits of the tested variety, and that capturing fruits closer to the targeted limb is promising for obtaining more Extra Fancy grade fruit.
FT7T2 Regular Session, Barnet Room
Aerial Vehicle Sensing
09:00-09:20, Paper FT7T2.1
3D Vision for Precision Dairy Farming |
O' Mahony, Niall (Institute of Technology Tralee), Campbell, Sean (Institute of Technology Tralee), Carvalho, Anderson (Institute of Technology Tralee), Krpalkova, Lenka (Institute of Technology Tralee), Riordan, Daniel (Institute of Technology Tralee), Walsh, Joseph (Institute of Technology Tralee) |
Keywords: Sensing and Automation in Animal Farming, Automation and Robotics in Agriculture, Robust Control Systems for Agriculture
Abstract: 3D vision systems will play an important role in next-generation dairy farming due to the sensing capabilities they provide for automating animal husbandry tasks such as the monitoring, herding, feeding, milking and bedding of animals. This paper reviews 3D computer vision systems and techniques that are, or may be, implemented in Precision Dairy Farming. The review includes evaluations of the applicability of Time of Flight and Stereoscopic Vision systems to agricultural applications, as well as a breakdown of the categories of computer vision algorithms being explored in a variety of use cases. These use cases range from robotic platforms, such as milking robots and autonomous vehicles that must interact closely and safely with animals, to intelligent systems that can identify dairy cattle and detect deviations in health indicators such as Body Condition Score and Locomotion Score. Analysis of each use case shows that systems are required which can operate in unconstrained environments and adapt to variations in herd characteristics, weather conditions, farmyard layout and different scenarios in animal-robot interaction. Considering this requirement, this paper proposes the application of techniques arising from the emerging field of research in Artificial Intelligence known as Geometric Deep Learning.
09:20-09:40, Paper FT7T2.2
Individual Cattle Identification Using a Deep Learning Based Framework |
Qiao, Yongliang (The University of Sydney), Su, Daobilige (Australian Centre for Field Robotics, the University of Sydney,), Kong, He (University of Sydney), Sukkarieh, Salah (The Univ of Sydney), Lomax, Sabrina (The University of Sydney), Clark, Cameron (The University of Sydney) |
Keywords: Sensing and Automation in Animal Farming
Abstract: Individual cattle identification is required for precision livestock farming. Current methods for individual cattle identification require either visual or unique radio-frequency ear tags. We propose a deep learning based framework to identify beef cattle from image sequences, unifying the advantages of both CNN (Convolutional Neural Network) and LSTM (Long Short-Term Memory) network methods. A CNN (Inception-V3) was used to extract features from a rear-view cattle video dataset, and these extracted features were then used to train an LSTM model to capture temporal information and identify each individual animal. A total of 516 rear-view videos of 41 cattle at three time points separated by one month were collected. Our method achieved an accuracy of 88% and 91% for 15-frame and 20-frame video lengths, respectively, and outperformed a framework that uses only the CNN (identification accuracy 57%). Our framework will now be further improved using additional data before the system is integrated into on-farm management processes.
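A minimal sketch of the CNN-plus-LSTM idea is shown below in PyTorch (the paper does not state a framework); the hidden size, the frozen feature extractor and the random weights are illustrative assumptions.

import torch
import torch.nn as nn
from torchvision import models

class CattleClipClassifier(nn.Module):
    def __init__(self, num_cattle=41, hidden=256):
        super().__init__()
        # weights=None avoids a download here; in practice ImageNet weights
        # would be loaded before using the CNN as a feature extractor.
        cnn = models.inception_v3(weights=None, aux_logits=True)
        cnn.fc = nn.Identity()            # keep the 2048-d pooled features
        self.cnn = cnn
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_cattle)

    def forward(self, clip):
        # clip: (batch, frames, 3, 299, 299)
        b, t = clip.shape[:2]
        with torch.no_grad():             # CNN used as a fixed feature extractor
            feats = self.cnn(clip.flatten(0, 1))      # (b*t, 2048)
        feats = feats.view(b, t, -1)
        _, (h, _) = self.lstm(feats)      # hidden state after the last frame
        return self.head(h[-1])           # (b, num_cattle) class logits

model = CattleClipClassifier().eval()
logits = model(torch.randn(2, 15, 3, 299, 299))   # two 15-frame clips
print(logits.shape)                               # torch.Size([2, 41])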
09:40-10:00, Paper FT7T2.3
Use of Unmanned Aerial Vehicles for Livestock Monitoring Based on Streaming K-Means Clustering |
Li, Xiaohui (The University of New South Wales), Xing, Li (The University of Melbourne) |
Keywords: Automation and Robotics in Agriculture, Sensing and Automation with UAVs, Wireless Sensor Network
Abstract: The Unmanned Aerial Vehicle (UAV) as a farming tool has attracted the interest of an increasing number of researchers. In this paper, we study the problem of deploying a group of UAVs to track and monitor livestock such as cattle and sheep in a pasture. We assume that all targeted animals have been fitted with GPS collars and that the mobility of each targeted animal cannot be ignored. We further assume the number of UAVs is sufficient to cover the entire pasture, and we aim to find the optimal UAV deployment that minimizes the average UAV-animal distance. We first introduce a procedure in which the UAVs perform sweep coverage of the entire pasture to acquire the initial locations of all targeted animals. The UAV deployment is then determined and updated by streaming k-means clustering based on the initial locations and the updated locations received from the GPS collars. We demonstrate that our solution always yields a lower average UAV-animal distance than a standard k-means clustering algorithm that does not consider the targeted animals' mobility.
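The streaming update at the core of the approach can be sketched as follows in Python; the initialization from a sweep-coverage pass and the 1/count learning rate are illustrative assumptions, not the authors' exact formulation.

import numpy as np

class StreamingKMeansUAV:
    def __init__(self, initial_positions):
        self.centroids = np.asarray(initial_positions, dtype=float)  # UAV hover points
        self.counts = np.ones(len(self.centroids))                   # per-cluster weights

    def update(self, animal_fixes):
        """animal_fixes: (n, 2) array of newly received GPS positions."""
        for p in np.asarray(animal_fixes, dtype=float):
            k = np.argmin(np.linalg.norm(self.centroids - p, axis=1))
            self.counts[k] += 1
            # Nudge the assigned centroid toward the new fix with a decaying step,
            # instead of re-clustering all fixes from scratch.
            self.centroids[k] += (p - self.centroids[k]) / self.counts[k]
        return self.centroids      # updated target positions for the UAVs

# Initial positions from a sweep-coverage pass, then one batch of collar updates.
uavs = StreamingKMeansUAV([[0, 0], [100, 0], [50, 80]])
print(uavs.update([[5, 3], [98, -2], [55, 85], [60, 78]]))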
10:00-10:20, Paper FT7T2.4
Detection of Chlorophyll Content in Maize Canopy from UAV Imagery |
Qiao, Lang (China Agricultural University), Zhang, Zhiyong (China Agricultural University), Chen, Longsheng (China Agricultural University), Sun, Hong (China Agricultural University), Li, Minzan (China Agricultural University), Li, Li (China Agricultural University), Ma, Junyong (Dry-Land Farming Institute of Hebei Academy of Agricultural And) |
Keywords: Sensing and Automation with UAVs, Crop Monitoring, Crop Systems/Canopy Architectures, Breeding and Genetics for Precision and Automated Agriculture
Abstract: Chlorophyll is an important indicator for evaluating plant photosynthetic ability and growth status. In order to obtain the spatial distribution of chlorophyll content in field crops quickly and non-destructively, chlorophyll content detection of the maize canopy was carried out based on UAV image processing. In this paper, RGB (red, green, blue) images of the maize canopy were acquired in Hengshui, Hebei province, and a processing method was proposed to estimate chlorophyll content in the field. Firstly, each image was segmented based on the HSV (hue, saturation, value) color model to remove the soil background, and parameters related to the color and texture features of the image were extracted. On the one hand, 10 color parameters were involved, including the red, green and blue values, green and red differences, normalized red and green differences, and so on. On the other hand, texture parameters were calculated, including the mean, standard deviation, smoothness, third moment, etc. A detection model of maize chlorophyll content was then established and discussed based on a BP neural network. The experimental results showed that: (1) The detection accuracy of chlorophyll content was increased by combining color and texture features; compared with the color features alone, the determination coefficient of the model increased from 0.6987 to 0.7246 when the texture features were included. (2) Segmentation of the canopy helped to improve the estimation accuracy by eliminating the influence of the soil background: the determination coefficient of the model increased from 0.7246 to 0.7564, while the root mean square error (RMSE) decreased from 4.4659 mg·L⁻¹ to 4.4425 mg·L⁻¹. The chlorophyll content of the maize canopy was calculated at pixel level to indicate the field status, and a distribution map of chlorophyll content in the field maize canopy was drawn using a pseudo-color technique. It provides a tool to visually distinguish field roads from canopy areas and shows differences in the chlorophyll distribution of the plot. UAV imagery can thus help to measure the content and distribution of maize chlorophyll non-destructively and provide support for crop evaluation and precision management.
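The HSV-based soil-background removal step can be illustrated with the following OpenCV sketch; the hue and saturation thresholds are assumed values for a green maize canopy, not those used in the paper.

import cv2
import numpy as np

def segment_canopy(bgr_image):
    """Return the image with non-canopy (soil/background) pixels zeroed out."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Keep moderately-to-strongly saturated green pixels (OpenCV hue range 0-179).
    lower = np.array([35, 40, 40], dtype=np.uint8)
    upper = np.array([90, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    canopy = cv2.bitwise_and(bgr_image, bgr_image, mask=mask)
    coverage = mask.mean() / 255.0          # fraction of pixels kept as canopy
    return canopy, mask, coverage

# Synthetic example: a half-green, half-brown image.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[:, :50] = (40, 160, 60)     # greenish "canopy" (BGR)
img[:, 50:] = (60, 100, 140)    # brownish "soil" (BGR)
canopy, mask, coverage = segment_canopy(img)
print(f"canopy coverage: {coverage:.2f}")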
10:20-10:40, Paper FT7T2.5
Methodology for Stress Identification in Crop Fields Using 4D Height Data |
Byrnes, Walker (Georgia Tech Research Institute), Ahlin, Konrad (Georgia Institute of Technology), Rains, Glen (University of Georgia), McMurray, Gary (Georgia Tech Research Institute) |
Keywords: Crop Monitoring, Crop Yield Estimation/Monitoring/Mapping, Automation and Robotics in Agriculture
Abstract: This paper discusses a technique for identifying stress in plants using 4D models to determine stressed areas based on height and growth rates. The developed algorithm segments the field into low, medium, and high stress regions to warn the grower that an issue might exist. The algorithm was tested on a field of peanut plants, where high-stress regions produced, on average, 20% lower yield than low-stress regions. This metric can provide the grower with an early warning that regions of relatively high stress exist within the field and will have an impact on yield.
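The final segmentation step can be illustrated with a simple Python sketch that splits a per-cell height (or growth-rate) grid into high, medium and low stress regions; the tercile cut-offs used here are an assumption, not the authors' thresholds.

import numpy as np

def stress_map(height_grid):
    """height_grid: 2D array of plant heights; shorter plants = more stress.
    Returns an integer grid: 2 = high, 1 = medium, 0 = low stress."""
    low_cut, high_cut = np.percentile(height_grid, [33.3, 66.7])
    stress = np.full(height_grid.shape, 1, dtype=int)   # medium by default
    stress[height_grid <= low_cut] = 2                  # shortest third: high stress
    stress[height_grid >= high_cut] = 0                 # tallest third: low stress
    return stress

# Toy 4x5 field (heights in metres) with one stunted corner.
heights = np.array([[0.20, 0.22, 0.45, 0.48, 0.50],
                    [0.21, 0.25, 0.46, 0.49, 0.51],
                    [0.40, 0.42, 0.47, 0.52, 0.55],
                    [0.41, 0.43, 0.48, 0.53, 0.56]])
print(stress_map(heights))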
FT8T1 Regular Session, Heritage Ballroom
Range Sensing
11:10-11:30, Paper FT8T1.1
An Evaluation of an Apple Canopy Density Mapping System for a Variable-Rate Sprayer |
Hu, Mengying (UNSW), Whitty, Mark (University of New South Wales) |
Keywords: Crop Systems/Canopy Architectures, Breeding and Genetics for Precision and Automated Agriculture, Precision Agriculture and Variable Rate Technologies, Decision Support Systems
Abstract: This paper proposes methods for evaluating an apple canopy density mapping system as an input to a variable-rate sprayer for both trellis-structured (2D) and standalone (3D) apple orchards. The mobile terrestrial system used in this paper consists of a 2D LiDAR (Light Detection and Ranging), three RGB-D cameras and a GPS-RTK module. A 3D point cloud was generated for each 2D or 3D tree row and then converted to a 2D matrix with density distribution information for the variable-rate sprayer. Quad frames were placed in the trees to obtain ground truth data for GPS validation and canopy density; they were extracted from the 3D point cloud by intensity thresholding and RANSAC, along with their locations and timestamps. Five evaluation methods are discussed in this paper to validate the robustness and repeatability of the canopy density mapping system: quad locations are used to evaluate GPS accuracy; quad density deviation and overall deviation across multiple passes evaluate repeatability; manually classified quad density is compared with automatic density extraction to evaluate correlation; and, although misalignment exists between the point clouds from the two sides of a tree row, this paper compares the two sides of the same row. The results indicate that it might be sufficient to scan from one side only, halving the field work. The proposed system will help decision making for a variable-rate sprayer, and the evaluation methods are practical and easy to apply in similar approaches to canopy density mapping.
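A simplified Python sketch of turning a georeferenced tree-row point cloud into the 2D density matrix consumed by a variable-rate sprayer is given below; the cell size, row length and normalization are assumed values.

import numpy as np

def canopy_density_matrix(points, cell=0.25, max_height=3.0, row_length=10.0):
    """points: (n, 3) array of (x_along_row, y_across_row, z_height) in metres.
    Returns a (height_bins, along_row_bins) matrix of relative point density."""
    x, z = points[:, 0], points[:, 2]
    x_edges = np.arange(0.0, row_length + cell, cell)
    z_edges = np.arange(0.0, max_height + cell, cell)
    counts, _, _ = np.histogram2d(z, x, bins=[z_edges, x_edges])
    return counts / counts.max() if counts.max() > 0 else counts

# Synthetic row: a dense patch of canopy between 4 m and 6 m along the row.
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(4.0, 6.0, 5000),
                       rng.uniform(-0.5, 0.5, 5000),
                       rng.uniform(0.5, 2.5, 5000)])
density = canopy_density_matrix(pts)
print(density.shape, density.max())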
11:30-11:50, Paper FT8T1.2
Studies on Parameter Extraction and Pruning of Tall-Spindle Apple Trees Based on 2D Laser Scanner |
Bai, Jing (China Agricultural University), Xing, Haiqian (China Agricultural University), Ma, Shaochun (China Agricultural University), Wang, Menglong (China Agricultural University) |
Keywords: Machine Vision and Robotics for Crop Harvesting, Agricultural Machinery Guidance and Control, Crop Monitoring
Abstract: With the increasing planting area of tall-spindle apple trees, manual tree pruning has become a labor-intensive and costly part of orchard production, so it is worthwhile to develop mechanical pruning. In this study, parameter extraction for tall-spindle fruit trees based on modern laser scanning technology was carried out, and two methods of tree pruning (overall tree pruning and main-branch pruning) were proposed. The results showed that the relative errors of the DBH, tree height, and crown width were 9.27%, 4.35%, and 7.44%, respectively, and the RMSEs were 8.78 mm, 143 mm, and 116 mm, respectively. In addition, the fruit tree pruning level under the two pruning methods was quantified by tree height, crown width, branch length and spacing. This research is expected to provide technical support for mechanical fruit tree pruning.
11:50-12:10, Paper FT8T1.3
Improving Monocular Depth Prediction in Ambiguous Scenes Using a Single Range Measurement |
Brown, Jasper (Australian Centre for Field Robotics, the University of Sydney,), Sukkarieh, Salah (The Univ of Sydney) |
Keywords: Machine Vision and Robotics for Crop Harvesting, Soil, Plant and Environment Sensing, Sensing and Automation in Animal Farming
Abstract: Depth maps are widely used in robotics, with numerous applications in agricultural tasks. Methods for estimating them from monocular images exist, but this is an ill-posed problem that requires assumptions about object scale and camera focal length. These assumptions may not always be reasonable, and how to deal with them has not been sufficiently explored in the current literature. For example, scenes in agriculture frequently violate the assumption of a single scale per object class and represent a failure case for these methods. To avoid these assumptions when estimating depth maps, we present an approach in which a single actual distance measurement is fused with a monocular image. Our results indicate that this method can outperform an image-only baseline, provided the distance measurement is sampled according to a projective model. We also found that a single measurement can significantly improve accuracy on simulated variable-scale versions of two common public datasets. A hardware implementation of this approach was tested in an agricultural setting, though results were poor. Software and hardware designs are made available.
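The scale ambiguity the paper targets can be illustrated with the simplified Python sketch below, in which a single range measurement rescales a relative depth map; the paper fuses the measurement inside the network via a projective sampling model, so this post-hoc rescaling is only a crude stand-in.

import numpy as np

def rescale_depth(predicted_depth, pixel_uv, measured_range):
    """predicted_depth: (H, W) relative depth map from a monocular network.
    pixel_uv: (u, v) image coordinates where the range sensor points.
    measured_range: metric distance (m) measured along that ray."""
    u, v = pixel_uv
    scale = measured_range / predicted_depth[v, u]
    return predicted_depth * scale

# Toy example: the network predicts depths in arbitrary units; the rangefinder
# says the pixel at (320, 240) is actually 6.2 m away.
pred = np.random.default_rng(1).uniform(1.0, 4.0, size=(480, 640))
metric = rescale_depth(pred, (320, 240), 6.2)
print(round(metric[240, 320], 2))     # 6.2 by construction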
12:10-12:30, Paper FT8T1.4
Pose Estimation and Bin Picking for Deformable Products |
Joffe, Benjamin (Georgia Institute of Technology), Walker, Tevon (Georgia Institute of Technology), Gourdon, Remi (Georgia Institute of Technology), Ahlin, Konrad (Georgia Institute of Technology) |
Keywords: Automation and Robotics in Agriculture, Sensing, Automation and Robotics in Plant Factory, Protected Cultivation and Greenhouses, Sensing, Automation and Robotics for Post-Harvest/Processing
Abstract: Robotic systems in manufacturing applications commonly assume known object geometry and appearance. This simplifies the task for the 3D perception algorithms and allows the manipulation to be more deterministic. However, those approaches are not easily transferable to the agricultural and food domains due to the variability and deformability of natural food. We demonstrate an approach applied to poultry products that allows picking up a whole chicken from an unordered bin using a suction cup gripper, estimating its pose using a Deep Learning approach, and placing it in a canonical orientation where it can be further processed. Our robotic system was experimentally evaluated and is able to generalize to object variations and achieves high accuracy on bin picking and pose estimation tasks in a real-world environment.
FT8T2 Regular Session, Barnet Room
Modelling and Robot Supervision
11:10-11:30, Paper FT8T2.1
Augmented Reality for Supervising Multirobot System in Agricultural Field Operation |
Huuskonen, Janna (Aalto University), Oksanen, Timo (Aalto University) |
Keywords: Automation and Robotics in Agriculture, Agricultural Machinery Guidance and Control, Robust Control Systems for Agriculture
Abstract: Agriculture is shifting from farmers manually operating machines to monitoring autonomous machines. Thus, the task of the farmer becomes fleet management and ensuring safe operation, and situational awareness during the operation is important. Augmented reality (AR) is a powerful tool for visualizing information in real-time. To demonstrate the use of AR in agricultural fleet management, in this paper we present a novel AR system that helps the farmer supervise the operation of two autonomous agricultural machines. The paper discusses the requirements for the AR application, and presents the architecture of the system and the results of a demonstration carried out in a test field.
11:30-11:50, Paper FT8T2.2
Control of Large Vehicle-Manipulators with Human Operator |
Varga, Balint (FZI Research Center for Information Technology), Shahirpour, Arash (FZI Research Center for Information Technology), Schwab, Stefan (Karlsruhe Institute of Technology), Hohmann, Soeren (KIT) |
Keywords: Automation and Robotics in Agriculture, Robust Control Systems for Agriculture, Agricultural Machinery Guidance and Control
Abstract: This paper addresses a control algorithm for mid-sized heavy-duty vehicles with a working manipulator to support the human operator. Such vehicle-manipulators (VMs) are used for ditch cleaning, landscape maintenance and other farming work, such as grass mowing. Operating such systems is challenging, as the human operator has to deal with a dual task: the vehicle has to stay on its path, and the manipulator has to follow its trajectory to fulfil its task. This task is physiologically and physically demanding and requires an expert human worker. To increase efficiency, relieve the human worker and reduce the training time of novice operators, we propose a control concept that automates the vehicle's steering in such a way that the vehicle supports the operator. A further advantage of this concept is that no additional sensors are needed to observe the state of the manipulator: the concept requires inputs solely from the operator, which makes it suitable for real-world application. The dynamic equation of the system is an extended state-of-the-art vehicle model in the Frenét frame. An additional system state is derived from the dynamic model of the vehicle-manipulator to take the human actions into account. This additional state influences the motion of the manipulator when required, which is beneficial if the manipulator cannot follow its trajectory fast enough. The control concept is tested and verified in simulations, where the human is modelled as an optimal controller, to highlight the advantages of the proposed concept.
11:50-12:10, Paper FT8T2.3
Tractor Fuel Rate Modeling and Simulation Using Switching Markov Chains on CAN-Bus Data |
Paraforos, Dimitrios S. (University of Hohenheim), Griepentrog, Hans W. (University of Hohenheim) |
Keywords: Internet of Things, Automation and Robotics in Agriculture
Abstract: Agricultural machinery communication data, beyond their diagnostics functionality, are a valuable source for optimizing the efficiency of the operations performed. An important factor is the fuel consumed during these operations, which can be obtained from the tractor's CAN-Bus (Controller Area Network). Methodologies that can model and simulate in-field fuel rates are therefore increasingly important for machine manufacturers. In this study, fuel rate data during plowing with a mounted reversible moldboard plow were collected by a CAN-Bus data logger, and the data were georeferenced using a low-cost DGNSS (Differential Global Navigation Satellite System) receiver. The data were modeled and simulated using Markov chains, which also proved capable of modeling the operating-mode switching that takes place during headland turning. Based on the calculated Markov transition probability matrices, 10,000 Monte Carlo simulations were performed to produce different realizations of the examined scenarios. Across all Monte Carlo simulations, the methodology predicted the total fuel consumption with a mean difference of 0.9% and a standard deviation of 3.7%, compared to the observed total fuel consumption.
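A single-chain Python sketch of the modelling idea follows: the logged fuel-rate signal is discretized into states, a transition matrix is estimated, and new traces are Monte Carlo-simulated. The number of states and the synthetic data are assumptions, and the in-field/headland mode switching the paper handles with separate chains is omitted.

import numpy as np

def fit_transition_matrix(fuel_rate, n_states=10):
    """Discretize a fuel-rate series and estimate a Markov transition matrix."""
    edges = np.linspace(fuel_rate.min(), fuel_rate.max(), n_states + 1)
    states = np.clip(np.digitize(fuel_rate, edges) - 1, 0, n_states - 1)
    P = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        P[a, b] += 1
    rows = P.sum(axis=1, keepdims=True)
    P = np.where(rows > 0, P / np.maximum(rows, 1), 1.0 / n_states)  # uniform fallback
    centers = (edges[:-1] + edges[1:]) / 2
    return P, centers, states[0]

def simulate(P, centers, start_state, n_steps, rng):
    """Generate one Monte Carlo realization of the fuel-rate trace."""
    s, trace = start_state, []
    for _ in range(n_steps):
        s = rng.choice(len(centers), p=P[s])
        trace.append(centers[s])
    return np.array(trace)

rng = np.random.default_rng(42)
logged = 20 + 5 * np.sin(np.linspace(0, 20, 2000)) + rng.normal(0, 1, 2000)  # L/h, synthetic
P, centers, s0 = fit_transition_matrix(logged)
sims = [simulate(P, centers, s0, len(logged), rng).sum() for _ in range(100)]
print(f"simulated total vs observed: {np.mean(sims):.0f} vs {logged.sum():.0f}")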
12:10-12:30, Paper FT8T2.4
Soil Moisture Forecasting for Irrigation Recommendation |
Brinkhoff, James (University of New England), Hornbuckle, John (Deakin University), Ballester Lurbe, Carlos (Deakin University) |
Keywords: Soil, Plant and Environment Sensing, Crop Monitoring, Decision Support Systems
Abstract: This study integrates measured soil moisture sensor data, a remotely sensed crop vegetation index, and weather data to train models that predict future soil moisture. The study was carried out on a cotton farm, with wireless soil moisture monitoring equipment deployed across five plots. Lasso, Decision Tree, Random Forest and Support Vector Machine modeling methods were trialed. Random Forest models gave consistently good results (mean 7-day prediction error from 8.0 to 16.9 kPa, except in one plot with malfunctioning sensors). Linear regression with two of the most important predictor variables was not as accurate, but allowed extraction of an interpretable model. The system was implemented in Google Cloud Platform and a model was trained continuously through the season. An online irrigation dashboard was created showing previous and forecast soil moisture conditions, along with weather and the normalized difference vegetation index (NDVI). This was used to guide operators in advance of irrigation water needs. The methodology developed in this study could be used as part of a closed-loop sensing and irrigation automation system.
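A minimal sketch of the forecasting step is shown below using scikit-learn; the feature names, synthetic data and 7-day target are illustrative assumptions standing in for the study's sensor, NDVI and weather inputs.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n = 400
df = pd.DataFrame({
    "soil_tension_kpa": rng.uniform(5, 60, n),     # current sensor reading
    "ndvi": rng.uniform(0.3, 0.9, n),              # remotely sensed crop vigour
    "eto_mm": rng.uniform(2, 9, n),                # reference evapotranspiration
    "rain_forecast_mm": rng.uniform(0, 20, n),
    "days_since_irrigation": rng.integers(0, 14, n),
})
# Synthetic target standing in for the soil tension measured 7 days later.
df["tension_in_7_days"] = (df["soil_tension_kpa"] + 3 * df["eto_mm"] * df["ndvi"]
                           - 1.5 * df["rain_forecast_mm"] + rng.normal(0, 3, n))

train, test = df.iloc[:300], df.iloc[300:]
features = df.columns[:-1]
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(train[features], train["tension_in_7_days"])
pred = model.predict(test[features])
print(f"MAE: {mean_absolute_error(test['tension_in_7_days'], pred):.1f} kPa")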
FI3T1 Plenary Session, Heritage Ballroom
Invited Talk 3: Associate Professor Timo Oksanen
13:30-14:10, Paper FI3T1.1
Optimization of Autonomous Vehicle Deployment |
Oksanen, Timo (Aalto University) |
Keywords: Automation and Robotics in Agriculture
Abstract: In the vision of automated agricultural production, autonomous vehicles are considered to be smaller than current tractors and combine harvesters. To achieve the same operational efficiency, multiple autonomous vehicles need to be deployed one way or another. The simplest way is to divide a field into subfields of equal size and assign identical vehicles to each of them. However, even if this works well for simple tillage operations, the approach runs into challenges when the vehicles seed, plant, spread, spray or harvest. In addition, simultaneous and sequentially dependent operations in the same field, such as mowing, raking and baling of forage and hay, introduce even more degrees of freedom into the problem formulation. The combined system of fleet management, coverage path planning, collision avoidance and overall process optimization is not computationally trivial in the general case, even if some shortcuts are available through heuristics. The presentation discusses these problems.
FT9T2 Regular Session, Barnet Room
Sensing
14:10-14:30, Paper FT9T2.1
Purity Detection of Goat Milk Based on Electronic Tongue and Improved Artificial Fish Swarm Optimized Extreme Learning Machine |
Han, Hui (Shandong University of Technology), Wang, Zhiqiang (Shandong University of Technology), Li, Caihong (Shandong University of Technology), Ma, Zeliang (Shandong University of Technology), Yang, Zhengwei (Shandong University of Technology), Ma, Xiyuan (Korea University) |
Keywords: Sensing and Automation in Animal Farming
Abstract: The nutritional value of goat milk is higher than that of cow milk, and goat milk is scarce and expensive. Therefore, many dishonest enterprises mix cow's milk into goat milk and sell it as pure goat milk, which infringes on the rights and interests of consumers. The adulteration of dairy products has a long history, and although many techniques have been applied to detect it, no definitive results have been achieved. The electronic tongue system, based on a virtual instrument, can provide fast detection of goat milk quality. Qualitative identification of goat milk of different purities by kernel principal component analysis (KPCA) achieved 100% accuracy. By comparing four algorithms for optimizing the extreme learning machine (ELM), it was determined that the quantitative model for predicting adulterated goat milk established by the improved artificial fish swarm optimized ELM method had the highest prediction accuracy: the prediction-set coefficient of determination R2 was 0.998, the mean absolute error (MAE) was 0.083, and the root mean square error (RMSE) was 0.011. Owing to its unique advantages, the electronic tongue provides a new idea and method for the detection of food adulteration. As a modern intelligent sensory instrument, the electronic tongue has great potential for brand identification as well as the identification of adulteration in goat milk and goat milk powder.
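For readers unfamiliar with the base model, a basic extreme learning machine can be sketched in a few lines of Python: hidden-layer weights are random and fixed, and only the output weights are solved by least squares. The improved artificial-fish-swarm optimization step described in the abstract is omitted here, and the toy data are assumptions.

import numpy as np

class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))  # random, fixed
        self.b = self.rng.normal(size=self.n_hidden)
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))            # sigmoid hidden layer
        self.beta = np.linalg.pinv(H) @ y                            # least-squares output weights
        return self

    def predict(self, X):
        H = 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))
        return H @ self.beta

# Toy regression: predict adulteration ratio from 6 simulated sensor responses.
rng = np.random.default_rng(1)
ratio = rng.uniform(0, 1, 200)                        # fraction of cow milk
X = np.outer(ratio, rng.normal(size=6)) + rng.normal(0, 0.05, (200, 6))
model = ELM().fit(X[:150], ratio[:150])
err = np.abs(model.predict(X[150:]) - ratio[150:]).mean()
print(f"mean absolute error: {err:.3f}")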
14:30-14:50, Paper FT9T2.2
Classification of Wolfberry from Different Geographical Origins by Using Electronic Tongue and Deep Learning Algorithm |
Yang, Zhengwei (Shandong University of Technology), Wang, Zhiqiang (Shandong University of Technology), Yuan, Wenhao (Shandong University of Technology), Li, Caihong (Shandong University of Technology), Jing, Xiaoyu (Shandong University of Technology), Han, Hui (Shandong University of Technology) |
Keywords: Soil, Plant and Environment Sensing, Automation and Robotics in Agriculture, Agricultural Machinery Guidance and Control
Abstract: Wolfberry is a traditional Chinese food whose price and function are closely related to its geographical origin. Illegal labeling driven by commercial interests has caused serious food safety problems and damaged consumer confidence. In this study, a voltammetric electronic tongue (VE-tongue) combined with a deep learning algorithm was developed to recognize wolfberry samples of different origins. A deep learning model (Convolutional Neural Network, CNN) was trained on 260 wolfberry samples from 4 different geographical origins. To find the best-performing CNN model, the learning rate, optimizer and minibatch size were varied. The best classification accuracy of the CNN was further compared with a traditional machine learning method, BPNN, using the discrete wavelet transform (DWT) for feature extraction. The classification accuracies of CNN, DWT-BPNN and BPNN are 98.27%, 88.46% and 48.08%, respectively. This study provides a novel method for the recognition and classification of wolfberry from different geographical origins, which holds great promise for wide application in geographical origin traceability of agricultural products.