ABSTRACT
A study is presented comparing the effectiveness of unsupervised feature representations with handcrafted features for cattle behaviour classification. Precision management of cattle requires the interactions of individual animals to be continuously monitored on the farm. Consequently, classifiers are trained to infer the behaviour of the animals from observations made by sensors fitted to them. Historically, domain knowledge has driven the generation of features for cattle behaviour classifiers. When new behaviours are introduced into the system, however, the feature set often has to be modified, which requires additional design effort and more data. Autoencoders, in contrast, can skip this design step by learning a common, unsupervised feature representation for training. Whilst stacked autoencoders have successfully represented structured data such as speech, language and images, deep networks have not been used to model cattle motion. Hence, we investigate using a stacked autoencoder to learn a feature representation for cattle behaviour classification. Experimental results demonstrate that the autoencoder features perform reasonably well compared with the statistical features selected using prior knowledge of behaviour motion.
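To make the idea concrete, the sketch below shows a single autoencoder layer (the building block of the stacked architecture mentioned above) trained to reconstruct its input; the encoder activations then serve as unsupervised features for a downstream behaviour classifier. This is a minimal illustration, not the paper's implementation: the data here is random stand-in for windowed accelerometer measurements, and all shapes, layer sizes and learning-rate values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for windowed sensor data: 200 windows,
# 32 summary values per window (shapes are illustrative only).
X = rng.normal(size=(200, 32))
X = (X - X.mean(axis=0)) / X.std(axis=0)  # standardise inputs

n_in, n_hidden = X.shape[1], 8            # compress 32 inputs to 8 features
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder weights
b2 = np.zeros(n_in)

lr = 0.01
losses = []
for epoch in range(500):
    H = np.tanh(X @ W1 + b1)      # encoder: learned feature representation
    X_hat = H @ W2 + b2           # linear decoder: reconstruction of the input
    err = X_hat - X
    losses.append(float((err ** 2).mean()))  # mean squared reconstruction loss

    # Backpropagate the reconstruction loss through decoder and encoder.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    gH = (err @ W2.T) * (1 - H ** 2)         # tanh derivative
    gW1 = X.T @ gH / len(X)
    gb1 = gH.mean(axis=0)

    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The encoder output is the unsupervised feature vector per window;
# in a stacked autoencoder, a second layer would be trained on these.
features = np.tanh(X @ W1 + b1)
print(features.shape)
```

Stacking simply repeats this step: each new layer is trained to reconstruct the previous layer's encoder output, yielding progressively more abstract features without any label- or domain-driven feature design.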