Sunday, 2 April 2017

Talk - Bits And Bytes - When Horses Meet Computers

For March's University of Nottingham Public Lecture Series talk, Dr Mandy Roshier, from the School of Veterinary Medicine and Science, and Dr Steve North, from the School of Computer Science, join forces to talk about Bits And Bytes - When Horses Meet Computers. @Gav Squires was there and has kindly written this guest post summarising the event, with some linkage added by NSB.

Mandy and Steve are looking at how horses feel and what they want, using computers to measure their behaviour in an objective manner so that we can improve our understanding and the horses' welfare.

The research has focused on animal-computer interaction. Animals have interacted with our technology throughout human history and they now interact with our computer-based systems, whether they know it or not. The lack of an animal perspective on system design can have a negative effect on both animal users and the purpose for which the technology was developed. Animal-computer interaction is a recent field of study and is still very small.

Prehistoric and more recent attempts at depicting horse motion

However, horses have been interacting with our technology for a long time. Before 4000BCE, horses were only used as food for humans. Then between 4000-3000BCE, humans started to use horses for traction and transport. While we still ate horses during this period, this was the beginning of horses having to interact with our technology. These were "hard" technologies: the saddle, bridle, reins, bit, halter, whip, working collar, harness, chariot, cart and plough. Then there was the most significant development - the "hot shod" horse shoe. As the horse enabled humans to travel great distances, trade, carry cargo and share culture and language, it could be argued that the horse was a primary driver of human technological development.

Horse-computer interaction can include environmental, physical training, health and sport performance analysis, and more general health monitoring such as pregnancy monitors and web cams. Some of these techniques can be invasive. However, touch-screen computers are changing the way that scientists carry out equine research, as they can remove the risk of human influence on equine decision making and actions. There is also the Aktiv stable, which controls a horse's environment. Microchips attached to the horse control feeding, watering and social interaction between horses. There are benefits to the horse in that it allows a more natural lifestyle, but it reduces human interaction.

How do horses respond to their environment? How does this differ between sports horses, those kept in regular stables and feral horses? (There is no such thing as a wild horse, only feral ones.) Behaviour is a part of welfare, although it is one that vets can often overlook in favour of the physiological side of animal care. In the 1970s, the Farm Animal Welfare Council came up with their five freedoms, which included a reference to behaviour:

* Freedom from hunger and thirst
* Freedom from discomfort
* Freedom from pain, injury and disease
* Freedom to express normal behaviour
* Freedom from fear and distress

Animal research can include psychology (the study of the mind), ethology (the study of behaviour) and physiology (for example, measuring something like cortisol levels). The physiology and the ethology need to be considered together so that you can tell whether a rise in cortisol levels is due to increased stress or increased excitement, for example. Within these three areas, you can record observable actions and interpret them. Computers help here as they can cope with large volumes of data. This has led to research such as EquiFACS, which records minute movements of the facial muscles. This approach initially started with humans but has moved on to other animals. Similarly, the Horse Grimace Scale was born from research that started with rodents but now offers a way of measuring how much pain a horse is in. Meanwhile, equine motion has fascinated us for years, from the earliest cave paintings to the present day.

Previously, there have only been cumbersome ways of analysing horse motion

Following behaviour is when horses follow each other in a group. There are a number of instances of this:

* Following as reproductive behaviour between a mare and a foal, parenting, early development
* Following as intermale interaction
* Trek as maintenance behaviour - locomotion
* Parallel prance as intermale behaviour
* Chase as either intermale behaviour or as play
* Fleeing in response to a threat or unfamiliar stimulus - stampede

Why is this of interest? If we can learn the intricacies of horse-horse following and horse-human following, it could improve the training of horses. How is movement initiated when horse follows horse? When horses follow and mirror the speed and direction of humans, does the horse think that the human is another horse? Is this imprinting or is it learned?

Mandy and Steve are working on HABIT - the Horse Automated Behaviour Identification Tool. This is considering the use of technology to automatically recognise horse behaviours. The goal is to automatically produce YouTube-quality videos of automatic analysis of horse-horse and horse-human behaviours. From this it should be possible to assess equine behaviour from a welfare perspective and answer the questions, "Are horses interacting with humans as if they were other horses?" and "Are horses in training behaving 'normally'?" From here the programme aims to inform, increase people's knowledge base and aid training. There is a focus on low-stress handling, welfare and safety, and a consideration for species interactions.

There are five bedrocks of HABIT:

1. Know your species. Where did it come from originally? How did it evolve? How was it domesticated? What is its social life like?
2. Sensory capabilities. How do they view the world? Humans and horses see colour differently because humans are tri-chromatic but horses are only di-chromatic. Horses also have very mobile ears
3. Communication. Reading body language - what do these expressions mean?
4. Our verbal and body language. What's in a name? Our words can make a difference in how others interact with a horse
5. Consider the individual, both nature and nurture

There are many breeds of horse

Using computer vision and machine learning, the system performs video-based behaviour identification. It features helper apps that provide a smooth workflow and process longer videos into shorter clips ready for analysis by the main system. Computerised tracking and reliability is difficult in the field, as is automatically identifying different horses.

Computer vision can be inflexible. Real-time processing can also cause some issues compared to retrospective processing: some simple behaviours can be processed in real time, but more complex behaviours need to be processed retrospectively. Longer videos are processed into short ones by first identifying segments that contain horses. The video is then summarised in a similar way to instant highlights in sport. The hardest thing to do is to identify those sections that contain horse action, and this happens frame by frame using Haar Cascades.

Haar Cascades were invented by Viola and Jones in 2001, and are named after Alfred Haar for his early 20th-century work on wavelets. It's a machine learning approach where software is trained on many positive and negative images and then used to detect objects in other images. It combines increasingly complex classifiers into a cascade. It's used for face recognition, fingerprint recognition and number plate detection on motorways.
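The cascade idea can be sketched in a few lines of Python. This is a toy illustration, not HABIT's actual code: the "stages" here are simple threshold stand-ins rather than real trained classifiers, but the control flow - cheap tests first, rejection at the first failure - is the essence of the technique.

```python
# Toy illustration of a classifier cascade. Each stage is a stand-in
# for a trained classifier; later stages are stricter (and, in a real
# detector, more expensive to evaluate).

def make_stage(threshold):
    """Return a 'stage' that passes windows whose score beats the threshold."""
    def stage(window_score):
        return window_score >= threshold
    return stage

cascade = [make_stage(t) for t in (0.2, 0.5, 0.8)]

def detect(window_score):
    """Run a candidate window through every stage; reject at the first failure."""
    for stage in cascade:
        if not stage(window_score):
            return False  # early rejection: later stages never run
    return True  # survived all stages, so report a detection

print(detect(0.9))  # strong candidate passes every stage -> True
print(detect(0.3))  # weak candidate rejected at the second stage -> False
```

The early-exit loop is why cascades are fast: most image windows contain nothing of interest and are discarded after only a handful of cheap checks.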

Haar-like features used in image processing

To train the system, the computer moves each member of a set of graphical shapes, called features, across the test image at different scales and orientations. Each of these features consists of contrasting regions of black and white rectangles. At each position, the code checks whether the image contains a similar range of contrast to the current feature. The best features at a specific position, scale, etc. are retained as useful classifiers. The best classifiers are grouped into increasingly complex stages; these sequential stages form the cascade. A test image can be rejected at an early stage, using a small number of classifiers, thus saving computing power.
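Evaluating one of these black-and-white rectangle features is cheap because the rectangle sums come from an "integral image" (a cumulative sum table), so each sum costs only four lookups regardless of rectangle size. A minimal numpy sketch, using a made-up 2x4 image rather than real photo data:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows then columns: any rectangle sum in the
    original image can then be read off from four corner lookups."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, bottom, right):
    """Sum of img[top:bottom, left:right] via the integral image ii."""
    total = ii[bottom - 1, right - 1]
    if top > 0:
        total -= ii[top - 1, right - 1]
    if left > 0:
        total -= ii[bottom - 1, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

# A two-rectangle Haar-like feature: bright half minus dark half.
img = np.array([[1, 1, 5, 5],
                [1, 1, 5, 5]], dtype=float)
ii = integral_image(img)
left_half = rect_sum(ii, 0, 0, 2, 2)   # sum of the dark region = 4
right_half = rect_sum(ii, 0, 2, 2, 4)  # sum of the bright region = 20
feature = right_half - left_half
print(feature)  # 16.0 -- a large response means strong left/right contrast
```

A real detector evaluates thousands of such features per window, which is only feasible because of this constant-time trick.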

For an efficient detector for HABIT, there need to be unique features of horses that are readily identifiable and visible from multiple viewpoints. The horse "ear detector" has proven to be quite efficient, and when this is combined with detectors for "legs" and "side view", reliability increases.

The HABIT Ear Detector, detecting ears.

Next, we need to analyse the clips for behaviours. In the field of computer vision, this is sometimes called action spotting. First, you train a classifier to decide if a behaviour is present in the test video clip. Then you repeat this for all horse behaviours, of which there are many. Then you can test a video clip against all of the classifiers and report one or more identified behaviours. To train a classifier, you have to extract key frames to build up a bag of visual words. From here, histograms are built that enable you to identify examples of the behaviour that you're looking for.
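The bag-of-visual-words step can be sketched as follows. This is a toy numpy illustration with a made-up vocabulary and made-up descriptors; a real pipeline would extract local descriptors from the key frames and learn the vocabulary by clustering a large training set.

```python
import numpy as np

# A made-up 'vocabulary' of 3 visual words (cluster centres in a 2-D
# descriptor space; real descriptors would have many more dimensions).
vocabulary = np.array([[0.0, 0.0],
                       [1.0, 0.0],
                       [0.0, 1.0]])

# Made-up descriptors extracted from the key frames of one video clip.
descriptors = np.array([[0.1, 0.1],
                        [0.9, 0.1],
                        [0.1, 0.9],
                        [0.0, 0.8]])

# Assign each descriptor to its nearest visual word...
dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
words = dists.argmin(axis=1)

# ...then summarise the whole clip as a normalised histogram of word counts.
hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
hist /= hist.sum()
print(hist)  # [0.25 0.25 0.5] -- this fixed-length vector is what the
             # behaviour classifier is trained on
```

The point of the histogram is that clips of different lengths all reduce to a vector of the same size, which standard classifiers can then compare.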

The next stages of the research are continuing the development of the HABIT system, especially the behaviour identification module. Then work needs to be done on collecting video of specific behaviours for training and testing. From there, it might be possible to start applying HABIT-like approaches to behaviour identification in other species.

The Public Lecture Series returns on the 20th of April at 6pm where Professor Philip Moriarty will talk about When the Quantum Uncertainty Principle Goes Up to 11. For more information, visit the Public Lecture Series website: http://www.nottingham.ac.uk/physics/outreach/science-public-lectures.aspx

Image Sources
All via Gav Squires

Talk - Visual illusions reveal that the world is different from what we think

Following on from last year's success, the Nottinghamshire branch of the British Science Association again put on a series of talks at Science in the Park 2017, held at Wollaton Hall. Professor Peter Mitchell from the University of Nottingham gives a talk on "Our eyes deceive us: visual illusions reveal that the world is different from what we think". @Gav Squires was there and has kindly written this guest post summarising the event, with some linkage added by NSB.

How do we mis-perceive the world? The way that we see the world is inaccurate, and this is demonstrated in the drawings that we produce of it. However, some people do see the world in a more accurate way and are able to better represent what's actually there.

Drawing has always been a core human activity. Even as far back as 30,000 years ago, humans were producing art on the walls of caves. Their knowledge of the world influenced what they drew. By the Renaissance, artists had learned the laws of perspective by creating an invisible eye line. This enabled the three dimensional world to be represented on a two dimensional surface. The way that we draw the world is influenced by the way that we see the world.

Cave Art

When drawing something that we've seen in a photograph, we are guilty of "boundary extension" - putting in details that are outside the frame of the photo. We extrapolate from what we've actually seen. Is this because we want to draw complete objects? No: even when there are no cropped items in the photograph, we will still extend the boundary. This is because of our inherent knowledge of the world - we know that there is more than just what is in the photograph.

Despite the invention of perspective, we have difficulty depicting three dimensions. For example, if we are drawing an object from a photograph, and we know the object has 90 degree angles, then we will depict them as such even if they don't appear to be 90 degrees in the photograph. Our knowledge is contaminating our perception. Even someone like Raphael was guilty of this. We default to our knowledge.

What about the drawings of children? These tend to focus on what children find important so when they are drawing a car, they will make a big deal of the boot as that's where they put their toys on journeys, for example. However, there is an example of a schoolchild of 11 drawing the reading room at the British Library. He only looked at the scene for ten minutes and took neither notes nor sketches but drew the whole thing from memory. He had autism, which explains why he could see the world more accurately and with more objectivity. He even drew the inside of the dome with correct perspective, which is incredibly difficult to do.

Library Drawing

If we look at visual illusions, they reveal how we mis-perceive the world. The devil's triangle appears to be an impossible shape because of the way that we perceive 3D cues. The Shepard's table illusion shows two tables that appear to be different sizes but are, in fact, the same. People with autism are fooled less by this illusion than people without.

Visual Illusions

Shepard Illusion

A lot of time and effort has gone into finding out what is "wrong" with autistic people and trying to "fix" them. However, they also have strengths. We should be looking to build on them and help them achieve their potential.

Image Sources
All via Gav Squires

Talk - Photobiology - Effects of UV Radiation on Normal Skin

Graham Harrison, formerly of the Photobiology Department at St John's Institute of Dermatology, King's College London, and now of the University of Nottingham, comes to Café Sci to talk about Photobiology - Effects of UV Radiation on Normal Skin. @Gav Squires was there and has kindly written this guest post summarising the event, with some linkage added by NSB.

Visible light has a wavelength between 400 and 700 nanometers. Around 1000 nanometers, you're into the infrared while down at 100 nanometers, you're into the ultraviolet. The shorter the wavelength, the more energy it contains.
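The wavelength-energy relationship follows from Planck's relation, E = hc/λ: halve the wavelength and you double the energy per photon. A quick check in Python:

```python
# Photon energy E = h * c / wavelength: shorter wavelength, more energy.
PLANCK = 6.626e-34   # Planck constant, J*s
LIGHT_SPEED = 3.0e8  # speed of light, m/s

def photon_energy_joules(wavelength_nm):
    """Energy of a single photon of the given wavelength (in nanometres)."""
    return PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9)

uv = photon_energy_joules(300)   # a UVB photon
red = photon_energy_joules(600)  # a visible red photon
print(uv / red)  # ~2: half the wavelength carries twice the energy
```

This is why the short-wavelength end of the spectrum (UVB, UVC) does disproportionate damage to molecules such as DNA.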

There are three types of ultraviolet radiation - UVA (longer wavelength), UVB and UVC (shorter wavelength). All of the UVC in sunlight is blocked by the atmosphere. Only 5% of the light that reaches the earth is ultraviolet, and only 5% of that is UVB. UVA penetrates much deeper into the skin, but UVB is responsible for 80% of sunburn.

In fact, UVB is at least 1,000 times more powerful than UVA when it comes to causing sunburn. UVB is also responsible for the production of vitamin D, and as the ozone layer thins, more of it comes through the atmosphere.

We can measure the UV radiation using a radiometer. A broad band one is handheld while a more accurate spectroradiometer costs around £30,000. On the other hand, you can use a biological method and examine skin for levels of sunburn. This is done by examining the Minimum Erythema Dose (MED) - the point at which the skin starts to burn. This MED will change depending on the skin type. There are six skin types in all, ranging from Type-I (white), which has a high risk of sunburn and cancer to Type-VI (black), which has a low risk of both.

Skin Types and their reaction to UV

In all of the interactions between UV and skin, photochemistry precedes photobiology. The sunlight is absorbed by a molecule and its energy changes the molecule. This leads to a multitude of effects, from tanning to sunburn to cell death to vitamin D photosynthesis. When the UV reaches the DNA, it causes photodamage, which can lead to a mutation. It's possible to stain for the antibodies that are evidence of this damage. There was a time when you would have to do a biopsy to look at the scale of the damage, but now it's possible to measure the excretion products in urine.

Photoageing is caused when the tissue is damaged by sun exposure. It's actually damage to the collagen in the skin and is called solar elastosis. It is thought to be a UVA effect. DNA damage is also responsible for tanning. This happens when the pigment-producing cells (melanocytes) in the skin are activated. Of course, the more serious outcome is skin cancer. The UV goes into the skin and causes a mutation where the DNA is repaired erroneously. The P53 gene usually stops tumours, but if it is mutated you can get abnormal cell growth (dysplasia), then immunosuppression, and this can lead to cancer. However, there are a number of factors that can play a part in cancer forming, including physical environment, behavioural causes, non-behavioural causes and any prevention measures taken. With skin cancer, melanomas are only around 10% of the total skin cancers, but they are the ones that kill you.

The P53 Protein

As well as humans, dolphins can get sunburnt. UV radiation can also damage your eyes, leading to cataracts. Glass protects against UVB, so glasses wearers are partly protected, but you can still get a tan standing in a greenhouse. Plastic, meanwhile, will block all UV radiation. Sand, on the other hand, reflects UV, and this is why you can get sunburnt particularly badly at the beach.

UV radiation intensity is greatest at noon, as the sunlight has to pass through less of the atmosphere. There is also much more UV radiation in the summer, although you do need to be careful because even on a cloudy day there is UV damage happening to the skin. Most indoor workers get 50% of their annual UV exposure over a span of just 33 days. This usually includes their summer holiday.

Sunscreens are specifically designed to stop sunburn rather than any of the other effects of UV, such as ageing. Hence they are only interested in stopping the UVB. However, they are generally not used as they should be. The recommended application is 2 mg/cm² of skin. This would require 32g of sunscreen to cover a woman's body and 38g to cover a man's. Since you are supposed to re-apply every three hours, your bottle isn't going to last very long and it's going to get very expensive very quickly.
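Those gram figures follow directly from the 2 mg/cm² application rate and typical adult body surface areas. The surface areas used here (1.6 m² for a woman, 1.9 m² for a man) are assumed values chosen to be consistent with the figures in the talk:

```python
# Sunscreen needed at the recommended rate of 2 mg per square centimetre.
APPLICATION_RATE_MG_PER_CM2 = 2

def sunscreen_grams(body_surface_m2):
    """Grams of sunscreen for one full-body application."""
    cm2 = body_surface_m2 * 10_000                    # 1 m^2 = 10,000 cm^2
    return cm2 * APPLICATION_RATE_MG_PER_CM2 / 1000   # mg -> g

print(sunscreen_grams(1.6))  # 32.0 g, assumed typical female surface area
print(sunscreen_grams(1.9))  # 38.0 g, assumed typical male surface area
```

At roughly 35 g per application, reapplied every few hours, a standard bottle disappears within a couple of beach days.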

But what does the Sun Protection Factor (SPF) on a sunscreen actually mean? Well, if you could usually spend 20 minutes in the sun before burning, then SPF6 would allow you to spend 6 times as long - 120 minutes - in the sun before you burnt. However, these tests are based on thick applications, which isn't how people use it. You will still get some benefit from it regardless: even SPF2 blocks 50% of the burning UV, SPF4 blocks 75% and SPF50 blocks 98%, which is why you can't get a higher SPF than 50. Sunscreen is also tested with very artificial sunlight - equivalent to sunlight at the top of a mountain at the equator.
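These percentages all come from one simple relationship: a sunscreen of a given SPF transmits 1/SPF of the burning UV, so it blocks 1 - 1/SPF of it.

```python
# Fraction of sunburn-causing UV blocked by a sunscreen of a given SPF.
def fraction_blocked(spf):
    return 1 - 1 / spf

for spf in (2, 4, 50):
    print(f"SPF{spf} blocks {fraction_blocked(spf):.0%}")
# SPF2 blocks 50%, SPF4 blocks 75%, SPF50 blocks 98%
```

The curve flattens very quickly: going from SPF4 to SPF50 only adds another 23 percentage points of protection, which is why ever-higher factors offer diminishing returns.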

Titanium Dioxide, a popular sunscreen ingredient

But the sun isn't all bad. As well as vitamin D creation, there is a feelgood factor from UV radiation.

Café Sci returns to The Vat & Fiddle on April the 10th at 8pm when Dr Marcos Alcocer will talk on Food Allergies - What Are They And Why Do We Have Them? For more information, visit the Café Sci MeetUp page: https://www.meetup.com/nottingham-culture-cafe-sci/

Image Sources
P53, Tiox, Skin Types via Gav Squires.