Written by Dominic Weeks, Head of Technology

Could Psycho AI teach us to be better humans?

Sobering stuff over the past weekend, with Jane Wakefield’s report for BBC Tech (widely picked up by other titles later in the week) detailing research at MIT’s Media Lab, where researchers have created a “psycho” AI algorithm, aptly named Norman after Norman Bates in Hitchcock’s Psycho. Norman is built to interpret and describe abstract pictures, but in an inventive research twist, he has been trained on the vile spewings of internet trolls and, in particular, on disturbing images of death.

When interpreting the pictures, Norman saw pain and suffering. When shown more positive images, the system (the researchers didn’t give it a name in this case, though I suppose they could have gone with Mary Poppins or something) made much happier interpretations.


The fixation in media coverage was undoubtedly on the idea that we can create evil AI as well as good, helpful AI – homicidal killing machines, not just cute canine robot companions and virtual assistants. From the UK Government’s perspective, this is a validation of emphasising ethics within the industrial strategy centred on AI and data. Both the Lords’ Select Committee report and the Government’s AI Sector Deal identify the opportunity for “this sceptred isle” – where cricket was invented, a man’s word is his bond, and fair play practically trademarked – to become the global centre for the ethical development of AI.

But there’s something much deeper here, and in my view hugely promising. When technology built to replicate human thought and action is applied and studied, it can act as a mirror. It can also strip away some of the biases that plague us at times – the assumption that we behave in a certain way because of a natural disposition shaped by gender, race, class or our innate faculties.

Norman shows us that negative inputs produce dark outputs, whereas a similar programme trained on more rose-tinted internet content served up far more positive interpretations. Does that not strongly raise the question of the impact on children exposed to negative images and experiences – i.e. while their cognition is still developing, still being trained?
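The point can be sketched with a toy example. This is illustrative only – it is not MIT’s actual Norman model, and all of the training captions below are invented – but it shows how identical code, differing only in what it was trained on, produces opposite “interpretations” of the same ambiguous input:

```python
from collections import Counter

# Toy sketch only -- not the MIT model. Identical code, trained on
# different captions, yields very different "interpretations".
# All training captions are invented for illustration.

def train(captions):
    """Count word frequencies across a set of training captions."""
    model = Counter()
    for caption in captions:
        model.update(caption.lower().split())
    return model

def interpret(model, top_n=3):
    """Stand-in for a caption generator: the 'interpretation' of an
    ambiguous image is simply the model's dominant vocabulary."""
    return [word for word, _ in model.most_common(top_n)]

# Two identical models, differing only in their training data.
norman = train([
    "man falls to his death",
    "death by violent accident death everywhere",
])
poppins = train([
    "a lovely wedding cake",
    "lovely flowers at a lovely picnic",
])

print(interpret(norman))   # dominated by "death"
print(interpret(poppins))  # dominated by "lovely"
```

The model architecture never changes; only the diet does. That is the whole of Norman’s lesson, in miniature.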

On a larger scale, as we build systems that replicate how humans think and act, the behaviours we observe painted onto a blank canvas can hold up a picture of ourselves. When we examine ourselves directly, by contrast, the findings are too often a palimpsest.

Amazon to bring more AI jobs to UK

Perhaps appropriately, in a week in which one of Britain’s best-loved retail names, House of Fraser, decided to shutter 31 stores, Amazon reaffirmed its commitment to the UK with plans for 2,500 jobs focused on machine learning and speech science.

That should certainly grease the wheels of the company’s machine learning plans. Pun intended, given that a butter spillage sent Amazon’s warehouse robots into disarray this week.


This is a pleasing coup and a promising development. However, while not comparing apples with apples (it may not even be comparing fruit with fruit), it is worth noting that the new Amazon initiative promises 2,500 jobs, whereas the House of Fraser closures cut 6,000. Arithmetic does not seem to be on our side in building an AI labour force while maintaining beleaguered employment levels.

Again, the comparison is perhaps trite, but people across the AI landscape should realise that positive news pieces may not be enough to chip away at underlying fears of technology eating jobs. AI and robotics are the poster children for these misgivings. Of course, the macro-level argument – “there has always been automation, and employment keeps rising” – tends to win out. Amazingly, this remains the case even with the unemployment rate at a 42-year low!

News in Brief
