Police could predict spikes in hate crime and prevent them from happening by using a new computer algorithm, according to a new study.

A link between an increase in online hate speech and crimes against minorities in the physical world could allow police forces to allocate more manpower when they notice a surge in “hate tweets” on social media, it has been suggested.

Artificial intelligence developed by Cardiff University’s HateLab project collected data from Twitter and police-recorded crime data from London between August 2013 and August 2014 to analyse the link.

The data showed as the number of “hate tweets” – those deemed to be antagonistic in terms of race, ethnicity or religion – made from one location increased, so did the number of racially and religiously aggravated crimes, including violence, harassment and criminal damage.

The AI identified 294,361 Twitter posts deemed “hateful” during an eight-month period, while 6,572 racially and religiously aggravated crimes were filtered from police figures. These, along with census data, were mapped to one of 4,720 geographical areas within London to analyse any trends.
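The core of this kind of area-level analysis is counting both signals per geographical unit and then measuring how strongly they move together. The sketch below is purely illustrative, not the HateLab code: the area identifiers and counts are invented, and a simple Pearson correlation stands in for the study’s actual statistical modelling.

```python
import math

# Hypothetical per-area counts standing in for geocoded hate tweets and
# police-recorded aggravated offences. Area IDs and numbers are invented
# for illustration only.
hate_tweets = [("E01", 120), ("E02", 45), ("E03", 200), ("E04", 10)]
offences    = [("E01", 8),   ("E02", 3),  ("E03", 15),  ("E04", 1)]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Align both series by area before correlating.
areas = [area for area, _ in hate_tweets]
tweet_counts = dict(hate_tweets)
crime_counts = dict(offences)
r = pearson([tweet_counts[a] for a in areas],
            [crime_counts[a] for a in areas])
print(f"correlation across {len(areas)} areas: r = {r:.2f}")
```

A positive correlation across areas is what would let a police force treat a surge in “hate tweets” from one location as an early-warning signal for offences in the same place.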

Researchers say an algorithm based on their methods could now be used by police to predict spikes in hate crime and stop them from happening by allocating more resources at specific times.

The director of HateLab, Professor Matthew Williams, said the study was the first in the UK to show a link between Twitter hate speech and racially and religiously aggravated offences happening offline.

He said: “Previous research has already established that major events can act as triggers for hate acts. But our analysis confirms this association is present even in the absence of such events.

“The research shows that online hate victimisation is part of a wider process of harm that can begin on social media and then migrate to the physical world.

“Until recently, the seriousness of online hate speech has not been fully recognised. These statistics prove that activities which unfold in the virtual world should not be ignored.”

Professor Williams said that although the data was collected before social media giants introduced stricter hate speech policies, users had since moved to “more underground platforms”.

He added: “In time, our data science solutions will allow us to follow the hate wherever it goes.”

HateLab was set up to measure and counter the problem of hate speech online and offline across the world, and has received more than £1.7 million in funding from the Economic and Social Research Council (ESRC) and the US Department of Justice.