Babylon, the controversial company behind GP at Hand, which is destabilising primary care in London and is set to extend to Birmingham, appears keen to cover up the traces of a discredited test of its online triage service last summer.

The company has been hard at work deleting the details of what was initially a much-vaunted comparative test, in which the chatbot’s performance was presented as superior to that of real trainee GPs.

At first the company was quick to boast that this test proved its software superior to real doctors. But Babylon’s claims immediately came under critical fire from doctors and AI experts, who questioned the validity of the test and revealed the various ways in which it had been skewed to make the chatbot’s performance appear better.

GPs, consultants and IT experts also pointed out that, contrary to the incessant rhetoric from Babylon founder Ali Parsa and others, Babylon’s chatbot software is NOT based on AI at all, or even very innovative.

It is built on ‘Bayesian reasoning’ – a technique that has been used to build expert systems since the 1970s. In other words, the chatbot has not been trained on a dataset and does not “learn”: it only knows what it has been told.

The many errors in its diagnoses that have been reported have been corrected only by human intervention – effectively by reprogramming the machine.
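To make the distinction concrete, here is a minimal sketch of how a Bayesian reasoning triage system of this kind works. This is purely illustrative: the conditions, symptoms and probabilities below are invented for the example, not taken from Babylon’s software. The point is that every number must be hand-entered by a human; the system cannot learn anything beyond its table.

```python
# Illustrative sketch of Bayesian reasoning for symptom triage.
# All conditions, symptoms and probabilities are invented examples.

# Hand-coded knowledge base: prior probability of each condition,
# and P(symptom | condition). The system "knows" nothing beyond these tables,
# so fixing an error means a human editing the numbers, i.e. reprogramming it.
PRIORS = {"cold": 0.6, "flu": 0.3, "meningitis": 0.1}
LIKELIHOODS = {
    "cold":       {"fever": 0.2, "headache": 0.3, "stiff_neck": 0.05},
    "flu":        {"fever": 0.8, "headache": 0.6, "stiff_neck": 0.1},
    "meningitis": {"fever": 0.9, "headache": 0.9, "stiff_neck": 0.8},
}

def posterior(symptoms):
    """Bayes' rule: P(condition | symptoms) is proportional to
    P(condition) * product of P(symptom | condition)."""
    scores = {}
    for condition, prior in PRIORS.items():
        p = prior
        for s in symptoms:
            # A symptom absent from the table gets a small default probability.
            p *= LIKELIHOODS[condition].get(s, 0.01)
        scores[condition] = p
    total = sum(scores.values())
    return {c: p / total for c, p in scores.items()}

# Reporting all three symptoms shifts the posterior heavily towards
# the rarest but most consistent condition.
print(posterior(["fever", "headache", "stiff_neck"]))
```

If a GP reports that the system ranks a condition wrongly, the only remedy is to edit the priors or likelihoods by hand, which is exactly the kind of human correction described above.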

‘AI News’ has since discovered that the video of the test event has now been deleted from Babylon’s YouTube account, and all links to news coverage of the event have been removed from the company’s website.

The link to Babylon’s own conference paper describing the chatbot has also been deleted; in other words, all of the company’s boldest claims for the performance of the software now appear to have been quietly dropped.

When questioned about the deletions by AI News, Babylon’s response was simply the excuse that “As a fast-paced and dynamic health-tech company, Babylon is constantly refreshing the website with new information about our products and services. As such, older content is often removed to make way for the new.”

So yes, they have deleted the data.

Critics have argued that in real life the chatbot’s results would be nowhere near as good as they appeared in the test, and that in some cases dangerously wrong advice could be given. Now it seems Babylon has given up trying to refute them.

