Your editorial on social science research (15 April) highlights the poor replicability of results, and the misuse of this by some to dismiss all social science. As was indicated, in a field as complex as human behaviour, poor replicability can be due to many factors: methodology, misused statistics, variations in sample characteristics and so on.
One factor underlying much of this is not much discussed: a dearth of observation of human behaviour in everyday environments, in the same manner as scientists would observe any other species, in order to find out what the behaviour is and so what needs to be understood.
This dearth of prior observation is understandable – our cultures already have a rich abundance of knowledge about what we do, how to act, and the words to describe this. This cultural knowledge has contributed to the success of Homo sapiens. Much social science understandably uses these terms. The trouble is, these are the wrong sort of terms for science. The natural sciences do not have this problem: their subject matter has no language of its own; the scientists have to develop their own terms.
Søren Kierkegaard said in 1843: “Life can only be understood backwards, but must be lived forwards.” Social science too often uses the terms which have evolved for “living forward” as terms for “understanding backwards”. This reduces the motivation to observe the behaviour which needs understanding, as a natural scientist would. The viewpoint and terms of the members of a culture, the “active insiders”, are used when the would-be scientist should be developing their own “outsider” terms from observation.
Cultural terms change over time, differ between cultures and embrace the subjective, as they should in order to communicate between individuals what to think and do. These terms of the active insider evolve to adapt to changing circumstances. So it is unsurprising that research couched in these slippery, malleable, action-oriented terms embracing subjectivity often fails to be replicated.
Dr John Richer
Oxford
Your editorial is right to call for individual studies to be weighed against the wider evidence base, which is why serious policymaking increasingly relies on systematic reviews that offer a transparent synthesis of all relevant evidence, instead of individual studies and individual expert opinions.
It is also right in its optimism. In many ways, we have far more left to learn about human behaviour and societies than we do about the stars, or the oceans, or even the body. I think this will be the last great frontier of discovery, and perhaps the most rewarding. But the tools we have now in social science are closer to Galileo’s telescope than the modern space observatories that generate trillions of observations every day. It is time to upgrade.
Data has been the fuel of the natural sciences; it will be the fuel of the social sciences, and it is the fuel of AI. Language models will have to exist alongside world models and people models to deliver their promise.
For ourselves and for our economy, we need to invest in public data that is orders of magnitude better in coverage, in speed, in volume, and in detail. That data will be the raw material of both scientific progress and better government.
Will Moy
Chief executive, Campbell Collaboration
Thank you for your editorial on social science research. Your piece gives prominence to an issue – the replicability of scientific research – that might seem niche but is in fact fundamental to establishing ground truths. You have added nuance, clarity and context – for example, the piece describes how motivated groups (such as politicians) can highlight low replication rates in the social sciences to delegitimise scientific evidence.
You rightly observe that a key to improving the robustness of published research is “shifting incentives so that existing results are tested”, but I am more optimistic than you on how this can be done. As it stands, scientists are hired and promoted based largely on their authored works, and zero weight is afforded to their contribution as a peer reviewer.
We could – and should – recognise a researcher’s contribution to science via their peer review activity. Already, Web of Science researcher profiles and ORCID allow authors to keep count of their reviewing activity. It would be a simple matter to update that process so that editors give a bonus point to authors of excellent reviews and deny a point to those whose reviews are useless or worse. Those scores would incentivise researchers to invest time and effort in peer review and so spot specious results before they enter the literature. The scores would also help editors find appropriate reviewers more easily. That is to everyone’s benefit, because double-blind peer review is the least-worst system humanity has yet devised for identifying replicable truths.
Prof David Comerford
Programme director, MSc behavioural science, University of Stirling
