Rationale: Why a Failathon?

The learning analytics literature is biased

Publication bias is a well-known problem in many empirical disciplines, notably health and medicine; it is sometimes called the ‘file drawer’ effect. Researchers are incentivised to analyse their data to find positive results; positive results are more likely to be written up; and positive results are more likely to be accepted for publication. Negative results are more likely to languish, unloved, in file drawers.

The EU-funded Learning Analytics Community Exchange (LACE) project is building an Evidence Hub for learning analytics. The LACE Evidence Hub focuses on evidence for and against a set of key propositions about learning analytics; evidence presented on the site is categorised according to whether it supports or undermines a particular proposition.

Preliminary results suggest that learning analytics is no exception to the general trend: more than half the evidence in the Hub is classified as positive, and most of the rest is neutral or mixed. (A full account of this work is in preparation.)

Anecdotally, we know that not all learning analytics research yields positive results, and that the majority of large-scale projects encounter at least some serious problems. There are a few examples of these issues being reported in the learning analytics literature, such as Macfadyen and Dawson’s (2012) paper, but such reports remain unusual.

Further, publication bias is a problem even for well-conducted studies. The situation is worse still for studies that failed to generate interpretable results because of mistakes by the researchers: very few outlets exist for accounts of such work.

Similar considerations apply beyond research, to projects and activities that put learning analytics into practice: successful work is given far more prominence than failure.

In summary, we tend to publicise our successes, and keep our failures to ourselves.

This prevents effective learning within the community

The published literature therefore enables us to learn of and from success. Yet failure can be an extremely rich source of learning: indeed, some learning theories make explicit use of mistakes and failures in the learning process.

Learning from one’s own mistakes can be a very powerful source of expertise. It is more efficient, and less unpleasant, to learn from other people’s mistakes as well; but that is difficult without access to information about failure.

Strong pressures keep it that way

Why is this so? The incentives that contribute to publication bias are considerable, and hard to change. In the medical field, there are moves such as requiring pre-registration of trials and protocols, but to date the evidence that these measures solve the problem is limited. Some journals now explicitly welcome negative results, and a few devoted purely to negative results have sprung up.

However, the human pressure to publicise success and downplay failure is likely to persist, and organisational pressures on researchers and practitioners to demonstrate success seem likely to increase.

A failure workshop can help

Such powerful social and systemic forces cannot be changed quickly or easily.

However, there are ways to learn from each other’s failures outside formal publication routes. The social spaces at LAK already provide informal opportunities for sharing these sorts of experience.

This workshop aims to offer a more explicit and structured space for this to happen: one in which researchers and practitioners can learn from each other’s mistakes.

Many teachers will be familiar with learners’ reluctance to admit mistakes in public. This is one major rationale for the closed discussion forums found in Learning Management Systems and Virtual Learning Environments: if the discussion is entirely open to the world, learners may be too reticent about failure to contribute. For the same reason, this workshop will be semi-private, held under the Chatham House Rule (see Confidentiality).