Standard of proof: Experimental Data

Many important regulatory decisions are made by professionals working with limited and conflicting evidence. An experiment was conducted in a merger regulation setting to identify the separate roles of different standards of proof, volumes of evidence, costs of error, and professional versus lay decision making. The experiment was conducted with current practitioners from 11 different jurisdictions, in addition to student subjects. Legal standards of proof significantly affect decisions. Professional judgment produces specific differences, including in how error costs and the volume of evidence are taken into account. The aim of the experiment was to narrow the range of explanations for why professional decision making matters.

The ESRC Centre for Competition Policy (CCP) at the University of East Anglia (UEA) undertakes interdisciplinary research into competition policy and regulation that has real-world policy relevance without compromising academic rigour. It prides itself on the interdisciplinary nature of its research, and its members are drawn from a range of disciplines, including economics, law, business and political science. The Centre was established in September 2004, building on the pre-existing Centre for Competition and Regulation (CCR), with a grant from the Economic and Social Research Council (ESRC). It currently has a total of 26 faculty members (including the Director and a Political Science Mentor), 4 full- and part-time researchers and 23 PhD students.

The dataset was collected using an economic experiment run partly in the experimental laboratory (with university students) and partly online (with university students and competition policy practitioners); the variable 'Treatment' identifies which experimental treatment each line of data refers to. The experiment was conducted between March and July 2008. It was fully computerized and had three experimental treatments. Treatment A was a standard laboratory experiment with university students. Subjects were randomly seated in the laboratory, at computer terminals partitioned to prevent communication by facial or verbal means. Subjects read the experimental instructions and answered a preliminary questionnaire, to check their understanding of the instructions, before proceeding with the tasks. If they answered any questions in this 'understanding check' questionnaire incorrectly, they received feedback on the computer screen and could also ask the experimental supervisors for additional help. We had 57 students participating in Treatment A. Treatment B was a purely online version of the experiment, run again with university students (different from those who participated in Treatment A). Subjects could log in to the experiment remotely from their own workstations and complete it in their own time. They completed the same understanding check questionnaire as in Treatment A and could email the experimental supervisors if anything was unclear (very few did). We had 153 students completing Treatment B online. Treatment C was run online with competition policy practitioners, whom we had approached through the heads of their agencies. Participating agencies came from Austria, Canada, Denmark, the EU, France, Hungary, Ireland, Japan, the Netherlands, Norway, Portugal, South Africa, Spain, and the UK (both the Competition Commission and the Office of Fair Trading). The experimental protocol was exactly the same as in Treatment B. We had 67 practitioners completing Treatment C online.
In each of the treatments, each subject made 24 choices, one for each of 24 scenarios presented to them. The scenarios allow us to consider the separate influences of different standards of proof, volume of evidence and cost of error in the context of merger appraisal, and specifically on whether a merger should be referred for further investigation or blocked. The dataset is at the most disaggregated level, i.e. one row per individual choice that each subject made in the experiment. It is ordered by treatment, then by subject, then by decision-making scenario. There are in total 277 subjects x 24 scenarios = 6,648 observations. Individual-specific variables naturally take the same value for each choice made by the same subject. Certain variables are defined only for competition policy practitioners' data.
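The long ("one row per choice") layout and the observation count described above can be sketched as follows. This is a minimal illustration, not the actual dataset: the record names only the variable 'Treatment', so the 'Subject' and 'Scenario' column names and ID schemes here are assumptions.

```python
# Sketch of the dataset's disaggregated layout: 57 + 153 + 67 = 277 subjects,
# each contributing 24 scenario choices, ordered by treatment, subject, scenario.
# 'Subject' and 'Scenario' are illustrative names (only 'Treatment' is documented).
subjects_per_treatment = {"A": 57, "B": 153, "C": 67}
scenarios_per_subject = 24

rows = []
subject_id = 0
for treatment, n_subjects in subjects_per_treatment.items():
    for _ in range(n_subjects):
        subject_id += 1
        for scenario in range(1, scenarios_per_subject + 1):
            rows.append({"Treatment": treatment,
                         "Subject": subject_id,
                         "Scenario": scenario})

print(len(rows))  # 277 subjects x 24 scenarios = 6648 observations
```

Ordering by treatment, then subject, then scenario means all 24 rows for a given subject are contiguous, which is why individual-specific variables repeat their value across each subject's block of rows.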

Identifier
DOI https://doi.org/10.5255/UKDA-SN-851702
Metadata Access https://datacatalogue.cessda.eu/oai-pmh/v0/oai?verb=GetRecord&metadataPrefix=oai_ddi25&identifier=2f21e4f2da5455987a5631c8d594c4000c5269dbce346941b3e566f8f1ce8cd5
Provenance
Creator Lyons, B, University of East Anglia; Zizzo, D, University of Newcastle
Publisher UK Data Service
Publication Year 2015
Funding Reference ESRC
Rights Bruce Lyons, University of East Anglia. Daniel Zizzo, University of Newcastle; The Data Collection is available for download to users registered with the UK Data Service.
OpenAccess true
Representation
Resource Type Numeric
Discipline Economics; Social and Behavioural Sciences
Spatial Coverage United Kingdom (subjects drawn from across the globe)