A data driven response to the Independent Review of Administrative Law panel on Judicial Review

The MDRxTECH team recently supported the Politics and Law Group’s response to the Independent Review of Administrative Law panel on Judicial Review.

The response was compiled based on Mishcon’s extensive experience acting in judicial review claims, including in two of the most important constitutional law cases in memory (Miller 1 and Miller 2), and on data analysis undertaken of 10,000 judicial review cases over the past ten years.

The Data Science team, led by Dr Alastair Moore, ensured that the response was driven by data. It was apparent from the data analysis that, contrary to the consultation’s general direction, judicial review in the UK is concerned not only with the decision making of central Government Departments, but in fact is a much broader means by which to ensure that public bodies – whether Government Departments or otherwise – act in accordance with the law and follow proper procedures.

In particular, our data analysis demonstrated that:

  1. across the ten-year period assessed, central Government Departments were the defendants in judicial review hearings in 44.5% of all cases (Chart A);
  2. comparing figures year on year, the number of cases brought against central Government Departments in comparison with other defendants has remained relatively stable, i.e. there has not been a sharp increase in cases being brought against Government Departments as assessed against cases being brought against other defendants (Chart B); and
  3. there has been a general decline in judicial review cases being brought since 2013, which is matched both for cases against central Government Departments and against other defendants (Chart C).

We applied a robust methodology to ensure our analysis was rigorous. The original dataset, produced by vLex Justis, comprised: (i) all judgments from 2010 onwards published by the official transcribers for the High Court, Queen’s Bench Division (Administrative Court) (12,987 judgments in total); (ii) all judgments from 2010 onwards published by the official transcribers for the Court of Appeal (Civil Division) (13,360 judgments in total); and (iii) all judgments from 2010 onwards published on the Supreme Court website (1,392 judgments in total).

The initial dataset then underwent an analysis (the Initial Filter) to identify cases potentially related to judicial review, and so to assist our understanding of the distribution of judicial review cases from 2010 – 2020 in the above-named courts, by applying the following conditions (the Conditions):

  • Condition A

Any case with the citation “[year] EWHC number (Admin)” as well as all appeals to the Court of Appeal and the Supreme Court.

  • Condition B

Any case including a government agency/department, public body or authority as a party.

  • Condition C

Any case containing any of the following phrases (the “Content Phrases”):

“judicial review”
“mandamus”
“prohibition”
“certiorari”
“mandatory”
“prohibiting and quashing orders”
“intra vires”
“ultra vires”
“mandamus prohibition”
“prohibition order”
“mandatory order”
“order quashing”
“quashing order”
“writ of prohibition”

For the purposes of the Initial Filter, at least two of the Conditions had to be met in relation to cases in the Queen’s Bench Division (Administrative Court) for a case to be included. vLex Justis also included Court of Appeal cases featuring the relevant party names (Condition B) and all Supreme Court and Court of Appeal cases containing one of the Content Phrases.
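As an illustration only, the "at least two of the Conditions" test described above can be sketched as follows. The citation pattern, party keywords and case structure here are assumptions made for the example, not vLex Justis's actual implementation:

```python
import re

# Excerpt of the Content Phrases listed above (illustrative subset).
CONTENT_PHRASES = [
    "judicial review", "mandamus", "certiorari",
    "ultra vires", "quashing order",
]

def condition_a(case):
    # Condition A: an Administrative Court neutral citation,
    # e.g. "[2015] EWHC 123 (Admin)" (pattern is an assumption).
    return bool(re.search(r"\[\d{4}\] EWHC \d+ \(Admin\)", case["citation"]))

def condition_b(case):
    # Condition B: a public body appears as a party
    # (keyword list is illustrative, not the real matching rule).
    keywords = ("secretary of state", "council", "authority", "ministry")
    return any(k in p.lower() for p in case["parties"] for k in keywords)

def condition_c(case):
    # Condition C: the judgment text contains a Content Phrase.
    text = case["text"].lower()
    return any(phrase in text for phrase in CONTENT_PHRASES)

def passes_initial_filter(case):
    # Administrative Court cases must meet at least two of the three Conditions.
    return sum(c(case) for c in (condition_a, condition_b, condition_c)) >= 2

example = {
    "citation": "[2015] EWHC 123 (Admin)",
    "parties": ["R (Smith)", "Secretary of State for the Home Department"],
    "text": "This is an application for judicial review.",
}
print(passes_initial_filter(example))  # True: all three Conditions are met
```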

Once the Initial Filter had been applied, there were a total of 10,153 judgments. These cases comprised the original JavaScript Object Notation (JSON) dataset (the “Original JSON”) provided by vLex Justis to this Firm. On receipt, this Firm applied a second filter to the Original JSON so as to include only cases from 2010 – 2019, rather than also including an incomplete year of data for 2020. These judgments totalled 9,874 (the “Updated JSON”).
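The second filter can be sketched in Python with pandas; the “judgmentDate” column name follows the Updated JSON described below, while the sample rows are invented for illustration:

```python
import pandas as pd

# Invented sample of the Original JSON loaded into a DataFrame.
original = pd.DataFrame({
    "judgmentDate": ["2009-06-01", "2010-03-15", "2019-11-02", "2020-01-20"],
    "issuer": ["EWHC (Admin)", "EWHC (Admin)", "EWCA (Civ)", "UKSC"],
})

# Keep only judgments from 2010 - 2019, dropping the incomplete 2020 data.
year = pd.to_datetime(original["judgmentDate"]).dt.year
updated = original[year.between(2010, 2019)].reset_index(drop=True)
print(len(updated))  # 2 of the 4 sample rows fall within 2010 - 2019
```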

The Updated JSON included an “issuer” column. This identified the relevant court giving the judgment: High Court, Queen’s Bench Division (Administrative Court), Court of Appeal (Civil Division) or Supreme Court.

The cases were then grouped by “issuer” and coded to calculate the number of cases per court group.

The graph was then generated as a visual comparison tool to assess the number of cases per court.
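A minimal pandas sketch of this per-court grouping, with invented sample rows in place of the 9,874 real judgments:

```python
import pandas as pd

# Invented sample of the Updated JSON's "issuer" column.
updated = pd.DataFrame({"issuer": [
    "High Court, Queen's Bench Division (Administrative Court)",
    "High Court, Queen's Bench Division (Administrative Court)",
    "Court of Appeal (Civil Division)",
    "Supreme Court",
]})

# Group by "issuer" and count the cases per court group.
per_court = updated.groupby("issuer").size()
# per_court.plot(kind="bar") would then draw the comparison chart.
```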

The Updated JSON included a “judgmentDate” column in the form YYYY-MM-DD.

The year (“YYYY”) was extracted from the “judgmentDate” column and included in a new column, “Year”.

These cases were then grouped by year using the new “Year” column and coded to calculate the number of cases per year group from 2010 – 2019.

The graph was then generated as a visual comparison tool to assess the number of cases per year.
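The year extraction and per-year grouping described above can be sketched as follows (sample dates invented):

```python
import pandas as pd

# Invented sample of the "judgmentDate" column (YYYY-MM-DD).
updated = pd.DataFrame({
    "judgmentDate": ["2010-03-15", "2010-07-01", "2013-05-09", "2019-11-02"],
})

# Extract "YYYY" into a new "Year" column, then count cases per year.
updated["Year"] = updated["judgmentDate"].str[:4].astype(int)
per_year = updated.groupby("Year").size()
```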

All cases were first grouped by “Year” using the new column described at 2.2 above.

Within each year from 2010 – 2019, the cases were grouped by “issuer” (i.e. the relevant court) using the column from the Updated JSON.

The results were coded to calculate the number of cases per year per court.

The stacked bar chart was then generated to assist in visualising the distribution of cases by court and year. The total height of each bar corresponds to the total number of cases for each year. The colour of each bar corresponds to the number of cases per court within that year.
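The year-by-court cross-tabulation behind the stacked bar chart can be sketched as follows (court names abbreviated and rows invented):

```python
import pandas as pd

# Invented sample with both "Year" and "issuer" columns.
updated = pd.DataFrame({
    "Year": [2010, 2010, 2010, 2011],
    "issuer": ["Admin Court", "Admin Court", "Court of Appeal", "Supreme Court"],
})

# Count cases per year per court; each row of the result is one bar,
# and each column is one colour segment within that bar.
by_year_court = pd.crosstab(updated["Year"], updated["issuer"])
# by_year_court.plot(kind="bar", stacked=True) would draw the stacked chart.
```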

The Updated JSON included a “title” column. The Defendant from each case was extracted from this column and a new “Defendant” column was created.

The “Defendant” column was then filtered into two groups: cases with the Government as Defendant (the “Government Group”) and the remaining cases with other Defendants (the “Rest Group”). A list of content phrases was created to capture all variations of names of Defendants (including misspellings) that fell into the Government Group (“Government Content Phrases”):

“secretary of state”
“secretrary of state for justice”
“secrtary of state for the home department”
“secetary of state for business, innovation and skills”
“secreatary of state for the home department”
“secretrary of state for the home department”
“the secreatary of state for the home department”
“secrtary of state for communities and local government and another”
“ecretary of state from the home department”
“ecretary of state for the home department”
“sectretary of state for the home department”
“seretary of state for the home department”
“secretry of state for the home department”
“lord chancellor”
“prime minister”
“chancellor of the exchequer”
“attorney general’s office”
“sshd”
“home office”
“uk border force”
“secretary of state for the home department”
“ministry”

A new “Gov Defendant” column was then created and linked to the “Defendant” column. The two columns were then coded as follows:

2.4.1 Any Defendant in the “Defendant” column including one or more of the Government Content Phrases was categorised with the value “Government” in the “Gov Defendant” column.

2.4.2 Any Defendant in the “Defendant” column not including any of the Government Content Phrases was categorised with the value “rest” in the “Gov Defendant” column.

The results were then coded to calculate the number of cases with defendants in the Government Group as compared with the Rest Group.
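This phrase-matching categorisation can be sketched in pandas; only a short excerpt of the Government Content Phrases is used here, and the Defendant names are invented:

```python
import pandas as pd

# Illustrative excerpt of the Government Content Phrases listed above.
GOV_PHRASES = ["secretary of state", "lord chancellor", "home office", "sshd"]

# Invented sample of the "Defendant" column.
defendants = pd.Series([
    "Secretary of State for the Home Department",
    "London Borough of Camden",
    "Lord Chancellor",
])

# Match any Government Content Phrase; map matches to "Government",
# everything else to "rest" (the "Gov Defendant" values).
pattern = "|".join(GOV_PHRASES)
gov_defendant = defendants.str.lower().str.contains(pattern).map(
    {True: "Government", False: "rest"})
counts = gov_defendant.value_counts()
# counts.plot(kind="pie") would draw the percentage-distribution pie chart.
```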

The pie chart was then created to assist in visualising the percentage distribution of Defendants in the Government Group versus the Rest Group.

All cases were first grouped by “Year” using the new column described at 2.2 above.

Within each year from 2010 – 2019, the cases were grouped by “Gov Defendant” (the new column described at 2.4 above).

The results were then coded to calculate the number of cases in each year with Defendants in the Government Group as compared to the Rest Group.

The line chart was then generated as a visual comparison tool to assess the distribution of cases per year with Defendants in the Government Group versus the Rest Group. The height of the red line on the graph at each year represents the number of cases with Defendants in the Government Group. The blue line represents the number of cases with Defendants in the Rest Group.

The cases were grouped as above at 2.5, but the results were analysed as percentages to show the proportion of cases with Defendants in the Government Group as compared to the Rest Group in each year from 2010 – 2019.
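The per-year split and its percentage form can be sketched as follows (sample rows invented):

```python
import pandas as pd

# Invented sample with "Year" and "Gov Defendant" columns.
cases = pd.DataFrame({
    "Year": [2010, 2010, 2010, 2011, 2011],
    "Gov Defendant": ["Government", "Government", "rest", "Government", "rest"],
})

# Count cases per year in each group (the data behind the two-line chart),
# then convert each year's counts into percentages of that year's total.
counts = pd.crosstab(cases["Year"], cases["Gov Defendant"])
percentages = counts.div(counts.sum(axis=1), axis=0) * 100
```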

The Updated JSON included a “justisCategory” column identifying the classification of the case in the Justis Legal Taxonomy. This column included 10 – 15 tags/keywords for each case. One of the tags included was “applying for judicial review” (the “Judicial Review Tag”).

A new “ApplicationJR” column was then created and linked to the “justisCategory” column. The two columns were then coded as follows:

2.7.1 Any cases including the Judicial Review Tag in the “justisCategory” column were categorised with the value “Application for JR” in the “ApplicationJR” column (the “Judicial Review Group”).

2.7.2 Any cases not including the Judicial Review Tag in the “justisCategory” column were categorised with the value “Other” in the “ApplicationJR” column (the “Other Group”).
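The tagging step can be sketched as follows (the sample tag lists are invented; only the Judicial Review Tag itself is taken from the text above):

```python
import pandas as pd

JR_TAG = "applying for judicial review"

# Invented sample of the "justisCategory" column: a list of taxonomy
# tags per case.
justis_category = pd.Series([
    ["administrative law", "applying for judicial review", "immigration"],
    ["contract", "damages"],
])

# Flag each case as "Application for JR" or "Other" in "ApplicationJR".
application_jr = justis_category.apply(
    lambda tags: "Application for JR" if JR_TAG in tags else "Other")
```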

All cases were first grouped by “Year” using the new column described at 2.2 above.

The cases were then grouped using the “ApplicationJR” column and coded to calculate the number of cases in each year in the Judicial Review Group as compared to the Other Group.

The line chart was then generated to assist in visualising the distribution of Judicial Review cases per year as compared to any other cases per year.

Click here to view Mishcon de Reya consultation response in full.

The MDRxTECH team regularly advises clients seeking to leverage and derive insights from their datasets, to inform their decision-making and achieve efficiencies. Get in touch to find out more.
