Wednesday, March 12, 2014

Dubious claims about predictive coding for "information governance"



  • Information governance ("IG") basically means defensible deletion.
  • A vendor’s claim to have achieved 90% precision with de minimis document review in an IG proof-of-concept omitted any mention of recall and is therefore suspect, for recall is the touchstone of defensibility.
  • The claim appears to understate by multiple orders of magnitude the number of documents that would have required review in order to verifiably achieve the results claimed.
  • In the IG world of low prevalence, low precision is not an important issue, and higher recall can be achieved at lower cost.
  • Mass culling should not be overlooked as a supplement to predictive coding.
  • Persistent analysis is a fecund field for investigation.


I recently attended a symposium on “information governance” at the University of Richmond Law School, sponsored by the Journal of Legal Technology. Kudos to Allison Rienecker and the JOLT team for a well-run event.

At the symposium, a well-known predictive-coding vendor made some interesting and, I daresay, misleading claims about an IG “proof of concept” which purportedly would have enabled a corporation safely to discard millions of documents after review of only about 1,800 documents, despite prevalence of just four-hundredths of a percent (0.04%). A screen-capture summary of the POC and the vendor's key claims, and a full video of the presentation, are below. The main discussion of the POC begins at around the 2:36:20 mark of the video and lasts for about 5 minutes.





The presenter boasted of impressive-sounding 90% precision, but said nothing of recall, nor can I fathom how the vendor could have determined recall under the circumstances. Law firms and corporate clients should beware of this claim and of any claim that does not address recall. IG has the potential to be cost-effective, including in the dataset discussed by the vendor.  But the vendor appears to have understated by multiple orders of magnitude the number of documents that would have required review in order to verify the results claimed.
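To see why verification at such low prevalence is so expensive, consider a back-of-the-envelope calculation. This is a sketch using the standard normal-approximation sample-size formula; the margin and confidence level are illustrative assumptions, not the vendor's figures. Estimating recall requires a sample of relevant documents, and at 0.04% prevalence, merely finding enough relevant documents by random sampling dwarfs an 1,800-document review.

```python
import math

def sample_size_for_proportion(z=1.96, margin=0.05, p=0.5):
    """Worst-case sample size to estimate a proportion within +/- margin
    at the confidence level implied by z (1.96 -> 95%)."""
    return math.ceil((z**2 * p * (1 - p)) / margin**2)

prevalence = 0.0004  # four-hundredths of a percent, per the POC

# Relevant documents needed to estimate recall within +/-5% at 95% confidence:
relevant_needed = sample_size_for_proportion()

# Documents one would expect to review, sampling randomly, to find that many:
docs_to_review = math.ceil(relevant_needed / prevalence)

print(relevant_needed)   # 385
print(docs_to_review)    # 962500
```

On these assumptions, roughly 385 relevant documents are needed, which at 0.04% prevalence means sampling on the order of a million documents, several hundred times the roughly 1,800 said to have been reviewed.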

Friday, November 22, 2013

Comment on early EDI - Oracle Study results

The Electronic Discovery Institute yesterday released via Law Technology News some preliminary results of its study on a dataset provided by Oracle related to its acquisition of Sun Microsystems. The study involved multiple providers of technology-assisted review, including Backstop, which categorized documents for three tags: responsive, privilege, and hot. EDI's release is the first step toward what should eventually yield ground-breaking raw data and analysis. While the release is skeletal (we have only ordinal F1 rankings thus far), it affords the basis for some thoughts, including very imperfect cost-adjusted performance measures.  Interestingly, the results show no correlation between cost and accuracy ranking. Backstop and the other study participants are forbidden to identify their own entries, and EDI can tell a vendor only which results are the vendor's own. So, while we are very pleased with our results, we cannot identify them or those of any other participant.  In this post I will share some thoughts on difficulties with F1 as a benchmark for accuracy, then delve into a first attempt at a cost-adjusted performance spreadsheet, which you can sort, view, edit and download.
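One difficulty with F1 worth previewing: it weighs precision and recall interchangeably. A minimal sketch (the numbers are hypothetical, not study data) shows two systems with very different recall earning identical F1 scores, even though in discovery it is recall that defensibility turns on.

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two hypothetical systems: one precise but missing half the relevant
# documents, the other finding nearly everything at the cost of precision.
a = f1(0.90, 0.50)
b = f1(0.50, 0.90)

print(round(a, 4), round(b, 4))  # both ~0.6429 -- F1 cannot tell them apart
```

An ordinal F1 ranking therefore compresses away exactly the distinction, recall versus precision, that matters most when comparing review methods.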

Wednesday, November 20, 2013

EDI - Oracle Study Preliminary Results Released

Preliminary results have been released from the Electronic Discovery Institute - Oracle study, in which Backstop participated.  See an article at Law Technology News and the chart below.  We are very pleased to see these results and will soon share a preliminary analysis on this blog.  We also look forward to seeing more granular detail (viz., recall and precision figures) in the near future.


Thursday, May 23, 2013

Podcast on the In re Biomet decision

A couple of weeks ago, I discussed here the dubious mathematics underlying the court's approval of pre-predictive coding keyword searches in In re Biomet.  This morning I discussed the case with other e-discovery professionals on an ESI Bytes podcast.

Wednesday, May 8, 2013

Federal court approves pre-predictive coding keyword filtration based on faulty math in In re Biomet

A district court’s recent approval of keyword filtration prior to the use of predictive coding in In re Biomet, No. 3:12-MD-2391 (N.D. Ind. April 18, 2013) rests on bad math and could deprive the requesting party of over 80% of the relevant documents. Specifically, the court ruled that a defendant’s use of predictive coding on a keyword-culled dataset met its discovery obligations because only a “modest” number of documents would be excluded. But a proper analysis of the statistical sampling on which the court relied shows that defendant’s keyword filtration would deprive plaintiffs of a substantial proportion of the relevant documents. The error in the court’s finding regarding the completeness of defendant’s production underpins, and thus undermines, its additional holding that to require the defendant to employ predictive coding on the full dataset would offend Rule 26(b)(2)(C) proportionality. Accordingly, the early chorus of praise which has greeted the decision is unwarranted.
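The arithmetic at issue can be sketched simply. The figures below are hypothetical, chosen for illustration only; they are not the actual Biomet sample data. Given sampled prevalence estimates for the keyword-excluded and keyword-retained sets, the share of relevant documents lost to culling follows directly.

```python
def share_of_relevant_lost(excluded_count, excluded_prev,
                           retained_count, retained_prev):
    """Estimated fraction of all relevant documents left behind by culling."""
    relevant_excluded = excluded_count * excluded_prev
    relevant_retained = retained_count * retained_prev
    return relevant_excluded / (relevant_excluded + relevant_retained)

# Hypothetical illustration: 16 million documents excluded by keywords with a
# sampled prevalence of 1.5%, against 3.9 million retained at 10% prevalence.
lost = share_of_relevant_lost(16_000_000, 0.015, 3_900_000, 0.10)

print(round(lost, 3))  # → 0.381
```

Even a seemingly modest prevalence in a large excluded set can translate into a substantial share of all the relevant documents, which is why a raw "modest number excluded" characterization deserves scrutiny.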

Friday, April 26, 2013

Good luck to FIRST Robotics Team 116!

Backstop is a proud sponsor of Herndon High School FIRST Robotics Team 116, currently competing at the national robotics championships in St. Louis.  This year's competition calls for teams to build robots that can scoop up frisbees and shoot them into goals.  View the team website or follow the national tournament.  We wish Team 116 much success.

Friday, January 27, 2012

Humorous video regarding use of keyword search

In an earlier post, I discussed the focus on keyword search terms in the new Federal Circuit model order for e-discovery in patent cases, specifically, how that focus seems misplaced in light of the availability of predictive coding and other tools which yield superior recall and precision.  Comes now a humorous "text-to-movie" video about the pitfalls of keyword search and the utility of predictive coding.  To view the video, click here.

Additional humorous e-discovery videos can be viewed here and here.