It would be nice to benchmark the text extraction to a baseline method, say with Apache Tika (https://tika.apache.org/).
I would expect the deep learning approach to outperform traditional approaches in terms of accuracy, but it would be good to see accuracy alongside CPU and memory usage, etc.
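A comparison like this can be scripted with a small harness that records accuracy, wall time, and peak memory per backend. This is just a sketch: `extract_identity` is a trivial stand-in, and a real version would plug in actual backends (e.g. Tika or Tesseract) and a proper CER/WER metric instead of the crude similarity ratio used here.

```python
# Hedged sketch: a tiny harness for comparing text-extraction backends
# on accuracy vs. wall time and peak memory. The backend functions are
# hypothetical stand-ins, not any real library's API.
import time
import tracemalloc
from difflib import SequenceMatcher

def accuracy(predicted: str, reference: str) -> float:
    """Crude character-level similarity; a real benchmark would use CER/WER."""
    return SequenceMatcher(None, predicted, reference).ratio()

def benchmark(extract, document: bytes, reference: str) -> dict:
    """Run one backend on one document and record accuracy, time, and memory."""
    tracemalloc.start()
    start = time.perf_counter()
    text = extract(document)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "accuracy": accuracy(text, reference),
        "seconds": elapsed,
        "peak_bytes": peak,
    }

# Trivial stand-in backend, used only to show the harness shape:
def extract_identity(doc: bytes) -> str:
    return doc.decode("utf-8")

result = benchmark(extract_identity, b"hello world", "hello world")
```

The same `benchmark` call would be repeated over a labeled test corpus for each backend, and the per-document dicts aggregated into the accuracy-vs.-resources comparison described above.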
For the use case of search, you can "cheat" and provide multiple answers for each word that you find in the image. Evernote does this: it keeps 2-3 options for each word in its OCR results. I don't know whether Tesseract supports this mode of operation, or whether Dropbox is doing this.
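The trick can be sketched as indexing every candidate reading of each word, so a query matches if it matches any candidate. This is a minimal illustration, not Evernote's or Dropbox's actual implementation; the `ocr_candidates` data and all names are made up.

```python
# Hedged sketch of the "multiple answers per word" trick for search:
# index every OCR candidate at each word position, so the correct word
# is findable even when the top OCR guess was garbled.
from collections import defaultdict

# Simulated OCR output: for each word position, 2-3 candidate readings.
ocr_candidates = [
    ["invoice", "lnvoice"],  # 'I' misread as 'l'
    ["total", "tota1"],      # 'l' misread as '1'
    ["$42.00"],
]

def build_index(docs):
    """Map every candidate word to the set of documents containing it."""
    index = defaultdict(set)
    for doc_id, words in docs.items():
        for candidates in words:
            for cand in candidates:
                index[cand.lower()].add(doc_id)
    return index

index = build_index({"scan-001": ocr_candidates})
```

A search for either "invoice" or the misread "lnvoice" now hits `scan-001`, at the cost of a somewhat larger index.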
I think they already tried commercial off-the-shelf OCR software (which they didn't name, but I would assume it's ABBYY) before they decided to build their own solution.
ABBYY hasn't been all that amazing in my experience. I compared it with Neat Scanner software a few months ago and the latter seemed to do a noticeably better job.
I worked on a very similar system at a very different company, and I tend to think that a good reason to implement your own OCR models (if you can afford it) is optimizing CPU cost. Tesseract can be quite expensive to run at scale, maxing out a core on even a simple page and taking roughly 5-30 seconds for a full-page extraction. Also, most Tesseract pipelines take entire PDF files for processing, whereas you could achieve better latency by processing pages in parallel and merging the results, as they suggest in the post.
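The per-page parallelism described above can be sketched as follows. `ocr_page` is a stub standing in for a real Tesseract call (e.g. shelling out to the `tesseract` CLI per rendered page image, which releases the GIL and so parallelizes fine under threads); the stub just decodes bytes so the example is self-contained.

```python
# Hedged sketch: OCR each page of a document concurrently, then merge
# the results in page order. `ocr_page` is a placeholder for a real
# per-page Tesseract invocation.
from concurrent.futures import ThreadPoolExecutor

def ocr_page(page: bytes) -> str:
    # Placeholder: a real implementation would rasterize the page and
    # run Tesseract on the image. Here we just decode for demonstration.
    return page.decode("utf-8")

def ocr_document(pages, max_workers: int = 4) -> str:
    # executor.map preserves input order, so merging is a simple join.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return "\n".join(pool.map(ocr_page, pages))

text = ocr_document([b"page one", b"page two"])
```

For subprocess-based OCR calls, threads suffice; if the per-page work were pure-Python and CPU-bound, a process pool would be the better fit.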