Before we consider what constitutes a good search, we should ask if there is such a thing as “the perfect search”. Could one generate a novelty or validity search that finds absolutely all the relevant documents?
In the normal time allotted to search creation, it is probably not: there are too many potential documents and too many different search strategies to test. Under normal circumstances, finding every relevant document is highly unlikely.
That said, I did once see a perfect search executed, although certainly not under normal circumstances! Still, the story is worth telling, because it holds an important lesson for good IP and R&D searches, and it taught me something we apply at Evalueserve in our own processes.
It is a reality in our still fragmented industry that any head of IP at any large corporate entity receives many unsolicited calls every week from patent search providers looking for commissions.
One head of IP decided to put this reality to work for them. They sent the same validity search request to around 40 reputable patent search providers that had pitched their services to the company. The request in each case was that the search provider should do the search as a free pilot.
In case you’re not familiar with the search type, a validity search aims to identify any prior art that might invalidate a patent. By this stage of the prosecution process, patent offices have already performed their own searches and uncovered the most readily found documents. A validity search must therefore dig deeper than a standard novelty search.
The 40 search providers clearly put their smartest people on the job and invested considerable time and resources: the consolidated output, which I had the opportunity to see, was remarkable.
Most of the search providers found some interesting new documents. What was particularly interesting is that many of them identified one or two valuable results that were unique to their output. When consolidated, the results amounted to essentially the perfect search: all the required information, with only a de-duplication exercise needed to clean up the answer set.
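To make the consolidation concrete, here is a minimal sketch of the de-duplication exercise using plain Python sets. The provider names and document numbers are entirely made up for illustration:

```python
# Hypothetical result sets; provider names and document numbers are made up.
provider_results = {
    "Provider A": {"US1234567", "EP0111111", "US2222222"},
    "Provider B": {"US1234567", "JP3333333"},
    "Provider C": {"EP0111111", "US4444444", "JP3333333"},
}

# Consolidate with a set union -- the de-duplication exercise.
consolidated = set().union(*provider_results.values())

# For each provider, the documents no other provider found:
# the unique, high-value results mentioned above.
unique_finds = {
    name: hits - set().union(*(h for n, h in provider_results.items() if n != name))
    for name, hits in provider_results.items()
}

print(f"Consolidated set: {len(consolidated)} documents")
for name, docs in unique_finds.items():
    print(f"{name}: {len(docs)} unique document(s)")
```

Sets handle the de-duplication automatically; the per-provider difference shows how much unique value each strategy contributed to the combined answer set.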
As I said, this ‘perfect’ search is not repeatable. You’ll agree, I’m sure, that the ethics of the “commission” were questionable. What’s more, the effort is clearly not feasible in any company’s daily routine. But we can still learn two important lessons from this.
First, if you give two search professionals the freedom to devise their own strategies, then when you combine their outputs, you can use the information from both strategies, and the hits each retrieved, to improve the combined result. This is a principle we have implemented at Evalueserve: as one of the world’s largest providers of IP and R&D services, with a pool of several hundred research and analytics specialists to draw from, we build search strategies on the expertise of multiple professionals rather than restricting ourselves to a single point of view.
Second, since the number of documents and the number of possible strategies are too large for most standard searches to achieve 100% retrieval, we should take this into account when creating a good search: it should always aim for as close to 100% retrieval as possible. How can we achieve this? And how can we determine or measure the quality of a search? We will examine these questions more closely in the coming posts.
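Recall is the usual measure of how close a search comes to 100% retrieval, with precision as its counterpart. As a minimal sketch, assuming the consolidated output can serve as a proxy for the full set of relevant documents, a single provider’s answer set could be scored like this (the document IDs are made up):

```python
# Hypothetical example: the consolidated output serves as a proxy "ground truth"
# of all known relevant documents; the IDs below are made up.
ground_truth = {"US1", "US2", "US3", "US4", "US5"}  # all relevant documents
retrieved = {"US1", "US2", "US6"}                   # one provider's answer set

relevant_retrieved = retrieved & ground_truth
recall = len(relevant_retrieved) / len(ground_truth)  # share of relevant docs found
precision = len(relevant_retrieved) / len(retrieved)  # share of hits that are relevant

print(f"recall = {recall:.2f}, precision = {precision:.2f}")
```

In practice the full set of relevant documents is unknown, which is exactly why measuring search quality is hard; a consolidated multi-provider set is one pragmatic stand-in.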
Have you ever seen a search that you considered perfect? Have you ever had the opportunity to see the combined output from multiple search strategies? Are there any other learnings we can gain from such combined search strategies? Let us know in the comments.