In economics, market price is usually a good indicator of the value suppliers and buyers assign to specific goods or services, other things being equal. If you prefer the taste of an exotic pear, you pay more. When a glut of apples is available, you pay less. The same laws – quality commanding a higher price, supply and demand, commodity versus rarity – apply to IP or R&D searches, though in a slightly different way. Let’s consider the typical pricing of patent searches, patent landscapes and innovation scouting, and see what we can infer from this for our discussion about quality. In the following simple cases the ‘searcher’ is a patent counsel/patent attorney.
| Type of search | Market price | Additional search costs (excluding attorney analysis work following the search) |
|---|---|---|
| Quick novelty search on the internet | Free | Plus labor cost of patent counsel to screen and shortlist relevant results (e.g. 300 USD/hour) |
| Quick novelty search on a paid patent database | 50 USD | Plus labor cost of patent counsel to screen and shortlist relevant results (e.g. 300 USD/hour) |
| Novelty search done by a service provider or in-house search professional | 300–1,500 USD | 0 |
| Validity search done by a service provider or in-house search professional, used for a post-grant opposition | 2,500–50,000 USD | 0 |
| Freedom-to-operate study | 2,500–50,000 USD | 0 |
There are many other types of search or intelligence that we can discuss here, such as innovation scouting or competitive intelligence, but for the benefit of a simple analysis these cases are sufficient.
Value correlates with recall and precision
We believe that the value of IP search services – the market price in each of the scenarios above – is directly linked to customer expectations of recall and precision (concepts we introduced in our last post).
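To make the two concepts concrete, here is a minimal sketch with made-up numbers: imagine a prior-art search where eight truly relevant documents exist and the search returns ten hits, six of them relevant. All figures are hypothetical and chosen only to illustrate the definitions.

```python
# Hypothetical prior-art search (illustrative numbers only):
relevant_in_collection = 8   # relevant documents that exist
returned = 10                # documents the search returned
relevant_returned = 6        # returned documents that are actually relevant

# Recall: what fraction of the relevant documents did we find?
recall = relevant_returned / relevant_in_collection

# Precision: what fraction of the returned documents is relevant?
precision = relevant_returned / returned

print(recall)     # 0.75
print(precision)  # 0.6
```

A cheap internet search typically trades recall for speed; a paid validity search pushes recall up, which is what the higher price pays for.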
Your time and resources – like apples and pears – are perishable!
A second economic concept is especially relevant to this discussion – opportunity cost (the cost of the lost opportunity to do something more valuable with your perishable time). The more time the end user needs to spend on the search, the higher the opportunity cost. From an economic perspective, it doesn’t make sense for patent attorneys to do their own (lower-value) prior art searches, as their hourly rates are so high. It is comparable to taking a train to the countryside and picking your own apples!
What about quality of the resulting insights?
This leads to a third consideration of quality that is often neglected – we could call it quality of insights, or of output: the handover to the next step in the process. A good search is defined not only by recall and precision, but also by the quality of insights and the means of handover to the end users. Ultimately, this links back to opportunity cost. In the restaurant search example from our previous post, you’d expect a search engine to sort the results in an ordered manner, either by ratings or by distance. This sorting creates insight quality. Similarly, in Evaluserve IPR&D search studies (novelty, invalidity or FTO) you would expect a sorted and annotated list, or a way to navigate the result set to arrive at the most useful result for your requirements. Or, quite simply, a reliable recommendation. All these techniques can be construed as insight quality.
Search quality – can it be measured?
Adding this last ‘insight’ we now have the basis for an economic framework for further discussion about search quality.
It is clear that the SQI (Search Quality Index) is directly proportional to R (recall), P (precision) and I (insight quality). It is equally clear that the expected search quality differs between use cases, so the index is relative, not absolute.
Below we present an initial, highly simplified formula for the search quality index (SQI). We (Ashutosh, Urs and others) have debated whether such a simplified formula is actually correct; however, we feel it is an acceptable model for a first discussion, one that we plan to deepen over the next few months.
SQI = (R_A/R_E + P_A/P_E + I_A/I_E) / 3
with R_A, R_E = actual and expected recall; P_A, P_E = actual and expected precision; and I_A, I_E = actual and expected insight quality.
We can see that SQI can still be 1 (or 100% quality) even in cases where only low recall, low precision or limited insights are expected. SQI will be < 1 when expected levels are not met (search quality is poor), and SQI will be > 1 if expected levels are exceeded (search quality is very good, but too many resources went into the search – it was more expensive than it needed to be).
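The behavior of the formula can be checked with a short sketch. The function below simply encodes the averaged actual/expected ratios from the formula above; the input values are hypothetical.

```python
def sqi(recall_actual, recall_expected,
        precision_actual, precision_expected,
        insight_actual, insight_expected):
    """Simplified Search Quality Index: mean of actual/expected ratios."""
    return (recall_actual / recall_expected
            + precision_actual / precision_expected
            + insight_actual / insight_expected) / 3

# A search that exactly meets expectations scores 1.0:
print(sqi(0.6, 0.6, 0.8, 0.8, 0.7, 0.7))  # 1.0

# Falling short on recall drags SQI below 1 (here ~0.83),
# even though precision and insights meet expectations:
print(sqi(0.3, 0.6, 0.8, 0.8, 0.7, 0.7))
```

Note that a modest expectation (say, expected recall of 0.2 for a quick internet check) can still yield SQI = 1 if the search delivers exactly that – which is the relative, use-case-dependent nature of the index.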
What are your boundaries?
While we can clearly measure P (it is quite simple to compute the percentage of relevant documents in the search output), it seems impossible, or at least very difficult, to measure R and I. However, R and I are probably the main drivers of search quality for many use cases, and probably also the ‘most expensive’ factors, so we will spend quite a bit of time in upcoming posts exploring the factors that influence R and I. And economics will again be helpful to us, this time to measure R!
Going back to the restaurant example and applying the formula, we realize that usually we would be happy with just one very good restaurant in the city, so expected recall could be low while you want high precision and quality of insights. In this case it is probably better to take one of two approaches: do a quick search on the internet and make a judgment call, or simply ask a local.
In this spirit, next time you visit either New Delhi or Zurich send us an e-mail and we’ll send you our top-3 recommendations for authentic restaurants. Both Ashutosh and Urs can help you have a great evening out in our cities!