
Reality Bites – Review of an AI Implementation Proposal

Even well before Evalueserve officially adopted the slogan Mind+Machine in 2015, I was preaching the benefits of combining the skills of human minds with the power of machines, and I have often argued that Artificial Intelligence (AI) is not yet at a stage where it can be expected to deliver a complete end-to-end analytics solution. In other words, AI tools can be essential for crunching big data sets in some use cases and for supporting the minds, but they are not (yet) able to provide truly actionable insight.

Most of my arguments are theoretical, and while they are based on my extensive experience in the field, I appreciate that some people might find it easy to argue against my opinion. Therefore, allow me to recount an experience I recently had with a client. (While this post is based on real events, for obvious reasons I won't disclose the names of our client or the AI tool.)

The situation

Our client was interested in employing the services of a very prominent AI tool. Once they had received a proposal, they invited me to help them assess it.

The proposed scope was for the AI tool to take over a highly manual and time-consuming research task for our client, conducted across internal and external resources. The objective was for the AI to generate innovative search results and, at the same time, double the productivity of the whole department.

While I have no doubt the actual algorithm used in the AI is very powerful, there were two key issues with this proposal from my perspective:

Issue 1: Missing aspects – workflow, UX, KM, productivity tools, and interoperability

The proposal focused heavily on the abilities of the tool, but completely neglected to consider critical elements of the analytics value chain, which are crucial to generate ROI and to deliver the end user benefits: productivity, time-to-market, quality, and new capabilities.

The AI tool did not address the following topics:

  • Workflow: The client’s department had an extensive, global multi-user workflow where R&D, marketing, regulatory, and legal all need to work together in order to produce the output.
  • UX design: The AI engine offered only a simplistic user interface that did not allow users to manage the research pipeline and resource allocation, or to enter decisions and feedback. Moreover, the UX could not differentiate between the various roles of decision-makers in the workflow.
  • Knowledge management: The AI engine could perform efficient NLP-based search, but it did not capture feedback from the people around the globe, nor did it have an integrated KM architecture in the background, linked to the workflow, that could capture tacit knowledge.
  • Productivity tools: The AI engine focused only on the search part and could not deliver other productivity-enhancing functionalities. Estimates show that most of the productivity benefits come from workflow and productivity tools, not from AI.
  • Interoperability: Despite being a very well-known platform backed by very large marketing budgets, it did not interoperate with standard office environments.

The solution as it was proposed would have left the client with nothing more than a very clever NLP-based search engine. The AI tool would have become just another tool in the arsenal of the research analysts, whose processes and workflows would essentially stay the same.

Issue 2: Unsubstantiated promises – analytics requires Mind+Machine, not just an AI engine

The AI provider promised to substantially reduce the time spent on searches and improve the accuracy of the results, which would result in a 100% increase in productivity for the relevant department. Again, I don’t doubt the performance of the algorithm itself, but I question the extrapolation to general productivity without taking any of the context or workflow into account.

To use an analogy: I can give you a Formula 1 car and promise to cut your commute in half, but if you take the train instead because road traffic is too bad, my promise is irrelevant.

The ROI claims of the tool were therefore unfounded, and it would have been a large expense for something with potentially negligible impact.

Moreover, the tool was initially positioned as ‘cognitive technology’ that would return new insights no human had come up with before. When pushed to contractually commit to this capability, the provider pushed it into a ‘future phase’.

AI engines are only as good as the data they can access. However, we all know that business decisions are also based on a great deal of tacit knowledge, which is not accessible to AI engines. Therefore, the claim that AI engines can replace the human mind in creating true insights is fundamentally flawed.

It’s not the tool, it’s how you use it

We see this behavior frequently in many client situations. The problem is not the AI engine itself; the problem is the complete neglect of the rest of the value chain in which the engine is meant to sit, which of course is very specific to the analytics use case at hand.

The better approach, therefore, is to have a master plan in place that looks at the overall use case in a comprehensive manner. This should include the actual analytics problem, but also, and at least as important, the workflow aspects described above, including the human angle. Only if minds and machines work together in an optimally balanced fashion will the ROI be generated and the end-user benefits be achieved.

In this particular use case, the client chose to map out the overall workflow and its requirements in an upfront project phase, then start two parallel work streams: one focusing on the workflow with all its components, and one benchmarking various AI engines that can all deliver NLP-based search, not just the one proposed by corporate. The benchmarking alone will lead to much lower costs and better integration with the workflow.

Management teams need to be aware that analytics use cases are much broader in scope than NLP-based search engines or other forms of AI that only focus on small parts of the value chain. They should not fall for the AI fallacy of “AI equals Analytics”, and they should not let themselves be impressed by the significant amounts of AI lingo used in the suppliers’ sales pitches. As a recommendation: When you hear the words ‘cognitive technology’, it is time to start asking very critical questions, until you really understand what it can and cannot deliver.

To conclude, I am continuously astonished by the amazing things that can be done with AI nowadays, but having the best tool in the market simply isn’t good enough. Every tool needs to be integrated into the context, otherwise all this amazing power is wasted.
