5 Ways To Get Started With Generative AI Solutions

5 Ways to Start Small with Generative AI, Despite Evolving Challenges

Recently, like most other analytics providers, we’ve been working with clients on incorporating more AI into their strategies and roadmaps. Generative AI is a large piece of that, but more broadly we’re seeing business leaders focus on using emerging technologies to dig deeper into their data and extract more valuable insights. Many organizations are revisiting how they can leverage technology to boost productivity and drive revenue impact.

In this blog, we lay out some of our early findings about generative AI use cases, its evolving challenges, and 5 ways to get started.

Bigger really is better

With large language models (LLMs), big is better, in almost every instance.

More parameters generally lead to better outputs, with Falcon-40B (that is, 40 billion parameters) outperforming Falcon-7B, and GPT-4, reportedly built with over 1 trillion parameters, sitting at the top of the line. Likewise, the larger the pool of training data, the better these models tend to perform.

Compared to narrower forms of AI, flexibility and scalability are baked into the DNA of generative AI models. Given a well-designed underlying architecture and a model fine-tuned for a particular use case, more data equals better outputs. The key word here is “given.”

More often than not, the best architecture for a large enterprise solution is elusive. The most impactful use case is not always the most obvious one. Making AI work for your business will probably require more than fine-tuning a model. You’ll need to consider how to reengineer workflows and fit the solution into existing ecosystems.

Better isn’t always enough

There are limitations and quirks inherent to all types of AI, including generative AI. Common issues include alignment challenges, the need to navigate privacy legislation, and decisions about how human experts should work alongside AI to optimize both parties’ contributions.

In some use cases, these challenges are not dealbreakers. When working on a blog like this, generative AI tools can give you a good starting point. You can use the model to help make decisions around storytelling, and there’s a lot of upside to boosting creativity.

However, some business problems require 100% accuracy. With accounting or insurance underwriting, you can’t just plug in generative AI and hope for the best.

That’s why it’s important to examine the strengths and weaknesses of each technology and grapple with the evolving challenges.

Evolving challenges and business considerations

There are plenty of headlines that criticize generative AI and point out its shortfalls. Hallucination is a big one. There are also concerns around regulation and the role of government, and lawsuits are challenging assumptions around the IP of public data and how it’s used for training.

It’s daunting, yet billions of dollars have already been invested in developing enterprise-ready generative AI solutions. We’re at a point where theory and research need to be tested and augmented with real-world experimentation. Companies that want to compete can’t afford to sit back and wait for a competitor to figure it out first.

For businesses to implement generative AI successfully, it’ll take more than using the best model and technical prowess. There are a host of considerations that business stakeholders need to think deeply about, including:

  • Identifying the most appropriate use cases
  • Finding ways to integrate AI into existing workflows
  • Reengineering workflows to optimize productivity and impact
  • Determining the accuracy threshold for each use case
  • Pinpointing where consistent, standardized outputs are paramount
  • Incorporating human knowledge workers and domain experts
  • Determining the level of data confidentiality and privacy required for different types of data

Use case identification is probably the hardest question we’re facing right now. Since the technology is new, there is often a failure of imagination. It’s difficult to think beyond our day-to-day activities and spot opportunities to reshape functions, drive new revenue streams for the organization, and even create new markets.

We won’t get there all at once, but it’s important to start experimenting.

5 ways to start small with gen AI

We’ve had the privilege of taking some first steps with clients and seeing some early results.

Starting small is paramount. Big tech projects always carry the risk of becoming a resource drain: timelines get pushed, scope morphs, and business needs change. It’s never a good idea to go all in without evidence of ROI, so it’s crucial to get that evidence as soon as possible by starting with proofs of concept.

Based on the experimentation we’ve conducted with clients, we’ve come up with a few ways to start small:

1. Virtual expert on top of an existing insights platform, for example Insightsfirst

2. Virtual expert on top of an existing dashboard, for example this global health dashboard for a large philanthropic organization

3. Content creation with standardized prompts, for example company profiles for investment banking pitchbooks (see the sketch after this list)

4. Content creation with a narrow set of inputs, such as descriptions of financial data extracted using a deterministic tool like Spreadsmart

5. Content creation off of enterprise search, for example creating RFP drafts
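
To make option 3 more concrete, here is a minimal sketch of content creation with a standardized prompt. The `call_llm` wrapper, template fields, and section headings are illustrative assumptions, not the exact setup we use with clients; swap in your own model endpoint and profile structure.

```python
# Minimal sketch of content creation with a standardized prompt (illustrative only).
# `call_llm` is a hypothetical wrapper around whatever model endpoint you use;
# the template fields and sections below are assumptions, not a prescribed format.

COMPANY_PROFILE_TEMPLATE = """You are drafting a company profile for an investment banking pitchbook.
Use ONLY the facts provided below; if a fact is missing, write "N/A" rather than guessing.

Company: {name}
Industry: {industry}
Source facts:
{facts}

Produce exactly these sections, each under 80 words:
1. Business overview
2. Key financial highlights
3. Recent developments
"""


def draft_company_profile(name: str, industry: str, facts: list[str], call_llm) -> str:
    """Fill the standardized template and return a draft for human review."""
    prompt = COMPANY_PROFILE_TEMPLATE.format(
        name=name,
        industry=industry,
        facts="\n".join(f"- {fact}" for fact in facts),
    )
    # The output is treated as a draft chunk, not a finished deliverable.
    return call_llm(prompt)
```

Keeping the prompt fixed and the inputs narrow makes outputs easier to compare and review, which is why we treat each generated section as a draft for a knowledge worker to verify rather than a finished deliverable.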

There are challenges with both virtual experts and content creation. While there are many intuitive use cases for virtual experts, slapping a chatbot on top of every dataset won’t necessarily improve productivity. Content creation, on the other hand, produces tangible deliverables but can suffer from quality issues. Here, we recommend starting with the generation of small chunks of information and treating these chunks as drafts.

In all our services where we’re testing generative AI, we keep human knowledge workers heavily in the loop. They are the ones who review the output, stitch together the usable pieces, and finalize the deliverable.

In the coming weeks, we’ll continue to share learnings from our experiments with generative AI. Stay tuned as we scale the use cases above, develop new ways to handle hallucinations, and build common architectures and tech stacks for generative AI implementations.
