Lack of enterprise-grade tools prevents LLM adoption

MC+A Insight Guest Article. Original Article can be found here.

Vivek Sriram

Chief Product Officer @ Bookend AI

LLMs will not transform the enterprise without enterprise-grade tools to manage observability, cost optimization, data protection and privacy
CEOs love AI in the short term and fear it in the long run. In the near term, transformative AI like large language models offers the possibility of cost savings across the enterprise. Anywhere a penny-pinching CEO might look seems like a good place to drop in some generative AI and take a nice whack out of costs. From marketing and sales to legal and R&D, every nook and cranny in today's enterprise presents room for savings through automation. The long run is scarier: enterprises may have to build whole new business models to remain relevant, or to achieve breakthrough performance.
While it's easy to get caught up in the short-term hype around automation, or the long-run promise of riches from breakthrough business models, companies also have to care about the burden, risk and expense of running, scaling and managing LLMs. That problem falls squarely on IT. Automation or not, no one wants to be the lazy executive who fed confidential information to ChatGPT and asked it to generate a slide deck. This is the stuff IT nightmares are made of.

The rush to AI-enable everything is understandable. No one wants to be the last business to figure out the obvious. Yet, this rapid mass embrace of immature, brittle tools which are frequently not ready for primetime is causing no shortage of heartburn for those in enterprise Information Technology. Three anecdotes serve to underscore the severity of the problem.

  1. Security / inadvertent exposure of private data. Despite warnings not to put confidential or private information into ChatGPT, people do it anyway. What could go wrong? Well, ChatGPT might leak some of that data through the services it in turn relies on. OpenAI may be a well-run, professional organization with a quick incident response, but what about all the OpenAI clones out there?
  2. Operations / observability. The stacks in wide use today aren't well suited to an LLM-powered-everything world. There are plenty of monitoring and observability tools, but the key consideration is addressing the nuances specific to LLM-powered apps, and tooling for that is, as of now, almost non-existent.
  3. Cost and performance. GPUs are expensive, and sometimes scarce. Training LLMs is cumbersome, complicated and costly. Per Clement Delangue, the CEO of Hugging Face, training the company's BLOOM large language model took more than two and a half months and required access to a supercomputer that was "something like the equivalent of 500 GPUs."
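The data-exposure concern in the first item can be mitigated with even a thin redaction layer sitting in front of any LLM API call. The sketch below is a minimal, hypothetical example: the pattern names and rules are illustrative assumptions, and a production system would use a vetted DLP/PII library rather than hand-rolled regexes.

```python
import re

# Illustrative patterns only; real deployments need a vetted PII/DLP library.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(prompt: str) -> str:
    """Scrub likely-sensitive substrings before the prompt leaves the network."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED-{label.upper()}]", prompt)
    return prompt

# The executive's slide-deck request, scrubbed before it reaches any API.
safe = redact("Summarize Q3 numbers for jane.doe@acme.com, SSN 123-45-6789.")
```

A gateway like this is deliberately dumb; the point is that the check runs inside the enterprise boundary, before any third-party service sees the text.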
In the long run, many of these problems will be sorted out, and a new IT stack purpose-built for AI data governance, security, cost optimization and observability will be widely available, as well as cheap, easy, reliable and predictable. To borrow from J.M. Keynes, "in the long run we are all dead." In the near term, IT has to deal with the right-now pain of senior executives putting sensitive information into ChatGPT, and of developers demanding access to $1M clusters without anyone being able to see how any of it is actually working.
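The observability and cost gap can be made concrete with a small metering shim wrapped around every LLM call. Everything here is an illustrative assumption, not any vendor's actual API: the class names, the fields tracked, and the flat per-1K-token price (real pricing varies by model and by prompt vs. completion tokens).

```python
from dataclasses import dataclass, field

@dataclass
class LLMCallRecord:
    # Hypothetical per-call record; real tools track far more (model version,
    # prompt/response hashes, safety flags, user attribution).
    model: str
    prompt_tokens: int
    completion_tokens: int
    latency_s: float

@dataclass
class LLMMeter:
    # Assumed flat blended rate per 1K tokens, for illustration only.
    usd_per_1k_tokens: float = 0.002
    calls: list = field(default_factory=list)

    def record(self, model, prompt_tokens, completion_tokens, latency_s):
        self.calls.append(
            LLMCallRecord(model, prompt_tokens, completion_tokens, latency_s)
        )

    @property
    def total_cost_usd(self) -> float:
        tokens = sum(c.prompt_tokens + c.completion_tokens for c in self.calls)
        return tokens / 1000 * self.usd_per_1k_tokens

meter = LLMMeter()
meter.record("demo-model", prompt_tokens=900, completion_tokens=100, latency_s=1.2)
```

Even a shim this small gives IT what it lacks today: a per-call ledger it can aggregate by team, model, and application before the bill arrives.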
