Generative AI security considerations for today’s enterprise

MC+A Insight guest article. The original article can be found here.

Vivek Sriram

Chief Product Officer @ Bookend AI

Generative AI has captured the imagination of the corporate world from the corner office to the back office. CEOs see bottom-line improvement through automation, and CMOs salivate at the possibility of transforming experiences. To some, generative AI looks like pixie dust that offers the chance to overhaul even the most human-centered R&D processes. IT leaders, on the other hand, are asking some tough questions.

The loud note of caution from the offices of the CIO and CISO merits a listen. Disruptive and transformative as it is, generative AI isn't free of risk. On the contrary, there is plenty to be cautious about, especially for the enterprise. Even amid the mad scramble to keep from being last, it's prudent to pay heed to the risks of deepfakes, unintended misuse of personally identifiable information, and a multitude of cybersecurity threats.

In real life: a recent study from NYU's Center for Cybersecurity took a hard look at GitHub Copilot and found that it generates vulnerable code 40% of the time. While Copilot may live up to its promise of saving time by helping developers stay focused, it also requires security professionals to pay attention to what it produces. Recall, too, the outage OpenAI suffered when a bug exposed some ChatGPT users' data: a large bank suffering the same outage and clawback would face no shortage of headaches.
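
To make that finding concrete, here is a hypothetical illustration (not drawn from the NYU study) of the kind of weakness an AI coding assistant can introduce, alongside its fix; the table and column names are invented for this sketch:

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable (CWE-89, SQL injection): user input is interpolated
    # directly into the query text. Assistant-suggested one-liners
    # often look exactly like this.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Remediation: a parameterized query keeps user input out of the
    # SQL text entirely.
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchall()
```

Flaws like the first function compile and run cleanly, which is exactly why generated code needs the same review and static analysis as human-written code.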

Consider an even weirder situation. An executive at a large company "…cut and pasted the firm's 2023 strategy document into ChatGPT and asked it to create a PowerPoint deck." Samsung employees were repeatedly caught pasting confidential data into generative AI apps. The blame doesn't rest with any single individual, or even with individual companies. Rather, the lesson is that enterprises need new systems, tools, policies, and ways of working to coexist in a generative AI world. Those don't quite exist yet.

Consider the problems.

First, corporate risk. Generative image models are trained on publicly available images, often under a claimed fair-use rationale. Already there is plenty of noise about trademark and copyright infringement resulting from the uncredited use of original art. The issues that could arise from the misuse of more sensitive imagery, such as blueprints, engineering schematics, or medical images, are orders of magnitude worse.

Second, cyber risk. Generated code often contains vulnerabilities. Extending the earlier point about OpenAI: the same kind of vulnerability at a big retailer or bank, exposing similar data through a similar experience, would be a vastly more consequential problem.

Third, competitive risk. A time-starved executive hoping for relief from the mind-numbing drudgery of making slides may be excused for taking a shortcut and asking ChatGPT to do the work. Unfortunately, the reputational and political risks that come with that shortcut are serious.

The unintended consequences of generative AI can be highly detrimental to enterprises. The responsibility to catch problems before they happen, and to remediate them afterwards, falls on IT. Unfortunately, the tools for managing generative AI, or even large language models, are virtually non-existent. The tools IT does have at its disposal are from the previous generation. They are about as useful as heavy tanks in fighting a pandemic: the wrong tool for a big problem.

So what can an enterprise do?

Organizations should balance experimentation with caution. Generative AI truly does have the power to be transformational. However, enterprises should take care to safeguard their data, with explicit policies and rules on how private data may be used with no-code or low-code generative AI apps. One such policy is to require developers to run the full model under the organization's own control.
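
As a sketch of what running a model under the organization's own control can look like, the snippet below loads an open-weight model with the Hugging Face transformers library so that prompts and outputs never leave in-house infrastructure; the model name is an illustrative placeholder, not a recommendation:

```python
# Minimal sketch: self-hosted text generation with an open-weight model.
# Assumes the Hugging Face `transformers` package is installed; swap the
# example model for whatever the organization has vetted and approved.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # illustrative model choice

prompt = "Summarize the key risks of generative AI for our compliance team:"
result = generator(prompt, max_new_tokens=100, num_return_sequences=1)
print(result[0]["generated_text"])
```

Because the model weights, the prompts, and the generated text all stay inside the organization's boundary, this pattern avoids the data-leakage failure modes described above.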

Organizations should certainly be skeptical of services that promise great outcomes but intend to use one firm's data to train their own models, effectively defeating any competitive advantage. These tools also encourage misuse by promoting capabilities that are beyond a user's ability to assess. For example, a marketing developer who feeds corporate data into a low-code code generator without knowing how to evaluate vulnerabilities is inviting trouble.
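
One practical safeguard, sketched below under the assumption of a simple regex-based filter (the patterns and function name are illustrative, not a vetted product), is a redaction gate that scrubs obvious identifiers before any prompt is allowed to leave the organization:

```python
import re

# Illustrative redaction gate. The patterns are deliberately simple
# examples; a real deployment would need a far more thorough detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Draft an email to jane.doe@example.com about invoice 4521."
print(redact(prompt))
# -> Draft an email to [EMAIL REDACTED] about invoice 4521.
```

A gate like this is no substitute for policy, but it turns "don't paste confidential data" from a memo into an enforceable control point.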

Tooling isn't yet mature enough for unfettered use in the enterprise. Over time, purpose-built tools will emerge and give enterprises the opportunity to let developers make wide use of generative AI safely, cost-effectively, and at scale, across multiple apps and workflows. In the interim, IT professionals have no better choice than to watch carefully.
