
Be VERY Concerned by the GPT Store

Author: Sean Lee, CISSP, Managing Director

January 29, 2024

AI was everywhere in 2023—but that was just the tip of the iceberg.

OpenAI, the creators of ChatGPT, recently opened a marketplace called the GPT Store, where creators can sell generative pre-trained transformers (GPTs) built for highly specific applications: creating logos, writing blog posts, conducting research, designing tattoos, even critiquing pitch decks.

The marketplace has only a few dozen offerings at launch, but that will change quickly because it creates an incentive for developers to experiment with generative AI by making it easy for them to market, sell, and monetize their creations. Much as the arrival of the Apple and Google app stores led to a bonanza of new apps, the GPT Store will attract many talented people to work on AI. Likewise, the name recognition and reach of OpenAI will draw many users to explore the marketplace, making it likely (if not guaranteed) to become the dominant market for AI products.

With abundant supply and demand colliding in the same place, the GPT Store could be a wellspring of innovation, activity, and prosperity—the spark that ignites the AI dynamite. “Could” being the operative word; that remains to be seen.

Far more certain, though, is something much less exciting: the marketplace is also a data privacy and cybersecurity nightmare waiting to happen.

Why We Should All Be Alarmed

The current gold rush around AI has made it clear that issues like security, privacy, transparency, and accountability are secondary at best or, too often, complete afterthoughts for developers eager to ignore risk in favor of the almighty dollar.

Those risks aren’t hypothetical, either. Even now, in AI’s infancy, we’re seeing errors, hallucinations, incidents, and attacks caused by the insecurity of generative AI technologies. From wrongly denying people health insurance and rejecting job applicants based on their gender to automated tools that craft targeted spear-phishing campaigns at scale, there are plenty of examples of AI that doesn’t work as intended, putting systems, data, organizations, and, most importantly, people in danger.

Unsafe and untrustworthy AI should alarm all of us, which is why the OpenAI marketplace should too. As developers and users flock there, bad actors will follow, attracted by the ability to easily target millions of people while using the legitimacy of OpenAI to disguise their intentions. As the marketplace fills up, it will inevitably include large numbers of malicious, insecure, or otherwise dangerous offerings.

How many? For reference, Google reported blocking 1.43 million policy-violating apps from the Play Store in 2022 and banning 173,000 bad developer accounts. It will take time for the GPT Store to become a comparable minefield—but all the pieces are in place.

It’s unrealistic to rely on OpenAI to police the marketplace adequately, and regulators don’t seem like the solution either. That puts the onus on AI developers to start taking cybersecurity and data privacy (much) more seriously, but ultimately the responsibility (not to mention the consequences) falls into the laps of end users. It’s up to each of us to carefully vet the AI tools we use and cautiously select the ones we trust with our data.

Does the marketplace advance that objective? Or does it do the opposite, encouraging and enabling people to adopt more AI with less due diligence?

Time will tell. Until then, the AI iceberg keeps growing and, someday soon, the Titanic will come along. 
