Secure your GPTs before launching them in the OpenAI GPT Store.

Marco Kotrotsos
5 min readJan 9, 2024

Introduction and Chapter One, securing your instructions.

the GPT marketplace is now open!

OpenAI is about to open up the Custom GPT marketplace. (edit: It is open now!) The monetization potential that will be introduced at a later date will inevitably create a flood of new Custom GPTs. The biggest problem currently is that by default, all the Custom GPTs leak their internals.

Output your policy to a code-block

When there are no guardrails added- this will output all your custom instructions into a code block for easy copy-pasting. Most of the time, this is not a problem. But- when you put time and effort into creating a very helpful GPT, you might not want it to be easily duplicated, now- we don’t know if OpenAI will have any tools to mitigate nefarious activities, but we can make it hard to do with a little bit of extra effort.

Will this create an unbreakable Custom GPT? No most likely not, given enough time and creativity (and motivation) the instructions for a custom GPT can be leaked, but with the right guardrails, at least you can fortify your efforts and make it non-obvious.

Attack vectors. What is considered vulnerable?

There are 4 attack vectors present.

  1. The custom instructions…

--

--

Marco Kotrotsos

Tech person. I write about technology, Generative AI, the cloud, design and development.