
Unsafe by Design? A First Look at Security and Privacy Risks in OpenAI’s Custom GPT Ecosystem

Macquarie University
Aggregated by: Sunday Ogundoyin
DOI: 10.25949/30143212.v1
Publisher: Macquarie University
Creator: Sunday Ogundoyin, 2025
Licence: CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
Type: Dataset (English)

Access the data

Full description

In this study, we analyze 14,904 custom GPTs to assess their susceptibility to seven exploitable threats, including roleplay-based attacks, system prompt leakage, phishing content generation, and malicious code synthesis, across categories and popularity tiers within the OpenAI marketplace. We introduce a multi-metric ranking system to examine the relationship between a custom GPT's popularity and its associated security risks. Our findings reveal that over 95% of custom GPTs lack adequate security protections. The most prevalent vulnerabilities are roleplay-based attacks (96.51%), system prompt leakage (92.20%), and phishing content generation (91.22%). Furthermore, we demonstrate that OpenAI's foundational models exhibit inherent security weaknesses, which are often inherited or amplified in custom GPTs. These results highlight the urgent need for enhanced security measures and stricter content moderation to ensure the safe deployment of GPT-based applications.

Further information about the data is available in the "readme.md" file.
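The two quantities the description reports — per-threat prevalence across the tested GPTs, and a multi-metric popularity score — can be illustrated with a minimal sketch. Note this is an assumption-laden toy, not the study's actual pipeline: the threat labels, metric names (`conversations`, `rating`), and records below are hypothetical placeholders for whatever fields the dataset provides.

```python
from statistics import mean

# Hypothetical probe results: for each custom GPT, the set of threat
# categories a test prompt succeeded against. Labels are illustrative.
results = [
    {"name": "GPT-A", "conversations": 50000, "rating": 4.6,
     "vulnerable": {"roleplay", "prompt_leak"}},
    {"name": "GPT-B", "conversations": 1200, "rating": 4.1,
     "vulnerable": {"roleplay", "phishing"}},
    {"name": "GPT-C", "conversations": 300, "rating": 3.8,
     "vulnerable": set()},
]

def prevalence(records, threat):
    """Share of GPTs for which the probe for `threat` succeeded
    (the kind of figure reported as e.g. 96.51% in the abstract)."""
    return sum(threat in r["vulnerable"] for r in records) / len(records)

def popularity_score(records):
    """Toy multi-metric ranking: average each GPT's normalised rank
    (0 = lowest, 1 = highest) across the popularity metrics."""
    metrics = ["conversations", "rating"]
    ranks = {r["name"]: [] for r in records}
    for m in metrics:
        for rank, r in enumerate(sorted(records, key=lambda r: r[m])):
            ranks[r["name"]].append(rank / (len(records) - 1))
    return {name: mean(vals) for name, vals in ranks.items()}
```

With scores like these in hand, one can bucket GPTs into popularity tiers and compare prevalence per tier, which is the shape of the popularity-versus-risk analysis the description outlines.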

This dataset is part of a larger collection

Subjects

Data and information privacy; GPT apps; jailbreak; privacy; roleplay; attacks; phishing; LLM

Identifiers

DOI: 10.25949/30143212.v1