ChatGPT Survey Reveals Concerns About Generative AI Security
Malwarebytes, a Santa Clara, California-based security company, has released the results of a consumer survey revealing deep reservations about ChatGPT.
Only 10% of respondents expressed trust in the information produced by ChatGPT, while 63% said they did not trust it. Beyond concerns about trust and accuracy, a resounding 81% of respondents believed ChatGPT could pose a safety or security risk, and 52% called for a pause on ChatGPT development so that regulations can catch up.
Only 35% of respondents expressed familiarity with ChatGPT. Within that group, 63% said they distrust the information ChatGPT produces, and 51% questioned whether AI tools can improve internet safety.
“An AI revolution has been gathering pace for a very long time, and many specific, narrow applications have been enormously successful without stirring this kind of mistrust,” said Mark Stockley, Cybersecurity Evangelist at Malwarebytes. “At Malwarebytes, Machine Learning and AI have been used for years to help improve efficiency, to identify malware and improve the overall performance of many technologies. However, public sentiment on ChatGPT is a different beast and the uncertainty around how ChatGPT will change our lives is compounded by the mysterious ways in which it works.”
Malwarebytes conducted the survey of its newsletter readers across the globe between May 29 and May 31, 2023. In total, 1,449 people responded.
Channel Impact®
One might argue that only a relatively small percentage of consumers truly understands the platform, but the findings show that many clearly fear its risks.