The question of whether NSFW AI chat systems belong in schools is a controversial one, especially where children are involved. Data from the National Center for Education Statistics indicate that more than 90 percent of U.S. high school students have internet access, and about 60 percent connect through social media and messaging apps. Given this depth of digital engagement, the risk of exposure to inappropriate or harmful content is significant. In theory, NSFW (not safe for work) AI chat systems could protect students by filtering out or labeling such content during analysis. Yet their very nature may also supply the strongest objection to using them.
Arguably, the most pressing issue is whether NSFW AI chat systems can accurately determine context, a problem that is amplified in an educational setting. For instance, an AI may flag certain words in a conversation as inappropriate without grasping the broader context in which those words are being used. In a study conducted at the University of Maryland, AI moderation tools misclassified content more than 30 percent of the time, producing both false positives and false negatives. This risks conflating legitimate educational discussion with genuinely harmful content, or vice versa.
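The context problem described above can be illustrated with a minimal sketch of naive keyword-based flagging. The word list, function name, and example sentences here are hypothetical illustrations, not drawn from any real moderation system or from the University of Maryland study:

```python
# Illustrative sketch: keyword matching flags words without understanding context.
# FLAGGED_WORDS and the sample messages are hypothetical, chosen only to show
# how a false positive and a false negative can each arise.

FLAGGED_WORDS = {"breast", "drugs", "suicide"}

def naive_flag(message: str) -> bool:
    """Flag a message if it contains any flagged word, ignoring all context."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return not FLAGGED_WORDS.isdisjoint(words)

# A health-class discussion is flagged (false positive)...
print(naive_flag("Today we discussed breast cancer screening in biology."))  # True
# ...while harmful intent phrased in euphemisms slips through (false negative).
print(naive_flag("Meet me after class to get the stuff."))  # False
```

A system like this cannot tell a biology lesson from genuinely inappropriate content, which is exactly the kind of misclassification the study describes.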
Meanwhile, schools have made real progress with AI-powered educational tools that accelerate learning. According to a McKinsey & Company report, harnessing AI for personalized learning in schools can yield gains of up to 30 percent. NSFW AI chat systems, by contrast, may help monitor and block harmful content, but they provide no educational benefit of their own. These tools are designed for content moderation, which is not a school's primary mission; the main goal should be providing a safe and enriching learning environment.
Of course, there is also the matter of privacy and data handling when NSFW AI systems are deployed in schools. AI surveillance has already produced its share of infractions against students: an ACLU report revealed that schools have been monitoring students with opaque and overbearing AI-based surveillance tools (ACLU, 2020), some of which never explicitly notify anyone that online activity is being tracked. NSFW AI chat systems carry the same risk: records of students' online behavior, sensitive data in themselves, could be hoarded by vendors and then sold or used against students' interests.
Additionally, the implementation of NSFW AI chat systems can stigmatize students. According to a study in the Journal of Educational Psychology, students exhibited heightened anxiety and disengagement when they felt their online activities were being recorded too frequently. Such privacy intrusion can push students away from digital platforms that would otherwise have benefited their studies.
NSFW AI chat systems that detect and filter harmful content are a poor fit for an educational setting focused on developing the learner, not merely protecting the user. Education aims to cultivate critical thinking, creativity, and communication skills, which flourish best through open and free debate. Rather than depending on NSFW AI chat systems to monitor and moderate, schools could take a more proactive approach through direct education in digital literacy and safe online behavior.
Considering these reasons, it is important for schools to weigh the impact of NSFW AI chat systems carefully. These systems can offer some level of protection, but they do not serve the pedagogical purpose I believe is essential: exposing students to free and robust debate. If you want to learn more about how NSFW AI chat systems work, visit nsfw ai chat.