
ARTICLE: 'The pros and cons of asking chatbots about drugs'

(Wordy)

Moderator: Article Discussion
Staff member
Joined: Oct 23, 2005
Messages: 1,528
This thread is for discussion of the article 'The pros and cons of asking chatbots about drugs' by Stu Hatton AKA (Wordy) - yep, that's me! ;)

Feel free to post your thoughts about the article, and any questions you might have.

I'll be more than happy to respond to questions/comments about the article. However, I won't pretend to be an expert on the topic of AI. I'm simply a writer with an interest in the topic. :)

As with all the other discussion threads in the Bluelight Article Discussion forum, constructive criticism and debate are most welcome; however, abusive comments will not be tolerated. The Bluelight User Agreement (BLUA) applies here, as it does across our site. We hope to conduct civil and constructive discussions about the issues raised in the article.
 
I've been watching PsyAI at work in Bluelight's Telegram channel. A lot of people have been coming in and asking it all kinds of questions - more information about a novel substance, information about dosage, information about contra-indications, and what to consider if combining one substance with another.

From what I've seen, chatbots tend to work in ephemeral spaces, where the textual response is not archived. While it's often a one-on-one conversation (the user and the chatbot), in the Bluelight spaces at least it's one-to-many: one bot answering one person's question, but in the presence of multiple people who can review the chat log and see the bot interact. Having it embedded in the forum like a forum user, where it can respond to a forum post automatically, is a really interesting idea. It would mean that its content would be archived in the same way as human-created content here. I'm curious about how that will work and whether it will be useful for everyone.

Obviously it's not going to be providing the 'community' aspect, but instead augmenting the information/advice component of what we do.
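The "bot as a forum user" idea above can be sketched in a few lines: the bot account answers through the same posting mechanism as any human member, so its reply lands in the thread archive like any other post. This is a minimal illustration only; all names here (ForumPost, generate_answer, reply_as_bot) are hypothetical and not part of any actual Bluelight or PsyAI code.

```python
# Hypothetical sketch: a bot account replies via the same posting path
# as a human member, so its answer is archived in-thread like any post.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ForumPost:
    author: str
    body: str
    replies: List["ForumPost"] = field(default_factory=list)


def generate_answer(question: str) -> str:
    # Stand-in for the chatbot backend (e.g. a retrieval pipeline).
    return f"[PsyAI] Here's what I found about: {question}"


def reply_as_bot(post: ForumPost) -> ForumPost:
    """Post the bot's answer as an ordinary reply, so the thread
    archive contains it exactly like a human-authored reply."""
    answer = ForumPost(author="PsyAI", body=generate_answer(post.body))
    post.replies.append(answer)
    return answer


thread = ForumPost(author="guest", body="Dosage info for substance X?")
reply_as_bot(thread)
```

The point of the sketch is simply that, once the bot's output is stored as a regular post object, the forum's existing archiving, search, and moderation apply to it automatically.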
 
Hello! Sernyl here.

I've been getting closely acquainted with and involved in the BL community, but in a "lurker" kind of way, mostly behind the scenes (e.g. providing tech support for community members using PsyAI, my HR bot). For a while now I've been looking for an appropriate thread to drop an intro note in, before Tronica linked to this one, and I think it's about time I say more than a few words to the public. :]

Quick explanatory digression regarding my chosen name -- "Sernyl" -- intended as a cheeky homage to the tangled mess that is drug legislation in the US. "Sernyl" and "Sernylan" are both defunct brand names of phencyclidine (PCP), which is arguably the most maligned drug in Western societies, if not the rest of the world too. Some folks enjoy bashing PCP as a violence-inducing, superhuman-strength-imbuing, nasty substance without any conceivable therapeutic properties. Yet few know or recall that PCP is still in Schedule II (not Schedule I, like THC/cannabis) under the Drug Enforcement Administration's ruling, meaning it was once deemed to have therapeutic value and could be legally prescribed some decades ago as a veterinary anesthetic. I'm not saying that PCP should be in Schedule I; I'm encouraging all of us, myself included, to reassess certain narratives about certain drugs that form and travel across communities -- even drugs with narrow safety profiles. I am neither pro- nor anti-PCP, but I am absolutely against the demonization of PCP, or of any other substance for that matter.

My personal website, https://sernyl.dev, describes me in three succinct bullet points:
  • humble Anarcho-Shulginist
  • computer scientist, software developer, machine learning engineer
  • psychopharmacology nerd
(Don't overthink the first point -- I just thought "Anarcho-Shulginist" captured the intersection between a (peaceful) anarchist approach to drug regulation and intellectual curiosity. :) )

Before PsyAI, I was involved in a small handful of harm reduction projects (https://drugsand.me, https://sojourns.io). PsyAI was very much an accidental experiment (not to sound ostentatiously modest) that eventually emerged as a harm reduction tool which enough people found useful that I now actively develop and share it with much enthusiasm. As with any software that relies on third-party AI technology, I welcome scrutiny and feedback from the community -- the more the merrier. PsyAI is not the only chatbot out there, and this is not entirely uncharted territory; but, as one can imagine, the challenges and concerns around applying AI to harm reduction are countless (and justifiably so).

My current aspiration for the mid to long term, in general terms, is to weave PsyAI into the forum. This concept is open for discussion and far from a set plan. I encourage the community to interact with both the bot and me, asking questions not just about substances and drug culture, but also about the technology behind PsyAI: its inner workings, data management, and privacy considerations. Rest assured -- we'll respond with complete transparency. :)

Thank you for welcoming me into your fold. I'm here, ready to listen, learn, and collaborate. ✨

P.S. The bot is available on the Bluelight Discord server and via Telegram. More details here: https://psyai.chat.

- Sernyl
 
Hey Sernyl, nice work on the RAG bot. Check your DMs though...
 
Personally, I wish some kind of God existed and could become corporeal to use a gigantic thumb (or hand) to squash all chatbots to death. I've never had such frustrating experiences as when dealing with one of them. I'd rather travel two hours on public transport to ask a real person than use a chatbot.

Reading this thread made me consider one pro:
Obviously it's not going to be providing the 'community' aspect, but instead augmenting the information/advice component of what we do.

@sernyl Welcome. My opinion is just a general opinion about chatbots, not about you or your work ethics.

*I can't insert the quote where you wrote about the idea of integrating the Bot into the forum*

But if I were ever under the impression that one of my questions was being answered by a chatbot, I would surely stop coming here. One of the reasons I like this forum is the *human* contact, even when it's pure, cold facts I'm looking for.
 
Robots talking to robots. We're nearly there on platforms like LinkedIn, and I couldn't agree more. Although it does mean an influx of real-life person interactions!
 
Thanks for the feedback.
I wouldn't want a chatbot to be mistaken for a human on here, nor would I want chatbots communicating with chatbots here (I know that happens on many platforms!).

I reckon it's possible to build your concerns into the design: e.g. forum members could choose to have their question answered by a tailored harm reduction bot, but would never be forced to use it. And I think the beauty of this happening on Bluelight is that any answers from the bot would be subject to community scrutiny. That is, if there were errors or concerns about the info given, we could discuss that as a community and add our own opinions. And since we have the developer here, engaged in the process and keen to ensure high-quality HR info, the chatbot can easily be updated with corrected or expanded info.
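The design just described has three moving parts: an opt-in gate (the bot never answers unasked), community flagging of bot answers, and a developer correction loop. A minimal sketch of those parts, with every name (BotAnswer, ask, flag, correct) invented purely for illustration and not taken from any real PsyAI or Bluelight code:

```python
# Hypothetical sketch of the opt-in design: a member must explicitly
# request a bot answer; any answer can be flagged by the community
# and later corrected by the developer.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class BotAnswer:
    text: str
    flags: List[str] = field(default_factory=list)  # community concerns
    correction: Optional[str] = None                # developer's fix


def ask(question: str, wants_bot_answer: bool) -> Optional[BotAnswer]:
    """Only answer when the member opted in; never answer unasked."""
    if not wants_bot_answer:
        return None
    return BotAnswer(text=f"Draft answer about: {question}")


def flag(answer: BotAnswer, concern: str) -> None:
    answer.flags.append(concern)    # visible to the whole thread


def correct(answer: BotAnswer, fixed_text: str) -> None:
    answer.correction = fixed_text  # corrected info fed back upstream


a = ask("Mixing A and B?", wants_bot_answer=True)
flag(a, "Missing interaction warning")
correct(a, "Updated answer with interaction warning")
assert ask("Mixing A and B?", wants_bot_answer=False) is None
```

The key design choice is that flags and corrections live alongside the answer rather than replacing it, so the thread preserves the full record of what the bot said and how the community responded.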

This won't solve your concern about bots even existing, but I think the horse has bolted on that one. The question for me is how, not whether, we utilise the tech that's available. And with any use of bots, the best uses will be thoughtful ones. We can use them well and we can use them badly, so it's about how we do it, IMHO (I work in university settings, and that field is really grappling with this at the moment!).
 
And I think the beauty of this happening on Bluelight is that any answers by the bot would be subjected to community scrutiny. That is, if there were errors or concerns about the info given, as a community, we can discuss that and add our own opinions
From an informational point of view, definitely a pro. Plus, it would lead back to human-to-human interaction.
 