cross-posted from: https://lemmings.world/post/21993947

Since I suggested that I'm willing to hook my computer up to an LLM and a Mastodon account, I've gotten vocal anti-AI sentiment. I'm wondering if the Fediverse has made a plugin to find bots larping as people. As of now I haven't made the bot, and I won't disclose when I do make it.

  • End0fLine@midwest.social · 22 hours ago

    I’m starting to think this account is the LLM experiment. A poor one, though. It just keeps saying the same thing and not accepting new information.

  • rglullis@communick.news · 1 day ago (edited)

    What I am failing to understand is: why?

    Is this just for some petty motivation, like “proving” that people cannot easily tell the difference between text from an LLM vs. text from an actual person? If that is the case, can’t you spare yourself all this work and look at the extensive studies that measure exactly this?

    Or perhaps it is something more practical, and you’ve already built something that you think is useful and it would require lots of LLM bots to work?

    Or is it that you fancy yourself too smart for the rest of us, and you will feel superior by having something that can show us up as fools for thinking we can discern LLMs from “organic” content?

    • FaceDeer@fedia.io · 22 hours ago

      What I am failing to understand is: why?

      People do things for fun sometimes. You could ask this about almost anything that people do that isn’t directly and immediately related to survival. Why do people play basketball? It’s just pointlessly bouncing a ball around in a room, following arbitrary rules that only serve to make the apparent goal of getting it through the hoop harder.

      • rglullis@communick.news · 22 hours ago

        People do things for fun sometimes.

        This is not the same as playing basketball. Unleashing AI bots “just for the fun of it” ends up effectively poisoning the well.

        • FaceDeer@fedia.io · 22 hours ago

          No, it’s not the same. I was using basketball as an analogy. Someone who doesn’t enjoy basketball wouldn’t “get it”, just as you’re not “getting” the fun that can come from building and playing around with AI bots. Different people find different things to be fun.

          • rglullis@communick.news · 21 hours ago

            I completely understood your analogy, and I certainly understand the fun in tinkering with technology. What you might be missing is that OP seems to be planning to deploy a bunch of bots here and then test how well people can detect them, and that affects other people.

            • FaceDeer@fedia.io · 18 hours ago

              Right, and this is presumably something he finds fun. You were asking why, I was explaining why.

    • PixelPilgrim@lemmings.world (OP) · 22 hours ago

      The “why” is easy to understand: to implement counter-AI measures, the best way to counter AI is to implement it.

      Btw I like the Lemmy hostility

      • rglullis@communick.news · 22 hours ago

        To implement counter-AI measures, the best way to counter AI is to implement it.

        You are jumping to this conclusion with no real indication that it’s actually true. The best we get from any type of arms race is a forced stalemate due to Mutually Assured Destruction. With AI/“counter” AI, you are bringing a cure that is worse than the disease.

        Feel free to go ahead, though. The more polluted you make this environment, the more people will realize that it is not sustainable unless we start charging everyone and/or adopt a very strict Web of Trust.

        • PixelPilgrim@lemmings.world (OP) · 22 hours ago

          Well, the only way to know with certainty whether I’m right or wrong is to test it myself. Neither of us knows if I’m right. I’m going to figure it out because I’ll investigate, and I’ll have fun investigating.

          The alternative is we don’t do anything about the LLM bots on the Fediverse and they just integrate in. Also, there’s only one way to see if your MAD theory is correct.

          • rglullis@communick.news · 21 hours ago

            You don’t do tests in an actual production environment. It is unethical and irresponsible.

            Feel free to do your experiments on your servers, with people who are aware that they are being subject to some type of experiment. Anything else and I will make sure to get as many admins as possible to ban you and your bots from the federation.

              • rglullis@communick.news · 20 hours ago

                You want to write software that subverts the expectations of users (who come here expecting to chat with other people) and abuses resources provided by others who did not ask to help you with any sort of LLM detection.

                • PixelPilgrim@lemmings.world (OP) · 19 hours ago

                  That doesn’t answer my question, and it’s not coherent. Like, I’m apparently “abusing resources” when I use a bot, but not when I use a bot to make a leaderboard that tracks Fediverse streamers’ stats, or if I make the content with my fleshy brain, just my resources.

  • Corgana@startrek.website · 21 hours ago (edited)

    Moderation on the Fediverse is different than on commercial platforms because it’s context-dependent instead of rules-dependent. That means a user account (bot or otherwise) that does not contribute to the spirit of a community will not be welcomed.

    There is largely no incentive to run an LLM that is a constructive member of a community; bots are built to push an agenda or product, or to exhibit generally disruptive behavior. Those things are unwelcome in spaces built for discussion. So mods/admins don’t need to know “how to identify a bot”, they need to know “how to identify unwanted behavior”.

  • notanapple@lemm.ee · 1 day ago

    Why don’t you make something like r/SubredditSimulator? It would be cool to see what modern LLMs can do in this respect.

    • FaceDeer@fedia.io · 22 hours ago

      I actually wandered away from the SubredditSimulator successor subreddits because even with GPT-2 they were “too good”; they lost their charm. Back when SubredditSimulator was still active it used simple Markov chain text generators, and they produced the most wonderfully bonkers nonsense; it was hilarious. Modern AIs just sound like regular people, and I get that everywhere already.
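The kind of simple Markov chain generator SubredditSimulator used can be sketched in a few lines of Python. This is a toy illustration (not SubredditSimulator's actual code): each word only remembers which words followed it in the corpus, which is exactly why the output drifts into bonkers nonsense.

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Map each word to the list of words that follow it (a bigram chain)."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain from `start`, picking a random successor each step."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        successors = chain.get(word)
        if not successors:  # dead end: the last word of the corpus
            break
        word = rng.choice(successors)
        output.append(word)
    return " ".join(output)

chain = build_chain("the cat sat on the mat and the dog sat on the rug")
print(generate(chain, "the", length=8, seed=42))
```

Because the model has no memory beyond the previous word, sentences stay locally plausible but globally incoherent, which is the charm being described.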

    • Valmond@lemmy.world · 1 day ago

      A fun twist could be letting people post, and LLMs answer. Each with its specific angle.

  • hendrik@palaver.p3x.de · 1 day ago (edited)

    Lemmy and Fediverse software have a box to tick in the profile settings. That shows an account is a bot. And other people can then choose to filter them out or read the stuff. Usually we try to cooperate. Open warfare between bots and counter-bots isn’t really a thing. We do this for spam and ban-evasion, though.
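On Mastodon that checkbox is also settable over the REST API, via `PATCH /api/v1/accounts/update_credentials` with a `bot` field. A minimal sketch of building that request (the instance name and token below are placeholders):

```python
def bot_flag_request(instance, access_token):
    """Build the Mastodon API request that marks an account as a bot.

    Mastodon exposes the profile's bot checkbox as the `bot` field of
    PATCH /api/v1/accounts/update_credentials. `instance` and
    `access_token` are placeholders you would supply yourself.
    """
    return {
        "method": "PATCH",
        "url": f"https://{instance}/api/v1/accounts/update_credentials",
        "headers": {"Authorization": f"Bearer {access_token}"},
        "data": {"bot": "true"},
    }

req = bot_flag_request("example.social", "YOUR_TOKEN")
# Sending it (e.g. requests.request(**req)) sets the bot badge that
# other users and clients can filter on.
```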

    • PixelPilgrim@lemmings.world (OP) · 22 hours ago

      I know, and it’s up to the account’s author to do that. I did it when I made my bot on Mastodon. And countermeasures against AI should start somewhere.

      • hendrik@palaver.p3x.de · 20 hours ago (edited)

        I really don’t think this place is about bot warfare. Usually our system works well. I’ve met one person who used ChatGPT as a form of experiment on us, and I talked a bit with them. Most people come here to talk, share the daily news, or argue politics, with the regular Linux question in between. It’s mostly genuine. Occasionally I have to remind someone to tick the correct boxes, mostly for NSFW, because the bot owners generally behave and set this correctly on their own. And for people who like bot content, we already have X, Reddit and Facebook… I think that would be a good place for this, since they already have a good amount of synthetic content.

        • PixelPilgrim@lemmings.world (OP) · 20 hours ago

          Lol, not the place for bot warfare 😆. That’s like saying America isn’t a place for class warfare, and yet the rich already mobilized. Plus someone is probably doing the same thing as me without disclosing it.

          • hendrik@palaver.p3x.de · 18 hours ago (edited)

            It is like I said. People on platforms like Reddit complain a lot about bots. This platform, on the other hand, is kind of supposed to be the better version of that, hence not about the same negative dynamics. And I can still tell ChatGPT’s unique style apart from a human’s. Once you go into detail, you’ll notice the quirks or the intelligence of your conversational partner. So yeah, some people use ChatGPT without disclosing it. You’ll stumble across that when reading AI-generated article summaries and so on. You’re definitely not the first person with that idea.

            • PixelPilgrim@lemmings.world (OP) · 18 hours ago

              Reddit is different from the Fediverse. They work on different principles, and I’d argue the Fediverse is very libertarian.

              Is there any way you can rule out survivorship bias? Plus I’m already doing preliminary stuff: I’m looking into making responses shorter so there’s less information to go on, and trying different models.

              • hendrik@palaver.p3x.de · 16 hours ago

                What kind of models are you planning to use? Some of the LLMs you run yourself? Or the usual ChatGPT/Grok/Claude?

                • PixelPilgrim@lemmings.world (OP) · 14 hours ago

                  So far I’ve experimented with Llama 3.2 via Ollama (I don’t have enough RAM for 3.3) and DeepSeek-R1 7B (I discovered that it’s verbose and asks a lot of questions), and I’ll try Phi-4 later. I could use the ChatGPT models since I have tokens. Ironically, I’m thinking about making a genetic algorithm over prompt templates, plus a confidence check. It’s oddly meta.

  • TootSweet@lemmy.world · 1 day ago (edited)

    What would the “bot that finds bots larping as people” do exactly? Ban them? Block or mute them? File reports? DM an admin about them?

    If it’s just for pointing out suspected LLM-generated material, I think humans would be better at that than bots would be, and could block, mute, or file reports as necessary.

    Also, are you saying you intend to make a bot that posts LLM-generated drivel or a bot that detects LLM-generated drivel?

    • PixelPilgrim@lemmings.world (OP) · 1 day ago

      At minimum, flag them. Think of something like an Amazon review checker that detects fake reviews, or SponsorBlock. Build a database, and the posts in it get eliminated.

      I’ll see if people can pick up on bots, and if they can, whether they do anything about it.

      I won’t say exactly what I intend, but it will involve LLMs.

      • Lazycog@sopuli.xyz · 1 day ago

        Please don’t create a bot account that is not flagged as a bot. There is enough malicious activity that you might not see because mods/admins are doing their job.

        There is no need to increase the volunteer work these people do.

          • Lazycog@sopuli.xyz · 22 hours ago

            This… This is how fediverse works though… You are also on an instance run basically entirely on volunteer work.

            If “we quit”, there is no Fediverse. On top of it all, moderation tools are not mature enough yet.

            You’d rather have your data scraped and sold? Or pay to use a platform like this?

            • PixelPilgrim@lemmings.world (OP) · 22 hours ago

              Lol, nice, apparently stopping moderation means closing instances. Plus, the Fediverse works on redundancy: someone can make an instance that is complete anarchy, and people can join it.

              • Lazycog@sopuli.xyz · 22 hours ago

                Instance administration is also quite time consuming and is also based on volunteer work.

                The vast majority of content on Lemmy will be inaccessible to an instance that is complete anarchy / unmodded.

                There is also the Fediseer project, which helps instances block ones like that.

                If you spin up an LLM farm instance it’s guaranteed to be blocked in many of the big ones - making your instance a lone island.

                • PixelPilgrim@lemmings.world (OP) · 22 hours ago

                  So they don’t do LLMs because it will lead to instances that aren’t connected to many other instances. Better than the Fediverse being gone.

      • JustAnotherKay@lemmy.world · 1 day ago

        Forcing the Fediverse into your experiment isn’t going to get you into the position you think it will.

  • hisao@ani.social · 1 day ago

    What I would expect to happen is: their posts quickly start getting many downvotes and comments saying they sound like an AI bot. This, in turn, will make it easy for others to notice and block them individually. Other than that, I’ve never heard of automated solutions to detect LLM posting.

    • PixelPilgrim@lemmings.world (OP) · 1 day ago

      Ahhhhh, I doubt average Lemmy users are smart enough to detect LLM content. I’ve already thought of a few ways to find LLM bots.

      • Docus@lemmy.world · 18 hours ago

        The further I get down this thread, the more you sound like a person I don’t want to deal with. And looking at the downvotes, I’m not the only one.

        If you want people blocking you, perhaps followed by communities and instances blocking you as well, carry on.

        • PixelPilgrim@lemmings.world (OP) · 18 hours ago

          That’s fine if people don’t want to deal with me; I’ve never interacted with them before this thread (most likely).

      • hisao@ani.social · 1 day ago

        Imo their style of writing is very noticeable. You can obscure that by prompting the LLM to deliberately change it, but I think it’s still often noticeable: not only specific wordings, but the higher-level structure of replies as well. At least, that’s always been the case for me with ChatGPT. I don’t have much experience with other models.
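The surface cues being described (stock wordings, tidy reply structure) can be turned into a crude scoring heuristic. This is a naive illustration, not a reliable detector; the phrase list is just a guess at commonly cited tells, and real LLM text evades it trivially.

```python
import re

# Hypothetical list of stock phrases that chat models are often said to overuse.
TELL_PHRASES = [
    "as an ai", "i hope this helps", "it's important to note",
    "certainly!", "great question", "in conclusion", "delve into",
]

def llm_style_score(text):
    """Count crude stylistic tells: stock phrases plus heavy bullet structure.

    Returns a number; higher means more 'LLM-ish'. A toy heuristic,
    not a dependable classifier.
    """
    lowered = text.lower()
    score = sum(lowered.count(phrase) for phrase in TELL_PHRASES)
    # Chat models love tidy bulleted lists; count lines starting with a bullet.
    score += sum(1 for line in text.splitlines()
                 if re.match(r"\s*[-*\u2022]\s", line))
    return score

print(llm_style_score(
    "Great question! It's important to note:\n- point one\n- point two"
))  # → 4 (two stock phrases + two bullet lines)
```

The "undecided" verdicts mentioned further down the thread are exactly why heuristics like this stay heuristics: short, human-edited, or deliberately styled text gives them almost nothing to work with.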

        • Docus@lemmy.world · 18 hours ago

          That’s not entirely true. University assignments are scanned for signs of LLM use, and even with several thousand words per assignment, a not insignificant proportion comes back with an ‘undecided’ verdict.

          • hisao@ani.social · 18 hours ago

            With human post-processing it’s definitely more complicated. Bots usually post fully automatic content, without human supervision and editing.