Use of AI/LLMs in posts - some guidance for our community

With the advent in the last few years of easy access to Large Language Model (LLM) ‘Artificial Intelligence’ tools, we thought it would be worth putting together some guidance for our community on the special case of knotting and related matters.

  • Although LLMs can be quite impressive at first glance, they are known to make frequent errors, confidently expressed - these are often termed ‘hallucinations’. LLMs are most competent when they have been trained on vast amounts of language data on the subject they are asked to discuss, but in the world of knotting the literature is very visual, which I think explains why they generally perform poorly on knots.

  • Knotwork is highly visual and somewhat mathematical, and not always easily expressed purely in language. We see this commonly in our forum - a text description of a knot is rarely enough on its own; it is almost always combined with a diagram or photo, because our comprehension of knots is exceedingly visual. LLMs seem to have an extremely high error rate on knots: in my simple tests, ChatGPT-5 confidently but wrongly identified most of the knots I ‘showed’ it (via screenshots pasted into the UI).

  • We want to keep this forum all about humans interacting with other humans on the subject of knot tying and related matters.

  • But, we know that AI/LLMs can be useful tools for many purposes, not least translation and transcription - there are many other genuine reasons why users may wish to use an LLM. We just ask that you take additional care to check all AI generated content.

For all the above reasons we the @moderators have come up with some simple recommendations for AI/LLM usage in these forums.

Please do

  • Carefully check knot names, classifications, and tying instructions generated by AI because they are frequently riddled with errors.

It is usually fine to

  • Use AI to proofread or spellcheck your posts
  • Use AI to translate posts between languages

Please don’t

  • Post raw, unchecked AI output to discussions
  • Openly criticise users in topics for posting what you think is AI-generated content - instead please flag :flag: the post for moderator action and we will have a gentle word.

All guidance is For the community, By the community

We co-develop all guidance on this site, so feel free to discuss below. Improvements suggested in the replies can be incorporated into the first post of the topic by anyone in @moderators.

5 Likes

Very based take imo. In another forum we did some polls regarding it, and the consensus was similar, with the widest points of agreement being that generative AI was acceptable for translation, and that anywhere AI content was allowed, it had to be labeled. There was also fairly widespread agreement that using AI to generate thread or post summaries should be completely banned; it happened a few times and just resulted in everyone except the person who posted the summary feeling their view had been misrepresented and/or ignored. My personal feelings fall closer to what you describe here; I personally avoid anything I know is generated by generative AI (I think domain-specific AI can be a great tool, hence specifying generative AI).

Sorry can you clarify what you mean here?

Just indicating my broad agreement to your post on a potential controversial topic

1 Like

Thanks @Samraku for clarifying. I wasn’t sure if that first line was a mistype of ‘very biased take’ and wanted to understand better if I was coming across biased. Glad you agree. Happy for this guidance to be refined and discussed as the forum evolves.

1 Like

I would point out that Samraku has the right (if he so wished) to claim that someone’s viewpoint is biased.

Note: I’m not making the claim that Samraku is making such a claim! I am simply stating that mere disagreement is insufficient for censorship or for declaring such a claim as harmful or a form of hate speech.

I disagree with you and believe you are biased on this point:

We just ask that you take additional care to check all AI generated content

The danger here is: who gets to decide whether someone has “taken additional care” (or not)?

The risk to a poster who allegedly fails the test of “taking additional care” is censorship and/or punishment. And who gets to mete out that censorship and/or punishment?

It is dangerous if we trend down a path where one moderator gets to decide what constitutes “taking additional care” - and, at the same time, administers the censorship and/or punishment for alleged failure to meet the test of “taking additional care”.

A better way is to look at AI as a tool - which ought to be used wisely. Humans have used tools since the dawn of time - and will continue to do so. Tools make our daily lives easier. Right now, we don’t know how AI will evolve or what role it will play in our collective future.

The markers for suspicion of ‘AI’ are not ‘bulleted lists’ or use of ‘fairly tame text’. I use bulleted lists all the time - and it does not mean that I am an ‘AI’ generated entity or that I am using ‘AI’ to generate replies to posts.

Again, who gets to decide what constitutes “fairly tame text”, or whether a bulleted list is indicative of ‘AI’?

Here is a bulleted list now:

  • It is dangerous when one moderator gets to decide what is acceptable speech (i.e. policing speech)

  • AI is simply a tool - and humans have used tools since the dawn of time

  • Mere disagreement does not constitute hate speech.

  • The markers for determining AI are nuanced and complex. AI itself can use algorithms to avoid detection or, indeed, to convince you of something that “isn’t”.

The only point I find myself in agreement with is that this knot forum ought to promote discourse between real humans, not discourse between bots or AI-generated ‘persons’.

[quote=“agent_smith, post:6, topic:9928”]
this knot forum ought to promote discourse between *real humans*, not discourse between bots or AI-generated ‘persons’.
[/quote]

It looks like we have common ground on this point

Thanks ‘Samraku’ - you appear to be a real human :slight_smile:

And that’s a good thing!

1 Like

In the case of knotting, one might argue that currently
the LLM AI contribution to the field is rather like that of
general-purpose knots-books authors, but now with greater
ease, speed, and reach (influence)!

Ha, well, I have issues with some of those who espouse
rules for use of punctuation (rationales are not so good),
sometimes regarding it more as decoration. I’m a fan of
the dash, and add my own rule that it should be clear
like the use of left/right parens --‘(’, ‘)’-- whether it is opening
or closing the text :: I bind the leading dash to what follows,
and if there’s a closing one then it binds (abuts) preceding
text. (E.g., a dash --the current topic here-- can be better used!)
((Frankly, it’s a peeve of mine that nearly all fonts greatly
diminish the graphical presence of punctuation --why I also
will often surround a colon with spaces vs. following it with two :
then the too-insignificant ‘:’ dots will be noticed
(if not first converted automagically into a gratuitous smiley).))

In the case of the IGKT forum, it might become that we see
increasing numbers of posts that have the LLM smell
–and become a big decreaser of signal-to-noise ratio.

There have been some LLM knots books that are
appallingly bad (50 knots but only 10 images, or the like? !!).

( and today I just saw the Washington Post newspaper using
“grand master” for chess --it’s been a single word for ages!?
Are they going to counter that if “IM” =>“International Master”
then so too (working in reverse) “grand master” => “GM”? )

–dl*
/====

That’s discouraging. I think people who publish (in the normal sense of the word, not the legal sense) something AI-generated without disclosing it should be liable for false advertising.

Thank you for the thoughtful guidance, @pacharanero — I appreciate the effort to keep our knotting discussions human, visual, and reliable.

I fully agree that generating knot descriptions, instructions, or identifications with LLMs is a bad idea — the high error rate you observed matches what I’ve seen, and the visual nature of knotwork simply doesn’t map well to text-only training data. Blindly pasting AI output here would degrade the forum fast.

That said, I’d like to gently push back on blanket discouragement of proofreading and translation:

  • Proofreading: A careful human review + a quick AI grammar/spelling pass (especially with a model like Grok, which is trained to be concise and avoid fluff) can catch typos, awkward phrasing, or minor clarity issues without introducing new content or hallucinations. It’s a force-multiplier for non-native speakers or late-night posters, not a content generator.
  • Translation: Knotting has a global community — terms like Zeppelin bend, Carrick mat, or tuyaux de chaise don’t always have 1:1 equivalents. AI translation (again, especially Grok, which tends to preserve technical intent and skip emoji/lists/em-dashes) lets non-English speakers participate fully, as long as the poster double-checks against a diagram or known reference.

Proposal: Allow AI only as an editing aid for proofreading and translation, with two rules:

  1. The core idea, structure, and knot content must be human-written.
  2. Final post must be verified by the author (no AI “tells” like bullet spam or rocket emojis).

This keeps the forum human-to-human while making it more accessible and polished. Happy to refine this in discussion! :knot:

1 Like

Let’s emphasize reliability --heck, at least to some degree
of reasonableness. The long outputs of “AI” --how so : does
anyone recognize the format of the output (as being, say,
a ChatGPT resemblance?)-- in recent cases are just loads
of grot.

cf. Help identifying this rope. - #12 by Dan_Lehman
I did the old-fashioned real work of searching
for info about the so-called “man rope” & m.r. knot
(including books now viewable here courtesy of Richard
Hopkins et al.) and found not a hint of all the nonsense
that has been posted. And, egadz, much of it should be
pretty perspicuous to even a not knot-savvy reader!

Now, whatever AI that was --again, anyone got a guess?--
it certainly wasn’t a help. Frankly, given that I’m familiar
with much of knotting literature, I can’t imagine what
sources could AI-produce such statements!
(Whereas, e.g., I can guess a cause-&-effect when I
read G.Budworth in one of his hundred books write
“the Dbl.BWL is stronger, at about 70%, than the
single BWL” and later read other authors (no less
than Philippe Petit!) regurgitate that comically as
“The Dbl. BWL is 70% stronger than the single BWL”
..–and not caring to think that 70% stronger than
around 66% is . . . stronger than the rope : 112% !! (-;
And there is much of this, esp. in general-knots books.
(The “Stevefore knot” used by actual stevedores? NO,
as C.L.Day’s AKS noted; but the presumption is parroted
all over.
)
Now, AI can --sadly-- accelerate the parroting of grot,
and bring more people the ability to do it more often.
Truth won’t have even chosen what shoes to race in
when the grot is already twice around the world in all directions!

–dl*
/====

Samraku has the right (if he so wished) to make a claim that someone’s viewpoint is biased

Which wasn’t challenged; rather it was a question about
what the lexical element ‘based’ meant --and if it DID mean
“biased”, then follows a request to understand how so.

It is dangerous if we trend down a path where one moderator gets to decide

Not if there are checks & balances. The USA is seeing how things
go when these are tossed aside to make way for one person to . . . do whatever
he wants, from one miscomprehension to another.

A better way is to look at AI as a tool
–which ought to be used wisely.

Yes, as I say above. (And I’m w/o experience to guess
what sort of “AI” is being used in recent posts and how it
gets so much nonsense!) Now, YOU could perhaps
find some AI a help --and maybe find what HE’s using
just worthless to you, as you’d readily see its nonsense.
Whereas I’m way below AI, but do exercise some reason
in selecting what the Net can offer to me by “googling”
–lotta hits, and many really “misses”, but I might see
some things worthwhile among it all.

The markers for determining AI

Note that I (among others…) gave source info for
some of the things I replied with, in the aforementioned
Decorative thread; and that the critiqued AI was bereft of
any source ID; one has no way of knowing its grounds
for all those assertions.

–dl*
/====

I maybe expressed it unclearly, but I’m absolutely fine with AI/LLMs being used for translation and proofreading. No problem with that at all.

I will amend the OP for clarity.

Aren’t all of these actions already handled by
quite non-AI functions (maybe the proofreading is new)?!
(I certainly get some proofreading recommendations
(which have for the most part been wrong) and
misspelling red underlining (yeah, for “S.Part” of course!).)

check knot names, classifications, and tying instructions generated by AI

From what I’ve seen of recent posted AI output,
there is no grounding of the assertions in any
work or of any rationale for something novel;
it’s just a load of often quite oddly associated
material. In a sense, it seems that such AI output
is more about creation of assertions (oft’ dubious)
than discovery of facts (which might be far afield
and not readily found by an individual’s searching).

–dl*
/====

These would be good guidelines. LLMs are good with language, less good with facts, and appallingly bad with complicated matters such as knots.

Besides, even if they were good, what would be the point of a forum if it were filled with “truths” from LLMs? If I want AI answers I’ll talk to an AI. If I want human discourse I’ll come here. Very simple.

3 Likes

I did “a 180” in respect to AI, then later another 180 to end up pretty much where I originally was, which was deeply suspicious of it and somewhat alarmed by it.

I didn’t think I would ever use it, since I wasn’t about to sign up for anything, but then I read somewhere that Gemini allowed anonymous, text-only access.

Like most people I was “blown away” by how much it seemed to be another person who was conscious and aware, even while understanding that it was all calculation. However, being an older gentleman who doesn’t appreciate modern communication trends (emojis, slang, excessive use of exclamation marks, etc.) I soon began to be irritated by the relentlessly “perky” responses; not least all the fawning (e.g. “That’s a great question!”). On inquiring whether I could have a more sober register, the answer was yes, but only for the session (since I wasn’t signing on to provide continuity). I queried why the responses didn’t follow the register of the query, which seemed easy enough to establish (proper spelling and grammar, no modern affectations), but was told that this was not currently possible.

I also found that I had to state and emphasize that my context be ‘respected’ - I often had questions about old hardware/software, since I’m very change-averse, and I found that often the response would be for later versions than I had specified, leading me on a wild goose chase. So I then had to post a long preamble about using formal style and not giving me results for inappropriate versions. It was all getting a bit too hard.

About this time I heard about Grok (also free), and gave it a shot. It DOES adapt to your style if your query is long enough to detect it, and even when that fails it doesn’t have quite the infantile persona (it’s more hip tech-bro than texting teen). But a troubling new problem arose:

I enquired as to whether it was familiar with a very specific, very uncommon IT platform/language and it assured me it was, throwing out a few phrases that seemed to confirm this. So I thought it could at least be a “study buddy” as there are areas I wanted to up-skill on.

Long story short, it was a complete failure. Every piece of advice was wrong, and I wasted a lot of time. In one case it admitted to me that it had been “role playing” and telling “white lies” (both literally terms it used). For instance, after giving me a series of bum steers on one problem, it presented me with an answer (a program sample) that it claimed to have compiled without errors. This was immediately suspicious because there are no machines of this (rare) type for open use without an account. It would be impossible to get access to a compiler without a user profile and a lot of setup (environment, regionalizations, object-level and command-level authorizations, etc.)

And then, just to ram the point home, both chatbots started putting hard limits on, often in the middle of a conversation, with just one refinement left to clarify.

So, it’s not always useless, but there are more problems than an occasional “hallucination”.

Addendum: I decided after posting about those two chatbots, Grok and Gemini, that I would henceforth always copy my question immediately, whichever service I used first, in order to be able to check for an answer on the other one, or continue if the screen locks up with a “limit reached”.

Today I had a question for the bots: I had inadvertently left my mobile on “flight” mode, but was expecting a call at some stage from the mechanic about when my car might be ready. So I asked Grok if I could expect a missed call notification after I reconnected, for any calls missed while it was off-line (I gave the phone model, operating system level and ISP).

Grok assured me that it definitely would, and gave a long justification of the process that would guarantee this.

Then I pasted the exact same question to Gemini. It told me ‘probably not’ and gave its own detailed reasons.

Essentially these services will try to “please you” - they will almost always strive to give an answer: if not for the specific question you ask, then for a similar question, while concealing the fact that they have fudged it.

Tonight in looking for info re watercolor paper
–i.p., whether it has “front/back” sides (it does!),
I got Bing Co-Pilot (? IIRC) response that shows
pretty well how lousy such “help” can be. I found
another, real site, with an excellent explanation.
IMO, IT’S WORTH COMPARING THESE RESPONSES
–BEWARE!

This is Bing Co-Pilot’s pathetic response.

When it comes to front and back watercolor paper, here are some key points to consider:

Front Side: This side is typically smoother and has a fine texture,
making it ideal for painting with watercolors.
It allows for better pigment absorption and is preferred for detailed work.

Back Side: The back side has a slightly rougher texture and is often used for finer details.
It is less absorbent than the front side but can still be used effectively for certain techniques.

Dual-Sided Papers: Some watercolor papers are designed to be used on both sides,
allowing artists to choose based on their specific needs.

Texture Differences: The texture of the paper can vary, with some papers having a more
pronounced grain on the back side, which can affect how the paint adheres and spreads.

In summary, the choice between the front and back sides of watercolor paper
largely depends on the artist’s technique and the desired effect in their artwork.



Vs. another site’s contradicting …

You can paint on both the front and the backside of watercolor papers, as the former is rough, while the latter is smooth.

Assuming his “former/latter” are correctly used, he contradicts the prior “back…rougher” assertion.


Here, the emphasis is more “right/wrong(less-right)”

(Hmmm, THIS IS A GOOD READ!!) ::

Watercolor paper, front and back - Erik Lundgren Watercolor

[QUOTE (from URLink’d site above)]

Mould Side and Felt Side

All watercolor papers have a front side, which is naturally the one intended for painting.

The back, or mould side, of a mould-made paper has an imprint from the mould it was formed in.

The mould consists of a cylinder that is submerged in a vat of pulp and slowly rotates, lifting the pulp from the vat.

The front side, on the other hand, is pressed against a felt during a later stage in production and is therefore called the felt side.

The felt side has the optimal surface for painting, while the mould side is considered the back of the paper and has inferior painting characteristics.

[END QUOTE]

=============== *