KNOT TESTING GUIDELINES - is IGKT best positioned to set fundamental guidelines?

Proposition :-

Fundamental Guideline #1

Establish what it is that you want to know. Do not be distracted by how it might be determined; concentrate on formulating exactly what it is you want to know.

Only after you have formulated your goal should you start to investigate what tests might yield your answer.

Derek

Okay, well, I didn’t think there was no jamming,
only that your image belied which side. But I see
now that the apparent “space”-revealing white
speck is some sort of reflection off of the rope tail and
not the space I’d thought --which sort of bit of space
can be seen in the less-loaded knot to the left, and
at the other side of both SParts, than is this misleading
speck.

Now, to try this in some different rope --i.p., something
that doesn’t compress like the multi-stranded-kern’d
rope.

(-;

per Derek:

Amateur / Amateur-Equipped / Professional Testing Facility?

But to be honest, I don’t see much need to separate them.

I had considered using the descriptor ‘amateur’ but it can imply a meaning that is unfair or unwarranted:
See this link to a dictionary definition: https://dictionary.cambridge.org/dictionary/english/amateur (scroll down a bit to see this possible imputation; …“someone who does not have much skill in what they do”)

Perhaps a more suitable descriptor for backyard testers is ‘Hobbyist/Enthusiast tester’?

Full disclosure statement: Derek, I am providing a dictionary link not in an attempt to be derogatory or insulting toward you. I am merely pointing to an external source. There is absolutely no intention to be insulting in any way! I have to insert these disclaimers because I ran afoul of Mobius for quoting the dictionary - which he interpreted as being demeaning or derogatory. Just to be clear, I intend nothing of the sort!

So for the reason that the word ‘amateur’ could possibly be misconstrued - I chose not to use it.

In terms of a desire to distinguish between different classes of ‘tester’ (and here again is a source of irritation with the ambiguous distinction between a knot tester and a knot trialer):
I do think making a distinction is important.
I believe that expectations of scientific rigour scale accordingly.

I believe that some on this forum are apprehensive about drawing criticism for their ‘knot testing’ efforts.
And so they shy away from identifying as a class of tester where expectations may be beyond their capabilities.
In my view, advances are made in a scientific field when others have a chance to peer review or try to reproduce published results. That is how science is done - someone tests and publishes - and then others can either confirm or refute the results.
Criticism is part of the process - but it is inevitable that some may have difficulty in accepting criticism. And if expectations scale according to your ‘tester class’, setting a lower bar is a way of escaping this process.

If we look at past evidence and the current crop of knot test reports from around the world - it is clear that some are ‘holding themselves out’ as being experts. That is, you can read/download some reports from certain individuals - and it is clear that they are holding themselves out as possessing a special expertise. Readers often assume they are ‘experts’ - and accept their conclusions at face value.
Credibility plays a role - and some knot testers (mostly from a class of testers I refer to as ‘pseudo labs’ - or well equipped enthusiasts) - hold a certain level of professional credibility and can significantly influence the lay public. Examples of these pseudo lab testers are Richard Delaney (rope test lab) and Grant Prattley (Over the edge rescue). They regularly test and publish their results. I would not class them as enthusiast/hobbyist (aka ‘backyard testers’). But they are not certified, nationally accredited test labs.
So my view is that semi-professional (pseudo lab) testers like Richard Delaney and Grant Prattley must be willing to accept criticism and peer review of their published results - as they are publishing to the world - and people assume they are ‘experts’.
Expectations scale accordingly - and I believe that a higher level of scientific rigour is warranted from these individuals than from enthusiast/hobbyist testers.

Richard Delaney (for example) also holds an Engineering degree from a university - which he further promotes as an integral part of his test lab. Such credentials impart credibility - which an enthusiast/hobbyist generally does not have (some may - but on balance, most enthusiast/hobbyist testers likely wouldn’t hold Engineering degrees).

I would expect a much higher degree of scientific rigour from a certified, nationally accredited test lab (ie professional test lab). If this class of tester is publishing to the world, they must be willing to accept criticism via peer review. Certainly, NautiKnots’ arguments for scientific rigour would apply to this class of tester. They are generally well funded, have a purpose built test facility and can measure and capture data with sophisticated computers and software. There is usually an Engineer in residence at the facility.

For an enthusiast/hobbyist class tester, with very limited funds (meaning nearly zero $), an improvised force-generating machine and maybe some sort of force measuring device (a fishing scale?) - not to mention very limited spare time - scientific rigour is likely to be (at best) minimal. Repetition of their results by others (to confirm or refute) is probably not possible. For example, the cord/rope material is often the cheapest they can source - and likely doesn’t meet any particular manufacturing standard. For another peer-review tester living in a different nation, it would be near impossible to purchase the exact same material.

So I think we do need to distinguish between different classes of tester. This is most certainly not intended to be demeaning, derogatory or insulting, nor to devalue anyone. It is simply a way of scaling expectations of scientific rigour.

EDIT NOTE
In relation to distinguishing between different classes of testers:

  1. Hobbyist/Enthusiast
  2. Semi-professional
  3. Professional test lab

This avoids the term ‘backyard’ - which some may take offense to (even though it isn’t intended to be derogatory or demeaning - it is just a metaphor).

I had considered using the descriptor 'amateur' but, it can imply a meaning that is unfair or unwarranted: See this link to a dictionary definition: https://dictionary.cambridge.org/dictionary/english/amateur (scroll down a bit to see this possible imputation; ..."someone who does not have much skill in what they do")

I take your point on this Mark. Here in the UK the distinction is more focused on payment. If you are paid for your work you are professional and if you are unpaid you are Amateur, there is no denigration in perceived value. This possibly stems from the fact that Amateur Radio hams are amongst the world’s top experts in their field. Often, because they are not constrained by the need to turn a profit, Amateurs are able to progress R&D way beyond that achieved by ‘Professionals’. Add to this the fact that Amateurs are driven by passion while Professionals are driven by wage and continued employment, and you might see that Amateurs are generally respected as the experts.

Still, I am not a word botherer, and far more important to me is the unjustified and unjustifiable elevation in credibility you seem keen to accord to Professional Test Labs. I have said it already, but it seems worth stating again - I have been there - my labs used state of the art equipment, 0.1DIN test equipment, NIST traceable standards, automated analysis equipment, direct data capture and latest generation Statistical Analysis software. High precision, high accuracy, high repeatability using agreed methodology. Yet with no Nodeologist present it might be nothing other than highly accurate rubbish, while a Knot expert in his workshop, with a ruler and a bag of sand or a 10-ton jack (for comparative assessments), would be able to make seriously valid assessments of knot behaviour.

Put it another way - scaling precision without expert intelligence does not scale value.

Accuracy without Expertise is a sham of our modern mindset, and it should be our job to think of them as a 0.1 DIN hammer …

There, I have explained it twice now. You know where I stand and why. I will now shut up on the subject.

Derek

I believe that some on this forum are apprehensive about drawing criticism for their 'knot testing' efforts. And so they shy away from identifying as a class of tester where expectations may be beyond their capabilities. In my view, advances are made in a scientific field when others have a chance to peer review or try to reproduce published results. That is how science is done - someone tests and publishes - and then others can either confirm or refute the results. Criticism is part of the process - but it is inevitable that some may have difficulty in accepting criticism. And if expectations scale according to your 'tester class', setting a lower bar is a way of escaping this process.

Point made. New ideas and perspectives are more valuable than a 0.1DIN hammer.

If we look at past evidence and the current crop of knot test reports from around the world - it is clear that some are 'holding themselves out' as being experts. That is, you can read/download some reports from certain individuals - and it is clear that they are holding themselves out as possessing a special expertise. Readers often assume they are 'experts' - and accept their conclusions at face value. Credibility plays a role - and some knot testers (mostly from a class of testers I refer to as 'pseudo labs' - or well equipped enthusiasts) - hold a certain level of professional credibility and can significantly influence the lay public. Examples of these pseudo lab testers are Richard Delaney (rope test lab) and Grant Prattley (Over the edge rescue). They regularly test and publish their results. I would not class them as enthusiast/hobbyist (aka 'backyard testers'). But they are not certified, nationally accredited test labs. So my view is that semi-professional (pseudo lab) testers like Richard Delaney and Grant Prattley must be willing to accept criticism and peer review of their published results - as they are publishing to the world - and people assume they are 'experts'. Expectations scale accordingly - and I believe that a higher level of scientific rigour is warranted from these individuals than from enthusiast/hobbyist testers.

This, to me, is the most interesting (and valuable) category - the Professional knot user, using ingenuity rather than a 0.1 DIN hammer. And yes, they should certainly attract extensive scrutiny and review of their publications by experts from the knotting world. I haven’t noticed their work published anywhere near the IGKT - perhaps there is a reason for this?

If they are publishing knot related work, then the IGKT Forum is an excellent review platform that they should be eager to utilise.

Someone mentioned earlier that this Forum could use a Peer Review Page, perhaps the WebMistress could set that up for us and perhaps you could take on the role of feeding those reports into the page?

Derek

Add to this the fact that Amateurs are driven by passion while [b]Professionals are driven by wage and continued employment[/b], and you might see that Amateurs are generally respected as the experts.
Not all professionals are driven solely by employment (and this imputes 'money'). Some also want to make a difference and/or contribute to improvement.
... and far more important to me is the [b]unjustified and unjustifiable elevation in credibility you seem keen to accord to Professional Test Labs[/b].

I am unclear how you joined the dots to reach this viewpoint.
Where did I type words that specifically convey that I (personally) accord a higher level of credibility to professional test labs? For the record, I don’t accord any one class of knot tester elevated status in credibility over another.

All I was doing was identifying that there are different classes of knot testers. And that expectations of scientific rigour scale according to which class you identify as.
Note: I personally identify as a hobbyist/enthusiast tester, and I acknowledge that a professional test lab would not automatically have expertise in knots and knot tying.

Not all professionals are driven solely by employment (and this imputes 'money'). Some also want to make a difference and/or contribute to improvement.

Agreed, but you try working in a testing lab and see how long your will holds out…

All I was doing was identifying that there are different classes of knot testers. And that expectations of scientific rigour scale according to which class you identify as.

If more credibility is not accorded to a greater level of rigour, then what is the value of that additional imposition?

But, I seem to have struck a nerve, if I have then I apologise, it was not my intention.

Derek

If more credibility is not accorded to a greater level of rigour, then what is the value of that additional imposition?
I thought that with this long discussion, I had already explained why? I think it is important to identify which class of tester you identify with because this then sets the level of expectation of scientific rigour. Examples are Mobius, Alan Lee and NautiKnots. [b]NautiKnots [/b]had a long argument for scientific rigour - with lengthy discussions about accuracy, repeatability, and the need for statistically valid sample gathering. He emphasised the need to consult with external institutions such as the cordage institute and the IEEE. He became irritated when I advanced that the cordage institute and IEEE are not experts in [i]knots [/i]and [i]knot tying[/i].

I had advanced (for example) that the cordage institute is not going to be helpful in devising knot tests that examine aspects not related to pure MBS yield (ie the pull-it-till-it-breaks default mentality). Examples include assessing different knot geometries, dressing states, and knot security and stability. Their expertise is linked to cord/rope manufacturing standards and MBS break testing of manufactured rope (not knots per se).

For a hobbyist/enthusiast tester who wants to examine something like jamming and stability in a particular knot geometry, I don’t see how contacting the cordage institute is going to make any revolutionary breakthroughs in experimental design. Given that a hobbyist/enthusiast tester likely has non-existent funds, a non-existent test bed (eg a 10 ton ram) and likely no calibrated load cell, measurement accuracy will not fall within the reliability standards of a professional test lab. Photographs of any real quality are also likely to be non-existent - and a typed report with a conclusion is also likely to be absent.

The things that NautiKnots was arguing for - are likely beyond the reach of your typical hobbyist/enthusiast tester.

And Mobius insisted that he is not a backyard tester (ie a better definition I now use is hobbyist/enthusiast tester) and indeed insisted on identifying as a ‘knot trialer’ (and not a knot tester). Presumably, this insistence was related to the type of ‘rig’ that he uses to apply force, and to avoiding being held to the same standards as semi-professional testers and professional test labs. The point being that Mobius didn’t want to identify as being in a class of tester where the bar is set at a level where expectations of quality and accuracy are in the domain of the professional lab (and thereby avoid drawing unwanted peer review criticism).

For me personally, I identify as a hobbyist/enthusiast tester (formerly a ‘backyard’ tester) - and my measurement accuracy is only as reliable as my eye watching a dimly lit LCD screen while cranking a lever. My mathematical skills are poor and I don’t have sophisticated instruments and software to aid in capturing data.
I do know how to use my el cheapo compact digital camera - simply aided by placing a white screen behind the knot so you have a uniform plain white background - and voila - you can get reasonable images. I am also working on my own, with very limited spare time, and I don’t have an engineering degree (which is likely a common theme among hobbyist/enthusiast knot testers).

But, I seem to have struck a nerve, if I have then I apologise, it was not my intention.
You have absolutely nothing to apologise for! And you have struck [i]no [/i]nerve. That's the problem with typed words instead of traditional face-to-face conversation. You are reading what I type - but you place your own interpretation on it. I have direct first-hand experience with all 3 classes of testers, and can see the limitations of each class. These limitations relate to finances, time, reputation and equipment resources. A hobbyist/enthusiast tester will be restricted by access to funds, free time, and equipment resources. In contrast, a professional test lab is usually well funded, has allocated time, high precision equipment and likely an established reputation.

Derek, my original question was in relation to what the IGKT can do to establish robust knot testing guidelines.
You have attempted to answer this question.
In my experience on this IGKT forum, I have a feeling that there is going to be a lot of wheel-spinning for quite some time. I don’t think we will find 1st gear to get moving in the short term (and maybe not even the long term).

I think this will come down to the motivation of individuals - who are willing to make a difference (but will draw inevitable attention, both positive and negative).

per Derek:

Someone mentioned earlier that this Forum could use a Peer Review Page, perhaps the WebMistress could set that up for us and perhaps you could take on the role of feeding those reports into the page?

I am definitely a strong advocate for adding a new category to this forum:
Titled: Knot test reports and peer review
This category should only be for test reports and peer review criticism/feedback of those test reports (no other unrelated off-topic discussions; that is, discussion should be constrained to critique and feedback on the test reports tendered).

I would not be willing to take on any role in isolation. I would be willing to (when I have spare time) upload test reports that I find on the internet - as anyone else could easily do.
I would be willing to upload my own hobbyist/enthusiast test reports (as anyone else could do too).

I think this will come down to the motivation of individuals - who are willing to make a difference (but will draw inevitable attention, both positive and negative).

We can but propose, consider, discuss and review Guidelines, then put them out there for others to consider their usefulness. We cannot force anyone or any organisation to follow them. We can use them ourselves and demonstrate a value by example and promote them in discussion.

I do not believe that the IGKT as a body will endorse or support any such guidelines, so yes, it will be a long road and down to the individuals motivated to progress our field.

This category should only be for test reports and peer review criticism/feedback of those tests reports (no other discussions).

Review without the facility of question and discussion becomes nothing more than an opportunity to aggrandise one’s own opinion.

If we wish to understand and develop, then we must also be prepared to question, discuss and understand before passing a qualified opinion.

Derek

Review without the facility of question and discussion becomes nothing more than an opportunity to aggrandise one’s own opinion.

If we wish to understand and develop, then we must also be prepared to question, discuss and understand before passing a qualified opinion.

Derek, I think I have to choose and type my words very carefully - otherwise, it appears that you inject your own interpretation.
I have added a few extra words to my earlier post to clarify what I meant.

When I said “no other discussions” - to be crystal clear and very precise so it can’t be misconstrued:
I meant:
No other off-topic, unrelated discussions. Discourse should be constrained to providing critique and feedback directly related to the knot test reports. For example, posting a new topic on a subject that has absolutely nothing to do with peer reviewing a knot test report should not be permitted. In this way, the category of “Knot test Reports and Peer Review” can remain focused on the relevant subject material and not drift or veer away to completely irrelevant discussions.

Hopefully my original intent is now clearer?

I fully concur.

A start --maybe enough-- could be a thread under
Practical Knots in which the OP will be what lists
all brought-to-our-attention knot tests; the author
simply continually Modifies this msg. to include
new citations/URLs as they are brought forwards.
Commentary can then refer to this msg. and to the
particular reports, in other threads.

(URLs that go dead should probably be retained
with the note to that effect. The “Wayback Machine”
can sometimes find things otherwise apparently dead,
too.)

–dl*

Okay…

So, what is the procedure for creating a new topic category in this forum?

Is it done by a forum vote or is it done by web admins decision?

In my view, if a separate knot test report and peer review category was created - it would hopefully turn up in Google searches by future knot testers. Over time, and by default, perhaps it could develop into something quite useful and worthwhile as a central hub for knot testers.

Hi Mark.

The procedure is to firm up: the format in which the data will be presented; how much opinion, if any, or straying discussion should be included in the postings; whether the data should be locked after entry and only modified by request; and whether it should be a child board, and under which parent board.

A forum vote can be good, then go from there.

SS

?! What I meant was simply begin a thread on this point,
which thread lists test reports as they are known. As the
author of this OPost, you are who maintains it. I suppose
follow-on posts in the thread could be what bring in ever
more citings of tests, which you then collect into the ever-growing
OP listing.

Maybe we agree on a simple format, on what to list/say
about each citing given. (E.g., name of tester(s), some
description of tested material & knots & test method
(drop test, pull test, …)?) Perhaps we should
figure out how to present test reports that are not
URLinked (say, are present in a book) --maybe as
follow-up msg.s, which themselves could be linked
to in this master OP.

Thom EVANS’s summary of testing given at :
http://itrsonline.org/wordpress/wp-content/uploads/2016/11/Evans_Truebe.A-Review-of-Knot-Strength-Testing_2016.pdf
concludes with a long list of such things.
And which shows a problem we’ll have to deal
with : knot designations ! (Of course, that can
be foreseen --silly me!)

.:. In short, I don’t see this --at this inchoate start,
at least-- as differing from our usual forum posting,
except in the expected --though UNusual-- updating
of the OP (which is the master list)!
Now, experience might induce us to some better course?

Some places have special-status threads that appear
always at the top of the (sub-)forum listings!? Activity
otherwise will put this one current, in initial building.

–dl*

As it currently stands, there is no consistent and coherent place for people to post knot test reports within this forum.

I 100% believe that a wholly new and distinct category needs to be created within this forum - titled “Knot test Reports (and peer review)”.

At the moment, it is random and incoherent - exactly where does a would-be knot tester currently search? The search function of this forum is not exactly perfect - I just don’t see how this will ever develop legs and turn into something really useful unless a whole new category is created.

So, I guess Dan Lehman is not in favor of creating a whole new category titled “Knot test reports”?

How many people do we need to make a giant leap forward for mankind and create a new forum topic/category?

The IGKT really has an opportunity here to create something that could turn into a central hub for knot testers and peer review of those tendered knot test reports.

The key is to make the information easy to find/search in one obvious and readily identifiable place. And not the current random incoherent mess which is laborious and not user friendly to search.

I hope I am not the sole person in this forum with a vision to really make a difference.