Research Project using a possible new knot

It all comes down to:

  • what do you want to know,
  • how confident do you want to be in your results, and
  • how much risk of error you are willing to take.

If a knot always broke under the exact same load, then you would need only one sample to find out what that load was. If, on the other hand, different instances of the knot break at different loads, then you need more samples to understand what to expect of that knot in the future. That is, if our goal is to understand and make predictions about the behavior of the population at large, then our sample set needs to be large enough to have the same distribution of values as the whole population. As the size of the sample set increases, our confidence in the result increases, and the risk of error decreases.

A sample size of one yields a result (in practical terms) in which we have no confidence, and a high risk of error - regardless of what one is trying to measure (average, minimum, or maximum strength). All you learn is that the overall population contains that value - not what one can expect from other members of that population. Tests such as those performed by Yachting Monthly and Practical Sailor, which use a single result to extrapolate the behavior of the general population, are worse than useless. They reveal incompetence that borders on negligence. The results mislead more than they inform.

One might do some full testing of a few things so as to get an idea of whether e.g. cordage has much variance,...
Depending on what you are trying to determine, and what the distribution of the population looks like, you might be able to get the desired confidence with a smaller sample set, but that size will always be significantly greater than one. Let's say, for example, you want to know what the probability is of a given knot in a particular rope slipping before it breaks. So, you start tying samples, and pulling on them until they fail (one way or the other). If, after reaching 11 samples, you found that 10 slipped and 1 broke, you can conclude that the knot has a 90% chance of slipping, with a 95% confidence in your result, and an 18% margin of error. If, however, as you test, you find that half of the time the knot slips and half of the time it breaks, you need to take 30 samples to conclude that the chance of slipping is 50% with the same confidence and margin of error.
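The slip-vs-break arithmetic above is the standard normal approximation for a binomial proportion. A minimal sketch in Python (illustrative only; the exact figures vary slightly with the interval method a given calculator uses):

```python
import math

def margin_of_error(successes, n, z=1.96):
    """Normal-approximation margin of error for a binomial proportion.

    z = 1.96 corresponds to ~95% confidence.
    """
    p = successes / n
    return z * math.sqrt(p * (1 - p) / n)

# 10 slips in 11 pulls: ~91% slip rate, ~17-18% margin of error
print(round(10 / 11, 2), round(margin_of_error(10, 11), 2))

# 15 slips in 30 pulls: 50% slip rate, ~18% margin of error
print(round(margin_of_error(15, 30), 2))
```

Note how the estimated margin barely moves between the two cases: the proportion term p(1-p) shrinks near the extremes, but the sample count n dominates.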
There's got to be a way forward.
Wishing for something doesn't make it true. If you want to model the general population, then your sample set needs to be big enough to accurately reflect the distribution of the whole population.

I hope that helps,
Eric

Here are some further comments on the above-cited
Yachting Monthly knots article.

  1. “… the strongest knot sailing knot …”
    As noted above already, this “strongest” characteristic
    is both not-so-easily-figured and not really the key quality.

  2. Regarding:

“the RYA recommends a round turn and two half-hitches for fastening a mooring line to a pontoon cleat because it can be tied and untied with load in the line. I’ve always used a pre-tied bowline dropped over the cleat,”
I wonder what happened to the [i]cleat hitch[/i], which also can be un/tied under load, and was of course intended for cleats!?

3a) In the embedded how-to-tie-some-knots video,
the fig.8 eye knot is supposedly dressed to the dubious
rule of putting some end on the outside; the resulting
knot is a poor excuse for something that has been dressed,
whatever the goal!

3b) The fisherman’s knot is tied w/discordant (i.e.,
different-handed) overhand components. There is no
discussion about this aspect, and IMO this is likely the
inferior variation.

3c) The grapevine/dbl.fisherman’s knot is deemed the strongest
–and it probably was–, but the guy in the video only shows
his ineptness at knot tying by struggling to make the
component strangle knots,
putting them also in a discordant/opp-handed version,
BUT for the final image it seems that someone (else?!)
stepped in and tied the knot concordantly and with
sufficiently long tails for the camera !! (I wonder if
this mystery tyer can be rented?)

  4. Their stated lack of “proof-loading” to 50% of tensile
    strength is a surprising remark, IMO. I’ve heard of some
    cyclic loading done pre-testing --or otherwise no indication
    of such conditioning-- and of course of testing non-new
    cordage, but … 50% ?! That seems high.
    And
Dyneema and Marlowbraid to see how it affected their known strength, and found that the break load of both lines was reduced to a startling 35 per cent of their ISO specification.
"Startling" indeed. IIRC, pure Dyneema 12-strand (NERopes) was said to have broken --don't know about any pre-loading-- at about 33% (for some few knots NER broke for me via Brion Toss's invitation), but that seems low for conventional cordage. (One can wonder what "break" means, here, for as noted elsewhere the "Dyneema" line often broke in the sheath only. With this mid-line stopper knot, though, the "pulled through" potential didn't exist.)
  5. “Then we looked at loop knots” :: yes, how surprising
    to read that they included these –RT + 2HHitches, anchor bend?!

  6. Although there are two mentions/occurrences of
    “carrick bend”, there is nothing further --no report of
    actually testing it (as stated it would be) or showing
    a result for it. (I guess that, along with knotting,
    editing is a lacking skill at Yachting Monthly!)

          • T B Continued - - - - - -
            (Goodness, there is so MUCH to cite.)

–dl*

It’s interesting to see that for the dbl. sheet bend
the “Dyneema” & Marlowbraid lines broke at opposite
ends. I figure this :: there was more slippage of the core
in the hitching part than the U-part/bight, so the
former taxed its sheath more (in the yellow hitching
end) and broke there; whereas the Marlowbraid’s
similarly great constriction by the hitching part around
the bight/U-part’s ends led to the break coming in
that U-part’s SPart.
In other words, the greater constriction was the same in both,
and the difference resulted from whether or not the core slipped.

IN THEORY … ::slight_smile:

–dl*

Hello Dan and ‘NautiKnots’

You guys are drifting way off topic here… but I definitely like, and am very interested in, setting up proper parameters for knot testing and future knot testers in general.

This discussion should be in a new thread.

I find it ironic that the issue of poor knot testing and poor reporting keeps raising its ugly head.
I think the IGKT must accept at least part of the blame/fault that knot testers are continuously getting things wrong.

The IGKT should set up a knot testers page - which could act as a one stop shop for all future knot testers. They can browse to the IGKT forum and review the ‘Knot testing guidelines’.

Currently, virtually all knot testers act in isolation - with little to no peer reviewing of their work before it is published. They just test and publish - and the same old tired mistakes are repeated endlessly.

I know some members of the IGKT have tried to address this issue in the past, but it has bogged down or run out of steam.

Surely we have enough ‘experts’ in the IGKT to address the issue of knot testing?
Is there any reason in principle why we can’t set up a knot testing page to provide a useful set of guidelines and principles for all future knot testers?

I challenge interested members of the IGKT to take this on as an issue to resolve before the end of 2018!

JFK once said… “We choose to go to the moon not because it is easy, but because it is hard”

Surely if we can put boot prints on the moon, we can establish a set of Knot testing guidelines and principles?

Mark Gommers

EDIT / PS

I think a new category needs to be added to this forum - titled: KNOT TESTING GUIDELINES AND PRINCIPLES

This will make it easier for the general public and interested future knot testers to search and find this site.

I think it would be more accessible if we put agreed guidelines and principles on the main website, perhaps based on a discussion here about what these guidelines and principles might be. Actual results and ensuing discussions would continue to be posted here perhaps with a link from the main website. In theory this shouldn’t take too long or create any great disagreement - that comes from results rather than methodology one would hope…

Sweeney

Further comments on the Yachting Monthly testing.

  6. The RT + 2HH will have different behavior on
    smaller vs. larger-diameter objects; with relatively large
    ones, the clove h. (the knot of this structure)
    grabs onto the (noose) SPart more than it will on a ring.
    (I recall test data where this knot rated higher on a ring
    but lower on a pile than the anchor bend --which
    I presume did so well on the latter by having its
    collar yielding enough to leave the SPart more straight
    and arriving tangential to the object (my guess)!?)

One can try to ameliorate the noose’s effects by setting
it firmly in an iterative manner of pulling hard on the
(noose) SPart and pushing the knot back snug to the
object --of trying to work tight the round turn. From
such a setting, the draw of the SPart ought to be a little
less on the knot and thus the angle of contact better
and … . (need that comprehensive testing to see if …)

  7. Marlowbraid tied in a bowline breaking at just 47%
    really surprises me. One can wonder why --whether
    the assessed tensile strength, e.g., is too high? (In
    some testing of rockclimbing rope the bowline gets
    up into the 70% range. Dave Richards’s testing of 12.5mm
    low-elongation (“static”), 10mm dynamic, & 7mm “accessory”
    cordage got (per 5 test specimens each), resp., 63.3, 63.1, & 67.1%.)

8.) The fig.8 eyeknots are hard to figure out from the
report’s images, as they are shown in lousy dressings
–which apparently was the case with the broken knots.

I find it most peculiar that for the “Dyneema” rope the
break occurs at one of the eye legs --not in the
SPart ??? Huh? --and at only “35%” of some supposed
tensile strength, which should mean only 17% in the
broken part (!!) ?!
Were there some slippage of core, that should pull
core from the eye (as seen with the bowline, e.g.)
and LESSEN (remove!) load from the sheath,
not break it!

–dl*

If you wanted to know the quality/composition
of a gallon of milk, how many samples would
you take for analysis? I’m thinking that cordage
is for the most part reasonably assumed to be
pretty consistent, and this could be one factor
in knotted-rope strength (to step away from
attributing strength to knot in vacuo!) that
can be reduced in number.

And given some experience in test results with
some knots, one might come to believe that
skilled tyers can replicate at least some knots
well enough to not need as big a number as
might be wanted for more complex knots!?

But we surely aren’t going to do all the multiples
of sampling per all the multiples of probable
factors that influence strength --a number that
is huge.

Meanwhile, there needs to be a great tightening
of knot-tying skill and reporting detail, irrespective
of test-specimen count. (An extra dozen or two
tests of [i]fig.8 eye knots[/i] (mis)tied as done in [i]Yachting
Monthly[/i] would not be a help!)

There is this irony :: Dave Richards remarked in his
report on testing 7mm & 12.5mm low-elongation &
10.5mm dynamic kernmantle ropes that it was he who
tied all of the knots --the implication being that in
this way he ensured some uniformity to their formation
(it actually doesn’t necessarily mean even this, were
his tying skill not great and his discrimination among
possible versions lacking!).
Well, even assuming it was so,
the implication must be that HIS results cannot be
so meaningful/applicable to all the rest of us, who
will be tying our own knots (in different ropes)!

–dl*

Well, that depends on what you want to know about milk. If you only want to know about that particular gallon at that particular time, you might only need one sample. Don’t, however, expect to make useful generalizations about other gallons of milk. What if the gallon you had tested had spoiled? Would you conclude that all milk is spoiled? If you tested 2% milk, would you conclude that whole milk has 2% fat content?

I'm thinking that cordage is for the most part reasonably assumed to be pretty consistent
I think that's a faulty assumption. Would you assume that knots tied in 12mm double-braid spun polyester would hold the same as those tied in 6mm twisted nylon? How about 3mm single-braid Dyneema? In terms of the milk test above, would you assume that all breeds of cattle produce the same quality milk? If so, you'd be sadly mistaken.
And given some experience in test results with some knots, one might come to believe that skilled tyers can replicate at least some knots well enough to not need as big a number as might be wanted for more complex knots!?

Although unskilled knot tyers might produce knots with greater variability (leading to an increase of samples needed), skilled tyers producing identically dressed knots does not reduce the needed sample size.

But we surely aren't going to do all the multiples of sampling per all the multiples of probable factors that influence strength --a number that is huge.
True. That's why it is so important to construct your test carefully in order to actually find out what you'd like to know about a particular knot and/or cordage.
Meanwhile, there needs to be a great tightening of knot-tying skill and reporting detail, irrespective of test-specimen count. (An extra dozen or two tests of [i]fig.8 eye knots[/i] (mis)tied as done in [i]Yachting Monthly[/i] would not be a help!)
True. A disconcerting proportion of knot tests are performed by people who are either unskilled at tying knots, or unskilled at testing. Either one can easily render the test results meaningless. In the Yachting Monthly test (for example) the test was evidently performed by a reporter who didn't know how to do either. His only qualification was the ability to write an article.

If you want knot tests you can extrapolate from, they need to be constructed and performed by people who know both how to tie knots, and how to test.

-Eric

From NautiKnots:

If you want knot tests you can extrapolate from, they need to be constructed and performed by people who know both how to tie knots, and how to test.

And, what to test.

It’s the ‘what to test’ that is often the key problem.

Correctly tying knots and knowing how to set up your test machinery (including presumably statistically valid sampling) is still nothing compared to knowing ‘what’ to test.

And this has drifted waaaayyyyyyy off topic from the original poster’s points.

Though, re the underscored part, the gallon might itself
be seen as a sample of the greater batch of that milk.
I’m pointing to the evenness of a thing; your supposed
challenges to this miss that point --of course one wouldn’t
think this 2% milk gallon implied all milk was so.
And I don’t expect this climbing rope to imply things
for THAT one, or yachting ropes, or … ; and I will
urge the “KNOT strength” is better conceived as
“this-material-so-knotted” strength.

SO, given our gift spool of rope for testing --the milk gallon--,
why test 30 specimens --THAT is my question!

I'm thinking that cordage is for the most part reasonably assumed to be pretty consistent
I think that's a faulty assumption. Would you assume that knots tied in 12mm double-braid spun polyester would hold the same as those tied in 6mm twisted nylon? How about 3mm single-braid Dyneema?
Whoa, then I stated it poorly :: goodness, no, I meant that a given spool of rope is consistent with itself, not that any rope is ... any other. (See my point re restating "knot strength" as "this-material-so-knotted strength".)
And given some experience in test results with some knots, one might come to believe that skilled tyers can replicate at least some knots well enough to not need as big a number as might be wanted for more complex knots!?

Although unskilled knot tyers might produce knots with greater variability (leading to an increase of samples needed), skilled tyers producing identically dressed knots does not reduce the needed sample size.


Why not? Given our milk gallon, what is to gain?
(At least, I think that after some analysis of tests,
one should come to a conclusion that multiple-knots
testing doesn’t show much new or unexpected.)

(How interesting to see in one of Dave Richards’s
tests that it was the grapevine bend that had LEAST
stnd. deviation, even over pure-rope testing!
Similarly, how puzzling to see that both he and also
CMC Rope Rescue thought to test “fig.8 re-woven
AND fig.8 on a bight”, as though the knot should
care how it came to be! NOW, yes, it would be quite
informative to see from field examination if indeed
users came up with different geometries per tying
method --I rather expect that they might,
even to the point of which end is loaded. BUT,
do that and be explicit about it; just giving the
different names and having same tyers, one would
hope that per “dress & set” action the knots came
out the same; in fact, diff.s (5 cases each) were
quite close.)

But we surely aren't going to do all the multiples of sampling per all the multiples of probable factors that influence strength --a number that is huge.
True. That's why it is so important to construct your test carefully in order to actually find out what you'd like to know about a particular knot and/or cordage.
I'm thinking of moving through the multifactored *knot* space as an exploratory expedition, and hoping to travel efficiently so far as test-results fall in expected ranges, some of which ranges will become established by more thorough testing, but some of that easy movement will come by doing with less. .:. Throw the many-sampled cases out where a stout pillar looks to be needed.

AND I have argued for --where possible-- the
single test of multiple-tokens of a knot,
stringing a line say with 5-10 identical knots
in it, to get a break, which is reasonably argued
to better indicate minimum strength, and then
have all those near-rupture (presumably) knots
to examine --maybe to further load and see … .

At least :: like eye knots on the ends of single specimen,
and round slings with TWO of end-2-end knots (hoping
to avoid the knot-compression-imbalance that such
a structure with one-knot-only can yield: the knotted
side, i.e., lengthening per knot compression, and that
not being fully transferred/balanced to the unknotted side
by friction at the pins --maybe a pulley would do, but …).

–dl*

Ok, let’s back up a bit. What exactly are you trying to learn about milk in your 1 gallon test thought experiment? What exactly are you trying to learn about knots in your analogous spool of rope test?

If you know that the milk is homogenous and want to determine fat content, then you need test only one sample to be confident that further tests would yield the same result. That’s analogous to testing your rope to see that it is polyester. If one test reveals that it is polyester, and you know that all the rope on the spool is the same, you don’t have to do further tests to be confident that they would also reveal polyester.

That doesn’t, however, tell you anything about knots. If you wanted to know, let’s say, how likely a particular knot tied in line from that spool is to jam, a single test won’t tell you that. Knot jamming probability does not have zero variance. Not all tests will yield the exact same result. All a single test tells you is that it’s possible for that knot in that line to jam (or not). It doesn’t tell you (with any confidence whatsoever) what the likelihood is of the next knot jamming.

And you know that, or else you wouldn’t have said:

I have argued for --where possible-- the single *test* of multiple-tokens of a knot, stringing a line say with 5-10 identical knots in it, to get a break
Here you've constructed a way of conducting 5-10 tests on separate knots with a single pull. A compound test like that might yield more confidence in the [I]minimum[/I] strength value, but it still doesn't tell you anything about the distribution of those values.
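The multiple-tokens idea and the "minimum statistic" objection to it can both be seen in a toy simulation. A hedged sketch (the break-load distribution and every number here are invented purely for illustration):

```python
import random
import statistics

random.seed(1)  # fixed seed so the toy numbers are reproducible

# Hypothetical knot break loads: normal, mean 1000 N, sd 80 N (invented).
def break_load():
    return random.gauss(1000, 80)

# 30 single-knot pulls record every individual break load...
singles = [break_load() for _ in range(30)]

# ...whereas 30 pulls of a 5-knot chain each record only the weakest knot.
chains = [min(break_load() for _ in range(5)) for _ in range(30)]

print(round(statistics.mean(singles)))  # near the population mean
print(round(statistics.mean(chains)))   # systematically lower: a minimum statistic
```

The chain results do say something useful about minimum strength, but they estimate the low tail, not the mean or spread of single-knot strength - which is the point being made above.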

If you still think you can determine the mean/median/mode/min/max breaking strength of a knot in given cordage with a single test, then we’re never going to agree. If you think you need more than one test, then how many you need depends on the distribution of the test results (which you probably don’t know in advance), the amount of confidence you want, and the margin of error you’re willing to accept.

It’s very tempting to skimp on the number of samples because it’s inconvenient to take them. Just realize that you’re going to sacrifice confidence or error rate. Do you want to be 95% confident in your results, or 50%? Do you want a 15% margin of error, or a 30% margin of error? At what point do you no longer learn what you set out to discover?

One last time, conventional wisdom in statistics is that if you don’t know the population distribution in advance, you need a minimum of 30 random samples (and quite possibly more) to determine that distribution with meaningful confidence (and even then, it’s possible to be wrong). Don’t take my word for it - ask a statistician or play with the numbers yourself (there are a number of sample size calculators available online). If you choose to take fewer samples, then be prepared for people to dismiss your results as insignificant.
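The "minimum of 30" rule of thumb falls out of the worst-case proportion formula, n = z²·p(1-p)/e². A rough sketch (assuming ~95% confidence and the worst-case p = 0.5, as when the population distribution is unknown):

```python
import math

def sample_size(margin, p=0.5, z=1.96):
    """Samples needed to estimate a proportion within the given margin
    of error at ~95% confidence (z = 1.96); p = 0.5 is the worst case,
    used when the population distribution is unknown in advance."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

print(sample_size(0.18))  # 30 -- roughly the rule-of-thumb minimum
print(sample_size(0.15))  # 43 -- a tighter margin costs more samples
print(sample_size(0.05))  # 385 -- precision gets expensive quickly
```

This is what the online calculators compute; halving the margin of error roughly quadruples the required sample count.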

I agree with the others that we have taken this conversation too far off topic already, so I’ll leave it at that.

Regards,
Eric

I think we’ll find comfort that the variation just
isn’t so great to worry about, “at least in some
knots”, as I offered might be reasonably repeatedly
tied alike; some others of more complexity might
not behave so predictably.

And you know that, or else you wouldn't have said:
I have argued for --where possible-- the single *test* of multiple-tokens of a knot, stringing a line say with 5-10 identical knots in it, to get a break
Here you've constructed a way of conducting 5-10 tests on separate knots with a single pull. A compound test like that might yield more confidence in the [I]minimum[/I] strength value, but it still doesn't tell you anything about the distribution of those values.
Though, per above..., we might come to some comfortable & reasonable belief that the range is not going to surprise us.
then how many you need depends on distribution of the test results (which you probably don't know in advance), the amount of confidence you want, and margin of error you're willing to accept.
Given vagaries of tying & various materials --and this means same brand but different histories of usage--, I think that getting the sort of statistical level of confidence that is defined in the pure math is ... well distant from meaningful/useful information.

Some thorough testings esp. to focus on some
particular factors (e.g., having pretty evenly
made & scaled from smaller-to-thicker like ropes
in checking if size has whatever effects,
and so on) might be the basis for later taking
few tests w/some confidence if results are where
expected.

Assuming that TestPerfect did some statistically impressive
oooodles of test cases and …,
just what confidence does that give ?
Given that someonElse did whatever tying,
that the load was applied in just some manner
(unlikely to be like actual use), and the rope
was just that rope in just that condition.

(-;

How do you know that, until you test it? And, how do you know that until you perform enough tests to have confidence in the results (statistically speaking)?

Assuming that TestPerfect did some statistically impressive oooodles of test cases and ..., just what confidence does that give ?
Go to one of many sample size calculators online (such as https://www.surveysystem.com/sscalc.htm), plug in the numbers, and it will tell you exactly how confident you can be (again, statistically speaking).
Given that someonElse did whatever tying, that the load was applied in just some manner (unlikely to be like actual use), and the rope was just that rope in just that condition.
What are you trying to find out? Haphazard testing will yield haphazard results. If you don't construct your test meaningfully, you won't get meaningful data.

I’m not saying that any test has to be performed X times in order to be useful – I’ve done informal knot testing myself with statistically insignificant sample sizes, but that was just to get an idea of what might be interesting to investigate further. I don’t even remotely assume that those results predict the probability distribution of future outcomes.

I am saying that if you want results that accurately reflect the general population, and are useful for predicting future results, then you’re going to need sample sizes that are statistically significant. No amount of optimistic assumption or wishful thinking is going to change the math.

Regards,
Eric

Tim,

You have vanished from this thread but perhaps you are still reading with interest?
At some point, this topic has drifted into testing methodologies and statistically valid sampling methods…maybe the first divergence occurred roughly at reply #17 and then escalated rapidly.

I think most of the replies from #17 onwards could be an entire new topic just discussing repeatable testing methodologies and statistically valid sampling methodology.

Nevertheless, my responses to your specific text are as follows:

My decision to test which knot/hitch against which knot/hitch came about from over 35 years of working with ropes at sea both professionally and leisure, and from rock climbing for leisure. I was also drawn to an online article in Yachting Monthly (May 2015) which was conducted by Marlow Ropes. http://www.yachtingmonthly.com/sailing-skills/strongest-sailing-knot-30247

By now, you should be aware that the ‘Yachting Monthly’ test report is just another example of poorly conceived and poorly conducted testing. NautiKnots and Dan Lehman have already voiced their opinions herein - hopefully you won’t make the same mistakes?

My pilot tests were as follows.

Bowline v Round turn and two half hitches

  • The round turn and two half hitches won outright 4 times out of 4 (as expected, based on work and climbing experience, and also Marlow Rope’s online testing).

When you mention ‘Bowline’ - exactly which type of ‘Bowline’ are you referring to? There are many different forms of a ‘Bowline’ (note that I wrote ‘a Bowline’ and not ‘the Bowline’).
I am going to take a wild guess and ‘assume’ you meant the common #1010 Bowline, which is based on a single right-hand nipping loop? This seems to be the default ‘Bowline’ that knot testers appear to be fixated on.
It’s a pity that other ‘Bowlines’ are ignored (or, in ignorance, simply not known). I would be most interested if you could test Scott’s locked Bowline.
However, I would like to examine properties other than the default ‘pull-it-till it breaks’ mindset. A significant proportion of knot testers are fixated on the idea of probing MBS yield point of a knot (ie pull till it breaks). This mindset permeates nearly all of humanity. It would be nice to see a different approach…such as probing the following aspects:
  • jamming threshold
  • instability threshold
  • geometry at various load milestones (ie at certain loads, stop and photograph the knot structure - and compare to a ‘control’ of no load)
If you are in the majority mindset of pull-to-failure type thinking, could you at least test ‘Bowlines against Bowlines’. For example, test #1010 against Scott’s locked Bowline, and #1010 against a ‘slipped’ #1010 (adds 3 rope diameters inside the nipping loop).

Looped double fisherman's (Scaffold hitch) v Round turn and two half hitches - The round turn and two half hitches won outright 4 times out of 4 (as expected, based on work experience and based on Marlow Ropes online testing).

Please use ‘ABoK’ numbers where they exist to aid in positive identification. Also, realize that these knot structures act as ‘nooses’. You should characterize them as such. In fact, they are ‘composite’ structures consisting of:

  1. A tensionless hitch; and
  2. A securing mechanism (ie a strangled double overhand knot versus 2 half-hitches which likely form a clove hitch).
    The difference between the 2 structures being the type of securing mechanism.
'English Braids' have very kindly provided me with 200 metres of 4mm 12 stranded polyester dinghy control line to continue my testing.
I wish you could obtain human-rated ropes (eg EN1891 abseil rope and EN892 dynamic climbing rope). Is this an impossibility?
I have tested to failure (three times), short lengths of their control line with a splice in each end using known static weights. So I now know what load the splices part at. The next stage is to test my hitch against the splice under different environmental conditions.
Is there any reason why you couldn't terminate each end using a 'tensionless hitch' where the remaining tail is then clamped (instead of a 'splice')?
Based on Marlow Ropes online knot test which the round turn and two half hitches is rated very highly against a splice, I have high hopes for [b]my hitch[/b] as it outperformed the round turn and two half hitches by far.
By now, you realize that it isn't 'your' hitch. ie it isn't 'new'.

Tim, I believe that there are 3 different types of testers as follows:
  1. [b]Hobbyist/Enthusiast testers[/b] (aka Backyard testers), who largely act in isolation:
    (usually an individual who isn’t well funded and doesn’t have sophisticated force generating equipment that is regularly calibrated. The individual is usually an enthusiast and may seek assistance from a friend or acquaintance). Reporting is generally not bound to scientific rigor.

2. Pseudo lab testers:
(usually individuals, but sometimes 2 or more persons, who are roping/rescue/rope access enthusiasts. They are not a certified test lab but do have force generating equipment and the means to capture data. They have freedom to test in any way they desire and their testing isn’t accountable to third party accrediting agencies. Scientific rigor falls upon the individual’s experience and knowledge -- eg whether they have background education from a college/university or access to expertise in repeatable methodology.)

3. Certified, nationally accredited test labs:
(who use calibrated force generating equipment and test strictly in accordance with their accredited status. These entities are normally a business enterprise, and they routinely test things to destruction. The personnel at these labs are generally not knotting enthusiasts, and knot tying skills aren’t part of their day-to-day employment. All reporting is bound to rules of scientific rigor and statistical sampling methodology.)

Which category do you fit within, Tim?

Well, seeing a lack of variation in some tested
cases could lead to expecting it in others that
have nothing to make one suspect otherwise
(and then whatever few test cases fall into range).

Assuming that TestPerfect did some statistically impressive oooodles of test cases and ..., just what confidence does that give ?
Go to one of many sample size calculators online (such as https://www.surveysystem.com/sscalc.htm), plug in the numbers, and it will tell you exactly how confident you can be (again, statistically speaking).
My point here is that the precision of factors leaves all variations still in question. Yes, a calculator can tell about X at Y & Z repeated, but not of X2 at Y & Z2. So, you narrow the testing in a sense --i.e., concentrate your test cases-- and gain that statistical confidence, but at the cost of breadth of applicability.

Quickly :: I don’t want to seem hostile to the
use of these maths,
but one needs knowledge of much broader reaches
than will be got if concentration of test cases is all
that one does.

And we now work in absence of much of any such
tested knowledge.

–dl*

Hi Tim,

By definition, you do not need to test this knot because no knot can be stronger than the cord it is made from, and cord MBS is measured by winding the cord around a round anchor - i.e. essentially a knotless fixing.

Provided the number of turns you use is sufficient for the cord/ anchor combination to shed all the force before the cord leaves to make the final strangle tie off, then the cord will rupture at its MBS at wherever its weakest point happens to be.

The only exception to this situation would be if you have insufficient turns and residual force exits the last turn, finishing up as a lateral force against the SP at the Strangle attachment. The slight angular deflection at that point will act as a weak point, the weakness being proportional to the angular displacement.

So, please ignore all the shedload of ‘protocols’ and statistically significant sample numbers cited above, use your Engineer’s eyesight and look at how cord is anchored in the MBS test rig, then make sure your knot has sufficient turns to match this, and by definition it must be as strong as the cord itself.

Welcome to the wonderful world of Nodeologists, and please keep on knotting and stirring up the dust on this Forum.

Derek

What happens where testing gives contrary evidence?!
“By definition,” the testing is wrong?!!

I remark this in recalling one fellow who IIRC was the
editor of an angling magazine (USA) getting just such
puzzling results --and explicitly recognizing and re-testing
them (in contrast to some reports that ignore them!)–
and, well, … he had no explanation. I think it was a
particular-#-of-wraps Bimini twist that didn’t break,
but the line did, and did so at higher load(s) than did
the line when he tested it --yes, another goood point :
he got his own tensile and contrasted it w/the nominal
one from maker (his were way higher)! – !!

[Oh, I think that this is the guy & site & more recent
than what I recalled, but a point to begin your own
explorations. DOUG OLANDER
www.sportfishingmag.com/best-fishing-knots-main-line-to-terminal-gear#page-19
]

Some manufacturers (per Cordage Institute standard?)
use a tested splice strength as material tensile; some
other standard (European vs. USA-ian) allows if not
specifies some other sort of figuring.

One can beware the claims of evidence of “stronger than
the rope” from testing a round sling with one so-called
knot in it, which don’t consider that knot compression
can feed slack into the knotted side and thus reduce
tension there and … the break can occur at the pin
and not the knot.

(And then there is this elsewhere-examined Yachting
Monthly test-result image showing a break in the
>>>eye leg<<< of a knot !! :o Huh?!)

–dl*

What happens where testing gives contrary evidence?! "By definition," the testing is wrong?!!

When testing gives contrary evidence then your understanding of the limitations of the testing or the interpretation of the Statistics is likely wrong…

yes, another goood point : he got his own tensile and contrasted it w/the nominal one from maker (his were way higher)! -- !!
  1. was his tensile testing calibrated?
  2. makers quote MBS which is typically 3 sd’s below the highest figure, and for added security, some manufacturers quote MBS at 2 or 3 sd’s below the mean so that 99.9% of their cord will perform at or above the quoted MBS.
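The "2 or 3 sd's below the mean" convention is easy to check with hypothetical numbers (all figures here are invented for illustration; real manufacturers' methods vary):

```python
from statistics import NormalDist

mean_break = 1000.0  # hypothetical mean break load (N) from a maker's tests
sd = 50.0            # hypothetical standard deviation of those tests

# Quoting MBS 3 sd below the mean:
mbs = mean_break - 3 * sd
print(mbs)  # 850.0

# Fraction of cord expected to break at or above that quoted MBS,
# assuming normally distributed break loads:
print(round(1 - NormalDist(mean_break, sd).cdf(mbs), 4))  # 0.9987
```

That ~99.87% figure is exactly the "99.9% of their cord" claim above, which is why a home tester's own break figures can come out "way higher" than the maker's quoted MBS.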