In the industrial safety barrier industry, the word “tested” is everywhere.
Products are labeled “third-party tested.”
Impact ratings are presented as headline energy numbers.
Systems are described as “certified.”
But in environments where barriers protect people, equipment, and operational continuity, the word “tested” only has meaning when you understand what testing rigor actually looks like.

Not all testing is equal. And not all uses of the word “tested” mean the same thing.
There are three primary testing models used in the industrial barrier space: internal validation testing, third-party witnessed testing, and independent third-party conducted testing.

Each serves a purpose. But they are not interchangeable.
Serious manufacturers rely heavily on internal validation.
At McCue, internal testing is the engine of innovation, driving improvements across industrial safety barriers: rail geometry, post design, anchor systems, and energy absorption performance.
However, structurally, internal testing means the manufacturer designs the test, runs the test, and reports the result.

When done transparently, with documented impact speed, vehicle mass, mounting substrate, anchor embedment, angle of approach, and pass/fail criteria, internal testing can provide meaningful engineering validation. But internal testing remains manufacturer-controlled.
That distinction matters.
Third-party witnessed testing introduces an independent observer.
In this model, the manufacturer designs and runs its own test while an independent observer watches and attests that it took place as described.

The third party does not design the methodology, control the setup, or define the pass/fail criteria.
Witnessed testing adds accountability compared to fully internal testing. It confirms that a test was conducted as described.
However, the manufacturer still controls the test design, the installation conditions, and the mounting environment.

Installation conditions and mounting environment directly influence barrier performance. If a barrier is mounted to a rigid steel frame for convenience rather than anchored into representative concrete slab conditions, the resulting energy rating may not reflect real-world facility performance. Witnessed testing verifies that a test occurred.
It does not transfer technical control.
Independent third-party conducted testing shifts control to the laboratory.
In this model, the laboratory designs the test setup, executes the impact, and documents the results.

Accredited laboratories operate under structured quality management systems and defined documentation protocols governing how tests are instrumented, executed, and recorded.
This process reduces bias and increases traceability.
It is more expensive. It requires more documentation.
It takes more time.
But when life safety, risk management, or procurement standards require defensible performance validation, independent laboratory testing provides a materially different level of rigor. When the claim needs to hold up under scrutiny, the difference is structural — not semantic.
Impact ratings are only meaningful when installation conditions reflect real environments.
Industrial safety barriers in distribution centers and manufacturing facilities are typically anchored into concrete slabs. In these conditions, the slab, the anchors, and their embedment are part of the system that absorbs an impact.

Testing mounted to non-representative fixtures, such as rigid steel frames, may simplify repeatability, but it does not always replicate field conditions. Representative testing reproduces the mounting substrate, anchor type and embedment, and angle of approach the product will actually see in service.
Installation is not a secondary detail.
It is part of system performance.
The industry also conflates documents that describe testing with documents that require it.
Not all referenced documents are testing standards.
Codes of Practice
A Code of Practice provides guidance and best-practice recommendations. It may outline barrier selection principles or describe general testing concepts.
It typically does not prescribe a controlled test method, defined performance thresholds, or independent laboratory execution.
PAS 13, for example, provides guidance on safety barrier usage and includes high-level language regarding dynamic testing practices.
However, it is structured as advisory guidance — not as a prescriptive certification standard with defined laboratory-controlled performance thresholds.
When products are described as “PAS 13 tested” or “PAS 13 certified,” the important technical questions remain: Who conducted the test? Under what installation conditions? Against what defined pass/fail criteria?
Terminology alone does not define rigor.
Formal testing standards establish defined and repeatable methodologies — written by industry committees, not manufacturers alone.
For example, ANSI MH31.2 and ASTM F3016 each specify test vehicle mass, impact speed, test conditions, and predefined pass/fail criteria.
That last point matters. Pass/fail criteria are set before the test begins — not interpreted after results come in. The standard removes manufacturer discretion from test design.
These standards are developed and maintained by industry bodies composed of engineers, safety professionals, and technical experts. No single manufacturer controls the methodology.
When executed through an independent laboratory, standards like these create direct comparability across products — same mass, same speed, same conditions, same pass/fail bar.
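The comparability point can be made concrete with the underlying physics: a headline energy rating follows directly from the test mass and speed. The sketch below computes impact energy with the standard kinetic-energy formula; the mass and speed values are hypothetical placeholders, not figures taken from ANSI MH31.2 or ASTM F3016.

```python
def impact_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy of a moving test vehicle: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_m_s ** 2

# Hypothetical test parameters (illustrative only; the governing
# standard defines the actual vehicle mass and impact speed).
mass_kg = 1360.0   # roughly a 3,000 lb surrogate vehicle
speed_m_s = 1.79   # roughly a 4 mph low-speed impact

print(f"{impact_energy_joules(mass_kg, speed_m_s):.0f} J")
```

Because the standard fixes both inputs, two products tested to it are rated against the same energy bar; a vendor-chosen mass or speed breaks that comparability.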
When both are met — independent execution and a formal standard — the result is a performance claim that stands on its own.
A guardrail system is only as strong as its weakest element. But energy ratings are often built around its strongest — mid-span rail impact under ideal conditions, with posts, connections, anchors, and installation variables left untested or mathematically projected.
Impact energy claims require context. Energy capacity can vary dramatically depending on where the system is struck, mid-span versus at a post, and at what angle.
Testing or rating based only on mid-span does not validate post performance. A 45-degree result converted mathematically to a 90-degree rating is not a 90-degree test. Physical impact at that angle is.
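The gap between a converted rating and a physical test can be seen from simple geometry: only the velocity component perpendicular to the barrier drives it backward, and kinetic energy scales with the square of velocity. The sketch below is illustrative rigid-body physics under that assumption, not a conversion method drawn from any standard.

```python
import math

def normal_energy_fraction(angle_deg: float) -> float:
    """Fraction of kinetic energy carried by the velocity component
    perpendicular to the barrier, assuming E scales with v^2.
    Illustrative model only, not a standard's conversion formula."""
    return math.sin(math.radians(angle_deg)) ** 2

print(round(normal_energy_fraction(90.0), 3))  # -> 1.0
print(round(normal_energy_fraction(45.0), 3))  # -> 0.5
```

At the same speed, a 45-degree impact delivers only about half the perpendicular energy of a 90-degree impact, so extrapolating a 45-degree pass to a 90-degree rating claims double the demonstrated loading without ever applying it.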
When ratings don't reflect real-world performance, the hidden cost shows up as recurring damage and near misses in the facility.
System validation should evaluate the full assembly: rails, posts, connections, anchors, and the installation variables that tie them together.
Headline impact energy numbers do not define full-system capacity.
Performance is holistic.
The most advanced operations don't accept claims at face value. They ask the next question.
Ask: Who conducted the test? Under what installation conditions? Against which standard, and were the pass/fail criteria defined before the test began?
At McCue, we perform extensive internal testing to innovate and refine. We conduct independent third-party testing. We document installation conditions, provide engineering summaries and video evidence, and distinguish clearly between internal validation and independent laboratory verification.
Testing should eliminate ambiguity — not create it.
In a previous piece, we covered how high-performing facilities match protection to actual exposure. Testing rigor is the next layer of that decision. Industrial safety barriers are not marketing features.
They influence the safety of people, the protection of equipment, and the continuity of operations.
The difference between internal testing, witnessed testing, and independent laboratory testing is not academic.
It determines whether a performance claim is defensible under scrutiny or simply a marketing assertion.
When safety and uptime are on the line, technical rigor is a risk decision.