
What "Third-Party Tested" Actually Means — and Why It Matters

Written by Thomas Ustach | May 6, 2026

How the word "tested" gets used and what to ask before you accept a claim at face value.

In the industrial safety barrier industry, the word “tested” is everywhere.

Products are labeled “third-party tested.”

Impact ratings are presented as headline energy numbers.

Systems are described as “certified.”

But in environments where barriers protect people, equipment, and operational continuity, the word “tested” only has meaning when you understand:

  • Who conducted the test
  • Who installed the product
  • What it was mounted to
  • How impact conditions were defined
  • How data was collected
  • Who controlled the reporting

Not all testing is equal. And not all uses of the word “tested” mean the same thing.

What testing rigor actually looks like.

The Three Levels of Testing in Industrial Safety

There are three primary testing models used in the industrial barrier space:

  1. Internal (manufacturer-conducted) testing
  2. Third-party witnessed testing
  3. Independent third-party conducted testing

Each serves a purpose.

But they are not interchangeable.

1. Internal Testing: Engineering Control and Innovation

Serious manufacturers rely heavily on internal validation.

At McCue, internal testing includes:

  • Dynamic bogie impact testing
  • Static load and yield testing
  • Anchor pull-out testing
  • Fatigue and cycle testing
  • Destructive ultimate testing
  • Environmental exposure testing
  • Material validation testing
  • Ergonomic and pendulum testing

It is the engine of innovation, driving improvements in rail geometry, post design, anchor systems, and energy absorption performance across industrial safety barriers.

However, structurally, internal testing means:

  • The manufacturer installs the product
  • The manufacturer defines the mounting condition
  • The manufacturer conducts the test
  • The manufacturer records and interprets the data

When done transparently — with documented impact speed, vehicle mass, mounting substrate, anchor embedment, angle of approach, and pass/fail criteria — internal testing can provide meaningful engineering validation. But internal testing remains manufacturer-controlled.
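Those documented parameters feed directly into the headline energy number. As a minimal sketch (illustrative values only, not McCue test data), impact energy is just kinetic energy, ½mv², so a small change in test speed moves the rating substantially:

```python
def impact_energy_joules(mass_kg: float, speed_m_s: float) -> float:
    """Kinetic energy of the impacting vehicle: E = 1/2 * m * v^2."""
    return 0.5 * mass_kg * speed_m_s ** 2

# Illustrative values: a ~2,500 kg surrogate vehicle at two test speeds.
low = impact_energy_joules(2500, 1.5)   # 2,812.5 J
high = impact_energy_joules(2500, 3.0)  # 11,250.0 J
# Doubling speed quadruples energy, which is why documented impact
# speed matters as much as documented vehicle mass.
```

This is why a headline energy number without the declared mass and speed behind it tells you very little.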

That distinction matters.

2. Third-Party Witnessed Testing: Observed, Not Executed

Third-party witnessed testing introduces an independent observer.

In this model:

  • The manufacturer installs the barrier
  • The manufacturer defines the mounting conditions
  • The manufacturer conducts the impact test
  • An independent third party observes and verifies that the test occurred under declared conditions

The third party does not:

  • Install the system
  • Define the installation parameters
  • Control the impact equipment
  • Own the instrumentation
  • Perform the primary engineering analysis

Witnessed testing adds accountability compared to fully internal testing. It confirms that a test was conducted as described.

However, the manufacturer still controls:

  • The substrate (steel frame, reinforced concrete, custom fixture, etc.)
  • Anchor type and embedment
  • Edge distance
  • Slab thickness
  • Impact location (mid-span vs. post)
  • Angle of impact
  • Test sequence

Installation conditions and mounting environment directly influence barrier performance. If a barrier is mounted to a rigid steel frame for convenience rather than anchored into representative concrete slab conditions, the resulting energy rating may not reflect real-world facility performance. Witnessed testing verifies that a test occurred.

It does not transfer technical control.

3. Independent Third-Party Conducted Testing: Controlled by the Laboratory

Independent third-party conducted testing shifts control to the laboratory.

In this model:

  • The product is supplied to an independent testing facility
  • Installation is performed by the lab or under their documented procedures
  • Mounting conditions are defined in accordance with the test protocol
  • Impact equipment is operated by the lab
  • Instrumentation is calibrated and controlled by the lab
  • Data acquisition and analysis are performed independently
  • A formal engineering report is issued by the lab

Accredited laboratories operate under structured quality management systems and defined documentation protocols. These typically require:

  • Installation documentation
  • Anchor specification confirmation
  • Defined slab or substrate characteristics
  • Preset impact speeds and surrogate vehicle mass
  • Instrumented data capture
  • Traceable reporting

This process reduces bias and increases traceability.
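One way to picture that documentation burden is as a required-fields record: if any field is missing, the report is not traceable. This is an illustrative sketch with hypothetical field names, not any laboratory's actual schema:

```python
from dataclasses import dataclass, fields

@dataclass
class ImpactTestRecord:
    """Minimal traceability record for one barrier impact test (illustrative)."""
    installed_by: str         # the lab, or done under the lab's documented procedure
    substrate: str            # e.g. "reinforced concrete slab, declared thickness"
    anchor_spec: str          # anchor type and embedment depth
    impact_speed_m_s: float   # preset per protocol, not chosen after the fact
    surrogate_mass_kg: float
    instrumentation: str      # calibrated data-acquisition description
    report_id: str            # traceable report identifier issued by the lab

def is_traceable(record: ImpactTestRecord) -> bool:
    """A record is traceable only if every documentation field is populated."""
    return all(bool(getattr(record, f.name)) for f in fields(record))
```

The point of the sketch: independence is enforced by documentation, field by field, not by a label on a datasheet.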

It is more expensive. It requires more documentation.

It takes more time.

But when life safety, risk management, or procurement standards require defensible performance validation, independent laboratory testing provides a materially different level of rigor. When the claim needs to hold up under scrutiny, the difference is structural — not semantic.

Real-World Conditions Matter

Impact ratings are only meaningful when installation conditions reflect real environments.

Industrial safety barriers in distribution centers and manufacturing facilities are typically anchored into concrete slabs. In these conditions:

  • Concrete breakout can define failure mode
  • Anchor embedment depth influences energy capacity
  • Edge distance affects slab performance
  • Slab thickness affects load distribution

Testing mounted to non-representative fixtures — such as rigid steel frames — may simplify repeatability, but it does not always replicate field conditions. Representative testing considers:

  • Realistic slab properties
  • Appropriate anchor systems
  • Field-relevant embedment depths
  • Impact heights matching forklift chassis geometry
  • Angle of approach reflective of traffic patterns

Installation is not a secondary detail.

It is part of system performance.

Codes of Practice vs. Formal Testing Standards

The industry also conflates documents that describe testing with documents that require it. 

Not all referenced documents are testing standards.

Codes of Practice

A Code of Practice provides guidance and best-practice recommendations. It may outline barrier selection principles or describe general testing concepts.

It typically:

  • Allows interpretive flexibility
  • Does not mandate laboratory execution
  • Does not prescribe strict pass/fail certification thresholds

PAS 13, for example, provides guidance on safety barrier usage and includes high-level language regarding dynamic testing practices.

However, it is structured as advisory guidance — not as a prescriptive certification standard with defined laboratory-controlled performance thresholds.

When products are described as “PAS 13 tested” or “PAS 13 certified,” the important technical questions remain:

  • What specific impact parameters were used?
  • Who performed the installation?
  • Who conducted the impact?
  • What defined pass/fail?
  • Was the testing laboratory responsible for execution and reporting?

Terminology alone does not define rigor.

Formal Testing Standards

Formal testing standards establish defined and repeatable methodologies — written by industry committees, not manufacturers alone.

For example, ANSI MH31.2 and ASTM F3016 each specify:

  • Defined surrogate vehicle mass categories
  • Preset impact speeds
  • Structured test configurations
  • Documented methodology
  • Required instrumentation and data capture protocols
  • Predetermined pass/fail thresholds 

That last point matters. Pass/fail criteria are set before the test begins — not interpreted after results come in. The standard removes manufacturer discretion from test design.

These standards are developed and maintained by industry bodies composed of engineers, safety professionals, and technical experts. No single manufacturer controls the methodology.

When executed through an independent laboratory, standards like these create direct comparability across products — same mass, same speed, same conditions, same pass/fail bar.

When both are met — independent execution and a formal standard — the result is a performance claim that stands on its own. 

The “Big Number” Problem

A guardrail system is only as strong as its weakest element. But energy ratings are often built around its strongest — mid-span rail impact under ideal conditions, with posts, connections, anchors, and installation variables left untested or mathematically projected. 

Impact energy claims require context. Energy capacity can vary dramatically depending on:

  • Mid-span rail vs. upright post impact
  • 90-degree vs. 45-degree impact
  • Physical testing vs. mathematical projection
  • Strongest component vs. weakest system point

Testing or rating based only on mid-span does not validate post performance. A 45-degree result converted mathematically to a 90-degree rating is not a 90-degree test. Physical impact at that angle is.
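To see why the projection differs from a physical test: one common paper conversion assumes only the velocity component perpendicular to the barrier does work, scaling a 45-degree result by 1/sin²(45°) = 2 to produce a "90-degree" number. That scaling is an assumption, not a measurement; it ignores sliding, deflection, and glancing contact. A sketch of the arithmetic (illustrative numbers):

```python
import math

def projected_90_deg_energy(measured_energy_j: float, angle_deg: float) -> float:
    """Scale an angled-impact energy to a nominal 90-degree rating,
    assuming only the velocity component normal to the barrier counts.
    This models the paper conversion, not real barrier behavior."""
    normal_fraction = math.sin(math.radians(angle_deg)) ** 2
    return measured_energy_j / normal_fraction

# A 10,000 J result measured at 45 degrees roughly doubles on paper
# (~20,000 J), but no physical 20,000 J head-on impact ever occurred.
paper_rating = projected_90_deg_energy(10_000, 45)
```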

When ratings don't reflect real-world performance, the hidden cost shows up as recurring damage and near misses in the facility.

System validation should evaluate:

  • Rails
  • Posts
  • Connections
  • Anchors
  • Installation conditions

Headline impact energy numbers do not define full-system capacity.

Performance is holistic.
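The weakest-link point reduces to a simple rule: a defensible system rating is the minimum validated capacity across components, not the headline of the strongest one. A sketch with hypothetical component values:

```python
def system_rating_joules(component_capacities: dict[str, float]) -> float:
    """System capacity is bounded by its weakest validated element."""
    return min(component_capacities.values())

# Hypothetical validated capacities per component (joules):
capacities = {
    "rail_mid_span": 30_000,    # the number that often becomes the headline
    "post": 12_000,
    "connection": 15_000,
    "anchor_into_slab": 9_000,  # concrete breakout governs here
}
# The defensible system rating is 9,000 J, not the 30,000 J headline.
rating = system_rating_joules(capacities)
```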

What “Tested” Should Mean

The most advanced operations don't accept claims at face value. They ask the next question.

Ask:

  • Who installed the product?
  • Who controlled the substrate and anchor conditions?
  • Who conducted the impact?
  • Was installation representative of real-world conditions?
  • Was instrumentation calibrated and documented?
  • Is there a formal engineering report?
  • Does the rating reflect the weakest structural element?

At McCue, we perform extensive internal testing to innovate and refine. We conduct independent third-party testing. We document installation conditions, provide engineering summaries and video evidence, and distinguish clearly between internal validation and independent laboratory verification.

Testing should eliminate ambiguity — not create it.

Conclusion: Rigor Is a Risk Decision

In a previous piece, we covered how high-performing facilities match protection to actual exposure. Testing rigor is the next layer of that decision. Industrial safety barriers are not marketing features.

They influence:

  • Human injury risk
  • Equipment protection
  • Operational continuity
  • Regulatory defensibility
  • Enterprise risk management

The difference between internal testing, witnessed testing, and independent laboratory testing is not academic.

It determines whether a performance claim is:

  • Demonstrated
  • Independently verified
  • Or simply stated

When safety and uptime are on the line, technical rigor is a risk decision.