Proof test coverage
Something that always makes me pause when reviewing designs…
Proof test coverage that somehow always ends up being 100% effective.
On paper it looks great.
The numbers work nicely.
The SIL calculation passes comfortably.
But in the real world I always find myself thinking:
Can we really detect every dangerous failure with that test?
In my experience, this is a major cause of rework. If the design progresses to the point where commissioning documents are written, and then an FSA or design review reveals overly optimistic proof test coverage, it's a lot of work to correct.
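To make the sensitivity concrete, here is a minimal sketch of why optimistic proof test coverage matters. It uses the common simplified PFDavg approximation for a 1oo1 channel, where the fraction of dangerous undetected failures covered by the proof test is averaged over the test interval and the uncovered fraction over the full mission time. The failure rate, test interval, and mission time below are illustrative assumptions, not values from any specific design, and the formula ignores MTTR, diagnostics, and common cause for simplicity:

```python
# Simplified 1oo1 PFDavg with imperfect proof test coverage (sketch).
# Assumptions: lambda_du is the dangerous undetected failure rate in
# failures/hour; failures the proof test covers are averaged over the
# test interval, the rest over the mission time. MTTR, diagnostic
# coverage, and common cause are deliberately ignored.

def pfd_avg(lambda_du, proof_test_interval_h, mission_time_h, coverage):
    """Average probability of failure on demand for a 1oo1 channel."""
    covered = coverage * lambda_du * proof_test_interval_h / 2
    uncovered = (1 - coverage) * lambda_du * mission_time_h / 2
    return covered + uncovered

lam = 2e-6          # illustrative dangerous undetected rate [1/h]
ti = 8760           # annual proof test [h]
mt = 20 * 8760      # 20-year mission time [h]

for c in (1.0, 0.9, 0.6):
    print(f"coverage {c:.0%}: PFDavg = {pfd_avg(lam, ti, mt, c):.2e}")
```

With these assumed numbers, dropping from 100% to 90% coverage is enough to push PFDavg from the SIL 2 band into SIL 1, which is exactly the kind of gap an FSA can expose late in the project.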
Anyone else experiencing this?
Richard Kelly
Functional Safety Play Book
skool.com/roak-6055