Jumping to chapter 5, we look at 'clumsiness', distributed cognitive systems, and impact flow diagrams. Finally, after mentioning the keyhole effect, we look at an actual example from the Apollo 13 mission.
The point here is that there is a lot packed into this reading, and many of the ideas are really cool: distributed systems thinking, error genotypes, impact flow diagrams.
In the near miss at the Davis-Besse nuclear station (NUREG-1154), there were about ten machine failures and several erroneous actions that initiated the loss-of-feedwater accident and determined how it evolved. On its own, this doesn't give enough information to convey the point being made unless the reader is already much more familiar with the incident. Similarly, the new ideas (such as the IFD) need to be illustrated in greater detail to be understood in context (to borrow a phrase).
Ch. 2 - Premises, p.13
This comes back to the cognitive load mentioned in the Woods paper. Risks are rarely on our minds, since accidents are rare occurrences. Hence a design must make risks obvious, rather than hiding them.