Discussion about this post

Nicholas Weininger:

I used to work on software subject to the DO-178B Level A software development regulations (long enough ago that it predated DO-178C), which are probably one of the biggest operational examples we have of real-world regulation of potentially life-endangering software systems. My impression of them, as a then-junior developer who went on to work on other high-reliability but unregulated systems, is that they were ~20% actually useful stuff, like:

-- stringent, high-coverage testing requirements (MC/DC coverage at Level A; see the sketch below)

-- requiring that you actually write down a failure mode analysis, point to where you were mitigating each failure mode, and have that document reviewed by someone

and ~80% bureaucratic CYA and well-intentioned sludge, like:

-- "traceability" requirements from code to multiple levels of documentation and back

-- reviewer "independence" requirements that made it almost impossible to find someone who both knew enough to review the code intelligently and was "independent" enough

-- quantitative fault probability analyses intended to prove that the chance of catastrophic failure was less than 10^-9 per flight hour, which in practice were exercises in making up numbers that were basically impossible to evaluate with any sort of epistemic rigor (a toy roll-up of this kind follows below)
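
For the unfamiliar, a minimal sketch of what MC/DC (Modified Condition/Decision Coverage) demands: each condition in a decision must be shown to independently flip the decision's outcome. Python, with an invented three-condition decision; nothing here comes from a real certified codebase:

```python
# Minimal MC/DC illustration for the decision (a and b) or c.
# MC/DC requires showing that each condition independently
# affects the decision's outcome.

def decision(a: bool, b: bool, c: bool) -> bool:
    return (a and b) or c

# Four vectors suffice (n + 1 for n = 3 conditions): each noted pair
# differs in exactly one condition and flips the outcome.
tests = [
    ((True,  True,  False), True),   # baseline: decision is True
    ((False, True,  False), False),  # only `a` changed vs. baseline -> flips
    ((True,  False, False), False),  # only `b` changed vs. baseline -> flips
    ((True,  False, True),  True),   # only `c` changed vs. row above -> flips
]

for (a, b, c), expected in tests:
    assert decision(a, b, c) == expected
print("MC/DC-adequate vector set passes")
```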
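And a toy version of the 10^-9 roll-up: assumed leaf failure rates get multiplied through AND/OR gates until the top event lands at the target. Every rate below is invented, which is rather the point:

```python
# Toy fault-tree roll-up of the kind used to argue a catastrophic
# failure probability below 1e-9 per flight hour. Every leaf rate
# here is made up; in a real analysis these are the inputs that
# are nearly impossible to justify with rigor.

sensor_fail   = 1e-5   # assumed sensor failure rate per hour
monitor_miss  = 1e-4   # assumed probability the monitor misses it
channel_a_cpu = 1e-6   # assumed CPU failure rate, channel A
channel_b_cpu = 1e-6   # assumed CPU failure rate, channel B

# AND gate: both redundant channels fail together (also assumes
# the failures are independent, itself a modeling choice).
dual_cpu_loss = channel_a_cpu * channel_b_cpu          # 1e-12

# OR gate (rare events, so probabilities approximately add):
# catastrophe = (undetected sensor failure) OR (dual CPU loss).
top_event = sensor_fail * monitor_miss + dual_cpu_loss

print(f"top event: {top_event:.2e} per hour")          # ~1.00e-09
```

The arithmetic is trivial; the leaf rates are where the made-up numbers live.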

Am I being too cynical about DO-178? Either way, can we learn useful things from its practical application history to apply to AI regulation?

Patricia Clark Taylor:

Fascinating. There’s a groundbreaking movie, a documentary perhaps, just waiting to be made here. My immediate thought: is the human race an AGI gone rogue? Suddenly I’m thinking of a comedy/drama film, but one that could explore both hazards and great possibilities.

