

rmstein@ieee.org
Date: Thu, 16 Nov 2017 09:34:08 +0800

http://www.straitstimes.com/singapore/transport/signal-fault-to-blame-for-joo-koon-mrt-collision

"Sharing their preliminary findings yesterday, SMRT and LTA said the first train departed Ulu Pandan depot with a software protection feature, but this was `inadvertently removed' when it passed a faulty signaling circuit."

One needs to ask Thales whether its release qualification process injects arbitrary/random faults into the simulation environment to assess problematic responses, especially to measure viability factor X (Safety), which evaluates behavior under fault conditions.
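To make the question concrete, here is a minimal, hypothetical sketch of random fault injection against a simulation harness, written in Python. It is not Thales' actual qualification process: the stub simulator, fault names, fault rate, and seed are assumptions chosen only to illustrate how factor X (Safety) behavior under fault conditions might be exercised.

  # Hypothetical fault-injection harness sketch; the simulator stub and
  # fault names below are illustrative assumptions, not a real signalling model.
  import random

  FAULTS = ("signal_circuit_failure", "dropped_telegram", "stale_position_report")

  class StubSimulator:
      """Toy stand-in for a signalling simulation environment."""
      def __init__(self):
          self.protection = True

      def inject_fault(self, fault):
          # Model the reported failure mode: a faulty signalling circuit
          # strips the software protection feature.
          if fault == "signal_circuit_failure":
              self.protection = False

      def advance(self):
          pass  # one simulation step; a real model would update train state here

      def protection_active(self):
          return self.protection

  def safety_trial(sim, steps=1000, fault_rate=0.01, seed=42):
      """Step the simulator while injecting random faults; return the steps
      at which the protection feature was found inactive."""
      rng = random.Random(seed)
      violations = []
      for step in range(steps):
          if rng.random() < fault_rate:
              sim.inject_fault(rng.choice(FAULTS))
          sim.advance()
          if not sim.protection_active():
              violations.append(step)
      return violations

  unsafe_steps = safety_trial(StubSimulator())
  print(len(unsafe_steps), "steps with protection inactive")

Any non-empty result would flag the release as failing factor X and block publication until the defect is repaired.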

For public safety, stack procurement requires additional vigilance: inspect the test plan, review the test results for the deployed version, examine the top-10 defect field escapes and their root-cause analyses, and determine the wall-clock time needed to complete qualification for each change. These steps are unlikely to be pursued for simple consumer items, and are generally beyond consumer comprehension. Hence the increasing importance of a public service that vets stack publications and rates them for compliance with a simple
"readiness to publish" metric.

$ ~ V = V(B, F, R, M, I, P, S, T, X, U)

The software stack's or ecosystem's readiness to publish for business purposes (primarily to capture and realize revenue) can be characterized as:

$ = Revenue or quantifiable utility
V = Viability (publication deployment fitness)
B = Business process (via SOPs & Process flows)
F = Function (via API, protocol, command line, database insert/select
ops that enable business process fulfillment)
R = Reliability (continuous hours of operation w/o deadlock, crash, or
data corruption)
M = Resource consumption (absence of memory and/or descriptor leak,
temp disk)
I = Integration (processing of data sources/sinks, payload delivery,
message passing)
P = Performance (x/hr or 99.99% successful content delivery, scaling
under load, etc)
S = Standards compliance (EDI/B2B, FIX, HTTP, RFC, JSON, XML, ANSI/IEEE)
T = Trust (demonstrate a non-repudiated result; immune/hardened against
surreptitious access or corruption; hardened against fuzz evaluation;
OWASP.org at minimum)
X = Safety (behavior under fault conditions, fail-over consistent)
U = Usability (GUI navigation structure, initial brand exposure,
intuitive usage, psychometrics, a/b test)

Select viability attributes may not be applicable for a given software stack or ecosystem under test.
Each viability attribute is measured by one or more test suites designed exclusively for that purpose (with minimal overlap). Assign a "1" for each scoped attribute that passes and a "0" for each that does not (see the scoring sketch below).
If viability is not achieved across the scoped viability factors, prioritized defect repair is essential, along with release notes that identify known issues to the extent revealed by test coverage.
Ideal practice is to publicly disclose test results and known defects to assist consumer buying decisions and to pressure the competition.
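
To illustrate the pass/fail scoring described above, here is a minimal Python sketch that totals a readiness-to-publish score from per-attribute test-suite results. The attribute results shown are illustrative assumptions, not measurements of any real stack.

  # Hypothetical scoring sketch for the viability factors B..U defined above.
  VIABILITY_FACTORS = ("B", "F", "R", "M", "I", "P", "S", "T", "X", "U")

  def viability(results, scoped=VIABILITY_FACTORS):
      """Score 1 for each scoped attribute whose test suite(s) pass, else 0.
      Returns (score, maximum, per-attribute detail)."""
      scores = {f: 1 if results.get(f, False) else 0 for f in scoped}
      return sum(scores.values()), len(scoped), scores

  # Illustrative results: Safety (X) fails its fault-condition suite; the rest pass.
  results = {f: True for f in VIABILITY_FACTORS}
  results["X"] = False

  score, out_of, detail = viability(results)
  print("Viability score: %d/%d" % (score, out_of))
  print("Per-attribute:", detail)

A score below the maximum for the scoped factors signals prioritized defect repair, with release notes covering the known issues, before the stack is ready to publish.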

