The RISKS Digest
Volume 29 Issue 34

Tuesday, 15th March 2016

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site - however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether or not you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

Great encryption segment from John Oliver, with Matt Blaze cameo
LW
Facebook, Google and WhatsApp plan to increase encryption of user data
The Guardian
Kremlin Falls for Its Own Fake Satellite Imagery
Dan Jacobson
Typosquatters Running .om Domain Scam To Push Mac Malware
ThreatPost
139+ breaches in 2016 thru Mar-8
ITRC
Online Leak of N.C.A.A. Tournament Bracket Upstages CBS Selection Show
NYTimes
Web security company breached, client list—including KKK—dumped, hackers mock inept security
BoingBoing
WhatsApp Encryption Said to Stymie Wiretap Order
NYTimes
Skype Co-Founder Launches End-To-End Encrypted 'Wire' App
Tom's
Interesting Bamford piece on life at the NSA
Dave Farber
President Obama at SXSW
Henry Baker
Doctorow on POTUS' infatuation with magic ponies
Richard Forno
"Researchers Spoof Phone's Fingerprint Readers Using Inkjet Printers"
Todd Weiss
Hey Siri, Can I Rely on You in a Crisis? Not Always, a Study Finds
NYT
"Nations Ranked on Their Vulnerability to Cyberattacks"
Matthew Wright
Kalamazoo shootings: Uber driver blames app
BBC
Hooray for Hollywood Robots: Movie Machines May Boost Robot Acceptance
Matt Swayne
Re: Florida Senate endorses making computer coding a foreign language
Michael Bacon
Craig Burton
Re: Why no secure architectures in commodity systems?
Nick Sizemore
Info on RISKS (comp.risks)

Great encryption segment from John Oliver, with Matt Blaze cameo

Lauren Weinstein <lauren@vortex.com>
Mon, 14 Mar 2016 08:43:50 -0700
https://www.youtube.com/watch?v=zsjZ2r9Ygzw


Facebook, Google and WhatsApp plan to increase encryption of user data

Lauren Weinstein <lauren@vortex.com>
Mon, 14 Mar 2016 09:34:17 -0700
*The Guardian* via NNSquad
http://www.theguardian.com/technology/2016/mar/14/facebook-google-whatsapp-plan-increase-encryption-fbi-apple?CMP=share_btn_gp

  Silicon Valley's leading companies - including Facebook, Google and
  Snapchat - are working on their own increased privacy technology as Apple
  fights the US government over encryption, the Guardian has learned.  The
  projects could antagonize authorities just as much as Apple's more secure
  iPhones, which are currently at the center of the San Bernardino shooting
  investigation. They also indicate the industry may be willing to back up
  their public support for Apple with concrete action.


Kremlin Falls for Its Own Fake Satellite Imagery

Dan Jacobson <jidanni@jidanni.org>
Thu, 10 Mar 2016 12:30:17 +0800
Russian-Photoshopped footage of MH17 was accidentally picked up by
Putin's Defense Ministry to (falsely) argue that one of its jets never
entered Turkish airspace.

http://www.thedailybeast.com/articles/2016/03/04/kremlin-falls-for-its-own-fake-satellite-imagery.html


Typosquatters Running .om Domain Scam To Push Mac Malware

Lauren Weinstein <lauren@vortex.com>
Mon, 14 Mar 2016 18:52:58 -0700
ThreatPost via NNSquad
https://threatpost.com/typosquatters-target-apple-mac-users-with-new-om-domain-scam/116768/

  Typosquatters are targeting Apple computer users with malware in a recent
  campaign that snares clumsy web surfers who mistakenly type .om instead of
  .com when surfing the web.  According to Endgame security researchers, the
  top level domain for Middle Eastern country Oman (.om) is being exploited
  by typosquatters who have registered more than 300 domain names with the
  .om suffix for U.S. companies and services such as Citibank, Dell, Macys
  and Gmail. Endgame made the discovery last week and reports that several
  groups are behind the typosquatter campaigns.


139+ breaches in 2016 thru Mar-8 (ITRC)

"Alister Wm Macintyre \(Wow\)" <macwheel99@wowway.com>
Fri, 11 Mar 2016 14:29:13 -0600
The Identity Theft Resource Center (ITRC) offers weekly updates to
subscribers, on what's going on in the world of breaches, which have been
confirmed by news media and government sources.  Here is the latest
overview:
http://hosted.verticalresponse.com/358216/81857fe937/1746749985/70520fd45b/

As of March 8, the number of breaches captured in the 2016
<http://cts.vresp.com/c/?IdentityTheftResourc/81857fe937/70520fd45b/684a49f334>
ITRC Breach Report totals 139, up 4.5 percent over last year's record pace
for the same time period (133).  This 30 page PDF provides abstracts of each
of the 139 breaches of 2016 thru March-8.

Five industry sectors were involved in the 2016 breaches so far; the
statistics are charted here: Data Breach Category Summary
<http://cts.vresp.com/c/?IdentityTheftResourc/81857fe937/70520fd45b/6ee5bc6e37>

While businesses had 41.7% of the breaches, with Medical/Healthcare in second
place at 36.7%, Medical/Healthcare accounted for the most records breached:
87.9% of the more than 4 million records exposed.

Both of the above perspectives (2016 breaches, statistics) are included in a
larger 41-page PDF on the overall breaches for 2016 so far:
<http://cts.vresp.com/c/?IdentityTheftResourc/81857fe937/70520fd45b/bf2a37cf2f>
ITRC Breach Reports

For a chronology of data breaches going back to 2005, check out:
http://www.privacyrights.org/data-breach


Online Leak of N.C.A.A. Tournament Bracket Upstages CBS Selection Show

Monty Solomon <monty@roscom.com>
Mon, 14 Mar 2016 03:02:19 -0400
http://www.nytimes.com/2016/03/14/sports/ncaa-tournament-bracket-leak-selection-sunday.html

The N.C.A.A. said it would investigate how a Twitter user obtained, and
revealed, the tournament field before its TV partner could do the same.


Web security company breached, client list—including KKK—dumped, hackers mock inept security (BoingBoing)

Lauren Weinstein <lauren@vortex.com>
Fri, 11 Mar 2016 13:14:09 -0800
http://boingboing.net/2016/03/11/web-security-company-breached.html

  Newport Beach-based Staminus Communications offered DDoS protection and
  other security services to its clients; early this morning, their systems
  went down and a dump of their internal files was dumped to the Internet.
  The individuals claiming credit for the breach published an accompanying
  article called "TIPS WHEN RUNNING A SECURITY COMPANY," a blistering attack
  on the sub-par security they say they encountered at Staminus. The hackers
  claim all the systems shared a root password, that the power systems for
  the company's servers had open telnet access, that the company hadn't
  patched its systems, that they allowed for common PHP attacks, wrote
  subpar code, and, worst of all, stored credit card numbers in the clear.


WhatsApp Encryption Said to Stymie Wiretap Order (NYTimes)

Monty Solomon <monty@roscom.com>
Sat, 12 Mar 2016 14:28:00 -0500
http://www.nytimes.com/2016/03/13/us/politics/whatsapp-encryption-said-to-stymie-wiretap-order.html

A fight with WhatsApp, the world's largest mobile messaging service, would
open a new front in the Obama administration's dispute with Silicon Valley
over encryption, security and privacy.


Skype Co-Founder Launches End-To-End Encrypted 'Wire' App

Lauren Weinstein <lauren@vortex.com>
Fri, 11 Mar 2016 16:23:29 -0800
Tom's Hardware via NNSquad
http://www.tomshardware.com/news/wire-app-complete-end-to-end-encryption,31389.html

  A group of former Skype, Apple and Microsoft employees, backed by Skype's
  co-founder Janus Friis, created a Skype alternative called "Wire" back in
  2014, which wasn't end-to-end encrypted at the time. The team announced
  that the latest version of the app brings open-source end-to-end
  encryption to everything from chats to video calls, as well as
  multi-device end-to-end encryption.


Interesting Bamford piece on life at the NSA

Dave Farber <farber@gmail.com>
Sat, 12 Mar 2016 16:31:52 -0500
Watch Thy Neighbor
To prevent whistleblowing, U.S. intelligence agencies are instructing staff
to spy on their colleagues.
James Bamford, 11 MAR 2016
https://foreignpolicy.com/2016/03/11/watch-thy-neighbor-nsa-security-spying-surveillance/
or http://atfp.co/24TyhlT


President Obama at SXSW

Henry Baker <hbaker1@pipeline.com>
Mon, 14 Mar 2016 13:48:12 -0700
Was anyone else terrified by President Obama's suggestions of on-line
registration and on-line voting *in the same interview in which he was also
asking for weak encryption*?

Weak encryption + voting apps = GAME OVER for democracy.

President Obama Participates in South by Southwest Interactive
https://www.youtube.com/watch?v=FhFibpHSJFE


Doctorow on POTUS' infatuation with magic ponies (via DF)

Richard Forno <rforno@infowarrior.org>
March 12, 2016 at 3:53:14 PM EST
Obama: cryptographers who don't believe in magic ponies are "fetishists,"
"absolutists"

Obama's SXSW appearance included the president's stupidest-ever remarks on
cryptography: he characterized cryptographers' insistence that there is no
way to make working cryptography that stops working when the government
needs it to as "phone fetishizing," as opposed to, you know, reality.

In a rhetorical move that he would have flunked his U Chicago law students
for, Obama described a landscape with two edges: "Strong crypto" and "No
crypto" and declared that in the middle was a reasonable territory in which
crypto could be strong sometimes and disappear the rest of the time.

This is like the territory in which you are "Pregnant" or "Not pregnant"
where, in between, you are "a little bit pregnant" (or, of course, like
"Vaccinations are safe," vs "Vaccinations cause autism" whose middle ground
is "Vaccinations are safe, but just to be sure, let's not give 'too many' at
once, because reasons, and never mind that this will drastically increase
cost and complexity and reduce compliance").

Obama conflated cryptographers' insistence that his plan was technically
impossible with the position that government should never be able to serve
court orders on its citizens. This is math denialism, the alternative
medicine of information security.

He focused his argument on the desirability of having crypto that worked in
this impossible way, another cheap rhetorical trick. Wanting it badly isn't
enough.

If decades of attending SXSW (I leave for the airport in 30 minutes!) have
taught me anything, it's that someone will be selling or giving away "phone
fetishist" tees with PGP source code on one side and a magic pony on the
other before the week is out.

http://boingboing.net/2016/03/12/obama-cryptographers-who-don.html


"Researchers Spoof Phone's Fingerprint Readers Using Inkjet Printers"

"ACM TechNews" <technews-editor@acm.org>
Fri, 11 Mar 2016 12:27:19 -0500 (EST)
Todd R. Weiss, eWeek, via ACM TechNews, Friday, March 11, 2016

Researchers Spoof Phone's Fingerprint Readers Using Inkjet Printers
eWeek (03/09/16) Todd R. Weiss

Michigan State University (MSU) researchers used off-the-shelf inkjet
printers to demonstrate how fingerprint readers on popular smartphones can
be manipulated into unlocking the devices using spoofed fingerprints made
with printer inks.  MSU's Kai Cao and Anil K. Jain sought to investigate the
overlooked spoofing strategy, which is especially relevant because half of
smartphones sold by 2019 are expected to have an embedded fingerprint
sensor.  "With the introduction of Apple Pay, Samsung Pay, and Android Pay,
fingerprint recognition on mobile devices is leveraged for more than just
device unlock; it can also be used for secure mobile payment and other
transactions," the researchers note.  Cao and Jain used an inkjet printer
loaded with three silver conductive ink cartridges and a normal black ink
cartridge, and scanned a fingerprint of a phone's authorized user at 300 dpi
(dots per inch) or higher resolution.  Afterward, the print was reversed or
mirrored before being printed onto the glossy side of a piece of AgIC paper.
"Once the printed [two-dimensional] fingerprints are ready, we can then use
them for spoofing mobile phones," the researchers note.  The spoofed print
successfully unlocked Samsung Galaxy S6 and Huawei Honor 7 smartphones.  Cao
and Jain say their experiment "further confirms the urgent need for
anti-spoofing techniques for fingerprint-recognition systems, especially for
mobile devices."
http://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-ec1dx2df10x065256&


Hey Siri, Can I Rely on You in a Crisis? Not Always, a Study Finds

Gabe Goldberg <gabe@gabegold.com>
Tue, 15 Mar 2016 08:31:22 -0400
http://well.blogs.nytimes.com/2016/03/14/hey-siri-can-i-rely-on-you-in-a-crisis-not-always-a-study-finds

Smartphone virtual assistants often fail in their responses when someone is
in distress, a new study found when testing phrases such as *I was raped*.


"Nations Ranked on Their Vulnerability to Cyberattacks" (Matthew Wright)

"ACM TechNews" <technews-editor@acm.org>
Fri, 11 Mar 2016 12:27:19 -0500 (EST)
Matthew Wright, University of Maryland, 9 Mar 2016, via ACM TechNews, 11 Mar
2016

Researchers at the University of Maryland (U-M) and the Virginia Polytechnic
Institute and State University have co-authored a book ranking the
vulnerability of 44 nations to cyberattacks.  The U.S. was ranked 11th
safest, while Scandinavian countries such as Denmark, Norway, and Finland
were ranked the safest.  China, India, Russia, Saudi Arabia, and South Korea
ranked among the most vulnerable.  "Our goal was to characterize how
vulnerable different countries were, identify their current cybersecurity
policies, and determine how those policies might need to change in response
to this new information," says U-M professor V.S. Subrahmanian, who led the
research.  The book, "The Global Cyber-vulnerability Report," was based on a
two-year study that analyzed more than 20 billion automatically generated
reports, collected from 4 million machines each year worldwide.  The
rankings were partly based on the number of machines attacked in a given
country and the number of times each machine was attacked.  Trojans,
followed by viruses and worms, posed the principal threats to machines in
the U.S., but misleading software is much more prevalent in the
U.S. compared with other nations that have similar gross domestic product,
suggesting U.S. efforts to reduce cyberthreats should focus on education to
recognize and avoid misleading software.
http://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-ec1dx2df14x065256&


Kalamazoo shootings: Uber driver blames app (BBC)

Lauren Weinstein <lauren@vortex.com>
Mon, 14 Mar 2016 15:44:05 -0700
BBC via NNSquad
http://www.bbc.com/news/world-us-canada-35808627

  Police said Jason Dalton, 45, carried out the shootings on 20 February
  while working for the ride-sharing company. "When I logged onto [the Uber
  app], it started making me feel like a puppet," Mr Dalton told
  investigators. He claims that the smartphone programme told him to kill
  his victims ... According to documents released on Monday, Mr Dalton said
  the horned cow head of a devil would appear on his phone screen and give
  him an assignment.


Hooray for Hollywood Robots: Movie Machines May Boost Robot Acceptance (Matt Swayne)

"ACM TechNews" <technews-editor@acm.org>
Fri, 11 Mar 2016 12:27:19 -0500 (EST)
Matt Swayne, Penn State News, 9 Mar 2016 via ACM TechNews

Pennsylvania State University (PSU) researchers recently conducted a study of
379 older adults in which those who recalled more robots portrayed in films
had lower anxiety toward robots than those who remembered fewer robot
portrayals.  Remembering robots from how they are portrayed in films may
help ease some of the anxiety older adults have about using a robot,
according to the researchers.  Finding ways to ease anxiety about robot
adoption also could help senior citizens accept robots as caregivers, the
researchers add.  "Robots could provide everything from simple
reminders--when to take pills, for example--to fetching water and food for
people with limited mobility," says PSU professor S. Shyam Sundar.  In
addition, the researchers found the trusting effect held up even when older
adults recalled robots that were not friendly human-like helper robots.
"So, it seems like the more media portrayals they can recall, the more
likely their attitudes would be positive toward robots, rather than
negative," says PSU researcher T. Franklin Waddell.  The research also found
people had a more positive reaction to robots that looked more human-like
and those that evoked more sympathy.  The researchers suggest robot
designers incorporate features that remind older adults of robots in the
media, and create robots with more human-like interfaces and models with
features that increase sympathy.
http://orange.hosting.lsoft.com/trk/click?ref=znwrbbrs9_6-ec1dx2df15x065256&


Re: Florida Senate endorses making computer coding a foreign language (RISKS-29.33)

Michael Bacon - Grimbaldus <michael.bacon@grimbaldus.com>
Fri, 11 Mar 2016 05:48:37 +0000
Perhaps the study of COBOL and FORTRAN might find its way into the Classics
syllabus.

I was gratified that PGN mentioned Algol.  It was one of my first languages
and should certainly be studied; the development of the Backus-Naur Form
being a key topic.  I recall that Tony Hoare (inventor of the Quicksort
algorithm and CSP) described Algol as: "A language so far ahead of its time
that it was not only an improvement on its predecessors but also on nearly
all of its successors."


Re: Florida Senate endorses making computer coding a foreign language (RISKS-29.33)

Craig Burton <craig.alexander.burton@gmail.com>
Thu, 10 Mar 2016 11:51:29 +1100
Creating human languages as a method of dividing people goes back to the
Tower of Babel!  Coding is now something a lot of very different people
share.  Don't make it a language.

I would offer that computer coding is a lot more like writing a legal
document.  One wrong dot or comma and you have a lot of trouble!


Re: Why no secure architectures in commodity systems? (past RISKS)

Nick Sizemore <bolshev@theriver.com>
Thu, 10 Mar 2016 16:15:54 -0700
  [All mentioned: I've also addressed each of you separately as PGN may feel
  it necessary to reject or radically shorten it.  You may, of course, reply
  directly to me should enthusiasm, or irritation, demand. NS]

    [On the contrary, I think Nick's item is sufficiently valuable
    educationally that we're running it in its entirety.  It is unusually
    long, but worthy.  PGN]

Please forgive my delay in responding.  To all who responded, I extend my
gratitude.  Specifically, I wish to thank:
  PGN, risko@csl.sri.com, our esteemed moderator;
  Mark Thorson, eee@sonic.net;
  Michael Marking, marking@tatanka.com;
  Fred Cohen, fc@all.net;
  Robert Stanley, rstanley@magma.ca;
  Alister Wm Macintyre (AlMac - twice), macwheel99@wowway.com;
  Paul Black, drpaule@gmail.com;
  Martin Torzewski (separately, not list), Martin.Torzewski@blueyonder.co.uk.

If I've missed anyone, my apologies and thanks, and resend to me if you
didn't originally send to the group.

While I was aware of several of the points raised, there were several I had
not considered, as well as some that I had not thought of in connection with
the topic at hand.  These responses are intended to indicate some of the
evidence I have encountered that prompted my query.

1.  Pre-PC systems:

As noted in the exchange, pre-PC hardware, specifically mainframe and
minicomputer architectures before circa 1980, contained many features that
enhanced reliability and could have been part of a secure architecture.
While never, outside of Multics and a few other special-purpose systems,
combined in one place as security features, they were common, and were used
by the operating systems in operator- or programmer-selected ways as dictated
by the application context.  These included hardware-enforced program memory
bounds, divide-by-zero protection, file system protections, etc.

That much of that was ignored in the PC world was demonstrated in the late
nineties sea trials of the Aegis cruiser Yorktown, where many ship command
and control functions were subordinated to a group of PCs running what was
then Microsoft's flagship OS, Windows NT.  The cruiser was rendered dead
in the water by an operator entry leading to a divide-by-zero error, which
ultimately resulted in crashing the control network.  This from an error
that had been discovered and studied in the late forties and fifties, and
solutions created and implemented in most, if not all, commercial computers
and OS's by, at least, the mid sixties, when I first entered computing.
This was over a decade before Microsoft was created.

Of course, the microprocessors of the early period were severely limited in
instruction set and memory resources.  That should have been all the more
reason for a greater amount of defensive programming to provide at least
some of the same protections then available in mainstream computing
environments.  There are valid arguments why that might not have been done
initially, but few, if any, that would support well over two decades of
continued neglect.
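
To make the point concrete, here is a minimal, hypothetical sketch (in
Python, purely for illustration, not drawn from any real shipboard system) of
the kind of defensive guard meant above: validate operator input before it
reaches arithmetic that can fault, rather than letting a divide by zero
propagate into the rest of the software.

  # Hypothetical sketch: reject a zero or malformed divisor at the point of
  # entry, rather than letting a divide-by-zero crash downstream code.
  def safe_ratio(numerator: float, raw_divisor: str) -> float:
      try:
          divisor = float(raw_divisor)
      except ValueError:
          raise ValueError(f"not a number: {raw_divisor!r}")
      if divisor == 0.0:
          raise ValueError("divisor must be nonzero; entry rejected")
      return numerator / divisor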

2.   Formal methods:

It seemed to some that I was seeking 'perfect security', an ideal, like
'justice', or 'fairness', not attainable in the real world.  That was not my
intent, and I apologize for creating that misunderstanding.  That said, it
is clearly possible to formally specify security properties, and prove them
attainable in a given, formally specified, computing model, which may
represent a software system, a hardware system, or a combination of both.
Moreover, it is also possible to prove that the intended implementing source
code (for software) correctly implements those properties, as done, e.g.,
in the SPARK Ada toolset, among others.  Formal methods are increasingly
used in real world applications, though usually addressing only the portions
of systems containing safety- and/or mission-critical components.  Several
companies now exist offering formal methods consultation, training, and
services to system developers.

It's worth noting a point many in this list will see.  So far as I'm aware,
no one has yet built a full, formally verified tool chain, i.e., one in
which all the tools that 'touch' the requirements or software, from
analyzing and verifying the requirements, to editing the source, to
producing and manipulating the binary, have been formally designed,
developed and verified.  This would ultimately be necessary to produce
trusted code.  I seem to recall one or two experimental projects in that
direction under one of the big EU IT initiatives, but haven't seen anything
else.

As noted by PGN, there have been, over the years, several partially or
wholly successful efforts to create secure operating systems.  A far from
exhaustive list includes Multics, the successor PSOS project, and the more
modern seL4 and Nexus projects.  We might also include operating systems
aimed at high-reliability embedded systems.  While not generally targeted at
security per se, they often do have security features, and are developed
with rather greater software engineering discipline than the commodity
systems available today from Microsoft, Apple, Google, and the many free and
commercial Unix and Linux distributions.  Some of the more than a hundred
vendors of RTOS (real-time operating systems), from a 2003 article in
'Embedded Systems Programming' (1) include relatively well-known names like
Green Hills, Wind River, Red Hat, LynxOS, OS-9, and QNX.  An increasing
number of these come with GUIs, unlike many of the research projects.

(1) "Choosing an RTOS,", Embedded Systems Programming, January 2003.

Of course, there is the overwhelming, albeit seldom mentioned, argument
against continuing with 'seat of the pants' methods in software, without
hardware support.  I worked for a number of years in the DoD test and
evaluation community, where an officer at one organization with which we
were working asked me why we didn't simply 'completely' test software.  I
told him I had an answer, but would need to check some figures to make sure
my answer was correctly framed.  Somewhat later I gave this (greatly
simplified) reply.  The argument is the same, but my numbers then might have
been somewhat different.

   |-  Assume a 100,000 byte program with 4-byte instructions.
   |-  Assume an average of one branch or jump every 10 instructions – a
       figure cited in literature at that time for many types of program.
   |-  Assume an oracle is built for this program that can verify one
       path through the program every microsecond
   We have ( ( 100,000 / 4 ) / 10 ) = 2,500 branches, giving ( 2^2,500 )
   paths.
   In years, it will take:

                 ( 2^2,500 ) DIVIDED BY
(1,000,000 microsecs/sec)*(60 secs/min)*(60 min/hr)*(24 hr/day)*(365.25 days/yr)

   Several arbitrary-precision integer programs give the result:
   ≈ 1.1909271410208624 × 10^739 years.
   Even if we assume a more linear scientific/engineering program with
   one jump every 50 instructions, we have the result:
   ( 2^500 )/(1,000,000 * 60 * 60 * 24 * 365.25)
   ≈ 1.0372748903263056 × 10^137 years
   (Actually, I have the exact integer results, but restrained myself.)

   In any event, the answer was that it's theoretically possible, in the
   sense given, to 'completely' test this (small) program.  However,
   given one estimate that the universe will die the 'heat death' from
   proton decay in ~ 10^40 years, there will be a schedule problem.
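
The arithmetic above is easy to reproduce.  A short Python sketch (using
exactly the assumptions listed above, and shown here only as a worked check
of the figures) gives the same order of magnitude:

  # Back-of-the-envelope check: 100,000-byte program, 4-byte instructions,
  # one branch every 10 instructions, one path verified per microsecond,
  # and 365.25-day years.
  branches = (100_000 // 4) // 10                           # 2,500 branches
  paths = 2 ** branches                                     # 2^2,500 paths
  usec_per_year = 1_000_000 * 60 * 60 * 24 * 36525 // 100   # 365.25 days/yr
  years = paths // usec_per_year
  print(f"roughly 10^{len(str(years)) - 1} years")          # ~ 10^739 years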

Of course, the obvious lesson is that the only way to ensure correctness in
software is by construction, to the extent we're able to do that.  While
there are certainly some definable errors we can discover through testing,
it is most definitely not 'the' solution.

3.   Hardware:

Lastly, there are hardware efforts.  PGN noted the CHERI project, which
shows how some hardware security features, specifically, a memory protection
model, have been implemented in a hardware coprocessor and associated
software.  The importance of this model lies in demonstrating an incremental
approach to hardware protection, allowing accumulation of experience that
current microprocessor developers could use in evaluating design features.
As I have only marginal familiarity with the hardware architecture business,
I can't cite other efforts with any confidence without a lot more research,
though the CHERI papers PGN pointed me to (2) do mention several others.

(2)  all from http://www.csl.sri.com/neumann/
        2012resolve-cheri.pdf
        isca2014-cheri.pdf
        2015asplos.pdf
        2015oak.pdf

4.   Supply chain – counterfeit components:

This problem has grown to the point that there are now major government and
industry initiatives aimed at assuring delivery of genuine components and
verifying the integrity of different portions of the supply chain.  As with
many industrial efforts, this is always a concern, though it only
(relatively) recently became recognized as a significant problem in
electronic component supply.  Here, as elsewhere, commercial and government
organizations have come together – not to eliminate, but to mitigate – the
problem.  The recent 'bad USB' scare is another, albeit smaller-scale,
example of a problem most want mitigated, and many suppliers have responded,
producing USB components less vulnerable to the threat.

The classic example, familiar to all risks readers, is airline safety.
After a rash of accidents, the government, with industry support,
established mechanisms to study each accident in great detail and report
their findings, leading, over time, to far fewer such accidents.  Similarly,
those problems arising from counterfeit components, to the degree they
became a major concern, have given rise to mitigation efforts.

5.   Security as obstacle; insiders:

A number of responses pointed out, from different viewpoints, the
awkwardness of standard security provisions, e.g., passwords, role
permission maintenance, individual and group account maintenance, etc.  A
related issue is the vulnerability to 'social engineering' enabled by many
of these mechanisms as currently implemented.  Most of these are, I
maintain, the product of ineffective security support at the hardware level.
It's my belief that security, properly implemented, should be essentially
invisible to the users, and only require action by IT in exceptional cases.
If, for example, each device came with a fingerprint and retinal scan
device, and all user authentication was performed through these directly,
through protected channels, at a minimum, 'social engineering' would rapidly
become a strongly deprecated attack vector, as virtually nothing the user
could provide to an outside inquiry would grant them digital access.

This problem is one of those forms of absurdly strong and widespread
'coupling' of, in this case, security mechanisms across an organization,
best summarized by the posters everywhere, at least in many government
organizations, proclaiming that 'Security is Everyone's Responsibility', or
words to that effect.  To the extent that system or network security depends
on each of dozens or hundreds of people each correctly exercising various
security rituals, multiple times a day, the probability of that system or
network security remaining intact becomes ever closer to 0.0.  As with
studies of pilot error, the issue is not to castigate the much maligned
human, but to recognize there are things people generally do well, and
others, not so well: e.g., creating and memorizing (multiple) strong
passwords/phrases.

Similar issues arise with respect to most malicious – or really ignorant –
insiders.  If a user has properly authenticated, their access should
generally be limited, both in the extent of digital resources available to
them, and the sorts of operations they may perform on them.  Exceptional
requirements might occasionally dictate some relaxation of those
constraints, but that should also be 'ring-fenced' by in-process audit
checks and data protection measures to limit the possibility of abuse or
gross error.  Under this sort of regimen an insider might still be able to
achieve some undesired results, but the extent would almost always be much
less severe.  That even the DoD, remarked by some as being 'more able', and
willing, to enforce security, is not that disciplined was amply demonstrated
by Manning and Snowden, as well as others less publicized.  While I have
some sympathy for whistleblowers, what's relevant here is that they
demonstrated glaring deficiencies of access control in what were supposedly
extremely secure environments, as was noted in RISKS some time ago.  Note
that with proper hardware and OS protection, while a highly placed insider
might extend their reach somewhat, it should be impossible to cover their
tracks, or even prevent immediate alerts being made.

6.   Organizational issues:

As was noted obliquely in several posts, most individuals have limited
understanding of technology in general, and computer technology in
particular.  Even in an age of large professional organizations, and
numerous hobbyists and groups of enthusiasts, such technology is
sufficiently complicated, and complex, that even most professionals
understand only portions of the field in any depth.  Computer security, as a
medium-scale subfield of computing, until relatively recently a quite minor
subfield, is thus even less understood.

This extends to organizations.  Even many large organizations involved in
technology areas often lack what might be termed an organizational
understanding.  Moreover, many such organizations, especially at middle and
upper management levels, are populated largely by more senior people, only a
few of whom have any professional computer background, and are subject to
many budgetary, bureaucratic, and political pressures.  Additionally, the
degree of discipline applied in following the many regulatory and procedural
requirements varies considerably based on the available expertise,
leadership skills, and the dominant pressures at a given point in time.

The Ada policy noted in one of the posts was a classic example.  There were
binding DoD and implementing service regulations, thousands of pages of
training materials, and a plethora of literature and speakers advocating
Ada.  Nonetheless, quite a few acquisition programs didn't even pay lip
service to the Ada 'mandate', and several major DoD organizations
'standardized' on a completely different language, usually C.  To my
knowledge, no DoD official or officer in those programs or organizations was
ever reprimanded, much less disciplined, for failure to follow the standing
orders.

A more obvious example of DoD lack of aware leadership was demonstrated – at
the highest levels – to an even greater degree when the
'commercial-off-the-shelf', or COTS, mandate was extended to software and
computing services.  COTS, of course, works quite well with industries that
already have strong process and quality control.  The software industry, on
the other hand, represents nearly the opposite of those desiderata, although
enterprise software does somewhat better, and the major telecommunications
equipment providers do better still.  This decision, according to at least
one of the government folks cited in the GCN article on the previously
mentioned Aegis incident (3), was a political one.

(3)  bit.ly/1VXevju

Lastly, there are organizational approaches to better engineered code and
managed IT.  CMM (the Capability Maturity Model), ITIL (the Information
Technology Infrastructure Library), Six Sigma, and TOGAF (The Open Group
Architecture Framework) are examples.  There are several others.  I have
seen studies showing significant organizational improvement on several
metrics using CMM and Six Sigma.  I haven't seen any for ITIL or TOGAF,
though it may just be that, now retired, I'm no longer keeping current on
many such issues.  I had experience with DODAF (DoD Architecture Framework)
as a contractor employee for a DoD organization.  It has obvious potential,
but the learning curve can be steep, and even some of the courses available
seem oblivious to some of the subtler points of which one needs to be aware.
I worked with the earlier TAFIM (Technical Architecture Framework for
Information Management), and even developed two summaries of emerging
standards that they used for one of the updates, which may have helped me
better understand DODAF, when it emerged.  All of these do require
significant upfront investment from adopting organizations, but, at least
for CMM and Six Sigma, it has been shown that it can pay for itself over
time and lead to lasting improvement.

There are two lessons here.  One is that security must be mostly 'engineered
in'.  We can't depend on organizational procedures for its day-to-day
maintenance.  The other is that organizations need a well designed and
integrated set of policies and procedures to enable proper development of
systems, including security provisions.  This is a similar lesson to that
arising from thirty years of software development methodology literature,
i.e., that virtually any software development methodology implemented in a
disciplined and consistent fashion will improve code quality over that
delivered by ad hoc development.  There will always be exceptional, and
exceptionally disciplined, developers, but they are rare.  There will always
be reasons for exceptions to some provisions of a methodology in exceptional
circumstances, but they, too, are rare – rather more so than suggested by
those annoyed by any prescribed methodology.

7.  Specific replies:

Mark Thorson asked “...what if the trusted executable code is a javascript
or SQL interpreter?” Use 'sandbox' methods.  These may be implemented with
varying degrees of rigor, and with varying stringency of constraint.  At the
very least, one may build a parser that strictly enforces the applicable
syntax, and filter the statements for inappropriate command sequences before
delivery to the applicable interpreter or run-time.  This latter check could
be a good AI application.  Assuming suitable limitations on privileges that
code has inherited, it should be possible to strongly limit its ability to
cause mischief.
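
As a purely hypothetical illustration of that filtering step (not a
description of any particular product's mechanism), a minimal Python sketch
might pass only a narrow allow-list of single, read-only SQL statements
through to the interpreter:

  import re

  # Hypothetical pre-interpreter filter: only single SELECT statements that
  # match a narrow allow-list pattern are handed to the SQL engine; anything
  # else is rejected before the interpreter ever sees it.
  _ALLOWED = re.compile(
      r"^\s*SELECT\s+[\w\s,.*]+\s+FROM\s+\w+"
      r"(\s+WHERE\s+[\w\s=<>.'%-]+)?\s*;?\s*$",
      re.IGNORECASE)

  def vet_statement(stmt: str) -> str:
      if stmt.count(";") > 1 or not _ALLOWED.match(stmt):
          raise ValueError("statement rejected by sandbox filter")
      return stmt   # safer to hand to a suitably unprivileged interpreter

A query like "SELECT name FROM users WHERE id = 7" passes, while "DROP TABLE
users" or a stacked "SELECT x FROM t; DROP TABLE t" is rejected.  A real
deployment would use a proper parser rather than a regular expression, which
is exactly the strict syntax enforcement suggested above.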

Michael Marking said

   “Complete "security" isn't obtainable. We can only approach it, we
   can never truly reach it. We can't prove consistency in mathematics
   from within a system (Godel's theorem), we can't prove that a message
   was received without error (although we can be sure with vanishingly
   small probability of error), and there is no such thing as scientific
   "proof" (so the best we can do is a well-accepted theorem). It's all
   relative.”

The first sentence I've already addressed.  I prefer to focus on what we can
do.  We can prove that a theorem in mathematics is true (or not) with
respect to a system of axioms and other proved theorems, which gives formal
methods their utility.  There is scientific proof; it's just negative, i.e.,
we can prove that some provision of a scientific theory or hypothesis, or,
in rarer cases, the entire thing, is false.  That's what gives the
scientific method its utility.  It's all relative, yes.  But that relativity
decreases as we become able to identify and clearly define more of the
context.  That's what gives much of science, and many of the non-scientific
disciplines, their utility.

He also said:

   “Yes we could get very close to security if the system were open, but
   no system is completely open. We may get to observe the design
   process, but we don't have enough eyes to see it all the way to
   implementation and to manufacturing. Even if most actors are both
   ethical (whatever that means) and competent (again), there are always
   some bad actors. It's a long way from a specification to a product,
   and I can't completely trust that there are enough eyeballs to catch
   all of the mistakes and the mischief. How do I know that the circuit
   I bought from a supplier is the same one that was specified and
   designed?”

The last sentence I've already addressed.  Most of the rest is covered by
some of the generic comments.  Of course, if we don't know what 'ethical' or
'competent' mean in a given social and professional context, there's an
education problem – in the former case, philosophical, in the latter,
professional.  The decline of what used to be called a 'liberal education'
has been discussed at length in many books, papers and articles, and is
certainly beyond the scope of this already over-long reply.

Mr. Marking has a number of other excellent points I've addressed only
tangentially.  Here I'll only say that, in my perception, we allow, even
encourage, far too much coupling in our present systems.  Several recent
items in CACM have pointed out the problem with proliferation of web
development frameworks, and the associated library management problems that
have arisen.  I once helped build an expert system for software analysts.
Learning and having to describe specific desiderata for detecting excess
coupling and inadequate cohesion in code taught me far more than I had
previously realized was important, and why.  This is an engineering lesson
we have yet to learn.

Fred Cohen said, with regard to my remarks on consortium development:

   “The mechanism is called a Trusted Platform Module (TPM), and TPMs
   are present in enormous volume in commodity computers of all sorts.
   This is based on the Trusted Computing Group's efforts that stem from
   the work on cryptographic checksums for integrity protection of the
   1980s and on a lot of other related things.”

I have not reviewed the TPM specifications.  My understanding was shaped,
perhaps overly so, by Ross Anderson's several papers on the subject.  From that
perspective, the TPM appears to be largely aimed at DRM (digital rights
management).  It protects selected signed content from use not intended by
the originator, granting access only to applications and users approved by
the content originator.  It doesn't seem to address protection of OS or user
processes from subversion by malicious actors, and only protects user
content created within an approved user process under an approved
application.  I'll certainly look more closely at what TPM specification
data I can obtain to see where my impressions may be mistaken.  I'll also
look at the architecture site recommended – many thanks.

Fred Cohen also noted the possibility that “Two perfectly secure devices
(meeting some specification such as limited information flow) when connected
may violate the flow controls of each other, thus a combination of "secure"
things may produce an "insecure" thing.” But why?  There's a reason.  It may
be different security models, different protocols, different operation or
status code definitions, etc.  These are all things that could be diagnosed.
As with TCP, IP, and virtually all hardware devices, we'd need standards for
connections.  These might be hardware, software, or some combination.  Just
as we might now design a specification for a standard chip – a small special
purpose processor – to resolve all the current date issues, one that would
automatically convert from some base format, such as Julian (in the sense of
day and decimal from some base date) to any defined national calendar and
representation, as well as automatically handling leap seconds, so we should
have standards for security interactions, once we've built some at least
potentially secure hardware and software systems.
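
The calendar side of that analogy is well-trodden ground.  As a small worked
example (hypothetical, in Python, using the standard Fliegel-Van Flandern
integer algorithm), the conversion from a Julian day number to a Gregorian
date is fixed integer arithmetic of exactly the sort one could freeze into a
standard component:

  # Fliegel-Van Flandern algorithm: Julian day number -> Gregorian (y, m, d).
  # Pure integer arithmetic; shown only to illustrate how small and fixed the
  # conversion logic the text imagines standardizing really is.
  def jdn_to_gregorian(jdn: int) -> tuple:
      l = jdn + 68569
      n = 4 * l // 146097
      l = l - (146097 * n + 3) // 4
      i = 4000 * (l + 1) // 1461001
      l = l - 1461 * i // 4 + 31
      j = 80 * l // 2447
      d = l - 2447 * j // 80
      l = j // 11
      m = j + 2 - 12 * l
      y = 100 * (n - 49) + i + l
      return (y, m, d)

  print(jdn_to_gregorian(2451545))   # (2000, 1, 1)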

Most of Mr. Stanley's remarks are addressed in the above generic comments.
He does mention use of AI monitors.  That's an even better approach to
monitoring alerts and warnings, though in either application the system will
probably need to be tailored, or to 'learn', the patterns of the given
monitored processes in the organizational context.

Many of Mr. Macintyre's comments address organizational issues.  These were
largely addressed in my generic remarks.  Here I would add that the
organization discipline called for by CMM or ITIL could remedy these to some
extent by educating the organization as a whole on why some things need to
be done in specific ways, and how.  Of course, as he implies, organizations
contracted for support functions need to understand these things as well,
and the contracting organization needs to see that they do.  Any 'urgent'
project can lead to oversights, but they're less likely when all parties
understand the need for process and its attendant constraints.  As with
basic security, this sort of thing should be 'under the hood', as part of a
workflow or ERP system.  At least we'd then have some notice when shortcuts
were taken, and could take measures to correct the situation or mitigate the
effects.

I don't agree with Mr. Macintyre, or Mr. Thorson, that it's too late to
change.  It will certainly take time and resources, and necessarily be
incremental.  Some changes, like the increasing application of formal
methods, some of the hardware mentioned by Mr. Stanley, and the CHERI
project cited by PGN, are already appearing.  I guess my argument is that
there should be a more concentrated effort to track results of these
projects, arrange pilot deployments, note problems and propose and evaluate
solutions, etc.  Right now things are so piecemeal that many innovations may
sink with no record if they don't get immediate support.  At the very least,
a central (virtual) repository for saving papers and reports on such efforts
should exist.

Mr. Black's comment on multiple iterations is valid, and I agree that the
efforts need to be incremental, as illustrated by the CHERI project.  It's
not necessarily the case that new instruction sets, compilers, etc., will
be needed, as illustrated, again, by CHERI.  I suspect that even with a full
security kernel, only modest instruction set changes would be required, with
correspondingly modest changes to software development tools, with some
recompilation and redistribution of applications.  Again, all possible as
incremental changes.

Mr. Black's last comment on the remaining software work to be done is
certainly partially true.  On the other hand, as I've noted, with even
modest hardware modifications, many of the problems he notes can be put
'under the hood'.  And developing better security software, and security
aware applications, will be much facilitated, and the software
correspondingly simplified, with (relatively) robust hardware mechanisms.
The TPM, for all the faults it may have, demonstrates that the industry can
work together to design and build (and deploy) hardware and software
mechanisms for something perceived as a desirable, and potentially
profitable, goal.

Mr. Torzewski noted, as did others, the superior memory management of
earlier systems.  He was more specific regarding one of the systems he had
used, where “...memory management which performed bound checking on areas
of store to which access was requested, imposed by hardware, and thus at
hardware speed (no additional processor cycles), as addresses all had a size
component where more than a word (or byte, or quadword etc.) was requested.
All extant languages were amenable to being compiled to comply with this
safety feature.  No possibility of buffer overrun!  It was also not possible
to mix memory pages containing code, and those containing data, via an
execute permission bit.”  This is an example of an incremental change,
clearly possible in the context of a TPM-like mechanism, which has already
been implemented in the past.  It is, in other words, already proven
technology, which CHERI appears to more than replicate.
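
As a toy software analogy (hypothetical, and only an analogy for what such
hardware, or CHERI's capabilities, enforce directly), every reference can
carry its own base and length, with any access outside that range trapped
rather than silently overrunning:

  # Toy analogy for hardware-enforced bounds checking: a reference carries
  # base and length, and any out-of-range access traps instead of overrunning.
  class BoundedRef:
      def __init__(self, memory: bytearray, base: int, length: int):
          self.memory, self.base, self.length = memory, base, length

      def load(self, offset: int) -> int:
          if not 0 <= offset < self.length:
              raise MemoryError("bounds violation trapped")
          return self.memory[self.base + offset]

      def store(self, offset: int, value: int) -> None:
          if not 0 <= offset < self.length:
              raise MemoryError("bounds violation trapped")
          self.memory[self.base + offset] = value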
