The RISKS Digest
Volume 27 Issue 51

Tuesday, 8th October 2013

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator

Please try the URL privacy information feature enabled by clicking the flashlight icon above. This will reveal two icons after each link in the body of the digest. The shield takes you to a breakdown of Terms of Service for the site; however, only a small number of sites are covered at the moment. The flashlight takes you to an analysis of the various trackers etc. that the linked site delivers. Please let the website maintainer know whether you find this useful. As a RISKS reader, you will probably not be surprised by what is revealed…

Contents

Cyber Schools Fleece Taxpayers for Phantom Students and Failing Grades
Mary Bottari
Our Founding Fathers wisely recognized the risks in voting
Paul Robinson
"Beyond the bottom line: The true cost of patent trolls"
Serdar Yegulalp via Gene Wirchenko
Risks of politics
Chris Adams
Lowering Your Standards: DRM and the Future of the W3C
Danny O'Brien via Dewayne Hendricks
Why you can't stop checking your phone
Monty Solomon
Silk Road's founder arrested
PGN
Technologists' Comment to the NSA Review Group
Joseph Lorenzo Hall
Re: Cost and Responsibility for Snowden's Breaches
Robert R. Fenichel
Bruce Schneier: NSA attacks Internet
via Steven J. Greenwald
Mugged by a Mug Shot Online
David Segal via Matthew Kruk
Adobe Announces Security Breach
David Kocieniewski via Matthew Kruk
Notarizations Go Digital in North Carolina
Gabe Goldberg
Info on RISKS (comp.risks)

Cyber Schools Fleece Taxpayers for Phantom Students and Failing Grades (Mary Bottari)

"Peter G. Neumann" <neumann@csl.sri.com>
Thu, 3 Oct 2013 14:02:08 PDT
Mary Bottari, PRWatch: In recent years, there has been an explosion of
full-time `virtual' charter schools paid for by the taxpayer. K12 Inc. and
ALEC have pushed a national agenda to replace brick-and-mortar classrooms
and hands-on teaching with computers and "distance learning."
http://www.prwatch.org/news/2013/10/12257/junk-bonds-junk-schools-cyber-schools-fleece-taxpayers-phantom-students-and-faili


Our Founding Fathers wisely recognized the risks in voting

Paul Robinson <paul@paul-robinson.us>
Mon, 7 Oct 2013 11:57:41 -0700 (PDT)
Every two years I become a county employee, for two days: I work at the
polling place as an election judge for the Primary election and the General
election. I'm one of those petty bureaucrats who make you stand in line for
hours and prevent you from voting. All kidding aside, I'm proud of the fact
that at our little polling place, even under heavy loads of people showing
up, 95% of the time people are inside, already checked in, and simply
waiting to use one of the voting machines.

So, anyway, I point out to the other election judges the true reasons why
the United States usually holds elections on "The First Tuesday after the
First Monday" in the month of the election. Consider that when our
forefathers set up this country, they didn't let everyone who was over 21
and had a pulse vote (or even without one, if you live in Chicago); you had
to be a wealthy landholder, which almost always meant you were a rich, white
male. Well, that usually meant you had a household to run, and if the
election fell on the first of the month, you might not have time to saddle
your horse or hitch up your horse and buggy and ride for hours all the way
to the polls; it was a long trek, and you had bills to pay and accounts to
settle. So they wanted the election not to be on the 1st of the month.

Now, they also didn't want the election falling on Monday. After a weekend
of binge drinking - especially on Sunday, when, if you weren't binge
drinking, you had to stay awake in church while the preacher gave his dull,
boring sermon, and if you stayed awake long enough you might realize just
how dull and boring he was and pull out your ball-and-powder pistol and
plug him just for the fun of it, or to get some excitement in your life -
you'd be so hung over Monday morning that you wouldn't be able to see
straight, let alone vote. So that's why elections are held on Tuesday: so
that hopefully you'd be sober and not suffering too much from a hangover.

Those founding fathers were no dummies.

Speaking of plugging people, that's also the reason Wyoming was the first
state to give women the right to vote. A couple of thousand years ago, the
way you moved from slave or peon to citizen in Imperial Rome was that you
raised enough money to afford a sword and shield; you could then show a
blade to anyone who dared try to enslave you.

Well, out west, women had to go armed. On the East Coast, this was rare; out
west, there were lots of threats from rattlesnakes, both the ones with a
tail and the ones on two legs; the best protection against sexual assault
was a Colt .45, of which it was said, "All men were created equal, but
Samuel Colt allowed them to stay that way," and women going armed also made
them equal to any man.

So, a woman would spend several hours getting to the polling place, walk up
to someone like me, rest her hands on her pistols, and declare that she
wanted to vote. That registrar was not going to tell some woman who was
packing heat that she had come out there for nothing; he'd be liable to
become a permanent resident of Boot Hill. So, being armed gave women the
power to vote, which is usually how people have gotten the power to vote:
by having the power to use violence if they were denied it.

Paul Robinson <paul@paul-robinson.us> - http://paul-robinson.us


"Beyond the bottom line: The true cost of patent trolls" (Serdar Yegulalp)

Gene Wirchenko <genew@telus.net>
Fri, 04 Oct 2013 12:54:50 -0700
Serdar Yegulalp | InfoWorld, 4 Oct 2013
Beyond the bottom line: The true cost of patent trolls
The costs of patent trolling often aren't directly visible, but the
benefits of fighting back are coming to light
http://www.infoworld.com/t/intellectual-property/beyond-the-bottom-line-the-true-cost-of-patent-trolls-228170


Risks of politics

Chris Adams <chris@improbable.org>
Tue, 1 Oct 2013 19:43:42 -0400
I noticed a fair amount of discussion today from people dealing with key
resources disappearing from government servers:

* XML parsers choking on documents which use a DTD maintained by a
  government organization:

https://plus.google.com/u/0/106980687849423472398/posts/b1ubKCPJmy7

(A recurring RISK of an XML implementation anti-pattern: see the 2008 W3C
post about the DoS effect of owning popular XML standards:
http://www.w3.org/blog/systeam/2008/02/08/w3c_s_excessive_dtd_traffic/)

* Dan Chudnov and Becky Yoose created mirrors of key Library of Congress
  datasets:
  https://twitter.com/dchud/status/385061127010668544
  https://twitter.com/yo_bj/status/385070010965585921

* Ed Summers, a coworker at LC, posted about Linked Open Data
  disappearing and various challenges in distributed replication:
  http://inkdroid.org/journal/2013/09/30/preserving-linked-data/

* Code for America mirrored various census.gov datasets:
  http://forever.codeforamerica.org/Census-API/shutdown-2013.html

* The Sunlight Foundation had a good general post talking about how
  they are impacted by canonical sources disappearing:
http://sunlightfoundation.com/blog/2013/09/30/what-happens-to-gov-in-a-shutdown/
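The XML failure mode in the first item above has a simple defense: never
let a parser dereference a schema URL at run time.  A minimal Python
sketch (the DTD URL is hypothetical), using the standard library, whose
expat-based parser skips the external DTD subset rather than fetching it:

```python
import io
import xml.etree.ElementTree as ET

# A document whose DOCTYPE points at a remote DTD hosted on a server
# that may be shut down (the URL here is hypothetical).
doc = (
    '<?xml version="1.0"?>\n'
    '<!DOCTYPE report SYSTEM "http://example.gov/schemas/report.dtd">\n'
    '<report>shutdown-proof</report>'
)

# ElementTree does not dereference external DTDs, so parsing succeeds
# with no network access even when the schema host is unreachable.
root = ET.parse(io.StringIO(doc)).getroot()
print(root.tag, root.text)  # report shutdown-proof
```

Parsers that insist on resolving the DTD over the network (the
anti-pattern in the W3C post above) should be pointed at a local
catalog or mirror instead, so a remote outage cannot break parsing.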

The meta-risk here is assuming that any single point can be too big to
fail.  Government agencies commonly put considerable resources into making
sure that a disaster won't take them offline, but it's impossible to deal
with a political crisis which prevents you from incurring any new expense
whatsoever.  That's where the more interesting questions for designing
resilient systems come up, because concerns like replication and trust
really need to be treated as first-class requirements.  Using a protocol
like BitTorrent could solve many of the challenges of widely replicating
large datasets, but it would still require some sort of attestation
protocol to decide which copies are trustworthy.
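On the attestation point, a rough sketch of the idea (all names and data
here are hypothetical): the canonical agency publishes a digest manifest
once, over an authenticated channel, and a consumer can then verify a
copy obtained from any untrusted mirror or torrent swarm:

```python
import hashlib

# Hypothetical: the agency publishes this manifest once, over an
# authenticated channel (signed, or served from the canonical domain
# while it is still up).  Mirrors then need not be trusted at all.
MANIFEST = {
    "census-2013.csv": hashlib.sha256(b"county,population\n...").hexdigest(),
}

def verify_copy(name: str, blob: bytes) -> bool:
    """Check a blob fetched from an untrusted mirror against the
    digest pinned in the canonical manifest."""
    want = MANIFEST.get(name)
    return want is not None and hashlib.sha256(blob).hexdigest() == want

# A faithful mirror copy verifies; a tampered one does not.
print(verify_copy("census-2013.csv", b"county,population\n..."))  # True
print(verify_copy("census-2013.csv", b"county,population\n!!!"))  # False
```

BitTorrent's piece hashes already provide this integrity check within a
swarm; the open question is who vouches for the infohash itself, which
is what a signed manifest from the canonical source would address.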

One more technical aspect is the benefit of reusing standard Web
technologies where possible: much of the data that is currently down can
be recovered from the Internet Archive if it was easily accessible to
crawlers.  An unlinked API, or anything behind access control, will fail
much harder.


Lowering Your Standards: DRM and the Future of the W3C (Danny O'Brien)

"Dewayne Hendricks" <dewayne@warpspeed.com>
Oct 3, 2013 3:42 PM
Danny O'Brien, EFF, 2 Oct 2013 (via Dave Farber)
<https://www.eff.org/deeplinks/2013/10/lowering-your-standards>

On Monday, the W3C announced that its Director, Tim Berners-Lee, had
determined that the "playback of protected content" was in scope for the W3C
HTML Working Group's new charter, overriding EFF's formal objection against
its inclusion. This means the controversial Encrypted Media Extension (EME)
proposal will continue to be part of that group's work product, and may be
included in the W3C's HTML5.1 standard. If EME goes through to become part
of a W3C recommendation, you can expect to hear DRM vendors, DRM-locked
content providers like Netflix, and browser makers like Microsoft, Opera,
and Google stating that they can now offer W3C standards compliant "content
protection" for Web video.

We're deeply disappointed. We've argued before as to why EME and other
protected media proposals are different from other standards. By approving
this idea, the W3C has ceded control of the "user agent" (the term for a Web
browser in W3C parlance) to a third-party, the content distributor.  That
breaks a—perhaps until now unspoken—assurance about who has the final
say in your Web experience, and indeed who has ultimate control over your
computing device.

EFF believes that's a dangerous step for an organization that is seen by
many as the guardian of the open Web to take. We have rehashed this argument
many times before, in person with Tim Berners-Lee, with staff members and,
along with hundreds of others, in online interactions with the W3C's other
participants.

But there's another argument that we've made more privately. It's an
argument that is less about the damage that sanctioning restricted media
does to users, and more about the damage it will do to the W3C.

At the W3C's advisory council meeting in Tokyo, EFF spoke to many
technologists working on Web standards. It's clear to us that the
engineering consensus at the consortium is the same as within the Web
community, which is the same almost anywhere else: that DRM is a pain to
design, does little to prevent piracy, and is by its nature user-unfriendly.
Nonetheless, many technologists have resigned themselves to believing that
until the dominant rights holders in Hollywood finally give up on it (as
much of the software and music industry already has), we're stuck with
implementing it.

The EME, they said, was a reasonable compromise between what these contracts
demand, and the reality of the Web. A Web where movies are fenced away in
EME's DRM-ridden binary blobs is, the W3C's pragmatists say, no worse than
the current environment where Silverlight and Flash serve the purpose of
preventing unauthorized behavior.

We pointed out that EME would by no means be the last "protected content"
proposal to be put forward for the W3C's consideration. EME is exclusively
concerned with video content, because EME's primary advocate, Netflix, is
still required to wrap some of its film and TV offerings in DRM as part of
its legacy contracts with Hollywood. But there are plenty of other
rightsholders beyond Hollywood who would like to impose controls on how
their content is consumed.

Just five years ago, font companies tried to demand DRM-like standards for
embedded Web fonts. These Web typography wars fizzled out without the
adoption of these restrictions, but now that such technical restrictions are
clearly "in scope," why wouldn't typographers come back with an argument for
new limits on what browsers can do?

Indeed, within a few weeks of EME hitting the headlines, a community group
within W3C formed around the idea of locking away Web code, so that Web
applications could only be executed but not examined online. Static image
creators such as photographers are eager for the W3C to help lock down
embedded images. Shortly after our Tokyo discussions, another group proposed
their new W3C use-case: "protecting" content that had been saved locally
from a Web page from being accessed without further restrictions.
Meanwhile, publishers have advocated that HTML textual content should have
DRM features for many years.

In our conversations with the W3C, we argued that the W3C needed to develop
a clearly defined line against the wave of DRM systems it will now be
encouraged to adopt. ...

Dewayne-Net RSS Feed: <http://dewaynenet.wordpress.com/feed/>


Why you can't stop checking your phone

Monty Solomon <monty@roscom.com>
Mon, 7 Oct 2013 10:38:14 -0400
To fight texting and driving means confronting a bigger problem, say
experts: our technology is reprogramming us.

Leon Neyfakh, *The Boston Globe*, 6 Oct 2013

DRIVE FOR LONG ENOUGH in America, and you're bound to see someone texting
behind the wheel. Maybe it'll be the guy ahead of you, his head bobbing up
and down as he tries to balance his attention between his screen and his
windshield. Or maybe it'll be the woman weaving into your lane, thumbing at
her phone while she holds it above the dashboard. Maybe it'll be you.

A recent study by the Virginia Tech Transportation Institute showed that
drivers who are texting are twice as likely to crash, or almost crash, as
those who are focused on the road. It's a disturbingly common habit:
According to a survey analyzed by the Centers for Disease Control and
Prevention, nearly one-third of American adults had e-mailed or texted on
their phones while driving at least once during the previous month. And
while most get away with it unscathed, many do not. The National Safety
Council estimates that 213,000 car crashes in the United States in 2011
involved drivers who were texting, up from 160,000 the year before. ...

http://www.bostonglobe.com/ideas/2013/10/06/why-you-can-stop-checking-your-phone/rrBJzyBGDAr1YlEH5JQDcM/story.html


Silk Road's founder arrested

"Peter G. Neumann" <neumann@csl.sri.com>
Wed, 2 Oct 2013 13:57:03 PDT
Ross William Ulbricht (a.k.a. Dread Pirate Roberts, a name well known from
The Princess Bride) arrested in San Francisco, charged with conspiracy to
distribute narcotics, computer hacking, money laundering, after at least two
years of running an online black market for drugs.  Silk Road is a service
hidden by Tor's anonymization and operated as an "eBay for illegal goods and
services."  CNN reports that he had successfully been using Tor, but that
his apprehension resulted after he posted his Gmail address online!

  [In the movie, Dread Pirate Roberts was not a unique identity, with
  multiple people taking on that alias.  It is of course conceivable that
  there might be other DPRs lurking.  PGN]

Tim Hume, CNN, 5 Oct 2013
http://www.cnn.com/2013/10/04/world/americas/silk-road-ross-ulbricht/index.html

Thomas Claburn, Information Week, 2 Oct 2013
http://www.informationweek.com/security/vulnerabilities/silk-road-founder-arrested/240162165

Brian Krebs on Security
http://www.krebsonsecurity.com [no longer the first item.  PGN]


Technologists' Comment to the NSA Review Group

<Joseph Lorenzo Hall>
Saturday, October 5, 2013
EFF and CDT submitted a public comment to the NSA Review Group on behalf of
47 leading technologists (including yourself, of course!). It emphasizes the
need for technical input into oversight processes, that the NSA is making
everyone unsafe by planting backdoors and subverting encryption, and,
finally, that U.S. commitments to privacy and civil liberties require
honoring the human rights of non-U.S. people online.

CDT blog post: https://www.cdt.org/tech-comment-nsa-review

EFF blog post:
https://www.eff.org/deeplinks/2013/10/47-prominent-technologists-nsa-review-panel-we-need-better-technical-oversight

PDF of comment:
https://www.cdt.org/files/pdfs/nsa-review-panel-tech-comment.pdf

CDT and EFF coordinated this effort on behalf of the following 47
technologists:

Ben Adida
Ross Anderson, University of Cambridge
Dan Auerbach, Electronic Frontier Foundation
Brian Behlendorf, Board Member at EFF, Mozilla, and Benetech
Steven M. Bellovin, Columbia University
Matt Blaze, University of Pennsylvania
Scott Bradner, Harvard University
Eric Burger, Georgetown University
L. Jean Camp, Indiana University
Stephen Checkoway, Johns Hopkins University
Nicolas Christin, Carnegie Mellon University
Alissa Cooper, Center for Democracy & Technology
Lorrie Faith Cranor, Carnegie Mellon University
Nick Doty, University of California, Berkeley/World Wide Web Consortium
Jeremy Epstein, SRI International
David Evans, University of Virginia
David Farber, Carnegie Mellon University/University of Pennsylvania
Stephen Farrell, Trinity College Dublin
Joan Feigenbaum, Yale University
Edward W. Felten, Princeton University
Bryan Ford, Yale University
Daniel Kahn Gillmor
Matthew D. Green, Johns Hopkins University
J. Alex Halderman, University of Michigan
Joseph Lorenzo Hall, Center for Democracy & Technology
James Hendler, Rensselaer Polytechnic Institute
Nadia Heninger, University of Pennsylvania
David Jefferson, Lawrence Livermore National Laboratory
Micah Lee, Electronic Frontier Foundation
Morgan Marquis-Boire, Citizen Lab, University of Toronto
Siobhan MacDermott, AVG Technologies
Jonathan Mayer, Stanford University
Sascha Meinrath, Open Technology Institute, New America Foundation
Peter G. Neumann, SRI International
M. Chris Riley, Mozilla
Phillip Rogaway, University of California, Davis
Runa A. Sandvik, Independent Researcher
Jeffrey I. Schiller, Massachusetts Institute of Technology
Bruce Schneier, BT
Seth Schoen, Electronic Frontier Foundation
Micah Sherr, Georgetown University
Christopher Soghoian, American Civil Liberties Union
Ashkan Soltani, Independent Researcher
Brad Templeton, Electronic Frontier Foundation/Singularity University
Dan S. Wallach, Rice University
Nicholas Weaver, International Computer Science Institute
Philip Zimmermann, Silent Circle LLC

https://josephhall.org/


Re: Cost and Responsibility for Snowden's Breaches (RISKS 27.50)

"Robert R. Fenichel" <bob@fenichel.net>
Tue, 01 Oct 2013 15:14:53 -0700
Jonathan S. Shapiro's statement that "The only questions were *who* would
leak it and *how soon*.  It happened to be Snowden, but if not for Snowden
it would have been somebody else." is optimistic.  Snowden was the first
one to reveal some of NSA's activities to responsible journalists, but
there's no reason to believe that there were not earlier leakers of the
NSA-induced vulnerabilities, leaking to less savory recipients than the
NYT and the Guardian.

Robert R. Fenichel, M.D.  http://www.fenichel.net


Bruce Schneier: NSA attacks Internet (*The Guardian*)

"Steven J. Greenwald" <sjg6@gate.net>
Tue, 8 Oct 2013 08:47:03 -0400 (GMT-04:00)
Bruce Schneier, theguardian.com, 4 October 2013
Why the NSA's attacks on the Internet must be made public
By reporting on the agency's actions, the vulnerabilities in our computer
systems can be fixed. It's the only way to force change.
http://www.theguardian.com/commentisfree/2013/oct/04/nsa-attacks-internet-bruce-schneier

Today, the Guardian is reporting on how the NSA targets Tor users, along
with details of how it uses centrally placed servers on the Internet to
attack individual computers. This builds on a Brazilian news story from last
week that, in part, shows that the NSA is impersonating Google servers to
users; a German story on how the NSA is hacking into smartphones; and a
Guardian story from two weeks ago on how the NSA is deliberately weakening
common security algorithms, protocols, and products.

The common thread among these stories is that the NSA is subverting the
Internet and turning it into a massive surveillance tool. The NSA's actions
are making us all less safe, because its eavesdropping mission is degrading
its ability to protect the US.

Among IT security professionals, it has been long understood that the public
disclosure of vulnerabilities is the only consistent way to improve
security. That's why researchers publish information about vulnerabilities
in computer software and operating systems, cryptographic algorithms, and
consumer products like implantable medical devices, cars, and CCTV cameras.

It wasn't always like this. In the early years of computing, it was common
for security researchers to quietly alert the product vendors about
vulnerabilities, so they could fix them without the "bad guys" learning
about them. The problem was that the vendors wouldn't bother fixing them, or
took years before getting around to it. Without public pressure, there was
no rush.

This all changed when researchers started publishing. Now vendors are under
intense public pressure to patch vulnerabilities as quickly as possible. The
majority of security improvements in the hardware and software we all use
today is a result of this process. This is why Microsoft's Patch Tuesday
process fixes so many vulnerabilities every month. This is why Apple's
iPhone is designed so securely. This is why so many products push out
security updates so often. And this is why mass-market cryptography has
continually improved. Without public disclosure, you'd be much less secure
against cybercriminals, hacktivists, and state-sponsored cyberattackers.

The NSA's actions turn that process on its head, which is why the security
community is so incensed. The NSA not only develops and purchases
vulnerabilities, but deliberately creates them through secret vendor
agreements.  These actions go against everything we know about improving
security on the Internet.

It's folly to believe that any NSA hacking technique will remain secret for
very long. Yes, the NSA has a bigger research effort than any other
institution, but there's a lot of research being done—by other
governments in secret, and in academic and hacker communities in the open.
These same attacks are being used by other governments. And technology is
fundamentally democratizing: today's NSA secret techniques are tomorrow's
PhD theses and the following day's cybercrime attack tools.

It's equal folly to believe that the NSA's secretly installed backdoors will
remain secret. Given how inept the NSA was at protecting its own secrets,
it's extremely unlikely that Edward Snowden was the first sysadmin
contractor to walk out the door with a boatload of them. And the previous
leakers could have easily been working for a foreign government. But it
wouldn't take a rogue NSA employee; researchers or hackers could discover
any of these backdoors on their own.

This isn't hypothetical. We already know of government-mandated backdoors
being used by criminals in Greece, Italy, and elsewhere. We know China is
actively engaging in cyber-espionage worldwide. A recent Economist article
called it "akin to a government secretly commanding lockmakers to make their
products easier to pick—and to do so amid an epidemic of burglary."

The NSA has two conflicting missions. Its eavesdropping mission has been
getting all the headlines, but it also has a mission to protect US military
and critical infrastructure communications from foreign attack.
Historically, these two missions have not come into conflict. During the
cold war, for example, we would defend our systems and attack Soviet
systems.

But with the rise of mass-market computing and the Internet, the two
missions have become interwoven. It becomes increasingly difficult to attack
their systems and defend our systems, because everything is using the same
systems: Microsoft Windows, Cisco routers, HTML, TCP/IP, iPhones, Intel
chips, and so on. Finding a vulnerability—or creating one—and keeping
it secret to attack the bad guys necessarily leaves the good guys more
vulnerable.

Far better would be for the NSA to take those vulnerabilities back to the
vendors to patch. Yes, it would make it harder to eavesdrop on the bad guys,
but it would make everyone on the Internet safer. If we believe in
protecting our critical infrastructure from foreign attack, if we believe in
protecting Internet users from repressive regimes worldwide, and if we
believe in defending businesses and ourselves from cybercrime, then doing
otherwise is lunacy.

It is important that we make the NSA's actions public in sufficient detail
for the vulnerabilities to be fixed. It's the only way to force change and
improve security.

  Bruce Schneier writes about security, technology, and people. His latest
  book is Liars and Outliers: Enabling the Trust That Society Needs to
  Thrive.  He is a member of the EFF's board of directors.


Mugged by a Mug Shot Online (David Segal)

"Matthew Kruk" <mkrukg@gmail.com>
Sun, 6 Oct 2013 02:13:54 -0600
David Segal, *The New York Times*, 5 Oct 2013
http://www.nytimes.com/2013/10/06/business/mugged-by-a-mug-shot-online.html?nl=todaysheadlines&emc=edit_th_20131006&_r=0

  [This is a relatively long but remarkably well researched article on how
  mug shots persist even if the would-be culprit is exonerated, how a
  blackmail-like industry has developed to supposedly remove mug shots, how
  some real perps are able to get their mug shots removed, and much more.
  It is mandatory reading for RISKS readers.  PGN]


Adobe Announces Security Breach (David Kocieniewski)

"Matthew Kruk" <mkrukg@gmail.com>
Fri, 4 Oct 2013 02:00:15 -0600
David Kocieniewski, *The New York Times*, 3 Oct 2013
http://www.nytimes.com/2013/10/04/technology/adobe-announces-security-breach.html?nl=todaysheadlines&emc=edit_th_20131004&_r=0

Hackers infiltrated the computer system of the software company Adobe,
gaining access to credit card information and other personal data from 2.9
million of its customers, the company acknowledged on Thursday.

The security breach, which Adobe called a part of a "sophisticated attack,"
also allowed hackers to obtain encrypted passwords and other personal
information from customers.

Hackers also illegally took copies of the source code of some of the
company's widely used products, which run on personal computers and
business servers around the world.

There was no indication that the attackers obtained unencrypted credit card
numbers, Adobe said in a statement. As a precaution, however, the company
said it had notified customers and credit card companies about the breach
and reset customer passwords to prevent further unauthorized access.

"Cyberattacks are one of the unfortunate realities of doing business today,"
Adobe's chief security officer, Brad Arkin, wrote in a blog post on
Thursday.  "Given the profile and widespread use of many of our products,
Adobe has attracted increasing attention from cyberattackers."

The breach at Adobe is one of a recent spate of hacking episodes at
prominent organizations. Already this year, hackers have infiltrated
database aggregators like Lexis-Nexis and Dun & Bradstreet and the security
firm Kroll Associates, as well as the National White Collar Crime Center,
which helps businesses protect their computer systems.

Concerns about the security of data at Adobe were first raised last week,
when a technology researcher and an independent journalist investigating the
hacking episodes discovered copies of Adobe source code on a server that was
believed to have been used in the previous attacks. Brian Krebs, the
journalist, informed Adobe about his findings, and on Thursday publicly
reported the hacking on his site, krebsonsecurity.com.

One of the products that had its source code stolen is ColdFusion, which,
according to Adobe, is used by the United States Senate, 75 of the Fortune
100 companies and more than 10,000 other companies worldwide.

Adobe security officials said they were not aware of any specific risks to
customers. But because the source code contains the DNA of the software
program, computer experts said it could allow hackers to find and exploit
any other potential weaknesses in its security.


Notarizations Go Digital in North Carolina

Gabe Goldberg <gabe@gabegold.com>
Tue, 01 Oct 2013 17:41:09 -0400
North Carolina standardized a new process that allows for notarization to be
completed electronically. Some state officials are calling the program the
first and most robust move to e-notarizations on a statewide level.

Notarizations are traditionally completed manually with paper documents and
require an authorized "notary public" to approve, sign and seal official
documents—often for legal purposes. Laws require that when documents are
notarized, both the notary public and the parties involved with the
documents are physically present for the notarization.

Although notarizations can now be completed electronically in North
Carolina, often with the help of laptops or other mobile devices, the state
still requires that both the notary and involved parties be physically
present when the e-notarization transaction is completed. The move to
digital notarizations was spearheaded by the Secretary of State's office and
its current Secretary Elaine Marshall to enhance the signature value on
official documents while still upholding statewide standards.

http://www.govtech.com/computing/Notarizations-Go-Digital-in-North-Carolina.html

  Aside from the usual "What could go wrong?" query, it's not really clear
  what's happening, since notary and involved parties must still be
  physically present. So only the documents can be electronic? Not a word
  about how they're presented, authenticated, protected, or safeguarded
  against changes...
