The RISKS Digest
Volume 27 Issue 88

Monday, 5th May 2014

Forum on Risks to the Public in Computers and Related Systems

ACM Committee on Computers and Public Policy, Peter G. Neumann, moderator


Contents

U-2 Fries Air Traffic Control Computers, Shuts Down LAX
Andrew Blankstein via Henry Baker
URL problem for IE users on "Lessons from the ACM Risks Forum"
Yan Timanovsky via PGN
A student data collector drops out
Marc Rotenberg & Khaliah Barnes via PGN
Tech companies get less silent about government data collection
Serdar Yegulalp via Gene Wirchenko
Everyone Is Under Surveillance Now
*The Guardian*
Get Ready for Regulators to Peer Into Your Portfolio
Jason Zweig via Henry Baker
"This is why companies are still afraid of the cloud"
David Linthicum via Gene Wirchenko
Eggs in one basket with a Three dongle
Chris J Brady
Danish gossip magazine steals credit-card transaction information
Donald B. Wagner
"Heartbleed postmortem: OpenSSL's license discouraged scrutiny"
Simon Phipps via Gene Wirchenko
Re: heartbleed
Ivan Jager
Re: credit card fraud
Dimitri Maziuk
Re: The risks of garbage collection delays
Henry Baker
Michael Kohne
David B. Horvath
Info on RISKS (comp.risks)

U-2 Fries Air Traffic Control Computers, Shuts Down LAX

Henry Baker <hbaker1@pipeline.com>
Sat, 03 May 2014 18:45:06 -0700
What will happen when the new Google wifi drones start flying?  (I also
wonder if L.A.'s 1950's Nike missile sites in the Santa Monica mountains
went on high alert?)   HB

http://www.nbcnews.com/news/investigations/spy-plane-fries-air-traffic-control-computers-shuts-down-lax-n95886

Andrew Blankstein, NBC News, 2 May 2014
Spy Plane Fries Air Traffic Control Computers, Shuts Down LAX

A relic from the Cold War appears to have triggered a software glitch at a
major air traffic control center in California Wednesday that led to delays
and cancellations of hundreds of flights across the country, sources
familiar with the incident told NBC News.

On Wednesday at about 2 p.m., according to sources, a U-2 spy plane, the
same type of aircraft that flew high-altitude spy missions over Russia 50
years ago, passed through the airspace monitored by the L.A. Air Route
Traffic Control Center in Palmdale, Calif.  The L.A. Center handles landings
and departures at the region's major airports, including Los Angeles
International (LAX), San Diego and Las Vegas.

The computers at the L.A. Center are programmed to keep commercial airliners
and other aircraft from colliding with each other.  The U-2 was flying at
60,000 feet, but the computers were attempting to keep it from colliding
with planes that were actually miles beneath it.

Though the exact technical causes are not known, the spy plane's altitude
and route apparently overloaded a computer system called ERAM, which
generates display data for air-traffic controllers.  Back-up computer
systems also failed.
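
Purely as a toy illustration (in Python, with every name and number
invented, and in no way ERAM's actual logic): if one aircraft's altitude is
unusable, a pairwise separation check has to flag that aircraft against
every other flight at every level, so a single odd flight plan can balloon
the workload.

  import random

  SEPARATION_FT = 1000  # toy vertical-separation requirement

  def potential_conflicts(flights):
      """Count aircraft pairs flagged for conflict resolution.
      altitude=None models a flight plan with no usable altitude."""
      flagged = 0
      for i, (_, alt_a) in enumerate(flights):
          for _, alt_b in flights[i + 1:]:
              if alt_a is None or alt_b is None:
                  flagged += 1          # unknown altitude: must assume conflict
              elif abs(alt_a - alt_b) < SEPARATION_FT:
                  flagged += 1
      return flagged

  random.seed(1)
  traffic = [("AC%03d" % i, random.randrange(5000, 40000, 100))
             for i in range(200)]
  print(potential_conflicts(traffic))                    # baseline workload
  print(potential_conflicts(traffic + [("U2", None)]))   # one altitude-less plan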

As a result, the Federal Aviation Administration (FAA) had to stop accepting
flights into airspace managed by the L.A. Center, issuing a nationwide
ground stop that lasted for about an hour and affected thousands of
passengers.

At LAX, one of the nation's busiest airports, there were 27 cancellations of
arriving flights, as well as 212 delays and 27 diversions to other airports.
Twenty-three departing flights were canceled, while 216 were delayed.
There were also delays at the airports in Burbank, Long Beach, Ontario and
Orange County and at other airports across the Southwestern U.S.

In a statement to NBC News, the FAA said that it was “investigating a
flight-plan processing issue'' at the L.A. Air Route Traffic Control Center,
but did not elaborate on the reasons for the glitch and did not confirm that
it was related to the U-2's flight.

“FAA technical specialists resolved the specific issue that triggered the
problem on Wednesday, and the FAA has put in place mitigation measures as
engineers complete development of software changes,'' said the agency in a
statement.  “The FAA will fully analyze the event to resolve any
underlying issues that contributed to the incident and prevent a
reoccurrence.''

Sources told NBC News that the plane was a U-2 with a Defense Department
flight plan.  “It was a Dragon Lady,'' said one source, using the nickname
for the plane.  Edwards Air Force Base is 30 miles north of the L.A. Center.
Both Edwards and NASA's Neil A. Armstrong Flight Research Center, which is
located at Edwards, have been known to host U-2s and similar, successor
aircraft.

The U.S. Air Force is still flying U-2s, but plans to retire them within the
next few years.

Gary Hatch, spokesman for Edwards Air Force Base, would not comment on the
Wednesday incident, but said, “There are no U-2 planes assigned to
Edwards.''

A spokesperson for the Armstrong Flight Research Center did not immediately
return a call for comment.

Developed more than a half-century ago, the U-2 was once a workhorse of
U.S. airborne surveillance.  The plane's `operational ceiling' is 70,000
feet.  In 1960, Francis Gary Powers was flying a U-2 for the CIA over the
Soviet Union when he was shot down.  He was held captive by the Russians for
two years before being exchanged for a KGB colonel in U.S. custody.  A
second U.S. U-2 was shot down over Cuba in 1962, killing the pilot.

  [danny burstein commented, “Don'cha hate it when our own `weather
  observation' planes knock out our own air traffic control?''  PGN]


URL problem for IE users on "Lessons from the ACM Risks Forum"

Yan Timanovsky <timanovsky@hq.acm.org>
Mon, 5 May 2014 19:37:13 +0000
You can register for Peter Neumann's May 22 webcast, "Lessons from the ACM
Risks Forum," at http://learning.acm.org/webinar/ (click on the registration
link in the sidebar on the right).

I hope that works,

Yan Timanovsky, Education Manager, Association for Computing Machinery (ACM)
2 Penn Plaza, 7th Floor, New York, NY 10121 212-626-0515 timanovsky@hq.acm.org

  [The URL included in Yan's message in RISKS-27.87 apparently works for Mac
  and Linux users on Firefox, Safari, and Chrome, but not for some IE users,
  from whom Yan received complaints.  For those of you whose browser was
  unable to use what seemed to me to be a perfectly good URL, Yan has
  provided an alternative way of registering for the webinar.  However, this
  nasty incompatibility is itself worth a note in RISKS.  (I routinely have
  to remove the "3D" from "=3D" and the "=<newline>" at the end of broken
  lines in URLs submitted to RISKS, plus all the =92/=93/=94
  etc. encodings for standard ASCII characters that some stupid mailer
  system cannot handle, although I've gotten lazy with some of the special
  characters with diacritics that I once used to try to decode properly.)
  PGN]
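
For readers curious what that cleanup looks like in practice, here is a
minimal sketch using Python's standard quopri module, which undoes exactly
this quoted-printable damage: "=3D" becomes "=", the "=" soft line breaks
vanish, and =92-style hex escapes decode back to Windows-1252 smart quotes.
(The URL is a made-up example.)

  import quopri

  mangled = b"http://example.com/page?a=3Db&c=\nd=92s"
  print(quopri.decodestring(mangled).decode("windows-1252"))
  # -> http://example.com/page?a=b&cd's  (ending in a curly apostrophe)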


A student data collector drops out (Marc Rotenberg & Khaliah Barnes)

"Peter G. Neumann" <neumann@csl.sri.com>
Mon, 5 May 2014 10:34:59 PDT
(Letter) To the Editor, *The New York Times* Sunday Business, 4 May 2014
http://www.nytimes.com/2014/04/27/technology/a-student-data-collector-drops-out.html

The recent collapse of inBloom, the student data company, is a powerful
reminder that in the era of Big Data, privacy still matters (A Data
Collector Drops Out, Technophoria, April 27).

As we see it, the problem was not misunderstanding by the public, but a lack
of meaningful privacy protections.

The Department of Education also bears some responsibility for inBloom's
demise. Instead of defending important privacy laws that help protect
student data, the department chose to loosen the rules so that private
vendors could pull sensitive data out of local schools. Schools were also
encouraged to collect far more information than they had in the
past. Parents did not know what information was being collected, who would
have access to it or what impact it might have on their children's
future. Not surprisingly, many objected.

The Education Department could help restore confidence in these
data-intensive programs by strengthening privacy rules and establishing a
Student Privacy Bill of Rights. Students should know what information about
them is being collected and how it is being used. And schools should be more
cautious about turning over their students' data to others.

Marc Rotenberg and Khaliah Barnes

Mr. Rotenberg is president of the Electronic Privacy Information
Center, and Ms. Barnes is director of EPIC's Student Privacy Project.


Tech companies get less silent about government data collection (Serdar Yegulalp)

Gene Wirchenko <genew@telus.net>
Sun, 04 May 2014 21:48:24 -0700
Serdar Yegulalp, InfoWorld, 2 May 2014
Some major tech companies are starting to push back against government
orders for personal data, though they still must comply
with court orders

http://www.infoworld.com/t/internet-privacy/tech-companies-get-little-less-silent-about-government-data-collection-241815


Everyone Is Under Surveillance Now (*The Guardian*)

"Peter G. Neumann" <neumann@csl.sri.com>
Sat, 3 May 2014 20:04:48 PDT
http://www.theguardian.com/world/2014/may/03/everyone-is-under-surveillance-now-says-whistleblower-edward-snowden

The US intelligence whistleblower Edward Snowden has warned that entire
populations, rather than just individuals, now live under constant
surveillance.  “It's no longer based on the traditional practice of
targeted taps based on some individual suspicion of wrongdoing.  It covers
phone calls, emails, texts, search history, what you buy, who your friends
are, where you go, who you love.''

Snowden made his comments in a short video that was played before a debate
on state surveillance in Toronto, Canada.  The former US National Security
Agency contractor is living in Russia, having been granted temporary asylum
there in June 2013.

The video was shown as two of the debaters—the former US National
Security Agency director, General Michael Hayden, and the well-known
civil liberties lawyer and Harvard law professor, Alan Dershowitz—argued
in favour of the debate statement: “Be it resolved state surveillance is a
legitimate defence of our freedoms.''

Opposing the motion were Glenn Greenwald, the journalist whose work based on
Snowden's leaks won a Pulitzer Prize for the Guardian last month, and
Alexis Ohanian, co-founder of the social media website Reddit.

The Snowden documents, first leaked to the Guardian last June, revealed that
the US government has programs in place to spy on hundreds of millions of
people's emails, social networking posts, online chat histories, browsing
histories, telephone records, telephone calls and texts—“nearly
everything a typical user does on the Internet'', in the words of one
leaked document.

Greenwald opened the debate by condemning the NSA's own slogan, which he
said appears repeatedly throughout its own documents: “Collect it all.''

“What is state surveillance?'' Greenwald asked.  If it were about
targeting in a discriminate way against those causing harm, there would be
no debate.

“The actual system of state surveillance has almost nothing to do with
that. What state surveillance actually is, is defended by the NSA's actual
words, that phrase they use over and over again: 'Collect it all.' ''

Dershowitz and Hayden spent the rest of the 90 minutes of the debate denying
that the pervasive surveillance systems described by Snowden and Greenwald
even exist, and arguing that surveillance programs are necessary to prevent
terrorism.

“Collect it all doesn't mean collect it all!'' Hayden said, drawing
laughter.

Greenwald sparred with Dershowitz and Hayden about whether or not the
present method of metadata collection would have prevented the terrorist
attacks of 11 September 2001.

While Hayden argued that intelligence analysts would have noticed the number
of telephone calls from San Diego to the Middle East and caught the
terrorists who were living illegally in the US, Greenwald argued that one of
the primary reasons the US authorities failed to prevent the attacks was
because they were taking in too much information to accurately sort through
it all.

Before the debate began, 33% of the audience voted in favour of the debate
statement and 46% voted against. It closed with 59% of the audience siding
with Greenwald and Ohanian.


Get Ready for Regulators to Peer Into Your Portfolio (Jason Zweig)

Henry Baker <hbaker1@pipeline.com>
Mon, 05 May 2014 08:33:14 -0700
  (Or in this case, M-O-N-E-Y-P-O-T!)

This is also clearly overreach, as a large fraction of investors don't give
their brokers or money managers any discretion to trade for their accounts.
Perhaps it would be better to make this system *voluntary* for those who do
want someone to watch over a discretionary account?

http://blogs.wsj.com/moneybeat/2014/05/02/get-ready-for-regulators-to-peer-into-your-portfolio/

Jason Zweig, Bad brokers, meet RoboRegulator, The Intelligent Investor,
  2 May 2014

In December, the Financial Industry Regulatory Authority, which oversees how
investments are sold, proposed what it calls Cards, an electronic system
that would regularly collect data on balances and transactions in brokerage
accounts.

If adopted, Cards would revolutionize how regulators do their jobs and could
make it harder for unscrupulous brokers to bilk customers.

But some critics think it could endanger the privacy and security of
investors' confidential data.  And the proposal ups the ante for Finra,
which often has been criticized for letting wrongdoers slip through the
cracks.

Under Cards (which stands for Comprehensive Automated Risk Data System),
Finra would collect—probably weekly—a record of activity at all of the
more than 4,100 brokerage firms nationwide.

Finra would scour the data continuously, looking for any hints that a firm
or a broker might be taking advantage of a client: excessive trading or
commissions, switching from one mutual fund to another, overcharging for
bond trades, overconcentrating in risky or illiquid securities, and so on.

Cards “would provide us with a treasure trove of information and the
ability to focus quicker on firms that are placing investors at high risk,''
Richard Ketchum, Finra's chairman and chief executive, said in an interview.

Social Security numbers and other personal details won't be included in
Cards, so Finra won't be able to identify which investor an account
belongs to or to match any investor's holdings across firms.  Nor will
the data give anyone access to cash or securities.

Finra relies now partly on data analysis and partly on field examiners who
gather information piecemeal on potential wrongdoing.  With Cards, an ocean
of detail would flow into Finra's computers automatically.

That, Mr. Ketchum argues, would enable the regulator to stop at least some
misdeeds before too much damage is done.  And the sense that a regulatory
RoboCop is watching their every move could deter some brokers from doing
anything wrong in the first place.

“If we can easily compare information across firms, that will build
enormously greater power into our focus,'' Mr. Ketchum says.  “I have no
doubt that this is going to be the standard for regulation in the next three
to five years.''

Some experts, however, worry that Cards could be overkill.

“This goes beyond mere concerns about Big Brother,'' says Henry Hu, who
formerly oversaw data analytics as director of the Division of Economic and
Risk Analysis at the Securities and Exchange Commission and is now a law
professor at the University of Texas at Austin.  “I think Cards creates a
new form of systemic risk.''

Mr. Hu worries that Cards would take data that is widely dispersed—say
you have money scattered across accounts at E*Trade, Fidelity Investments,
Morgan Stanley and Charles Schwab—and centralize it for the first time.
That could make it more vulnerable.

“It's a Pearl Harbor problem,'' Mr. Hu says.  “All the ships and airplanes
are in one place at the same time.''

The probability of the data being breached by a disgruntled employee, a
terrorist or an unfriendly government is probably very low, Mr. Hu
concedes—but the consequences could be dire.  “Just read any trashy spy
novel,''
he says.  “If you were a hostile foreign government, you would immediately
put some of your top people to work'' trying to crack into Cards.

Finra vehemently disputes that Cards could create systemic risk.  The chance
that anyone could penetrate the system and exploit the anonymous data for
nefarious purposes is “infinitesimally small, out on the fringe of all
possibilities,'' says Steven Joachim, an executive vice president at Finra.

“The good that will come from the dramatically increased ability to reduce
fraud will way overwhelm that extraordinarily remote risk,'' Mr. Joachim
adds.

Mr. Ketchum says he hopes that Cards will go to the Securities and Exchange
Commission for final approval by next year, after further refinements and
input from brokerage firms and the public.

Consumer advocates have long claimed that Finra is insufficiently tough on
the brokerage industry that helps fund it.  But if Cards does go through as
planned, Finra's new powers could leave the regulator itself with nowhere to
hide.

“If they get all this information and fail to find a problem or to do
enough about it, they could be open to serious criticism,'' says Mike Stone,
formerly a senior regulator at the SEC and a top legal officer at Morgan
Stanley, now an adjunct professor at the Benjamin N. Cardozo School of Law
at Yeshiva University in New York.

“We would welcome that scrutiny,'' Mr. Ketchum says.  “If we're not using
the data [from Cards] properly, that's where we should be held
accountable.''

Access to Big Data can still leave big problems festering.  After all, the
recent prosecutions of insider trading were set off as much by informants
wearing wires as by computers running sophisticated analysis on data.

But if Finra does end up putting all its cards on the table, it will have to
follow the data wherever it leads.

Write to Jason Zweig at intelligentinvestor@wsj.com, and follow him on
Twitter: @jasonzweigwsj


"This is why companies are still afraid of the cloud" (David Linthicum)

Gene Wirchenko <genew@telus.net>
Fri, 02 May 2014 09:16:25 -0700
David Linthicum, InfoWorld, 2 May 2014
An ex-employee of a cloud provider was recently convicted of screwing
with its servers—could this happen to you?
http://www.infoworld.com/d/cloud-computing/why-companies-are-still-afraid-of-the-cloud-241432


Eggs in one basket with a Three dongle

Chris J Brady <chrisjbrady@yahoo.com>
Thu, 1 May 2014 17:01:05 -0700 (PDT)
A few years ago I ditched my landline service with BT. This was due to
numerous disconnections and faults that rendered browsing the web with
broadband a waste of time.

So I switched to using a Three dongle. I live in a flat at the top of a
block near a major UK Airport. I get 5 bars. The Three dashboard on Windows
connects OK. The dongle has a light blue light indicating a connection to
the mast / server / router - whatever.

And it all used to be fine a year ago, and browsing the web was a fairly
reliable and speedy experience.

However now when Three's home page is requested all I get is "Resolving host
..." and then "No Internet Connection Is Available."

This is now consistent, repeatable, and almost permanent. Three still charge
me for the non-existent service - of course.

Incidentally all works well in a speeding train commuting from London to
Brighton (except through tunnels) - in a way I find that to be a miracle of
technology.

But what's wrong with living near a major airport? I would have thought
that there all services would be second to none.

So my dilemma right now is whether to continue trying to use a dongle at
home, or to go back to a landline.

The risk to me was putting all of my web eggs into the dongle basket and
ditching the landline. I guess it might have been better to have both!


Danish gossip magazine steals credit-card transaction information

"Donald B. Wagner" <zapkatakonk1943.6.22@gmail.com>
Sat, 3 May 2014 09:34:39 +0200
http://cphpost.dk/news/news-of-the-weird.9386.html

Danish police are probing allegations that the popular Danish gossip
magazine Se og Hør snooped on the credit card transactions of the Danish
Royal Family and various celebrities.

Police are investigating claims that an employee at the credit card IT
service Nets passed on details about the credit card transactions of the
beautiful people to Se og Hør, which then published stories based on the
information. ...

Information allegedly based on the illegally purloined credit card
information includes photos of Prince Joachim and Princess Marie's 2008
secret honeymoon in Canada and numbers confirming Prince Henrik's wild
shopping sprees in Thailand. Restaurant visits and other spending by Crown
Prince Frederik and Crown Princess Mary were also tracked. Queen Margrethe
appears to have been spared because she doesn't use a credit card.

Nets handles credit-card transactions for just about all Danish banks. What
has come out in the Danish-language media is that an IBM employee working as
a consultant at Nets received DKK 10,000 a month for supplying credit-card
transaction information for people on a list of names given to him. Then the
magazine could know where the royals were vacationing and send their
photographers.

That this could have gone on for six years with no one the wiser suggests
that both Nets and IBM haven't a clue about security.

Donald B. Wagner, Jernbanegade 9B, DK-3600 Frederikssund, Denmark
Tel. +45-3331 2581 http://donwagner.dk


"Heartbleed postmortem: OpenSSL's license discouraged scrutiny" (Simon Phipps)

Gene Wirchenko <genew@telus.net>
Fri, 02 May 2014 09:11:40 -0700
Simon Phipps, InfoWorld, 2 May 2014
An open source expert believes OpenSSL's custom license was partly
responsible for the neglect behind Heartbleed
http://www.infoworld.com/d/open-source-software/heartbleed-postmortem-openssls-license-discouraged-scrutiny-241781


Re: heartbleed (Maziuk, RISKS-27.85)

Ivan Jager <aij+@mrph.org>
Fri, 2 May 2014 15:27:21 -0400
I think the main reason for writing libraries in C is that the GCed
languages suffer from too many standards. C has had a stable ABI for
many years now, so practically all languages support calling into C.
(It doesn't hurt that it tends to be efficient too, so some code gets
written in C even though it's only called from one language).
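
A two-line demonstration of that universality, sketched in Python: the
ctypes module (assuming a Unix-like system where find_library can locate
libc) calls straight into a C library with no glue code at all.

  import ctypes, ctypes.util

  libc = ctypes.CDLL(ctypes.util.find_library("c"))  # Unix-like assumed
  libc.strlen.argtypes = [ctypes.c_char_p]
  libc.strlen.restype = ctypes.c_size_t
  print(libc.strlen(b"heartbleed"))                  # -> 10

Every mainstream language ships some equivalent of this foreign-function
bridge, and essentially all of them target the C ABI.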

In the case of OpenSSL, I'm not convinced C is (or was) necessarily
the wrong language. The most obvious alternative would be to have N
different versions of OpenSSL. You may be able to write one version in
Scala and have it be usable from both Java and .NET, but you'd still
need separate implementations for Python, Perl, Haskell, SML, OCaml,
Go, Rust, Lua, Javascript, Ruby, etc. Certainly, buffer overflows
would be avoided, but I'd expect logic mistakes to go up, and it would
certainly be a lot more work to maintain that many implementations.
You may also still need a C implementation as it may be hard to
convince all the C/C++/Objective-C users that they should include the
<insert your favorite language here> runtime just to use OpenSSL.

Another option may be to use some obscure language that can do a
decent amount of static analysis and then compile down to C. I'm not
sure what the options there look like. I know there are several safe
"dialects" of C, but I'm not sure they maintain ABI compatibility. Any
of the fat pointer implementations would of course be right out...

And of course you don't need all of GC to avoid buffer overflows. Fat
pointers are enough. You could also avoid memory leaks and
use-after-free with a linear type system, but they're not very common
yet.
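
A minimal sketch of the fat-pointer idea (illustrative Python, not any
particular safe C dialect): the pointer carries its length, so every
dereference is bounds-checked, and a Heartbleed-style overread is refused
rather than served. The two-word representation is also why such pointers
break C ABI compatibility, as noted above.

  class FatPtr:
      """A pointer that knows its bounds, so every access is checked."""
      def __init__(self, buf, length):
          self.buf, self.length = buf, length

      def __getitem__(self, i):
          if not 0 <= i < self.length:
              raise IndexError("read %d beyond length %d" % (i, self.length))
          return self.buf[i]

  hb = FatPtr(b"payload", 7)
  print(hb[3])                  # in bounds: prints 108 (the byte 'l')
  try:
      hb[64000]                 # a Heartbleed-style overread request
  except IndexError as e:
      print("blocked:", e)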

But anyway, the problem isn't GC performance, it's lack of a runtime
everyone can agree to. (Which in part, IMHO, is due to lack of a type
system everyone can agree to.)


Re: credit card fraud (Sanders, RISKS-27.85)

Dimitri Maziuk <dmaziuk@bmrb.wisc.edu>
Fri, 02 May 2014 10:39:08 -0500
In our case they blocked a payment in a dive shop in Curacao. Later that day
I was able to pay 1) in the supermarket, 2) at a gas station, and 3) at the
hotel—run by the same people who run the dive shop, though through a
different shopfront. That last bit was particularly confusing because the
transaction amount was not that different from the one that bounced either.

> - The fraud detection system does not maintain any transaction history.

That's why I don't worry too much about targeted advertisers or three-letter
acronyms collecting my metadata: you'd think a string of dive equipment
purchases, plane tickets to and hotel booking in Curacao, and a few
purchases on the island would be a clear indication that 1) we're probably
there and 2) not home to check our answering machine. If visa's software
can't do that I doubt nsa's software can do much better.

> - Everyone assumes that card holders have continuous telephone access.

Even if you have phone access, you have to know to call home and check your
answering machine. I don't think I even remember what buttons to press to
check mine.


Re: The risks of garbage collection delays (Loughran, RISKS-27.87)

Henry Baker <hbaker1@pipeline.com>
Thu, 01 May 2014 18:57:13 -0700
Let's be absolutely clear about our terminology w.r.t. garbage collection
(GC).

"Hard real time" systems respond within an a priori fixed bounded length of
time.  For many systems, this bound might be 50 milliseconds, for example.

"Real time" garbage collection works by performing a tiny bit of GC work for
each allocation, so that there is _never_ a long GC delay.  Depending upon
the amount of GC work per allocation, the total amount of space required
might vary, but this is a good thing; memory space these days is cheap, so
the ability to trade larger memory space for smaller processing delays is a
good thing.
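
A toy sketch of that scheduling idea (Python, invented numbers; the
"collector" here is a naive sweep, since the point is the bounded pause,
not the collection algorithm): each allocation performs at most a fixed
number of reclamation steps, so no allocation ever waits on a full
collection.

  class IncrementalHeap:
      STEPS_PER_ALLOC = 4          # more steps per alloc: smaller heap, same bound

      def __init__(self):
          self.objects = []        # the "heap": [live_flag, payload] cells
          self.scan = 0            # current sweep position

      def allocate(self, payload):
          for _ in range(self.STEPS_PER_ALLOC):   # bounded work, bounded pause
              if self.scan >= len(self.objects):
                  self.scan = 0
                  break
              if not self.objects[self.scan][0]:
                  self.objects.pop(self.scan)     # reclaim a dead cell
              else:
                  self.scan += 1
          cell = [True, payload]
          self.objects.append(cell)
          return cell

  heap = IncrementalHeap()
  cells = [heap.allocate(i) for i in range(1000)]
  for c in cells[::2]:
      c[0] = False                 # half the heap becomes garbage at once
  before = len(heap.objects)
  heap.allocate("next")            # still only STEPS_PER_ALLOC steps of sweeping
  print(before, "->", len(heap.objects))

Trading space for latency, as described above: the garbage lingers longer,
but no single pause grows with the size of the heap.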

To the extent that a Java implementation requires long delays for GC, it is
_not_ a suitable host for a real-time environment such as a web site.  This
is not a problem for the Java language—per se—or even with the concept
of garbage collection, but a problem with that particular Java
_implementation_.

I hear excuse after excuse about garbage collection, but the "roll your own"
alternatives are _always_ buggier and more expensive at the end of the day.

The crypto people are fond of warning people not to roll their own crypto
systems; perhaps the same thing should be said about memory
management/garbage collection systems.

I've spent a lifetime trying to come up with appropriate analogies to better
understand computer science issues.

I liken the problem of buffer overflows (due to non-safe languages) to the
1854 cholera epidemic in London.  This epidemic was stopped only after the
handle of the pump that delivered sewage-laden drinking water was removed.

http://en.wikipedia.org/wiki/1854_Broad_Street_cholera_outbreak

Perhaps the modern equivalent of that pump handle is "gcc".  Gcc may have to
be forcibly removed from programmers' cold, dead hands in order to get them
to program in safer languages.

(My comment should in no way be interpreted as a slam against the quality of
gcc; the pump in 1854 London was a perfectly fine pump; it just allowed the
pumping of contaminated water.)


Re: The risks of garbage collection delays (Loughran, RISKS-27.87)

Michael Kohne <mhkohne@kohne.org>
Thu, 1 May 2014 20:01:17 -0400
I have to confess to not understanding why so much attention is paid to
this style of garbage collection, rather than reference counting systems.
Yes, GCs like those found in Java ARE more efficient - they GC in larger
chunks, rather than every time something goes out of scope. I get that.

But reference counting accomplishes the primary goal of making sure memory
gets cleaned up without programmer intervention, without the 'it just stops
for 1/2 a second every now and then' issue we see with regular GC. As well,
reference counting means you can depend on your 'finalizer' equivalent to
run at an appropriate time, which means you can use regular objects to
automatically collect things like sockets or DB connections or whatever
you have.
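
A minimal sketch of that determinism (Python, all names invented; the
retain/release bookkeeping is written out by hand to make the mechanism
visible):

  class RefCounted:
      def __init__(self, name, on_free):
          self.name, self.count, self.on_free = name, 1, on_free

      def retain(self):
          self.count += 1
          return self

      def release(self):
          self.count -= 1
          if self.count == 0:
              self.on_free(self)   # "finalizer" runs right now, deterministically

  conn = RefCounted("db-connection", lambda o: print(o.name, "closed at once"))
  other = conn.retain()            # a second owner appears
  conn.release()                   # one owner remains; nothing happens
  other.release()                  # last owner gone; the close runs immediately

The socket or DB connection is released the instant the last reference is
dropped, with no wait for a collector to get around to it.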

We waste CPU on all sorts of things to make our programs easier to write --
why aren't we willing to spend some of the CPU that GCs are saving us on
making our programs more predictable? It would make the problem described in
Steve's post vanish without having to implement anything that's not already
well understood.


Re: The risks of garbage collection delays (Loughran, RISKS-27.87)

"David B. Horvath, CCP" <dhorvath@cobs.com>
Thu, 01 May 2014 21:19:39 -0400
As I'm sure you're aware, this is nothing new. BASIC had GC and applications
would "go away" (perceived as a "hang") while GC was taking place.

When developing a PC and Apple ][ based application in the early 1980's we
experienced this problem. It took us a bit of time to figure out that the
behavior, a hang that would resolve itself if one was patient enough, was
caused by GC. Our solution was to force GC every time we hit the primary
menu (right after user menu choice entry). That way there was never so much
processing required that the time was noticeable.

Unfortunately, the GC method in Java is a request rather than an order.
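
For contrast, a sketch of the same menu-loop trick in Python, whose
gc.collect() (unlike Java's System.gc() hint) really does run a collection
and reports what it found; the menu items and handler here are invented:

  import gc

  def handle(choice):
      print("doing", choice)           # hypothetical per-item work

  def main_menu(choices):
      for choice in choices:           # stands in for the interactive menu loop
          freed = gc.collect()         # forced collection at a quiescent moment
          print("collected", freed, "objects before handling", choice)
          handle(choice)

  main_menu(["reports", "data entry", "quit"])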
