30 December 2009

Happy many things

Didn't mean to disappear for so long, and, unfortunately, it'll be a while yet before I'm posting again.

First, though, happy Winter Solstice, New Year, and Perihelion (January 3rd).  Hope that all is going well, and you all have a good new year.

I managed to make a mistake while cleaning up a tangle of wires, and disconnected the internet at home.  So it'll be a while yet before I post regularly.  But, once that's taken care of, first up will be a look at the claim that 'but temperature leads CO2 during the ice ages'.  Somehow that's taken to mean that the current CO2 levels are explained by earlier warm periods (they're not) or that CO2 cannot affect temperatures (it can).  The post will have the full illustration.

Cheers

03 December 2009

Technological progress

A couple of videos that caught my eye. First is one on an upside of technological progress -- cars today are enormously safer in a crash than cars 50 years ago. This video shows a collision between a 1959 car and a 2009 car, and what happens to the driver crash test dummy in each. The 2009 car undoubtedly weighed far less than the 1959. Superior engineering is the key -- a point that Consumer Reports routinely winds up making in their vehicle reviews.
Crash test video.

Digressing a second: It occurs to me that Consumer Reports is probably the popularly available magazine that does the most consistent job of displaying a scientific approach. The typical review article shows what they were testing, how they tested it, adds information about how significant the test differences are, and so on.

The second video is one on a topic that weather and climate folks are probably more than a little tired of. Namely, the accusation that we didn't realize that there's such a thing as an urban heat island effect. I've never taken up the search seriously, but a few years ago, an urban heat island reference was in the Bulletin of the American Meteorological Society's '50 years ago' column. So the effect was well known by the early 1950s (the referenced article was clearly not the discovery of the effect, just another illustration).

The video is from Peter Sinclair's Climate Crock of the week. In it, he carries out a good practice for science -- suppose an argument is correct, then look for observations that will confirm or reject that argument. The argument is that the urban heat island is producing the observed warming trend. Ok, says Peter, if that's the case, we should see that the trend is the strongest (most positive) in urban areas. Now, it isn't hard to figure out where the urban areas are. Nor is it hard to map out what the trends are for different areas of the globe. Compare the two.

In truth, as he illustrates, the warming is highest in areas that have very few people -- Siberia, the Arctic, and Hudson Bay being leading zones. His figure is for the 2008 anomalies -- after the 'decade of cooling' (what cooling?) -- rather than the 30 year trends ending that date. If anything, the trend map is worse for the urban heat island fans, as it shows large trends across northern Canada as well.

Nothing obvious connecting the two videos. But the thing is, I have a lot of confidence in engineers to solve engineering problems. One such problem is car safety. Others would be things like more efficient cars, new and better ways of producing energy, and so on. In the 1950s and 60s, it was an article of faith in the car industry that customers did not care about safety. And that if they were forced to engineer safety, they'd go out of business (it would be too expensive). Instead, we have vastly safer cars today, and tens of thousands of people are still alive because of it. The engineers were more than up to the challenge. On the other hand, if the engineers aren't allowed to work on a solution, they won't find it.

I'm not taking up geoengineering in this; that's a topic for a lengthier post of its own. I'm just minded that there are quite a few climate-related technology issues -- efficiency of old technologies, or new technologies to develop -- where we're being told that acting would drive companies out of business, cost jobs, and other such alarmist things, just as was said in the 50s and 60s regarding automobile safety. Yet the engineers found ways of improving safety even as we drove more, drove lighter cars, and so on. And the companies didn't go out of business; indeed, they make quite a lot of money.

02 December 2009

Fake ice

The ice is real ice, but the satellites are being fooled.  Actually, not even that.  The satellites are correctly reporting what they're seeing, but the humans have been making an assumption that no longer holds true due to changes in the Arctic.  A friend pointed to an interview (audio only) of David Barber, on Quirks and Quarks on CBC.  Early on, he mentioned how the satellites were being fooled.  That isn't exactly what's up, but it was the right answer for the circumstance.  I'll take a bit more time to discuss the details of what is happening in this part of the story.

Our standard method for observing sea ice from space is to use satellites to measure the microwave energy emitted from the earth's surface.  Sea water is a very bad emitter, so has a very low brightness temperature.  Sea ice is a pretty good emitter, so has a much warmer brightness temperature.  You go through a couple of elaborations on this, and out pops the fraction of the surface that is sea ice instead of sea water.
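For the curious, here's a minimal sketch (in Python) of that basic idea.  It is not the operational algorithm, and the 'tie point' brightness temperatures are invented for illustration only: treat what the satellite sees as a linear mix of an open water signal and a sea ice signal.

    # Sketch only: a one-channel, two-surface linear mixing estimate.
    # The tie-point brightness temperatures below are made-up illustration values.
    def ice_concentration(tb_obs, tb_water=160.0, tb_ice=250.0):
        """Estimate the sea ice fraction from an observed brightness temperature (K)."""
        frac = (tb_obs - tb_water) / (tb_ice - tb_water)
        return min(max(frac, 0.0), 1.0)   # clamp to the physical range 0..1

    print(ice_concentration(205.0))       # halfway between the tie points -> 0.5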

You can also be a little more demanding than that.  Particularly in the Arctic, this makes sense.  The elaboration comes from the fact that not all ice emits microwaves equally well.  Salty ice is a better emitter than fresher ice.  In the summer time, when ice floes do some melting (but not enough to get rid of the whole floe for the ones we're interested in), it is the saltier parts of the floe that melt away first.  This is the same thing happening when you salt the sidewalk -- the salt lowers the melting point, and the salty parts melt first.  When you get to this time of year, and the melting stops, what is left is a relatively fresh ice floe.  It is also called 'multiyear' ice, since it's now in its second winter.  With a more detailed analysis of the microwaves, you can try to distinguish between the first year ice (saltier and a better emitter) from the multiyear ice (less salty, but still a better emitter than sea water) from the sea water (very poor emitter).
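Again a sketch only, not any operational method, and with invented tie points: with two channels you can treat the observation as a mixture of open water, first year ice, and multiyear ice, and solve a small linear system for the two ice fractions.

    # Sketch only: two channels, three surface types (tie points invented).
    import numpy as np

    TIE = {                         # (channel 1, channel 2) brightness temperatures, K
        "water":      (160.0, 150.0),
        "first_year": (250.0, 245.0),
        "multiyear":  (230.0, 200.0),
    }

    def ice_fractions(tb1, tb2):
        """Return (first year fraction, multiyear fraction); open water is the rest."""
        water = np.array(TIE["water"])
        mix = np.column_stack([np.array(TIE["first_year"]) - water,
                               np.array(TIE["multiyear"]) - water])
        fractions = np.linalg.solve(mix, np.array([tb1, tb2]) - water)
        return np.clip(fractions, 0.0, 1.0)

    print(ice_fractions(215.0, 200.0))   # some of each ice type, the rest open water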

Analogy for the remote sensing: The satellite is listening to the surface.  Sea ice is much louder than sea water, so you can start by just checking how loud things are.  Louder = more sea ice.  You can then listen a little more carefully.  Multiyear ice is a little quieter than first year ice, and a little different pitch.  So there's a choir of sea ice, and the multiyear ice is, say, the sopranos singing a little quieter than the basses (first year ice), but both are much louder than the baritones (sea water).  If you've listened to a choir, a band, or just a room of people talking (and decided how many people were in it, and how many of them were men vs. women), you've done the same sort of discrimination that we're doing with the satellite observations.

Now for the spot where we were fooled. 

30 November 2009

Data set reproducibility

Data are messy, and all data have problems.  There's no two ways about that.  Any time you set about working seriously with data (as opposed to knocking off some fairly trivial blog comment), you have to sit down to wrestle with that fact.  I've been reminded of that from several different directions recently.  Most recent is Steve Easterbrook's note on Open Climate Science.  I will cannibalize some of my comment from there, and add things more for the local audience.

One of the concerns in Steve's note is 'openness'.  It's an important concern and related to what I'll take up here, but I'll actually shift emphasis a little.  Namely, suppose you are a scientist trying to do good work with the data you have.  I'll use data from sea ice concentration analysis for illustration because I work with it, and am very familiar with its pitfalls.

There are very well-known methods for turning a certain type of observation (passive microwaves) into a sea ice concentration.  So we're done, right?  All you have to do is specify what method you used?  Er, no.  And thence come the difficulties, issues, and concerns about reproducing results.  The important thing here, and my shift of emphasis, is that it's about scientists trying to reproduce their own results (or me trying to reproduce my own).  That's an important point in its own right -- how much confidence can you have if you can't reproduce your own results, using your own data, and your own scripts+program, on your own computer?  Clearly a good starting point for doing reliable, reproducible, science.
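As a small illustration of the kind of bookkeeping that helps, here is a sketch of recording exactly which script and which input file produced a result, so that later you can tell whether a rerun really used the same ingredients.  The input file name is hypothetical.

    # Sketch only: record the fingerprints of the script and data behind a result.
    import datetime, hashlib, json, sys

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    input_file = "ice_concentration_19930115.bin"   # hypothetical file name
    provenance = {
        "run_at": datetime.datetime.utcnow().isoformat(),
        "script": sys.argv[0],
        "script_sha256": sha256(sys.argv[0]),
        "input": input_file,
        "input_sha256": sha256(input_file),
    }
    print(json.dumps(provenance, indent=2))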

29 November 2009

Last call for submissions

The deadline for submissions to the Openlab 2009 is midnight EST, 1 December.  This is aimed at being a collection of the best blogging from 1 December 2008 through 30 November 2009.  Use this submission form to submit your favorites (from here and elsewhere). The current summary of submitted articles is at Blog Around the Clock. Two of mine, Science Jabberwocky, and Results on Deciding Trends, are already submitted. If there's something else you like as well or better, time to submit them. If those are your favorites from here, no need to do anything.

Though it would probably help my odds if you didn't submit others' articles, I see there are several quite good articles from other blogs that haven't been submitted. I'll be doing some of that submission myself, and I encourage you to do so as well. I'd just like to make coturnix's (the editor) job as hard as reasonably possible :-) -- give him a lot of excellent articles to consider.

28 November 2009

Science Anniversaries

150 and 400 years ago, two major events in the history of science occurred.

400 years ago, the telescope was invented and started to be used for astronomy.  For $100-$150 you can now get a telescope far superior to what Galileo used to carry out a major revolution in our understanding of the universe.  More in a moment.

150 years ago yesterday (November 27th), Charles Darwin's On the Origin of Species by Means of Natural Selection was published. Different major revolution in our understanding of the universe.  You can read this for yourself.  I don't actually recommend reading it unless you are really interested in history of science, and like Victorian-era writing.  (If you like my style, you're a couple steps in that direction.  My wife noted that I write something like Trollope, a prolific Victorian whom she likes.)  We've learned an awful lot in the 150 years since then, and many things that were mysteries to Darwin, such as how inheritance occurs, are well-known to us now.  Instead I'll suggest you read the evolution sections of modern biology texts.  Two such texts recommended by my biologist friends are Futuyma's, and Campbell and Reece.

24 November 2009

PhD Thesis Defended

I've been the (name of employer)-side mentor for a student working on her PhD.  Yesterday, she successfully defended her PhD thesis.  Not sure she wants to be named, so I won't for now.  But it's a lot of work to get to where she is, and I'll congratulate her.  She knows who she is :-)  Good job!

Update:
Jamese Sims successfully defended her thesis at Howard University on Monday.  She'll be writing up something about her experiences for the blog one of these days. 

16 November 2009

Where is the surface?

I just commented on my facebook status that I'm at a meeting about sea surface temperature.  That part was safe.  The rest of the comment was to observe that I'm now back to wondering whether the sea has a surface, where that surface is if it does exist, and whether it has a temperature.  That prompted a friend to comment 'Great ... this is going to bug me now.'  So for him, here's a longer version.

This sort of question is very common to science.  Of course my musing for facebook is overstated.  But there is usually a real question about what exactly it is you've observed when you take an observation.  When you have very different observing methods, they may well observe things that are different from each other.  There are, let's say, 4 different ways of observing the sea surface's temperature.  For a diagram, see the Wikipedia article on sea surface temperature.

The standard method, and reference for others, is calibrated buoys that carry a thermometer at a known depth, typically 1 meter.  A major drawback to this method (all methods of observing have drawbacks!) is that you need a buoy.  They're not cheap, and it would take several million of them to give us a high resolution data set for global sea surface temperature (acronymed SST).

11 November 2009

Veteran's Day

Veteran's Day or, in some other parts of the world, Remembrance Day today. 

My thanks to those who have served.  My daughter and son-in-law are among you.

09 November 2009

Racing again

Saturday I got to meet a barn owl and a red shouldered hawk.  Both were amazingly calm for all the runners who were milling around.  The owl was looking around at all the dogs, deciding whether they were snack sized or not.  (Concluded 'not', though I think a couple of dogs caused some serious calculation.) Fun to watch an owl look around.  They were out as part of the parks and planning commission entertainment for the Jug Bay 10k (and 5k, and 3k walk). 

I and a fellow club member were out for the 10k, with our plan being to run 1 minute, walk 1 minute.  This being a much flatter course than last week's cross-country, I was able to follow the plan pretty well.  Passed the mile 5 marker in 49:45, vs. last week's 8k (a touch shorter) in 54:25.  My rule of thumb for the cross country course worked out pretty well -- about 10% slower than a flat course.  Finished the 10k in 61:23, which also satisfied my check list for the conservative goal at my February 10 miler.  Needed 72 minutes to be in line with the 2:00 goal; this time also meets the more aggressive notion of a 1:45 10 mile, having needed 63 minutes for that.

Bad news and good news about my calf/Achilles.  The bad news being that it acted up again.  The good news being that I've now got a better line on what, exactly, is the problem child.  The major muscle in the calf area is the gastrocnemius.  That is the one I had been focusing on when doing my stretching and Alfredson exercises.  The day after the race, with the calf complaining as I started to walk, I stretched (as my doctor had advised) the calf -- the gastroc.  Didn't feel any response, no complaint, no difficulty.  So finally I stretched the other muscle down there -- the soleus.  That is where the problem is (now).  I may well have rehabilitated the gastroc earlier.  Either way, the soleus is what needs the work now.  It's a little harder to stretch, and a little harder to do the Alfredson exercise for.  Not a lot, but enough that I'd been slack about doing it.

To go back to the more typical theme of this blog, I'll observe that probably a fair number of the people running with me were scientists.  In particular, in earth sciences (lumping geology, oceanography, meteorology, glaciology, paleontology, ...) it seems very much the norm that scientists are physically active in one way or another.  Running is not the only sport.  We also have tennis players, swimmers, basketball players, bikers, ....  Team sports are harder to manage later in life, so most people are doing individual sports, even where we like team sports.  But, whatever it is, we get out and do something.  And this is true whether the person does field work (which would require a degree of physical fitness, just to carry out the job) or sits in an office (as I and my coworkers do).  It might be that scientists in other fields don't do as much sport as earth science types do.  I don't know of any research on it.  But we folks interested in the earth also seem to like to run/walk/bike/swim/... around it.

To turn back for a minute to the running ....  In terms of final race times, this 10k was very slow for me.  When walking, I averaged 16-17 minutes/mile (10-11 minutes per km), which is normal for my walking.  In running, I was around 7 minutes/mile (4.5 minutes per km).  If I were in good shape, which is the goal, I'd have run the whole 10k at about that pace.  For my current training level, with the current goal (that 10 miler, 16.1 km) run/walk is the way to get to the finish of a workout or race in best health.  Best health then means I can get out for the next workout, and the ones after that.  It's getting out consistently that is the key for training.  Given the achilles/calf issues, the next workout is tonight -- swimming.  Rest the calf and work the lungs.  The lungs (cardiovascular system) have a long way to go as well. 

Plus, one of these years I'll be doing a sprint triathlon.  My plan being: don't drown in the swim, don't fall off the bike, and then pass a lot of people in the 5k.

05 November 2009

Experimental reading

I've been reading more general audience science books lately, which is part of the reason for relative quiet here.  But it makes for ideas later.

The first book in the experimental line is geared for middle school to junior high students.  It's perfectly useful for older folks as well.  And it will probably be a good idea to have an older person at hand for some of the experiments if they're performed by the younger.  101 Incredible Experiments for the Weekend Scientist by Rob Beattie.  The experiments cover a range of things, from making slime to making a cloud.  Some weather and climate examples, but not especially aimed at that.  The experiment descriptions also contain a 'how it works' section, which I take to be very important.

The second is geared to an older audience, college age, but I think most experiments and observations can be done by middle school to jr. high students.  They just might want someone else to do the translation to more familiar language.  That's Clouds in a Glass of Beer: Simple Experiments in Atmospheric Physics by Craig F. Bohren.  This is the book (chapter 10) that prompted my Tuesday note.  Its 22 chapters include much more by way of explanation of the science behind what you're observing in doing the experiments, and how this ties in to the atmosphere. 

In both cases, the authors mention some things that the experimenter can use for proceeding to further experiments.  They usually aren't laying it out exactly this way, so keep your eye open for comments like 'best results are for doing X' (using a small tube, for instance, to see surface tension).  That's a sign that you can get different results if you use a larger tube, and it can be informative to see just how much the result depends on the size of the tube.

03 November 2009

How CO2 matters

It turns out that the argument that there isn't a lot of CO2 (true, compared to the total mass of the atmosphere) and therefore it can't matter much for climate (false) has been around longer than I had thought.  I was just reading Craig Bohren's book Clouds in a Glass of Beer: Simple Experiments in Atmospheric Physics and he's got a reference to it (chapter 10, on the Greenhouse Effect).  The collection of experiments was published originally in 1987, and had evolved over some period before that.  So the argument has been around for at least 22 years.

From page 82 in my Dover edition: "There seems to be little dispute that carbon dioxide concentrations in the atmosphere have been increasing because of increased burning of carbonaceous fuels such as coal and oil.  At present, for every one million molecules in the atmosphere, about 340 of them are carbon dioxide (this is written 340 ppm, parts per million).  To those who snort that 340 ppm of anything must surely be of no consequence, I recommend 340 ppm of arsenic in their coffee."  I don't second the recommendation, as the lethal dose is somewhere around 1 ppm.  Craig was being sarcastic, and blunt -- two common words for describing him.  The 340 ppm was about the Mauna Loa station's reading for 1981, and the last year that would round to that (nearest 10 ppm) is 1984, so Craig was probably writing 3-6 years before the book was published.  It's now past 385 ppm.

For climate purposes, we'll consider two different things.  First is, how can a rare thing (CO2) be important to the system?  Second is, is CO2 really all that rare?
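To put a rough number on the second question, here's a back-of-the-envelope sketch using standard round figures (none of them from Craig's book):

    # Sketch only: how much CO2 does ~385 ppm amount to, in absolute terms?
    ppm        = 385e-6     # CO2 fraction of the atmosphere, by molecule count
    atm_mass   = 5.1e18     # kg, approximate total mass of the atmosphere
    air_mol_wt = 28.97      # g/mol, mean molecular weight of dry air
    co2_mol_wt = 44.01      # g/mol

    co2_mass = atm_mass * ppm * (co2_mol_wt / air_mol_wt)
    print(f"about {co2_mass:.1e} kg of CO2")   # roughly 3e15 kg -- trillions of tonnes

Rare as a fraction, perhaps, but not a small amount in absolute terms.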


02 November 2009

Racing

Yesterday I ran a good, challenging, and distinctly muddy 8k cross-country race.  It's one of my favorite races.  My training, which I've been sparing you, has not been going well, so injury rehabilitation notes are at the bottom.  September was good.  The last Saturday I walked/ran the 5k according to plan, and finished in 28:25.  Not a great time compared to if I were in reasonable training, but a lot better than a few months earlier.  The following Monday, I forgot about a certain gravel path being irritating to my calves, and the fact that I've been on the edge or over it for calf problems (maybe it's Achilles), and went running on the path.  That did seriously annoy the calf/Achilles, which the race hadn't, and October was mostly given over to non-running (and, unfortunately, non-exercising). 

I had, of course, signed up for the cross-country race right before nailing the calf.  On the other hand, even when I was in very good shape (for me), I walked some of the cross-country course.  In years of moderate training, the walk fraction goes up.  This year, it was going to be even more walking, I decided.  It was.  I flew on the downhills (though I can't run for very long at the moment, when I do, I can carry a good pace), took it easy on the uphills (walking more slowly than I would if the cardio system were in condition), and mixed on the flats.  Not a lot of flat to this course.

The plan worked out pretty much as intended.  I did not injure the calf/Achilles, and I did finish the race.  I feel invigorated for getting out and doing more exercise, and, in one of those ironies of peoples' psychologies, am more willing to do 'just' swimming/biking/rowing/....  Final time of 54:37 (my watch -- I didn't start at the front of the pack!), which I mention more for establishing it so that when I talk next year of my improvement, you'll know the base.  It would not be unreasonable to improve by 10 minutes in the next year.

28 October 2009

A challenge to the computer folks

Something I'd like to be able to do is to track the citation history backwards from a given paper.  But I want a couple of things that it looks like typical bibliographic sources don't do.  As matters of computer or library science, I don't think they're terribly difficult.  I've seen things done which strike me as much more complex.

Let's start with some paper, call it paper A.  It cites, say, 15 papers (papers B, second generation).  Each of those cites another, say 15, which at least temporarily means a list of 225 papers (C, third generation).  Easy to get the list of papers cited by paper A (the 15 papers B1..B15), but significant manual effort, it seems, to get the collected list of papers C1..C225.  One thing I would like, however, and which seems completely unsupported, is that I'd like a count of how many times each paper shows up in this tree.  Some of the papers in the second generation probably cite others in the second generation.  And it's near certainty that many of the third generation papers are cited by several of the second, and probably a good number of third generation cite each other.  This is pretty much just a simple social network kind of analysis -- some papers have lots of friends, and some not so much.  I'd like to see which papers are highly connected, and which aren't, working within the group established by papers cited by a paper of my interest (actually won't be one of my own in practice) and lines of reference descent from there.
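To be concrete about the counting I have in mind, here's a sketch.  The get_references function is hypothetical -- getting that lookup easily is exactly the part the bibliographic services don't seem to support.

    # Sketch only: count appearances of each paper in the citation tree of paper A.
    from collections import Counter, deque

    def citation_counts(paper_a, get_references, generations=3):
        """get_references(paper) -> list of papers it cites (hypothetical lookup)."""
        counts = Counter()
        queue = deque([(paper_a, 0)])
        while queue:
            paper, depth = queue.popleft()
            if depth >= generations:
                continue
            for cited in get_references(paper):
                counts[cited] += 1               # every appearance in the tree counts
                queue.append((cited, depth + 1))
        return counts

    # counts.most_common(10) would then show the most connected papers in the tree.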


The second sort of thing I'd like to see is for the chart to be continued through enough generations that sources like Newton's Principia start appearing on the list.  I'm curious how many generations, in terms of citation history, modern work is removed from some of the landmark sources.  Unfortunately, it seems that the bibliographic databases I have access to die out in the mid 1980s, which is a long way from where I want to get to.

26 October 2009

Doing science, with sea ice

Every so often, I commit an act of science. Like most acts of science, you almost certainly never heard about it. Like many, however, life was eventually improved for some people somewhere. I'm rather pleased about that side of it.

What was at hand was, on one hand (it does help to have many hands if you're in science), a fairly straightforward piece of engineering. On the other hand, a bit of science. Remember that I think both engineering and science are good things, if different. Engineering is mainly aimed at 'apply what is known to achieve benefit for someone', while science is aimed at 'try to understand more about the universe'.

Back in 1993, I was at the National Meteorological Center (NMC), the part of the National Weather Service (in US -- NOAA) that develops the new weather forecast models or tries to make the old ones better. My area was sea ice. Now, one thing we sea ice, polar oceanography, polar meteorology people were entirely confident about was that sea ice mattered, a lot. For, well, everything, or at least enough. If we didn't think it mattered, we'd hardly be spending our time studying it. People outside our little community, including folks working on numerical weather prediction, didn't think sea ice mattered for much. And, if it did matter, surely it was only something that mattered for long time modeling -- climate scale forecasting. Surely the ice was already well enough represented to be good enough for weather prediction purposes.

Partisan as I was, and am, in favor of sea ice, I must confess that there were (and are) good reasons to believe that for short range forecasting, you didn't need very accurate representation of sea ice. It doesn't cover much of the surface area of the earth. And, while it might be very reflective, at the times that there is the most ice that is most reflective, there isn't much sun for the ice to reflect. I could have simply sat back in a wrangle with the weather folks, endlessly asserting that sea ice was important, and how much energy sea ice reflected was still important, and weather is chaotic so it had to matter, vs. endless repetitions of their counter-arguments. Perhaps you've seen that sort of thing happen a time or two on a blog or two.

Instead, time to do some science. Run the experiment and see what happens. This has the downsides that it requires my time, and I have to run the risk of the experiment showing that I was wrong -- that modest changes to how much of the sun's energy sea ice reflects really did not affect weather.

21 October 2009

Antarctic Snow and Ice

The Antarctic has long been a favorite area of mine, going back to graduate school days.  This particular note, however, is prompted by a question over in the question place -- regarding Antarctic mass balance and snow.

The question at hand turns on just what is going on with Antarctic mass balance.  The apparent 'conflict' is between a study showing a recent decline in snow melt, and other studies showing that Antarctic ice mass is decreasing.  This is a particularly simple conflict to resolve, so I'll note that it really is taken as a serious conflict (per the questioner's link) over at WUWT (haven't we heard that name recently?).

The simple reality that the authors of the snowmelt paper are perfectly aware of, but WUWT ignored, is that there is more than one way for the Antarctic to lose mass.  I grant that melting the snow is the most obvious one.  But, when you're dealing with a continent as incredibly dry as the Antarctic is (the driest, and probably largest, desert in the world), you have to pay attention to more subtle processes.  One of them is not at all subtle -- huge icebergs break off of the Antarctic from time to time.  In these cases, you're talking about chunks of ice several hundred meters (call it 1000 feet for simplicity if you're non-metric) thick, and 50-100 km (30-60 miles) on a side.  Chunks large enough to be the size of entire US states and some countries.  (I have an ancient listing of some iceberg sizes and country, state, lake sizes for your comparisons -- additions welcome.)  There's also the very subtle process of evaporation straight from the surface of the ice sheet (sublimation) into the atmosphere.  And there's the not subtle but easy to forget about fact that Antarctica has ice shelves -- ice floating on the ocean that's fed by the continental (sitting on land) ice sheet -- and the bottoms of those ice shelves can and do melt.

Finally, there is the rather bizarre fact that ice is not a solid.  Once you build up to having an ice sheet, the pressure of the ice above a point near the ground is so enormous that the ice flows.  Ok, it's a really, really thick fluid (think very cold molasses).  But it flows.  This means that the ice sheet moves mass out to the edges -- out to the ice shelves where there can be snow melt, ice evaporation, or ice shelf melting, or where massive icebergs can break off.

So, just on a fairly cursory consideration -- there's more than one way to skin a cat, or, rather, there's more than one way for an ice sheet to lose mass -- we already know there's a problem with the WUWT article.  In the science, no real conflict.  More below the fold.

19 October 2009

Sound and Fury at WUWT

From the question place, where a reader noted a high traffic item at Watts Up With That and asked for a science response.  Where to begin?  First, I guess I'll note that most of the post is bluster and personal attack.  Once you cross out those parts, it's a much shorter article.

Second, as always, go back to the original source.  In this case, it is a Mann et al. 2008 paper Proxy-based reconstructions of hemispheric and global surface temperature variations over the past two millennia, with supplementary material.

Then, consider exactly what the claims (in this case, at WUWT) are, and just what evidence is produced for it. 

The fundamental claim at WUWT is that the entire reconstruction is upside down.  (We're treated to pictures of other things that are upside down.)  Right off, we know WUWT is wrong. 

18 October 2009

Question Place

About time again for one of these. Questions, suggestions, ideas, and so forth.

17 October 2009

Barb Didrichsen's Blog

I've been remiss about mentioning a friend, Barb Didrichsen, who has been blogging for a while. Her latest note is A Conversation on Climate Change. It was also a contribution to Blog Action Day, where an attempt was made to have climate be the topic of the day on as many blogs as possible.

As you'll see in reading her article, Barb is not a scientist. She's an interested and interesting person and a writer. Take a look.

13 October 2009

Holding place

Not much going on by way of things that I'm writing up. Or at least not that I'm finishing for blog purposes. The better news is that I'm doing more reading, of various sorts. That includes some fun -- Terry Pratchett's _Unseen Academicals_, _The Callendar Effect_ by James Rodger Fleming (biography of Guy S. Callendar, arguably the inventor of the CO2-induced climate change theory) and _Species: A History of the Idea_ by John S. Wilkins (which I've mentioned here before; I should be meeting up with John at the National Museum of Natural History on the 24th). And some also fun, but also more serious, in some respects, reading, including some books on amateur scientist and amateur engineer experiments, and my too-large backlog of Science and Nature (and EOS, Bulletin of the American Meteorological Society, Association for Computing Machinery, ...).

Reading, learning what other people are doing or have found out about the universe, is almost always the start of my own creative activities. At the very worst it is like when I volunteered at the mile 21 water stop for a marathon. At that point, my longest race was 10 miles (16.1 km), and my longest run was a half marathon (13.1 miles, 21.1 km). After seeing the people coming past me, who were still perfectly able to chat, thank us volunteers, ask where to toss their empties, and such, there was just no excuse left for me about finishing my own marathon. Some, I wouldn't have bet a quatloo could run 2 miles from looking at them, much less be cheerfully passing mile 21. They proved to me that if you do the training, you can do the race.

For writing, if I see some very poor stuff, or some stuff that is not all that 'brilliant' (more than one paper has been published on things that I never bothered to write up), then it's a bit of a kick to get up and start my own writing. If it's great stuff, then it's energizing -- look at all that great stuff people are doing out there. Time for me to add a good thought or two. Win-win.

Also to be coming, and in keeping with my aim for educational content, is that I'll be visiting a school at the end of this month. Still working out some details on the what and how for my visit. I want, always, for my visits to classrooms to add to what the teacher was trying to do, and support learning by the students. Plus, obviously, there are some messages of my own I want to get across at the same time -- science is interesting, the universe around us is interesting, and it is understandable, and the students can indeed do some of that understanding and figuring out. Related to that, I'll probably be putting up a note or two. Looks like it's time for something about clouds and hurricanes.

Along with doing my reading, I'll be writing up some thoughts about some of the books. A pair I'll definitely be mentioning shortly are Danica McKellar's Kiss My Math and Math Doesn't Suck. If you are, or know someone who is, in what I take to be the target audience -- teenage girls who are struggling with math, and/or are having boy-induced problems about math -- go ahead and get the books.

Plus the usual odds and ends. Clearly there's a lot more to say about evaluating forecasts, and I'll be doing so. And much more to the world of sea ice, and climate. And ... well, the universe is a very interesting place. Suggestions always welcome too. Several of the notes I've liked most have been from reader suggestions.

In the mean time, for new content I'll suggest again to my adult readers my wife's blog Vickie's Prostitution Blog. Current is a 2 (maybe more, part 1 is up now) part look at How much money do prostitutes make. I give away little indeed in observing that the answer is very, very little. That contrasts starkly with the impression you might have from media, or some economists' write ups (one of which Vickie addresses more directly).

10 October 2009

Sea Ice Finals 2009

The final September figures are in from the NSIDC. In terms of the Connolley-Grumbine bet, William lost. So 50 quatloos will find their way to me. Or, given how close it was (5.38 was our dividing line, and 5.36 was the observation), we could go double or nothing on next year's ice.

I'll also mention a few other predictions, or methods:


Method             Prediction   Error
Climatology-15a       7.23       1.87
Climatology-30        6.63       1.17
Climatology-15b       6.16       0.80
Connolley Line        5.84       0.48
Grumbine Curve        4.92       0.44
Persistence           4.68       0.68



Climatology 15a is the average of the first 15 years. 15b is the average for the last 15 years (not counting 2009!). Climatology 30 is the average of the first 30 years of the satellite record.

Persistence is to say that this year's ice will be the same as last year's. For atmospheric temperatures, persistence is a pretty good forecaster for the first two days (at least in the sense that it is closer to what you see than climatology). For sea ice, it is not so good, beating climatology only 17 of 30 years. It's interesting, however, that its losses are strongly clumped. In the 14 years from 1990-2003, persistence won 2 and lost 12 versus climatology-30. In the remaining 16 years, it went 15-1.
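For anyone who wants to redo that bookkeeping, a sketch of the comparison, assuming you have a list of September average extents in year order:

    # Sketch only: how often does persistence beat climatology as a forecast?
    def persistence_wins(extents, climatology):
        """extents: September average extents in year order (million km^2)."""
        wins = 0
        for last_year, this_year in zip(extents, extents[1:]):
            if abs(this_year - last_year) < abs(this_year - climatology):
                wins += 1
        return wins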

Update: Per William's request, I'll add the ARCUS estimates (as given in the full report) for June's report. I believe the values all were rounded to the nearest 0.1 million km^2, so for consistency will list mine at 4.9 here.
Method                                                  Prediction   Error
Canadian Ice Service                                      5.0         0.36
Hori, Naoki, Imaoka                                       5.0         0.36
Nguyen, Kwok, Menemenlis                                  4.9         0.46
Lindsay, Zhang, Stern, Rigor                              4.9         0.46
Kaleschke and Halfmann                                    4.9         0.46
Grumbine                                                  4.9         0.46
Fowler, Drobot, Maslanik                                  4.9         0.46
Stern                                                     4.7         0.66
Arbetter, Helfrich, Clemente-Colon                        4.7         0.66
Pokrovsky                                                 4.6         0.76
Stroeve, Meier, Serreze, Scambos                          4.6         0.76
Kauker, Gerdes, Karcher, Kaminski, Giering, Vossbeck      4.3, 4.6    1.06, 0.76
Zhang                                                     4.2         1.16

To judge from the graphic that accompanied it, however, the bar chart was done with figures that had more precision, as Kaleschke and Halfmann's 4.9 is clearly higher than Fowler and company's.

08 October 2009

Saving lives

A while back, I mentioned that the fact I'm still walking around and able to write to you is due to modern science.

I'm not going to write often about it (this being only the second post in over a year), but the truth remains that a lot of us are still walking around because of modern science. The single biggest contributor to that is vaccination. There's a very nice video here, from a pediatrician, Joseph Albeitz (h/t Phil Plait), that outlines some of the magnitude of good that vaccination has done:




In looking at vaccination, as he discusses, you're looking at saving hundreds of millions of lives. The list of things that could contend for saving more is awfully short.

One thing he mentions in passing, which I'll spend a little more time on, is herd immunity. There's a feeling out there, a false sense of security, that as long as 'everyone else' is vaccinated, it doesn't matter if your kids are. If your kids were the only unvaccinated kids, that might be true. But, in reality, you're not the only person who might think that way. Your kids interact with many other children. Once the number of unvaccinated children is high enough (depending on the disease it's in the range 10-30%), the disease can establish itself and spread. The 'herd' is immune only if enough people are immune. Once enough fail to vaccinate, you're a breeding ground for the disease. Worse, you're a breeding ground to infect people who did get vaccinated -- vaccines aren't 100% effective in all people. If enough of you are carriers, then the disease can spread to other kids and kill them.
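The arithmetic behind those thresholds isn't in the video, but the standard textbook version is simple enough to sketch.  The R0 values below are only illustrative -- they vary by disease, study, and setting.

    # Sketch only: the usual herd immunity estimate.  With a basic reproduction
    # number R0 and vaccine effectiveness e, roughly (1 - 1/R0) / e of the
    # population needs to be vaccinated to keep the disease from spreading.
    def vaccination_threshold(r0, effectiveness=1.0):
        return (1.0 - 1.0 / r0) / effectiveness

    for disease, r0 in [("measles", 15.0), ("polio", 6.0), ("mumps", 5.0)]:
        print(disease, round(vaccination_threshold(r0, effectiveness=0.95), 2))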

I know that measles is commonly considered a trivial disease. But that 'trivial' disease kills over a million people per year (listen to the video). I'm thinking that not killing off a million children each year would be a good thing. Similarly for the numbers of polio victims -- with vaccination, the number who would get it goes to zero. Both of these diseases are like smallpox in an important way -- they only spread between people. If we reached a point where nobody had the disease, as was the case for smallpox, then nobody would ever again need to be vaccinated against it. It would be gone. As the doctor mentions, they're about 99% of the way there for polio. Measles has farther to go. Both, amazingly, can be eradicated.

Digressing a second, but not really, is my genealogy. One of my direct ancestors died from smallpox. Lived long enough to have kids, obviously. But died 20-30 years early because of the smallpox. As many a person who looks in to genealogy has observed, you see a lot of very short lives when you look back then (1700s - mid 1800s), many of them children who were never even named. Much of the reason for that change is vaccination. That ancestor (Zeboeth Brittain, how's that for a name?) died before the vaccine was discovered -- 1790, vs. 1796 for Edward Jenner's discovery. But I think he'd be amazed at the idea that the disease that killed him could be erased from the face of the planet -- and it now has been. Measles and polio can be as well.

30 September 2009

Assessing predictions

It's a little premature to make a detailed assessment of the predictions for September's average extent as the final numbers aren't in. They will be soon, but my focus is actually over on the question of how to go about doing the comparisons. Earlier, I talked about testing ideas, but there, the concern was more one of how to find something that you could meaningfully test. Here, with the September's average extent, we already have a well-defined, meaningful thing to look at.

Our concern now is to decide how to compare the observed September average extent with the climatological extent, and a prediction. While mine wasn't the best guess in the June summary at ARCUS, it was mine, so I know what the author had in mind.

Let's say that the true number will be 5.25 million km^2. My prediction was 4.92. The useless approach is to look at the two figures, see that they're different, and declare that my prediction was worthless. Now it might be, but you don't know that from just the fact that the prediction and the observation were different. Another part of my prediction was to note that the standard deviation of the prediction was 0.47 million km^2. That is a measure of the 'weather' involved in sea ice extents -- the September average extent has that much variation just because weather happens. Consequently, even if I were absolutely correct -- about the mean (most likely value) and the standard deviation -- I'd expect my prediction to be 'wrong' most of the time. 'Wrong' in that useless sense that the observation differed by some observable amount from my prediction. The more useful approach is to allow for the fact that the predicted value really represents a distribution of possibilities -- while 4.92 is the most likely value from my prediction, 5.25 is still quite possible.

We also like to have a 'null forecaster' to compare with. The 'null forecaster' is a particularly simple forecaster, one with no brains to speak of, and very little memory. You always want your prediction to do better than the null forecaster. Otherwise, people could do as well or better with far less effort than you're putting in. The first 'null forecaster' we reach to is climatology -- predict that things will be the way they 'usually' are. Lately, for sea ice, we've been seeing figures which are wildly different from any earlier observations, so we have to do more to decide what we mean by 'climatology' for sea ice. I noticed that the 50's, 60's, and 70's up to the start of the satellite era had as much or somewhat more ice than the early part of the satellite era (see Chapman and Walsh's data set at the NSIDC). My 'climatological' value for the purpose of making my prediction was 7.38 million km^2, the average of about the first 15 years of the satellite era. A 30 year average including the last 15 years of the pre-satellite era would be about that or a little higher. Again, that figure is part of a distribution, since even before the recent trend, there were years with more or less (than climatology) ice covers.

It may be a surprise, but we also should consider the natural variability in looking at the observed value for the month. Since we're really looking towards climate, we have in mind that if the weather this summer were warmer, there'd be less September ice. And if it were colder, or different wind patterns, there would have been more ice this September. Again, the spread is the 0.47 (at least that's my estimate for the figure).

I'll make the assumption (because otherwise we don't know what to do) that the ranges form a nice bell curve, also known as 'normal distribution', also known as 'Gaussian distribution'. We can then plot each distribution -- from the observed, the prediction, and what climatology might say. They're in the figure:



This is one that makes a lot of sense immediately from the graphic. The Observed and Prediction curves overlap each other substantially, while the curves for Observed and Climatology are so far from each other that there's only the tiniest overlap (near 6.4). That tiny overlap occurs for an area where the curves are extremely low -- meaning that neither the observation nor the climatology is likely to produce a value near 6.4, and it gets worse if (as happened) what you saw was 5.25.
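If you want to redraw that figure yourself, here's a sketch using the numbers from this post (observed 5.25, prediction 4.92, climatology 7.38, spread 0.47 for each curve):

    # Sketch only: the three bell curves being compared.
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import norm

    x = np.linspace(3.0, 9.0, 601)
    curves = {"Observed": 5.25, "Prediction": 4.92, "Climatology": 7.38}
    for label, mean in curves.items():
        plt.plot(x, norm.pdf(x, loc=mean, scale=0.47), label=label)
        print(label, norm.pdf(5.25, loc=mean, scale=0.47))  # density at the observed value
    plt.xlabel("September average extent (million km^2)")
    plt.ylabel("Probability density")
    plt.legend()
    plt.show()

The printed densities give one simple overlap score: the climatology curve assigns essentially zero likelihood to the observed value, while the prediction curve assigns it a healthy amount.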

The comparison of predictions gets harder if the predictions have different standard deviations. I could, for instance, have decided that although the natural variability was .47, I was not confident about my prediction method, so taken twice as large a variability (for my prediction -- the natural variability for the observation and for the climatology is what it is and not subject to change by me). Obviously, that prediction would be worse than the one I made. Or at least it would be given the observed amount. If we'd really seen 4.25 instead of 5.25, I would have been better off with a less narrow prediction -- the curve would be flatter, but lower. I'll leave that more complicated situation for a later note.

For now, though, we can look at the people who said that the sea ice pack had 'recovered' (which would mean 'got back to climatology') and see that they were horribly wrong. Far more so than any of the serious predictions in the sea ice outlook (June report, I confess I haven't read all of the later reports). The 'sea ice has recovered' folks are as wrong as a prediction of 3.1 million km^2 would have been. Lowest June prediction by far was a 3.2, but the authors noted that it was an 'aggressive' prediction -- they'd skewed everything towards making the model come up with a low number. Their 'moderate' prediction was for a little over 4.7. Shift my yellow triangle curve 0.2 to the left and you have what theirs looks like -- still pretty close.

To go back to my prediction, it was far better than the null forecaster (climatology), so not 'worthless'. Or at least not by that measure. If the variability were small, however, then the curves would have narrow spikes. If the variability were 0.047, ten times smaller than it is, the curves would be near zero once you were more than a couple tenths away from the prediction. Then the distribution for my prediction would show almost no overlap with the observation and its distribution. That would be, if not worthless (at least it was closer than climatology), at least hard to consider having done much good.

17 September 2009

Sea Ice Bet Status

It looks like the Arctic sea ice extent has bottomed out, as of the 12th or so. I'm confident that a good storm system could give us a new minimum -- both by slamming up the loose ice in the western Arctic (reducing extent by pushing the ice pack together) and by mixing up warmer water from the ocean (reducing extent by melting the ice). But, as a rule, this sort of thing is rare. A storm would have to hit the right area in the next few days. Otherwise the atmosphere will be cold enough to simply keep freezing new ice.

So, starting to be time to assess our various guesses. William Connolley and I made our 50 quatloo wager on over (his side) or under 5.38 million km^2 for the September average. The minimum, if we have indeed seen the minimum, is about 5.1 million. That looks favorable for my side of the bet. Though it would mean I definitely missed the September average, as I said that would be 4.92. More about that in a moment. The figure below suggests that since the extent dropped below 5.38 right about the start of September, I should be safe. Usually (see the climatological curve) the pack doesn't gain much area in September. But William could still win if we have an unusual last two weeks and the ice pack gains a lot of extent.




I've added a few lines to the NSIDC graphic of 12 September. One is the vertical line, to highlight when it is we dropped below the climatological minimum. We've been below normal since early August. That in itself suggests a climate change. We're now about 3 standard deviations below the climatological minimum, which again, in such a short record, suggests a climate change. The significance of the extra large amount of ocean being exposed to the atmosphere, for an extra long time, is that it lets more ocean absorb more heat from the sun. Though this year looks to be a higher extent than 2007 and 2008, it's still below any year except 2007 and 2008. If we didn't know about those two years, we'd be surprised by this year being so low -- the 2005 September average extent (the record before 2007) was 5.57 million km^2 -- far higher than this year is liable to average.

Still early to decide whether I owe William, or vice versa. Both of us will win our bets with Alastair. Looking down to the poll that I invited you to answer back in June, I'll say that the people who called for 7.5 million (the previous climatology) and 6.0 million km^2 are wrong. Also the 1 who went for 3, the 2 who went for 3.5, and the 4 who went for 4 million km^2 for the month's average. The 12 who went for 4.5 (which means anything in the range 4.25 to 4.75) should be pulling for a really massive storm to hit the western Arctic and obliterate huge amounts of ice extent. The main candidates are the 3 who went for 5, and the 1 who went for 5.5 (ranges of 4.75 to 5.25, and 5.25 to 5.75, respectively).

Something else this brings up (or at least this plus some comments I saw at a different site) is "How do you judge the quality of predictions?" I'll be coming back to this, using the Sea Ice Outlook estimates for my illustrations.

16 September 2009

Title decoding

Title of a recent paper in Science: Motile Cilia of Human Airway Epithelia are Chemosensory (Shah and others, vol 325, pp. 1131-1134, 2009.)

Time to apply the Science Jabberwocky approach, as I'm unfamiliar with many of those terms:

Mimsy borogoves of Human Airway Bandersnatches are Frumious.
(motile) (cilia) of Human Airway (epithelia) are (chemosensory)

4 terms we need to get definitions of (those of us who don't already know them, that is).

Cilia, whatever they are, can apparently be motile or non-motile. By the writing, that doesn't seem to be the new observation. But that they can be frumious, er, chemosensory, is apparently news.

The abstract itself tells us what the cilia are -- microscopic projections that extend from eukaryotic cells. (If we know what a eukaryotic cell is, we're set. Otherwise, we have to do a little more research, and discover that eukaryotic cells are those with separate parts to them, including a nucleus -- that covers all animals, plants, and fungi).

We also have to go look up 'epithelial'. We're ahead of the game if we know that epi- tends to have something to do with 'on the surface'. Epithelial cells are those that are on the surface of our body cavities -- lungs, digestive system, etc..

Chemosensory ... well, sensory is nicely obvious. Chemo- as a prefix means that the cells are sensing chemicals.


So with a little decoding work, and perhaps using a google search for definitions (enter define:epithelial as your search and you'll get links to the definition of epithelial), we arrive at our understanding of the title: there are cells lining the surface of our airways that have little extensions.  The authors show that the extensions are sensitive to chemicals.

In reading the paper itself, we find that it is particular kinds of chemicals that these cilia are sensitive to -- 'bitter'. When they detect such compounds in the air, they start getting active and try to flush out the bad stuff they've detected.


The conclusion is not especially a surprise to me. I've long been confident that my airways were sensitive to certain chemicals (though I didn't know which). Walking past a perfume counter has always been a problem for me, as my lungs shut down, or at least try to. Folks have said that it's just my imagination, and that all that is happening is that I'm smelling the perfume and causing the rest myself. That doesn't work well as a hypothesis because I have an exceptionally bad sense of smell. Usually the way I know the perfume is present is because I start having more difficulty breathing. The paper also corresponds to a different experience of mine. Namely, I don't have such reactions to flowers, even flowers in large masses as we get in spring with the honeysuckle or lilac bush. The cilia are reactive to bitter compounds (known from the paper) and probably (a point that's very testable) perfumes have more such compounds than flowers do.

Per my usual, I've written the corresponding author about this post. Also, if the sample donation process is quick, easy, painless, harmless, I'm willing to donate a sample of my highly-reactive (I think) epithelial cells for their further research.

15 September 2009

Good science, wrong answer

Sometimes it happens that somebody does good science, but has arrived at a wrong answer. Since most of us think that the answer has to be right (and I'll agree that it's better when it is), this will take some explaining. Let's go back to what science is about -- trying to understand the universe in ways that can be shared. Good science, then, is something that leads to us understanding more about the universe.

For my illustration, I'll go back to something now less controversial than climate. In the 1980s, paleontologists David Raup and J. J. Sepkoski advanced the idea that mass extinctions, such as the one that clobbered the dinosaurs, were periodic. Approximately every 26 million years, for the last about 250 million years, they observed a spike in the extinction rate. Not all were as large as the one that got the dinosaurs.

It so happened that they were at the University of Chicago, in the Department of Geophysical Sciences, and so was I.  Further, I was working with time series for my master's thesis.  So I asked them about working with their data and seeing what I would find with my very different approach.  They were gracious and spent some time explaining what I was looking at, knowing that I didn't think they were right.  By my approach, indeed, their idea did not stand.  My approach, however, was not a strong one, being susceptible to some important errors.  So I never published about it.  Still, along the way, I learned more both about time series and about paleontological data.  So that's a plus making the periodic extinction idea 'good science' -- I, at least, learned more about the universe, even if not enough to make an original contribution.

One mark of good science is that it prompts further research.  Raup and Sepkoski, in their original paper, had made a reasonable case.  'Reasonable' being that it could not be shot down by any simple means.  'Simple' meaning that the answer was already in the scientific literature.  So to knock down the case, itself a normal process in science, the critics had to do some research to show how weaknesses or errors in one or more of the following led to the erroneous conclusion:
* The statistical methods
* The geological time scale
* The paleontological data (extinction figures, and their dating)

The idea was not sensitive to the geological time scale used, so that fell away fairly quickly.  The statistical methods did develop a longer-lasting discussion -- new ones developed, flaws in the new and the old methods described (and then discussion about whether the claimed flaws were real).

Most interesting to me, and I think where the greatest good for the science was, was going back to the data. In saying that the extinctions were periodic, one carried the image of something crashing into the earth (like the meteor that did in the dinosaurs) and killing off huge numbers of species (and genera, and families) very quickly. One of the data problems, then, was getting accurate dates for the time of extinction. Often the data could only say that the things went extinct sometime within a several million year window. That's a problem, as then your view of whether it was periodic could depend on whether you put the date of extinction at one end of the geological period or another. So people went to work on getting better dates for when the species went extinct.

Also, I noted above that the original idea applied to the last 250 million years. The reason was, when they started that was as far back as you could go with reasonable data. So work also went in to trying to push back the period of reasonable data.

I don't know what the field ultimately concluded about the idea. I do know that the work to advance or refute the idea resulted in more data about when species went extinct, and better dates for when they did. Further, those newer and better data are themselves useful for learning more about the universe -- there's more to be gained than just answering the original question about whether mass extinctions were periodic.

So, not only did the original publication result in more being learned about the universe, but it was in a way that enables even more learning to happen. That makes it good science. The original idea might have been wrong, but it definitely was good science.

I've focused on the side of scientific merit here. There was a lot of, well, unprofessional, response as well. You can read about both parts in The Nemesis Affair: A Story of the Death of Dinosaurs and the Ways of Science by David Raup. Part of it was because the idea that any mass extinction had to do with things crashing into the earth was still new, and not yet widely accepted. Then this idea comes along and says that not only does it happen (bad enough), but that it has happened many times, and happens regularly.

14 September 2009

An Intro to Peer Review

I didn't really mean to present an object lesson in why peer review is a good thing. But, having done so, it seems a good time to use it to illustrate what the process looks like.

First step is, somebody has to put something forward for consideration. In this case, my note on field relevance last week. One important aspect of this is, the 'something' has to be said concretely enough that people can point to the mistakes you've made.

The second step is that the comments (reviews) have to point to specific things that are wrong. Ranting about leftists (happened elsewhere) doesn't count. Saying that I grossly understated the relevance of biologists because -- and give reasons for that 'because' -- does.

The third step is for the original author to revise the article in response to the reviewer comments. That doesn't necessarily mean 'do what every reviewer wants', not least because the reviewers (cf. gmcrews and John Mashey) may disagree. But there should be at least some response, if only to add some explanation in the article that addresses why you're not doing (what reviewer X wanted). I'll be doing that later, but am waiting for word from the biology folks about how the field applies to deciding whether and how much of recent climate change is due to human activity.

To summarize the comments some here (do read the originals if you haven't already):
  • Many fields are missing
  • Many fields are placed too high or too low (mostly too low)
  • I conflated two different questions -- whether and how much warming there has been, with whether and how much of it has been from human activity (some irony there, as one of the things I did say was that the picture changes depending on what exactly the question is)
  • Irrespective of whether the previous points were addressed, the approach itself is not useful

Each of these is a common general sort of comment to see in a peer review. To rephrase them more generally:
  • Incomplete
  • Inaccurate
  • Question is not specific enough
  • Question is not interesting, approach is not useful

In terms of my rewriting process, the first two are pretty easy to deal with. Many people made many good comments. Those can be incorporated fairly straightforwardly, along with the fields that the comments prompted me to remember even if they weren't directly mentioned.

The second two, however, aren't quite so obvious. The third is taken care of if I make the question being addressed clearly "How much of the recent warming is due to human activity?" And that is what the graphic actually tried to address (though still with some issues with respect to the first two sorts of comment).

But, is it useful to address that question in this way? My thought was that for non-experts, it could be a useful guide when encountering, say, a 'conference' whose speakers were almost entirely from the lower ranges. On the other hand, those antiscientific conferences are seldom so specific about what they're addressing. Either the figure is focussed on too narrow a question, or many separate such figures would be needed. Experts, or at least folks at, say, K6 and above on Mashey's scale, should just go read the original materials to decide.

    I haven't decided which way to go on this. Comments, as always, welcome. I also realized that it's a long time since I wrote up my comment policy, and link policy, so they are now linked to from the upper right (in the 'welcome' section).

In the meantime, I'm taking down my version of the figure and asking those who have copied it to remove it as well.

    But, to come back to peer review:
All this illustrates why it is you want to read peer-reviewed sources for your science. Nobody knows everything, so papers can otherwise be incomplete, inaccurate, etc. People can also think that something is obvious, but have forgotten about things that they themselves do know (like my temporary brain death about biology as a field for knowing that climate is changing). Or they know certain things so well themselves that they don't write it up well for the more general audience. (Even in a professional journal, most of the readers aren't in your particular sub-sub-sub-field. 'More general' may only mean making it accessible to the sub-sub-field instead, but that can still be a challenge.) In a productive peer review process, these questions are all addressed.

    10 September 2009

    Climate and Computer Science

    I'll pick up John Mashey's comment from the 'relevance' thread, as it illustrates in another way some of what I mean regarding relevance, and about who might know what. He wrote:

    As a group, computer scientists are properly placed in the last tier.

    Once upon a time, computer scientists often had early backgrounds in natural sciences, before shifting to CMPSC, especially when there were few undergraduate CMPSC degree programs.

    This is less true these days, and people so inclined can get through CMPSC degrees with less physics, math, and statistics than one would expect.

    Many computer scientists would fit B3 background, K2-K3 level of knowledge on that chart I linked earlier.

    On that scale, I only rate myself a K4, which corresponds roughly to Robert's Tier 5. Many CMPSC PhDs would rate no higher than K2 (or even K1, I'm afraid, on climate science).


Of course John is one who has been spending serious effort at learning the science, so although our shortcut puts him on a low tier in this area (he's high for computer science!), the earned knowledge is higher. Best, of course, is to work from the actual knowledge of the individual. On the other hand, presented with a list of 60 speakers at a meeting, and seeing few from fields in the upper levels (applicable to the topic at hand), it's not a bad bet that the meeting isn't really about the science (or whatever expertise is involved).

    If we're talking specifically about climate modellers, we're talking about people who use computers a lot, and make the computers run for very long periods. So, does that mean that all climate modellers are experts about computers the way that computer scientists are? Absolutely not. Again, different matters. Some climate modellers, particularly those from the early days, are quite knowledgeable about gruesome details of computer science. But, as with computer scientists and climate models, that's not the way to bet.

I'll link again to John's K-scale. A computer scientist spends most of their time learning about computer science. At low levels, this means things like learning programming languages, how to write simple algorithms, and the like. Move up, and a computer scientist will be learning how to write the programs that turn a program into something the computer can actually work with (compilers), how to write the system that keeps the computer doing all the sorts of processing you want it to (operating systems), interesting (to computer scientists, at least :-) things about data structures, databases, syntactic analysis (how to invent programming languages, among other things), abstract algorithms, and ... well, probably quite a few more things. It's a long time since I was an undergraduate rooming with the teaching assistant for the operating systems class. Things have changed, I'm sure.

Anyhow, on that scale of computer science knowledge, I probably sit in the K2-K3 level. I use computers a lot. And, on the scale of things in my field, am pretty good with the computer science end of things. But, considered as matters of computer science, things like numerical weather prediction models, ice sheet models, ocean models, climate models, etc., are just not that involved. The inputs take predictable paths through the program (clouds don't get to change their mind about how they behave, unlike what happens when you're making the computer work hard by making it do multiple different taxing operations at the same time and do what you like to the programs as they run). Our programs are very demanding in the sense that it takes a lot of processing to get to the answer. But in the computer science sense, it's fairly simple stuff -- beat on nail with hammer a billion times; here's your hammer and there's the nail, go to it.

    The climate science, figuring out how to design the hammer, what exactly the nail looks like, and whether it's a billion times or a trillion you have to whack on it -- that part is quite complex. So, same as you can do well in my fields with only K2-K3 levels of knowledge of computer science, computer scientists can do well in theirs with only K2-K3 knowledge of climate science (or mechanical engineering, or Thai, or Shakespeare, ...).

    Again, what the most relevant expertise is depends on what question you're trying to answer or problem you're trying to solve. If you want to write a climate model, you should study a lot of climate science, and a bit of computer science. To write the whole modern model yourself, you'll want to study meteorology, oceanography, glaciology, thermodynamics, radiative transfer, fluid dynamics, turbulence, cloud physics, and at least a bit (these days) of hydrology, limnology, and a good slug of mathematics. On the computer science side, you need to learn how to write in a programming language. That's it. It would be nice to know more, as for all things. But the only thing required, from a computer science standpoint, is a programming language. No need for syntactic analysis, operating system design, or the rest of the list I gave above. Not for climate model building, that is. If you want to solve a different problem, they can be vital. (I include numerical analysis in mathematics -- the field predated the existence of electronic computers. Arguably so did computer science. But the modern field, as with modern climatology, is different than 100 years ago.)

    09 September 2009

    Vickie is now blogging

My wife has started, tonight, blogging. It is about her experiences volunteering at one of the few nonprofit organizations that work with prostituted women. For the adults in my audience, I strongly recommend reading it. Excellent writing, and a real problem. (Yes, I'm probably biased, as Vickie will be the first to tell you. But the Maryland State Arts Council is beyond my radius of influence, and they awarded her the major first prize a couple of years ago, as did the MD Writer's Association. Read for yourself.)

    Her blog is Vickie's Prostitution Blog.

    I've also established a facebook group for her, 'Vickie Grumbine Writing'
    Update: Vickie Grumbine Writing. Thanks thingsbreak.

    08 September 2009

    What fields are relevant?

I've never met someone who knew everything. Certainly I've met some very bright people, and people who knew quite a lot. But nobody has known everything. Conversely, I'm a bright guy, and know a lot of stuff, but I've never met anybody who didn't know things that I didn't. That includes an 8-year-old who was pointing out to me how to identify some animal tracks (they'd talked about this in her science class recently).

My rule of thumb is that people know best what they've studied the most. That's why I go to a medical doctor when I'm sick, but take the dogs to a veterinarian when they're sick. I call up a plumber when the water heater needs replacing, and take my car to an auto mechanic when it needs work. And not vice versa on any of them. It might be true that the auto mechanic is also a good plumber. But, odds are, the person who focused on learning plumbing is the better plumber.

    None of this should be a surprise to anybody, yet it seems in practice that it is once we come to climate. Let's be a little more specific in that -- make it the question of whether and how much human activity is affecting climate. There are many other climate questions, but it's this one that attracts the attention, and lists of people on declarations and petitions. If you look only at the people who have professionally studied the matter and contributed to our knowledge of the matter, then the answer to the question is an overwhelming 'yes', and a less overwhelming but substantial 'about half the warming of the last 50 years'.

I've tried to set up a graphic (you folks who have actual skills in graphics are invited to submit improved versions!) of 'the way to bet'. The idea is to provide a loose, relative guide to which fields most commonly have people you can reasonably expect to have studied material relevant to the question of global warming and human contributions to it, from the standpoint of the natural science of the climate system.

    Climatology, naturally, is on the top tier -- many people in that field will have relevant background. Not all, remember. Some climatologists look no further than their own forest (microclimatology of forests -- how the conditions in the forest differ locally from the larger scale averages) or other small area, or small time scale. Still, many will be relevant.

    Second tier, fewer of the people will be climate-relevant, but still many. Oceanography, meteorology, glaciology.

    Third tier, most people will not be climate-relevant. But some have made their way, at least, from those fields over to studying climate. That includes areas like Geomorphology (study of the shape of the surface of the earth) and quantum physics (the ones who come to climate were studying absorption of radiation).

    Fourth tier, almost nobody is studying things relevant to the question I posed. The extremely rare exception does exist -- Judith Lean has come from astrophysics and done some good work (with David Rind, a more classically obvious climate scientist) regarding solar influence on climate. Milankovitch was an astronomer/mathematical analyst who developed an important theory of the ice ages.

    Fifth tier, I don't think anybody has studied the question I posed directly. I do know a couple of nuclear physicists who have moved to climate-relevant studies. But they essentially started their careers over with some years of study to make the migration. In this, it's more a matter that they once were nuclear physicists. After some years of retraining, they finally were able to make contributions to weather and climate. At which point, really, they were meteorologists who happened to know surprisingly large amounts about nuclear physics.

    Sixth tier, I wouldn't include at all except that they show up sometimes on the lists. My doctor is a good guy, bright, interested, and so on. But it takes a lot of work studying things other than climate to become a doctor, and more work after the degree is awarded to stay knowledgeable in that field. That doesn't leave a lot of time to become expert in some other highly unrelated field.

    [Figure removed 14 September 2009 -- See Intro to Peer Review for details]


    Suggestions of areas to add, or to move up or down, are welcome. I'm sure I have missed many fields and others are probably too high or low.

For now, though, if you're not an expert on climate yourself, I'll suggest that if the source is in the first two tiers, there's a fair chance that they've got some relevant background. If they're in the bottom three, almost certainly not -- skip those. The third level is probably one to skip as well, but maybe pencil those sources in for later study, after you've developed more knowledge yourself from studying sources on the first two levels.

    This ranking, of course, applies to the particular question asked. If the question is different, say "What are the medical effects of a warmer climate?", the pyramid would be quite different and MD's would be the top tier. Meteorology would move down one or two levels. Expertise exists only within some area. As I said, nobody knows everything.

    Update:
    frequent commenter jg has contributed the following graphic:


One good general change he's made is to distinguish between general skills that can transfer to studying climate and the particular sorts of detailed skills or knowledge one might have. Almost everyone involved in studying climate, for instance, knows some statistics and mathematical analysis. Many other fields also require such knowledge, so people from those fields would find it easier to move over to climate.

Another good change he made was to put the question directly into the graphic. This is important. As I said, but didn't illustrate, the priority list depends on exactly what question is at hand.

    04 September 2009

    One dimensional climate models

    Some time back, I described the simplest meaningful climate model, and then gave a brief survey of the 16 climate models.

The next four I'll take up are the four 1-dimensional climate models. These are the models that vary only in longitude, only in time, only in latitude, or only in the vertical. I'll take them in that order. That turns out to be the order of difficulty, and the order of interest. It isn't until the vertical that we'll get to how exactly it is that the greenhouse effect works.

    On the other hand, with the model in latitude we'll see some powerful statements about the fact that energy has to move from the equator towards the pole. Not just the fact, but how much, and how it changes with latitude.

    In the model with only time, we can look a little more at things we were thinking towards with the simplest model -- what happens if the solar output varies, or if the earth's albedo does. More is involved, and required, than just that. We'll have to start paying attention to how energy is taken up in the atmosphere, ocean, ice, and land. Not a very large amount of attention -- we can't tell the difference between the poles and the equator, or upper vs. lower atmosphere or ocean. But it's a start.

    But for now, let's look at the simplest model in longitude only. As with any of our models, they start with the conservation of energy. The energy coming in is, as before, from the sun. How much energy arrives does not depend on what longitude we're at. Remember, even though the sun rises in the east and sets in the west -- east and west being matters of longitude -- the sun does eventually rise everywhere.

    Energy coming in has to be balanced by energy going out. If it weren't, things would be changing over time and there is no time in this model. One part of the energy going out is the solar energy that gets bounced straight out. This fraction is called the albedo. Now albedo is something that can depend on longitude. For instance, land is more reflective than ocean. And along, say, 30 E, the earth is mostly land, while along, say 170 W, it is almost entirely ocean. Clouds can be anywhere. So ... we arrive at one of those unpleasant realities -- we have to get some data.

Normal business. Working through the process tells us that we need the albedo averaged over time (say, some years) and over all latitudes, for each longitude. (We don't have to average over elevation because albedo is defined as the energy bounced out -- from whatever level of the atmosphere -- divided by the energy coming in.)

    Once we have that, we can compute the temperatures at each longitude that will permit us to balance, with terrestrial radiation out, the incoming energy. These temperatures should be something like the blackbody temperature of the earth we found in the simplest model. But they'll vary some.
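
For anyone who wants to see the arithmetic, here is a minimal sketch in Python of that balance. It assumes the same bookkeeping as the simplest model -- absorbed sunlight, S/4 times (1 - albedo), balanced by blackbody emission, sigma times T to the fourth -- and the per-longitude albedo values are invented placeholders, not data.

```python
import numpy as np

S = 1361.0       # solar constant, W/m^2
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

lons = np.arange(0, 360, 30)          # longitude bands, degrees east
albedo = np.full(lons.shape, 0.30)    # hypothetical: planetary-mean albedo everywhere
albedo[lons == 30] = 0.33             # pretend the land-heavy band near 30 E is a bit brighter
albedo[lons == 180] = 0.28            # and an ocean-dominated band a bit darker

# Balance per band: S/4 * (1 - albedo) = sigma * T**4
T_balance = (S * (1.0 - albedo) / (4.0 * sigma)) ** 0.25

for lon, temp in zip(lons, T_balance):
    print(f"{lon:3d} E: {temp:5.1f} K")
```

With a planetary-mean albedo of 0.30 this comes out near 255 K, which is the familiar blackbody temperature of the earth; the brighter and darker bands shift a couple of degrees either way.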

The next piece of data we'll need is the set of observed blackbody temperatures, by longitude. Then we'll compare the simplest model to the observations.

One thing we'll be looking for in our comparison is that, now that we've added longitude, a new thing can happen. In the simplest model, the energy coming in had to be balanced, right there, by energy going out. Now that we have longitude, it's possible for energy to shift from one longitude to another. The Gulf Stream and North Atlantic Currents, for instance, move a lot of energy from west to east. If no energy is being transported, on average, then the temperature for a longitude will be just what we expect. If there's a mismatch, energy has to be getting moved from one longitude to another.
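
To make that bookkeeping concrete, here's a tiny self-contained sketch of how a mismatch shows up as transported energy. The observed temperature and albedo are invented numbers (as I say below, I haven't collected the data yet); the logic is the thing.

```python
S = 1361.0       # solar constant, W/m^2
sigma = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

albedo = 0.30    # hypothetical band-average albedo for one longitude band
T_obs = 257.0    # hypothetical observed blackbody temperature for that band, K

absorbed = S / 4.0 * (1.0 - albedo)   # solar energy absorbed by the band, W/m^2
emitted = sigma * T_obs ** 4          # terrestrial radiation leaving the band, W/m^2

# Positive: the band radiates more than it absorbs, so energy must be arriving
# from other longitudes. Negative: the band is exporting energy.
print(f"Implied energy import: {emitted - absorbed:+.1f} W/m^2")
```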

    I haven't collected the data yet, so I don't really know how it will turn out. I expect that clouds will cover the albedo differences between land and ocean to a fair extent, so the temperatures we'll compute will be fairly constant. I also expect that heat transport by longitude will be small -- the Gulf Stream's eastward warm current is balanced at least partly by a cool current (relative to local temperatures, that is!) at the equator.

    On the other hand, I haven't looked at the data yet, so there is room for surprise. That'll be fun. Means we get to learn more than we expected.

    02 September 2009

    Models and Modelling

    "All models are wrong. Some models are useful." George Box

    Box was a modeller, and the sentiment is widely spread among modellers of all kinds. This might be a surprise to many, who imagine that modellers think they're producing gospel. The reality is, we modellers all acknowledge the first statement. We are more interested in the second -- Some models are useful.

    But let's back up a bit. What is a model? In figuring out some of this, we'll see how it is that models can be imperfect, but still useful.

One thing to remember is that there are several sorts of model. On fashion runways or covers of magazines, we'll see fashion models. In hobby shops, we can get a model spacecraft or car. We could head more towards science, and find a laboratory model, a biological model animal, a statistical model, a process model, a numerical model, and so on.

    Common to the models is that they have some limited purpose. A fashion model is to display some fashion to advantage -- making the dress/skirt/make up/... look good. She's not to be considered an attempt to represent all women accurately. The model spacecraft is not intended to reach the moon. But you can learn something about how a spacecraft is constructed by assembling one, and the result will look like the real thing.

In talking about a laboratory model, read that as being a laboratory experiment. You hope that the set-up you arrange in the lab is an accurate representation of what you're trying to study. The lab is never exactly the real thing, but if you're trying to study, say, how much a beam flexes when a weight is put in the middle, you might be able to get pretty close. If you want to know the stability of a full-size bridge, with full-size beams and welds and rivets assembled by real people, it'll be more of a challenge -- representing the 1000 meter bridge inside your lab that's only 10 meters long. It won't be exact, but it can be good enough. Historical note for the younger set: major bridges like the Golden Gate Bridge, Brooklyn Bridge, Tower Bridge, and such, were designed and built based on scale models like this. The Roman aqueducts, designed over 2000 years ago, still stand, and never came near a computer. They were all derived from models, not a single one of which was entirely correct.

In studying diseases, biologists use model animals. They're real animals, of course. They're being used as models to study the human disease. Lab rats and such aren't humans. But, after extensive testing was done, it was discovered that rats for some diseases, and other animals for other diseases, reacted closely enough to the way humans do. Not exactly the same. But closely enough that the early experiments and tests of early ideas could be done on the rats rather than on people. The model is wrong, but useful.

Statistical models seem to be the sort that most people are most familiar with. My note Does CO2 correlate with temperature arrives at a statistical model, for instance -- that for each 100 ppm rise in CO2, temperature rises by 1 K. It's an only marginally useful model, but useful enough to show a connection between the two variables, and an approximate order of magnitude of the size. As I mentioned then, this is not how the climate really is modelled. A good statistical model is the relationship between exercise and heart disease. A statistical model, derived from a long term study of people over decades, showed that the probability of heart disease declined as people did more aerobic exercise. Being statistical, it can't guarantee that if you walk 5 miles a week instead of 0 you'll decrease your heart disease chances by exactly X%. But it does provide strong support that you're better off if you cover 5 miles instead of 0. Digressing a second: the same study was (and still is) part of the support for the 20-25 miles per week running or walking or equivalent (30-40 km/week) suggestion for health. The good news being that while 20 is better than 10, 10 is better than 5, and 5 is way better than 0. (As always, before starting, check with your doctor about your particular situation, especially if you're older, have a history of heart problems already, or are seriously overweight). This model is wrong -- it won't tell you how much better, and in some cases your own results might be a worsening. But it's useful -- most people will be better off, many by a large amount, if they exercise.
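
To show just how simple such a statistical model is, here's a sketch of fitting a straight line to made-up CO2 and temperature numbers (invented for illustration, not the data from my earlier note):

```python
import numpy as np

# Invented illustration data: CO2 concentration (ppm) and temperature anomaly (K)
co2_ppm = np.array([315.0, 325.0, 338.0, 354.0, 369.0, 385.0])
temp_anomaly = np.array([0.00, 0.05, 0.20, 0.32, 0.45, 0.55])

# The statistical model is just the best-fit straight line through the points.
slope, intercept = np.polyfit(co2_ppm, temp_anomaly, 1)
print(f"Fitted sensitivity: {slope * 100.0:.2f} K per 100 ppm of CO2")
```

That's the entire content of a model like this -- a slope and an intercept. All the physics is left out, which is part of why it's only marginally useful.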

Process models started as lab experiments, but also are done in numerical models. Either way, the method is to strip out everything in the universe except for exactly and only the thing you want to study. Galileo, in studying the motion of bodies under gravity, stripped the system down, and slowed it down, by going to the process model of balls rolling down sloping planes. He did not fire arrows or cannon balls, or use birds, bricks, etc. He simplified to just the ball rolling down the plane. The model was wrong -- it excluded many forces that act on birds, bricks, and all. But it was useful -- it told him something about how gravity worked. Especially, it told him that gravity didn't care about how big the ball was; it accelerated by the same rules. In climate, we might use a process model that included only how radiation travelled up and down through the atmosphere. It would specify everything else -- the winds, clouds, where the sun was, what the temperature of the surface was, and so on. Such process models are used to try to understand, for instance, what is important about clouds -- is it the number of cloud droplets, their size, some combination, ...? As a climate model, it would be wrong. But it's useful to help us design our cloud observing systems.

Numerical models -- actually, we need to expand this to 'general computational models', as statistical, process, and even some disease models are now done as computational models. These general models attempt to model relatively thoroughly (not as a process model) much of what goes on in the system of interest. An important feature being that electronic computers are not essential. The first numerical weather prediction was done by pencil, paper, and sometimes an adding machine -- more than 25 years before the first electronic computer. Bridges, cars, and planes are now also modelled in this way, in addition to or instead of scale models. Again, all of them are wrong -- they all leave out things that the real system has, or treat them in ways simpler (easier to compute) than the real thing. But all can be useful -- they let us try 'what if' experiments much faster and cheaper than building scale models. Or, in the case of climate, they make it possible to try out the 'what if' at all. We just don't have any spare planets to run experiments on.

    Several sorts of models, but one underlying theme -- all wrong, but they can be useful. In coming weeks, I'll be turning to some highly simplified models for the climate. The first round will be the four 1-dimensional models. Two are not very useful at all, and two will be extremely educational. These are 4 of the 16 climate models.

    28 August 2009

    Catching up

    Catching up with posts, comments, and things in general since coming back from vacation.

    New comments in:
    What is scientific literacy?

    Summary 1 of Simplest Climate Model

    John Mashey: Now that I'm back, any time you're ready to send or post about your K-scale, I'm ready to look or post.

From the trip: Mountain goats keeping cool on some remnant snow/ice at Logan Pass. The rise is a glacial moraine -- bunches of junk, mostly loose rock, that the glacier had pushed out ahead of itself when it was larger.

    17 August 2009

    Off to the glaciers

    By the time you see this, I'll be on my way to see some glaciers. Family vacation time. If I have a network connection, I might post some pictures. Otherwise, see you at the end of August.

    14 August 2009

    Science Jabberwocky

    Twas brillig and the slithy toves
    did gyre and gimble in the wabe.
    All mimsy were the borogoves
    and the mome raths, outgrabe.

    Beware the Jabberwock, my son!
    The jaws that bite, the claws that catch!
    Beware the Jubjub bird, and shun
    The frumious Bandersnatch!


    ... The start of Jabberwocky by Lewis Carroll. No, I'm not going to try to persuade you that there are some deep underlying scientific meanings behind it.

    Rather, it provides some suggestions on how to read science in areas that you're not familiar with. I have to confess that in areas outside mine, there seems to be a terrible array of words no more obvious than 'brillig' and 'slithy'. And words that look familiar, like 'gyre and gimble', but which don't look like they are supposed to mean what I'm used to them meaning.

Still, even with most of the words being unfamiliar, we can read this and know quite a lot. That's part, after all, of what makes Jabberwocky readable at all. So, let's take it line by line, work our way through, and see what we can extract even from intentional nonsense.

    1) Twas brillig and the slithy toves
    a) Brillig probably means something about the weather. We might expect 'twas sunny' (or cloudy, etc.) in a poetic start.
    b) toves are things that can be slithy. We don't know what either of the terms is, but we can get that far. Probably there are also non-slithy toves. Poetry doesn't follow the rules of scientific writing, but usually you won't see a modifier (slithy) unless it's possible for the thing to not be that way.

    2) did gyre and gimble in the wabe.
a) normally gyre and gimble would mean something about spinning. A gyre is a rotating mass of fluid, a gimble is a sort of bearing that permits pivoting -- but both would be nouns, and in this case they're clearly verbs. The toves are gyring and gimbling, or at least they did gyre and gimble. Pretty often, when a noun is turned into a verb, it doesn't mean exactly what it used to. And, when it's applied to something that we don't know, we should tread cautiously, since it may have acquired a different meaning than we're used to. English is nothing if not free with having multiple meanings for words.
    b) wabe ... probably some kind of place. We might be unsurprised with 'field' 'park' or the like here. On the other hand, it could also be more of an event -- a party, ballgame. Or could even be abstract ('the ether', 'the astral plane').

    3)All mimsy were the borogoves
    a) Borogoves are things that can be mimsy
b) Mimsiness occupies some kind of range, from not mimsy at all, to all mimsy. These particular borogoves are all mimsy. I can just see the scientific writing here: "We examined a sample (N = 30) of borogoves, and found their average mimsiness to be 45% with a range of 30 to 95% mimsy." Replace borogove with 'meteorological station' and mimsiness with 'completeness' and there's many a paper on the subject.

    4) and the mome raths, outgrabe.
    a) likewise, raths can be mome. There are some non-mome raths out there probably.
    b) further, raths, or at least mome raths, can be outgrabe.
    c) We might also guess that raths and borogoves have some tendency to be near each other.

    5) Beware the Jabberwock, my son!
    The jaws that bite, the claws that catch!

    a) Beware the Jabberwock. Easy enough; if you see a jabberwock, beware.
    b) Jabberwocks are things that have jaws and claws. These are probably either why you should beware of a jabberwock, or how it is that you'll identify one. (They could also spit venom, but you won't know that until it's too late. The jaws and claws should be obvious much earlier.)

    6) Beware the Jubjub bird, and shun
    The frumious Bandersnatch!

    a) Ok, there's a Jubjub, which is some kind of bird. It'd be nice to know what one looks like, so that we can properly beware.
    b) More interesting, and helpful, is 'shun the frumious Bandersnatch'
i) Bandersnatches are things that can be frumious, and generally are (compare 'shun the poisonous rattlesnake' -- all rattlesnakes are poisonous, but in giving a warning, we do much more often use the redundant modifier)
    ii) we should shun them. Now this is interesting. Jabberwocks and Jubjubs, we should beware, but Bandersnatches we should shun. Shun is a social word, meaning we should not socialize with Bandersnatches. We would not say 'shun the poisonous rattlesnake', it'd be 'beware', 'flee', and the like. Shunning, we'd do with someone who was socially unacceptable 'shun the bore', 'shun the self-involved', and so on. Bandersnatches, apparently, are some kind of social creature that one could interact with, but you shouldn't. The reason we should not socialize with them is probably that they're frumious -- that's why the redundant modifier got used. So now we know that frumious describes some socially unacceptable behavior (at least to the person speaking).

    I won't go through the whole thing. It's a piece of Through the Looking Glass, and you can find the whole Jabberwocky here.

    Even with a torrent of unknown words, we can infer quite a lot about the things being discussed. In reading scientific work, unknown words will be common, so getting used to inferring what you can (couldn't make much headway on the Jubjub bird, but a fair amount on those frumious Bandersnatches) is a very good idea. Then keep reading and see how things get elaborated on. In Jabberwocky we never do hear more about the Jubjubs, so we're stuck at they're some kind of bird to beware. In a scientific paper, you'll usually see the same terms come up repeatedly, in different forms and contexts, so that it is often possible to build up a pretty good image by the time you slog through it. I confess it's a slog, since I've read papers outside my fields, and they're much more work to read than papers in my fields. Still, I get there.