Running xppaut in Windows

Running xppaut in Windows is sometimes tricky. My new book on nonlinear dynamics gives brief instructions on installing the Cygwin X server and xppaut in Windows, but I’ve often had trouble getting xppaut to play nice with Cygwin/X. After playing around with it today, I think I’ve come up with a set of instructions that will work every time. And of course I expect to be proven wrong almost immediately… However, I’m still happy to share what I’ve learned.

What I’m trying to achieve here is a low-fuss installation that will let you run xppaut from the command line. Because I’m much more familiar with Unix shell programming than with DOS batch files, my solution involves the former. You’re going to be installing Cygwin anyway, so we might as well take advantage of its full power.

I will be leaving you to read the documentation for the details of how to accomplish some of the tasks below. None of them exceed the intelligence of an average person, and links to the documentation are given. Here are the steps:

  1. Install Cygwin. In the package installer, select the latest versions of xinit, xset and xhost for installation.
  2. Get the xppaut for Windows zip file. Unzip the package and put the xppall folder that it contains somewhere sensible. Bard Ermentrout recommends the top level of the boot (C:) drive, but I don’t think that’s necessary.
  3. Add the xppall folder’s location to your PATH environment variable. If you put this folder at the top level of your C: drive, you would add C:\xppall to your PATH variable.
  4. Create the following file in the xppall folder that you just installed, using a text editor (Windows Notepad, or a Unix editor like vi or emacs; emacs must first be installed with the Cygwin installer if you want to use it):
#!/bin/bash

# Script to run xppaut in Cygwin using the Cygwin/X server.
# You can call this script xpp, then invoke it on the command line as you would xppaut.

# Start X server if one isn't already running.
export DISPLAY=127.0.0.1:0.0
if ! xset q >&/dev/null; then
    startxwin -- -listen tcp >&/dev/null &
    # The following 5-second pause will slow down startup, but ensures that the
    # Xwindows server is up before trying to call xhost, which otherwise may hang.
    sleep 5
    xhost +127.0.0.1 >&/dev/null
fi


# Run xppaut, passing through any command-line parameters supplied to this script.
xppaut "$@"

I recommend that you save this file into the xppall folder, using the file name xpp. Now open a Cygwin terminal and issue the following commands (assuming you put xppall at the top level of the C: drive):

cd /cygdrive/c/xppall
chmod u+x xpp

This makes the file executable. (It may already have been, but you might as well make sure.) If all went well, you should now be able to run xppaut by typing ‘xpp file.ode’ in a terminal window, where, obviously, ‘file.ode’ would be replaced by the name of an ode file in the current working directory. There are a bunch of ode files in xppall/ode. I usually test a new installation of xppaut using lorenz.ode.
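For example, a quick test of a new installation might look like this (assuming, as above, that xppall sits at the top level of the C: drive):

cd /cygdrive/c/xppall/ode
xpp lorenz.ode

If everything is set up correctly, the xppaut window should open with the Lorenz model loaded, after the short pause built into the script the first time the X server is started.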

Note that this will work provided you do not start the XWin Server from the Start menu.

By all means let me know if you try this and run into problems. Within reason, I will try to help.

Frequently confused words

Some words are very frequently confused. Sometimes, this makes the writer’s intent unclear. In other cases, the meaning of the sentence may be clear, but it’s still distracting to those readers who know the difference. So it matters.

This little blog entry focuses on words that commonly appear in scientific writing and that are often confused or misused. There is a longer list of words commonly confused in general writing here: http://writing2.richmond.edu/writing/wweb/conford.html. By all means consult this source in addition to this post.

Principle/principal: “Principle” is a noun that means a fundamental rule, truth or law. It is never an adjective. The adjectival form of this word is “principled”. “Principal” can be either an adjective or a noun. As an adjective, it means “main” or “most important”. So all of you PIs out there are “Principal Investigators”. I hope that you are also “principled investigators”, but “Principle Investigator” would mean someone who carries out research into principles, which I suppose might be applied to ethicists, although it would be unusual to do so. As a noun, “principal” can have one of two meanings: It can mean the main person involved in some affair or transaction, as in “the principal in a lawsuit”, who might be the main plaintiff or defendant, or it can be the title of the leader of an educational institution, e.g. the “Principal of Queen’s University”.

Adapt/adopt: A thing that is adapted is changed to suit some particular purpose. For example, a figure that was adapted from a source was not just copied. Some details of the figure were changed, or else the original was used as a model for a new figure that still retains some resemblance to the original. On the other hand, something that is adopted is just used as is, without modification. You can, for example, adopt the procedure of Smith et al. (1902), which means that you used their procedure exactly as they described it. You can also adapt Smith et al.’s (1902) procedure if you need to change it to use it in a new context, or to work with a different set of instruments, etc.

Affect/effect: This pair can be confusing because both of these words can either be a noun or a verb, but with different meanings. I’m going to focus here on the most common uses of these words in scientific writing. If you’re a psychologist, you’re going to need to do additional reading on this topic because in that discipline, the noun forms of these words have highly technical meanings that you simply have to get right.

Almost always in scientific writing outside of psychology, you’re going to use “affect” as a verb and “effect” as a noun. If you just remember that, you should be in good shape. The verb “affect” means “to produce an effect in”. (Note the use of the noun “effect” in the definition of the verb “affect”.) So, for example, the weather affects the timing of plant flowering. The noun “effect” designates a consequence of some causative event or agent. Late flowering is an effect of cool weather. Similarly, we talk of cause and effect, not cause and affect, unless you’re a psychologist.

Complimentary/complementary: In scientific writing, you want “complementary”. “Complimentary” refers to receiving praise, or being given something free-of-charge, as in “complimentary drinks”. “Complementary” has the sense of one thing completing another. Thus we have complementary angles, complementary base pairs, etc.

Refute: This word isn’t a member of a simple pair, but lately I have noticed it being misused quite a lot. “Refute” has exactly one meaning: it is to prove an argument or hypothesis wrong. Note the word “prove”. To refute something is not merely to argue against it, or to provide a counterargument, or to present contradictory data. If you have refuted a hypothesis, it’s dead. It’s a very strong word, and rarely applicable. But good for you if you have managed to refute something. It’s probably a significant achievement. If it’s still at the stage where the thing is debatable, then you need a different word. It’s hard to give specific advice here, because there are many possible nuances, but here are some possible phrases you might use: “argue instead/against”, “provide a counterargument/rebuttal”, “reply”, “respond”, “cite as evidence against”, “deny”, “contradict”, “dissent”, “reject”. The variety of nuance in just these options hopefully suggests one of the problems with misusing “refute”: if it’s clear you don’t actually mean that something was conclusively disproved, what do you in fact mean? If you’re tempted to use “refute”, I would strongly suggest that you think carefully about what you really mean, and then use plain language, which may involve a complete rewriting of your sentence. For example, “Jones and Wang (2001) refuted Amato and Sveshnikov’s (1998) hypothesis”, if it doesn’t actually mean that they disproved the hypothesis, might be rewritten in any of the following ways, among many others, depending on what you’re trying to say: “Amato and Sveshnikov’s (1998) hypothesis was contradicted by Jones and Wang’s (2001) interpretation of the data”; “Jones and Wang (2001) showed that Amato and Sveshnikov’s (1998) hypothesis was more plausibly consistent with…”; “Jones and Wang (2001) argued that Amato and Sveshnikov’s (1998) hypothesis was incompatible with…”

Delivery of a clear message requires clear language, and that means using the right words to express a thought.

Climate change mitigation measured in gas tanks

A lot of the discussion around what we need to do to slow down climate change is described to us in tonnes of CO2. The trouble is of course that most of us don’t know what a tonne of CO2 looks like. I thought I would try to bring this discussion into terms that most of us would understand by rephrasing it in terms of gas tanks. Keeping in mind that not all carbon emissions come from burning gasoline in a car, a gas tank is still probably a more useful visualization for most of us than a tonne of CO2. Note also that what we really care about is the total warming potential of all greenhouse gases released into the atmosphere, which is usually measured in CO2 equivalents. But since the basic unit of measure is still a tonne of CO2, the discussion below is framed in terms of CO2.

First of course we have to decide how big a tank we’re going to use. Because there’s a precedent for using a 50 L tank, that’s what I’m going to use as my standard tank. That’s the size of tank you have in a typical smaller car. At 2.3 kg of CO2 per liter of gasoline, a 50 L tank will produce 115 kg of CO2 when burned in your automobile engine. Put the other way around, a tonne of CO2 is equivalent to about 8.7 tanks.
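For the record, here is the arithmetic behind those two numbers:

\[
50\ \mathrm{L} \times 2.3\ \mathrm{kg\ CO_2/L} = 115\ \mathrm{kg\ CO_2\ per\ tank},
\qquad
\frac{1000\ \mathrm{kg}}{115\ \mathrm{kg/tank}} \approx 8.7\ \mathrm{tanks\ per\ tonne}.
\]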

To meet its Paris accord commitments, Canada needs to cut its emissions by about 205 million tonnes of CO2 between now and 2030. (Some of you will say, “but our Paris commitments aren’t enough!” You’re right, of course, but it’s a baseline to aspire to in the short run.) As I write this, the population of Canada is about 37.6 million, so that’s 5.5 tonnes per Canadian per year. That’s about 48 gas tanks per person per year. Note that this figure includes CO2 emissions from industry and from private use, but keep in mind too that this does not include all of the carbon emissions you are responsible for through your purchases of foreign-made goods, which are accounted for in the country where these emissions are produced. So, for example, if you buy a pair of shoes made in Vietnam, those are Vietnam’s emissions, even though you are the person driving these emissions. StatsCan tried to estimate household contributions to greenhouse gas emissions (not including foreign emissions for goods imported into and consumed in Canada) a bit over a decade ago, and found that households were responsible for about 46% of Canada’s greenhouse gas emissions, either directly or indirectly. Assuming a similar ratio still holds, each of us is on the hook for about 22 gas tanks per year.
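The arithmetic behind these figures, using the 115 kg of CO2 per tank from above:

\[
\frac{205\ \mathrm{Mt\ CO_2}}{37.6\ \mathrm{million\ people}} \approx 5.5\ \mathrm{t/person/year},
\qquad
\frac{5.5\ \mathrm{t}}{0.115\ \mathrm{t/tank}} \approx 48\ \mathrm{tanks},
\qquad
0.46 \times 48 \approx 22\ \mathrm{tanks}.
\]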

I don’t know about you, but I don’t think I fill up my gas tank 22 times per year. Remember: those are 50 L tankfuls. A lot of the times I “fill” my tank, I’m only buying 30 or 40 L of fuel. So I could stop driving completely, and that wouldn’t do it, especially when you consider that I live in a three-person household with just one car, so I can’t count on my wife and son to cut 22 fill-ups of cars they don’t have! The idea here isn’t to think in terms of literal gas tanks, but in terms of gas-tank equivalents. Between the three of us, my wife, my son and I need to cut about 66 gas-tank equivalents out of the emissions we’re responsible for.

There are plenty of web sites that will tell you what you can do to reduce your personal carbon emissions. Clearly, if I can drive less and use my bike or public transit more, that helps. Equally clearly, that alone won’t get us there. One of the things that will make a big difference that politicians don’t like to talk about is that we’re probably all going to have to just buy less stuff. I’m going to pull a few figures from Mike Berners-Lee’s excellent book How Bad Are Bananas? to make this point.

Let’s say that building the car you want to buy will produce 15 tonnes of CO2, about what it takes to build a midsize car. That’s 130 gas tanks. You could of course avoid those emissions entirely by buying a used car, which requires no new manufacturing. But of course, eventually someone has to buy a new car (assuming we don’t all start riding public transit, but that only works for urban dwellers), and let’s suppose that you decide that you really want a new car. You could just buy a smaller car. Some cars have an emissions impact of as little as 6 tonnes of CO2, or 52 gas tanks. Even if you don’t go to the smallest car available, you could easily shave 30 or 40 gas tanks from your emissions just by buying a smaller car.

But wait! Those emissions should be amortized over the time you own the car, right? The average Canadian owns a new car for about 6 years before trading it in. So the impact of your 130-tank car over your period of ownership is about 22 gas tanks per year. Coincidentally, this is how much you need to cut out of your annual emissions, so if you can go car-free, you’ve pretty much done your part (but you might have to find other reductions if a family is sharing a car, as in our case). Going to a smaller car might save 7 gas tanks per year, which is about a third of the 22 tanks per year you need to cut out of your lifestyle. Not bad! But what if you really want that 130-tank car? If you keep it an extra two years, the impact of your new car becomes about 16 tanks per year, so you are reducing your carbon emissions by about the same amount as you would by buying a smaller car, just by keeping your car a bit longer. And obviously, this emissions reduction strategy just gets better the longer you keep the car.
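For those keeping score, the amortization arithmetic works out as follows (the 7-tank figure spreads the roughly 40-tank saving from a smaller car over the same 6 years):

\[
\frac{15\ \mathrm{t}}{0.115\ \mathrm{t/tank}} \approx 130\ \mathrm{tanks},\quad
\frac{130\ \mathrm{tanks}}{6\ \mathrm{years}} \approx 22\ \mathrm{tanks/year},\quad
\frac{130\ \mathrm{tanks}}{8\ \mathrm{years}} \approx 16\ \mathrm{tanks/year},\quad
\frac{40\ \mathrm{tanks}}{6\ \mathrm{years}} \approx 7\ \mathrm{tanks/year}.
\]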

And what about those Vietnamese shoes I mentioned earlier? Making the average pair of shoes and transporting it to a store near you results in emissions of about 11.5 kg of CO2, or about a tenth of a tank of gas. I probably buy two to three pairs of shoes per year, so for me, this isn’t worth thinking about. But if you’re a shopaholic who loves shoes, well, I’ll let you do your own calculation…

I suspect that if you’re going to buy clothes, shoes and accessories and are actually going to wear them until they’re ready for disposal, there probably aren’t significant emissions savings to be made by changing your shopping habits. However, some of us, and you know who you are, do buy stuff we won’t wear much before putting it into the basement. Then those fractions of a gas tank really start to add up. As a general rule, buy less, and buy used if you want to cut your carbon footprint. This applies not only to clothes, but to anything else we buy on a whim and then barely use.

And the general idea of buying what you need and using it applies to food, too. Food waste is a massive contributor to greenhouse gas emissions: Because food is wasted, it is necessary to overproduce food, which leads to deforestation, i.e. loss of an important carbon sink. Moreover, agriculture has a direct energy cost, so more food grown means more emissions from the agriculture sector. Then there is the transport of food that will never be eaten. And rotting food often produces methane, an even more potent greenhouse gas than carbon dioxide. A rough estimate is that household food waste (as opposed to food that is wasted somewhere in the supply chain) amounts to about a quarter tonne of CO2 per person per year in Canada, or 2.2 gas tanks. Not a huge number, but still about 10% of the emissions you need to cut per year. Roughly speaking, to reduce the amount of food you waste, you have to buy things you plan to eat, and make sure you actually do use them before they go bad. Sounds simple, but it does take a bit of a mental adjustment to our shopping and cooking habits.
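In gas-tank terms, that quarter tonne works out to

\[
\frac{0.25\ \mathrm{t}}{0.115\ \mathrm{t/tank}} \approx 2.2\ \mathrm{tanks/person/year},
\]

or roughly 10% of the 22-tank annual target.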

So there you have it. Climate footprint and emissions reductions conceptualized in gas-tank equivalents. Hopefully this helps you understand the size of the problem a bit better, and also puts in perspective some of the things you can do to reduce your climate impact. A lot of the advice comes down to buying less stuff and using it for longer (or using it at all in the case of food). And as an added bonus, if you spend less, you’ll have more money in your bank account for a rainy day. Win-win.

Publications in CVs

I’m currently chairing the Ph.D. program committee at the University of Lethbridge, and I just finished reading the files of the students who have applied to our program for admission later this year. At the UofL (and elsewhere), students applying to the Ph.D. program have to submit a CV. And of course, if you have publications, they should be in your CV. The trouble I’m having with many of the files I’m reading is that students don’t give full bibliographic details for their papers, which means that I sometimes have to do some additional digging if there is something I want to check on. Here are some things I sometimes find missing:

  1. A page range or article number. Yes, I know, the DOI should be enough, but if I decide to go looking for your paper for some reason, it’s often more convenient to have the first page number or article number (along with the volume number) than the DOI. Why? Because some journals make it particularly efficient to find papers with the volume and page number.
  2. The DOI. At the risk of contradicting myself, it’s sometimes easier to have a DOI. The DOI is especially useful if the journal is a bit obscure.
  3. The volume number. Well, duh!, you might say. But a surprising number of people forget to put that in.
  4. The year. Ditto.
  5. The issue number can be useful, depending on the journal, so by all means include that, too.
  6. For articles in journals that use article numbers rather than pages, the number of pages. This gives me some idea whether I’m looking at a letter-style publication or a full paper. I know it’s not foolproof, but it does help.

The point is that the more bibliographic details you include, the easier you make it to find your paper should someone wish to do so.

Finally, make sure that those bibliographic details are correct! You would be surprised at how many slightly mangled journal titles there are in people’s CVs, for example. That makes it hard to find the paper. It might cast doubt on whether the paper exists at all. Or it might just convince a person reading your CV that you don’t pay much attention to detail. Probably not the impression you want to leave.

On a related note, if you have multi-authored conference presentations in your CV, please clearly indicate whether or not you were the presenter. You can use a special mark (asterisk, boldface or italics) for the presenter, or you can separate your presentations into ones you presented yourself and ones that other authors presented. Without this, long lists of multi-authored presentations are uninformative, and may be seen as padding your CV.

Before you write your thesis, read the instructions

I have a little tip today for those of you preparing to write a thesis: Before you start, read your university’s or department’s thesis guidelines. There are some things that are easy to do as you’re writing your thesis, but a pain to do after, like compiling a table of abbreviations, which is usually required. If you read the thesis guidelines before you start writing, you can make notes of the things that you will need to do, and probably save a lot of time later on. It’s quite likely that you will discover things you’re supposed to do that you wouldn’t otherwise have thought of on your own.

I would also suggest that you frequently go back to those guidelines during the writing process. If you’re wondering how you’re supposed to format figure captions, the thesis guidelines probably answer this question. If you’re not sure what is expected in a thesis abstract (it varies from school to school), or whether you need to write a longer summary in addition to the abstract (required in some places), look no further than your university’s thesis guidelines.

Every School of Graduate Studies has a person whose job is to make sure that theses meet the local requirements. This person generally doesn’t look at your thesis until you have defended it and have completed your revisions. It’s a lousy time to find out that you need to add something, or rewrite the abstract, or reformat the whole thing. A few minutes of reasonably careful reading ahead of time will save you all these headaches. It’s a smart investment of your time.

Incidentally, the same principle applies to lots of other things: Reading instructions for scholarship or grant applications, or instructions in job ads about what you are supposed to submit in your application, will generally repay handsomely the small amount of time you devote to this activity. In the case of a thesis, the worst that will happen if you mess something up is that you will be told to fix it. For a grant or job application, not following the instructions may mean that your application isn’t even considered.

So just “read the instructions, that’s how you get it right”, as the Doodlebops so eloquently put it.

SIAM Review 60th volume

This year marks the publication of the 60th volume of the venerable SIAM Review. As has become traditional when journals mark anniversaries, the editors of SIREV have compiled a list of the journal’s 10 most read articles. These lists are always interesting, both for what shows up and for what is missing (from my purely subjective point of view).

Number 1 on the list is a modern classic, The Structure and Function of Complex Networks by Mark Newman. At the time this paper appeared in 2003, network science was just getting hot. Newman’s review, which laid out all of the foundational ideas of the field in a very clear way, quickly became the standard reference for definitions and basic results about various kinds of networks. It didn’t hurt that Newman had recently made a splash in the scientific community by analyzing scientific collaboration networks: given that everyone’s favorite topic is themselves, scientists were naturally intrigued by a quantitative study of their own behavior. All kidding aside, Newman’s SIAM Review article has been hugely influential. All kinds of networks have been analyzed using these methods, ranging from social networks to protein interaction networks. As if having the number 1 paper in this list wasn’t enough, Newman is also a coauthor of the 2009 paper Power-Law Distributions in Empirical Data, which is number 6 on the list. The latter paper deals with statistical methods for determining whether or not a data set fits a power-law distribution.

Desmond Higham has the singular distinction of having two singly authored papers on this list, both of them from the Education section of SIAM Review, but both wonderful introductions to their topics for young scientists, or for old scientists who need to learn new tricks. At number 3 on the list, we have An Algorithmic Introduction to Numerical Simulation of Stochastic Differential Equations, which presents the simplest introduction to stochastic differential equations I have ever had the pleasure to read. Then at number 4, we have Modeling and Simulating Chemical Reactions. In the latter, Higham walks us through three levels of description of chemically reacting systems: as Markov chains in the space of species populations, then using the chemical Langevin equation, and finally in the bulk mass-action limit. He derives each method from the preceding one, essentially by focusing on computational methods for simulating them, and then showing that these methods simplify as various assumptions are introduced. I think that these two papers of Higham’s have been successful not only because of his exceptionally clear writing, but because he also provided Matlab code for all his examples. The interested reader can therefore go from reading these papers to doing their own calculations rather quickly. I learned a lot from these papers myself, and I’ve used both of them in a graduate course on stochastic processes. They’re just fantastic resources.

One paper that didn’t appear, and that I had guessed would be there before I looked at the list, is the classic 1978 paper Nineteen Dubious Ways to Compute the Exponential of a Matrix by Cleve Moler (original developer of Matlab, and founder of MathWorks, the company that sells Matlab) and Charles Van Loan (author with the late Gene Golub of the book Matrix Computations, known by people in numerical analysis simply as “Golub and Van Loan”). It’s possible that it didn’t make the list because an expanded version of the original was published in the SIAM Review in 2003, and that this paper’s reads are therefore split between the two versions. However, it’s still a surprise. This is one of those papers that is often mentioned, in part I’m sure because of its mischievous (if accurate) title, but also because it discusses an important problem—matrix exponentials show up all over the place—and does so with exceptional clarity.

There have been lots of papers on singular perturbation theory and the related boundary-layer problems in the SIAM Review over the years, which is perhaps not surprising given how central these methods are to a lot of applied mathematics. In fact, in 1994, the SIAM Review published an issue that contained a collection of papers on singular perturbation methods. I would have thought that at least one paper on this topic would have made the list. My all-time favorite SIREV paper is in fact Lee Segel’s Simplification and Scaling, which I routinely assign as reading to graduate students who need an introduction to the basic ideas of singular perturbation theory, followed closely by Lee Segel and Marshall Slemrod’s The Quasi-Steady-State Assumption: A Case Study In Perturbation, which derives the steady-state approximation for the Michaelis-Menten mechanism using the machinery of singular perturbation theory. The full power of these methods is made evident when they derive a more general condition for the validity of the steady-state approximation than had previously been obtained. The late Lee Segel was one of the great pioneers of mathematical biology. He worked on every important problem in the field, from oscillators to pattern formation, and left us some beautiful applied mathematics. He also left us an absolutely wonderful book, Mathematics Applied to Deterministic Problems in the Natural Sciences, coauthored with Chia-Chiao Lin, who has sadly also left us. Marshall Slemrod is, fortunately, still very much alive. Marshall is probably best known for his elegant work in fluid dynamics, but he has worked on quite a variety of problems in applied mathematics over his long and distinguished career.

It’s interesting to compare SIAM’s list of “most read” papers to the most-cited papers from SIREV (Web of Science search, Oct. 8, 2018). Here they are:

  1. Mark Newman’s The Structure and Function of Complex Networks, cited 8333 times, more than twice as often as any other paper published in SIREV. No great surprise there.
  2. Fractional Brownian Motions, Fractional Noises and Applications by Benoit Mandelbrot and John van Ness (3554 citations). Perhaps this one should have been on my radar, although I’ll admit that I have never read it. I’ll put it on my reading list now.
  3. Power-Law Distributions in Empirical Data, Newman’s other entry on the most-read list, which interestingly comes out much higher in the most-cited ranking than in the most-read list, where it occupies the number 6 spot, with 2885 citations.
  4. Semidefinite Programming by Lieven Vandenberghe and Stephen Boyd (2086 citations)
  5. Tensor Decompositions and Applications by Tamara G. Kolda and Brett W. Bader (2042 citations, number 2 on the most-read list)
  6. Analysis of Discrete Ill-Posed Problems by Means of the L-Curve by Per Christian Hansen (1870 citations)
  7. The Mathematics of Infectious Diseases by Herbert W. Hethcote (1813 citations)
  8. Atomic Decomposition by Basis Pursuit by Scott Shaobing Chen, David L. Donoho, and Michael A. Saunders (1647 citations)
  9. On Upstream Differencing and Godunov-Type Schemes for Hyperbolic Conservation Laws by Amiram Harten, Peter D. Lax, and Bram van Leer (1493 citations). This is the kind of paper we often see on most-cited lists because it discusses practical issues in the numerical solution of PDEs.
  10. Mixture Densities, Maximum Likelihood and the EM Algorithm by Richard A. Redner and Homer F. Walker (1256 citations)

It’s interesting, and perhaps a little surprising, how little overlap there is between the most-read and most-cited lists. Just three papers show up on both lists! This is another manifestation of the well-known problem of trying to use any single metric to determine the influence of a paper.

There are other SIREV papers that I really love, even though I wouldn’t have expected them to make this list, sometimes because of their real-world applications, and sometimes just because they describe very clearly some beautiful applied mathematics.

Bryan and Leise’s The $25,000,000,000 Eigenvector: The Linear Algebra behind Google explains the mathematics behind the Google search engine. It’s both a great educational article on large, sparse matrix eigenvector calculations and an interesting peek into the workings of one of the most important technologies of our time.

James Keener’s article on The Perron-Frobenius Theorem and the Ranking of Football Teams is a great read, and a fun way to introduce students to the powerful Perron-Frobenius theorem. James Keener has been one of the leading figures in mathematical biology over the last several decades, and is the author, with James Sneyd, of the highly regarded textbook Mathematical Physiology.

I also really enjoyed Diaconis and Freedman’s Iterated Random Functions, which describes some lovely mathematics that connects together Markov chains and fractals, among other things. Persi Diaconis is perhaps best known for his analysis of card shuffling and other games of chance. In fact, another paper of his in the SIAM Review (with Susan Holmes and Richard Montgomery) on Dynamical Bias in the Coin Toss is also a fantastic read.

I could go on, but I think I’ll stop here.

You may have noticed some recurring themes in this post. One is that there is some great writing in the SIAM Review. In fact, I would say that this is a hallmark of SIREV. Regardless of the author or topic, the final published paper always seems to be a great piece of scientific literature. Of course, I might be a little bit biased, having published a Classroom Note in the SIAM Review myself. Another theme of this post is the number of outstanding scientists who have written for SIREV. SIREV makes room for up and comers, but it also regularly gives us the benefit of reading papers by people who have spent decades deepening their knowledge of their respective areas.

So happy birthday, SIAM Review, and many happy returns!

Using RSS to help you keep up with the literature

This is a follow-up to my 2014 post about “Keeping up with the literature”. I’m a strong advocate of arranging for the information to come to you rather than you having to go looking for it. I’ll look at stuff that shows up in my mailbox, or is otherwise put right in front of me, but I’m unlikely to do literature searches unless I’m looking for something fairly specific. One trick that I have perhaps not used as much as I should is RSS feeds. An RSS feed sends you a one-line summary of new content added to a web site. Some RSS feeds allow you to narrow what is sent to you according to your field of interest. Some journals provide RSS feeds. You might find this a useful alternative to receiving tables of contents by email. In some cases, RSS feeds might be useful because they will only show you content from a specific section of a journal, so you don’t get overwhelmed with lots of irrelevant stuff.

I currently subscribe to a couple of RSS feeds from the Physical Review journals. The Physical Review journals cover a huge range of topics, most of which are of no interest to me. Getting their complete tables of contents would waste a lot of my time. However, they have specialized RSS feeds broken down by area of interest. I subscribe to the Physical Review Letters Soft Matter, Biological, and Interdisciplinary Physics feed, as well as to the Physical Review E Biological Physics feed. The volume on these feeds is very manageable, and I can quickly find the few articles of interest to me.

To get started, you need to install an RSS reader application. Journals (or other web sites) with RSS feeds will display the standard RSS logo, a small square showing a dot with radiating arcs.

(The logo may be quite small, and may not be colored.) If you click on this logo, you will typically end up at the actual RSS feed page. You want to copy the URL of this page, and then give it to your RSS reader application. The RSS reader will typically sit in your toolbar (or equivalent for your computer’s OS) and let you know when something new appears in your feed. And that’s it! When you see material in which you’re interested, you just have to click on it, and you will be taken to the article.

English words with Latin and Greek-derived plurals

I guess I’m on a language kick… After my recent post about the misuse of “similar to”, I’m going to tackle some Greco-latin plurals that lots of people don’t know how to use.

English is a lovely mongrel of a language, having adopted words and grammar from every invader who ever set foot on the island of Great Britain. The Romans ruled over England and Wales for about 350 years, so naturally, Latin left its mark on English. Some words were also adopted from Greek through the scholarly community. Most Latin and Greek words were eventually anglicized, and English-style pluralization rules applied, but a few retained their Greco-latin plurals. Some of these are heavily (mis)used in scientific writing. I’m going to try to sort out for you some of the ones that are most often used in the texts I read.

If you have a standard for judging something, you have a criterion. That’s right. Criterion. It’s possible you have never seen this word and would have expected criteria instead, but criteria is the plural of criterion. It’s particularly important to get this right because English has just one definite article, “the”. Thus, “the selection criterion” and “the selection criteria” imply, respectively, one criterion and many. The meaning of the sentence is therefore altered if you use the wrong word. As another example, “a criteria”, which I see a lot, is wrong because “a” is a singular indefinite article, and “criteria” is plural. If you have one rule you use for making a decision, you have a criterion.

Erosion is a natural phenomenon. It’s one of the many geological phenomena that shape our Earth. So again, you would never write “a phenomena”.

Do you grow cells in a medium, or in a media? Hopefully, you would choose the singular “a medium”, media being the plural of medium. We might prepare media (if we are preparing several different media, or possibly several batches of a particular medium), but more commonly we might prepare a medium. It’s surprising how often media is used given how rarely it’s actually the syntactically and grammatically correct choice.

We can search for minima on a potential energy surface, on the assumption that there might be more than one, but when we find one, it’s a minimum. Obviously, the same comment would apply to maximum and its plural maxima, as well as optimum/optima. Incidentally, outside of science, people tend to say minimums and maximums for the plurals of these words—a usage that is sanctioned by modern dictionaries—so perhaps it’s time for us to stop trying to sound learned by using the Latin plurals. Errors in the singular would almost certainly vanish if we did so.

But please, no “criterions”, “phenomenons”, or “mediums”. Unless, in the latter case, you want to get together a group of people who can talk to the dead.

Similar and similarly: are they similar?

As a professor, I see a lot of student writing, some good, some not so good. And I’m one of those people who think that a professor’s job includes teaching writing, regardless of the discipline one belongs to. So here is my first foray into advice on writing.

In the last couple of years, I have noticed that many students use “similar” incorrectly. I often see sentences structured like the following:

Similar to protein A, protein B binds to protein C.

So what’s the problem? To understand that, we have to ask what “Similar to protein A” modifies. What the writer is trying to say is that protein B behaves like protein A in that both bind to protein C. It’s the entire action of protein B binding to protein C that is similar to the action of protein A. Therefore, “Similar to protein A” is modifying the entire principal clause. However, “similar” is an adjective, so it should modify a noun. “Similar” therefore can’t be right.

A modifier of a clause can only be an adverb, so a correct version of the above sentence would be

Similarly to protein A, protein B binds to protein C.

“Similarly” (note the -ly ending) is an adverb, so it can modify an entire clause. Problem solved.

Of course, this isn’t the only solution. It’s always good to have more than one way to say something so you can vary the style of your text a little bit. Sometimes, the simplest way to say something is the best, so one alternative is to replace the adverb by a common preposition:

Like protein A, protein B binds to protein C.

The truth is, though, that neither of the above sentences probably says what the student who wrote it wanted to say. All these sentences really say in the end is that both A and B bind C. However, these constructions often show up in text where a student is actually trying to say that the two proteins bind C in a similar way (using similar contact surfaces, etc.). Why not just say that?

Protein B also binds protein C. The contacts between B and C are similar to those between A and C in the respective complexes.

Note that I turned one sentence into two. My meaning is now completely clear and unambiguous. This is another lesson: unless you’re strictly space-limited for some reason, sometimes it’s better to use a couple of sentences and a few extra words in order to make your meaning completely clear. Similarity, for example, is a slippery concept. Saying that two things are similar really doesn’t tell us much unless we say in which ways they are similar. Similar comments apply to many other constructions. When writing, ask yourself what you want to say, and then make sure that the words you use convey your meaning without ambiguity.

What exactly do you mean by “stable”?

Stability is a highly context-dependent concept, and so it often leads to confusion among students, and sometimes among professional chemists, too.

If I say that a certain molecule is “stable”, I might mean any of a number of things:

  1. It’s possible to make it, and it won’t spontaneously fall apart.
  2. It’s possible to isolate a pure sample of the substance.
  3. It won’t react with other things. This is often qualified, for example when we say that something is “stable in air”.

The trick is to pick up which one is meant from context. A recent example arose on a test question in my Chemistry 2000 class, where I asked, in a question on molecular orbital (MO) theory, if argon hydride, ArH, is a stable molecule. In this case, the “context” was in fact a lack of context: I simply asked about the stability of this molecule, without any mention of holding it (the isolable substance definition) or of bringing it into contact with anything else. Thus, I was relying on the first definition of stability. Unexpectedly, simple pen-and-paper MO theory predicts that ArH has a bond order of ½, and so is predicted to be stable, although clearly not by much. This ought to be quite a surprise to anyone who has studied chemistry since we normally think of noble gases like argon as being quite unreactive (stable in the third sense), and so unlikely to form compounds. And when we do get compounds of noble gases, they are usually compounds with very electronegative elements such as fluorine. Moreover, ArH would violate the octet rule. Students do run across non-octet compounds from time to time, but the octet rule is deeply ingrained from high school. Finally, ArH would be a radical, and students are often taught to think that radicals are “unstable”, in the sense that they are highly reactive.
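One standard way to see where the bond order of ½ comes from (this is a generic sketch of the pen-and-paper MO argument, not necessarily exactly how it was set up in class): the H 1s orbital combines with the Ar 3p orbital directed along the bond axis to give a σ and a σ* molecular orbital. Argon contributes two electrons to this pair of MOs and hydrogen one, so

\[
\text{bond order} = \frac{n_{\mathrm{bonding}} - n_{\mathrm{antibonding}}}{2} = \frac{2 - 1}{2} = \frac{1}{2}.
\]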

As it turns out, the simple MO theory we learned in class is sort of right: excited states of argon hydride are stable enough to be studied spectroscopically—in fact the first such study was carried out at Canada’s National Research Council by JWC Johns1—but the ground electronic state is unstable in the first sense: it dissociates into H and Ar atoms. So our chemical instinct is right about this compound, too. Welcome to the nuances of chemistry.

For the sake of argument, suppose that ArH had a stable ground electronic state, as predicted by simple MO theory. It would fail to be stable in the second sense because the meeting of two ArH molecules would result in the energetically favorable reaction 2 ArH → 2 Ar + H2. And of course, ArH would react with a great many substances. In fact, we could think of this compound as a source of hydrogen atom radicals.

Before we move on from ArH, let’s talk about some of the reflexes that would have led us to predict it to be unstable. The fact that a material is normally unreactive doesn’t mean it won’t form a compound with something else under the right conditions. If I want to make ArH, I won’t try to react argon with hydrogen molecules because the atoms in H2 are held together by a strong bond, so it would be energetically unfavourable to swap that bond for an Ar-H bond. I will need a source of hydrogen atoms. If I do expose argon atoms to hydrogen atoms, the very reactive radical hydrogen atoms may well react with the normally unreactive argon, which is in fact what happens. But none of that is directly relevant to the question of the stability of the ArH molecule. If I ask about that, I just want to know if the thing will hold together assuming it has been made.

The octet rule is deeply embedded into the psyches of anyone who has studied chemistry. It is, indeed, an excellent rule of thumb in many, many cases, especially in organic chemistry. But students are soon exposed to non-octet compounds, so clearly the octet rule is not an absolute. And yet we often hear people talk about an octet as being a “stable electronic configuration”. There’s that word again! But what do people mean when they say that? The answer is, again, highly dependent on context. In s- and p-block atoms, an octet fills a shell, and so the next available atomic orbital is quite high in energy, and it will likely be energetically unfavourable to fill it. In molecules, the octet rule just happens to often result in electronic configurations with an excess of bonding over antibonding character, so they are stable in the first sense. And because eight is an even number, the resulting molecules often have all of their electrons paired, so they are less reactive than they might have been if they had an odd number of electrons. But you may recall that oxygen, on which more below, has two unpaired electrons, even though its Lewis structure satisfies the octet rule. We should always remember then that it’s the octet rule, and not the octet law. Arguing that something is especially stable because it has an octet is just not a very good explanation. Now having said that, the octet rule generally holds for compounds from the second period, largely because trying to add more electrons to these small atoms is energetically unfavourable. But even that is a contingent statement since it depends on where those electrons are coming from and whether they have anywhere else to go. Certainly, you can measure an electron affinity for many molecules with octet-rule structures.

As for the argument that radicals are “unstable” (which you will hear from time to time), it’s not true. Many radicals are very reactive. But a great many radicals are stable in the first and often in the second sense, too. This includes many of the nitrogen oxides, notably nitric oxide, which is stable enough to serve as a neurotransmitter, and can be stored in a gas cylinder, but is conversely reactive enough to be used as part of your body’s immune response. Again we see that stability and reactivity do not necessarily coincide, even though the word “stability” is sometimes used in the sense of “reactivity”.

Of course, ArH is an extreme, and NO is not a terribly familiar compound to most of us, even though our bodies make it. So let’s talk about a more mundane molecule. Oxygen has not one but two unpaired electrons. So despite its Lewis diagram, oxygen is a radical. Nevertheless, oxygen is certainly stable in the first and second senses. There are lots of oxygen molecules in the atmosphere, and they don’t just fall apart on their own. (They do fall apart if supplied with enough energy, for example in the form of an ultraviolet photon, but that is another question altogether.) You can store oxygen in a gas cylinder, so it is certainly isolable. But oxygen is highly reactive, in part because of its unpaired electrons, at least towards some substances and in some circumstances. It’s a fairly strong oxidizing agent for example. Many metals, if left standing in air, will become coated very quickly in a layer of their oxide. And if provided with a little heat, oxygen will react vigorously with many materials. We call these reactions of oxygen “fire”.

The very different meanings of “stable” mean that we have to think when we hear this word. Ideally, we would also banish the third meaning mentioned above in favour of more specific language, such as “reactive towards”. Conflating questions of stability and reactivity just makes it harder to think precisely about what we mean when we say that a molecule or substance is stable.

References:
1J. W. C. Johns (1970) A spectrum of neutral argon hydride. J. Mol. Spectrosc. 36, 488–510.