Three ways to write a book

My third textbook, Foundations of Chemical Kinetics: A Hands-On Approach, was just published a few weeks ago. (The link in the previous sentence will be convenient if your institution has a subscription to the IOP books. If you want to buy a copy, try https://store.ioppublishing.org/page/detail/Foundations-of-Chemical-Kinetics/?K=9780750353199. And if you’re thinking of adopting this book in one of your courses, the latter page also contains a link to order an inspection copy.) The publication of this book caused me to start thinking about the very different ways my three books have come about. Those of you thinking of writing a book, or just curious about the book writing process, may want to continue reading. Otherwise, you can wait for the next installment in this increasingly irregular blog.

Adagissimo: A Life Scientist’s Guide to Physical Chemistry

When I arrived at the University of Lethbridge, I was handed the second-year thermodynamics course as one of my teaching assignments. When choosing the textbook for this course, I made the proverbial rookie mistake: I picked a book I liked, forgetting that the book is for the students and not for me. Among the things I didn’t think about: What background do the students bring to this class? What are their likely scientific interests? And of course, how might my perspective differ from that of a student taking their first steps in thermodynamics? I won’t say what book I picked because these were my mistakes and not the author’s. Let’s just say that I picked a book that would have been great for students in a specialist program with more of a mathematics background than is required of the group of Chemistry and Biochemistry students in my class.

Once I realized my mistake, I started looking more seriously at the range of physical chemistry textbooks on the market. I was looking for a book that was approachable, that included some biochemical topics (because the majority of the students in the course were Biochemistry majors), but that still didn’t compromise too much on rigor. I wanted students to understand the principles of thermodynamics, and not just learn some formulas and how to apply them. I eventually settled on Tinoco, Sauer and Wang’s Physical Chemistry: Principles and Applications in Biological Sciences, an excellent book that, I thought, struck roughly the right balance. TSW was the required textbook in my thermo course for about three years.

I honestly don’t remember exactly when I decided to write my own book. I do remember thinking that what I was writing was a set of course notes, and that I needed to write these because Tinoco, Sauer and Wang didn’t go into quite enough depth on some topics that I thought were important. The idea that this could be a publishable book didn’t enter my head for quite a while. From the time stamps on the files, it looks like I started writing sometime during my first term of teaching in the Fall of 1995. By the Spring of 1998, the third time I taught the course, I had written a set of notes titled Practical Thermodynamics that I distributed through the UofL bookstore and that was offered to students as a supplement to Tinoco, Sauer and Wang. By the Fall of 1999, the roles were reversed: my book was required and Tinoco, Sauer and Wang was recommended. By the Fall of 2001, I had stopped recommending a second textbook.

In the meantime, I had also started teaching our chemical kinetics course, which was also a second-year offering. I used Tinoco, Sauer and Wang in Spring 2000, when I first took over the course, then switched to Laidler and Meiser’s Physical Chemistry the next year. (Laidler was a kineticist, so it shouldn’t be a great surprise that the kinetics in his physical chemistry textbook is excellent.) By 2002 I was distributing a self-written textbook through the bookstore entitled Chemical Kinetics, and not requiring a traditional textbook.

Once in a while, academics will decide to shake up a curriculum. We went through this process in the mid-2000s in order to try to sort out some problems we were having delivering our courses. In a nutshell: too many required courses, which didn’t give us a lot of flexibility in terms of teaching assignments and could be a problem when people were not available. One of the results of this curricular shakeup was the merging of the thermodynamics and kinetics courses. While this wasn’t exactly a do-over for me, it did require a lot of work to rearrange what I had, jettisoning about half of the material, resulting in something more-or-less like the book that was eventually published.

By this time, what I had was clearly a textbook. Not only did I have carefully constructed text laying out the ideas and key equations, but I had a large collection of problems from assignments and tests which I had been integrating into the book over the years. Sometime in 2009, I started looking for a publisher. One of the publishers I contacted was Cambridge University Press. Some decades earlier, they had published Morris’s A Biologist’s Physical Chemistry, a book that I thought had a lot in common with mine. Much to my delight, Cambridge agreed to be my publisher. They provided lots of support and advice along the way. The final version of the book was sent to Cambridge in May 2011.

I started working on the book (although I didn’t know it at the time) in 1995 and was done in 2011. There was progress on this book every year during that period, albeit not always at the same intensity. You can think of A Life Scientist’s Guide to Physical Chemistry as a book written at an adagissimo tempo over a period of 16 years.

Staccato andante: Nonlinear Dynamics: A Hands-On Introductory Survey

This book also started out as a set of lecture notes, in this case for a graduate course in nonlinear dynamics that I first taught in 2004, with a second offering in 2005. These notes were posted to my web site, where they still live. And that’s where they sat for a long time.

In 2018, I received an email from Nicki Dennis, who at the time was an acquisitions editor for the Institute of Physics (IOP) Concise Physics series. Nicki had somehow run across the lecture slides (just slides, not notes) for my Foundations of Chemical Kinetics course on my web site, a course that had been offered just once, in 2012. (At small universities, what you teach and when you teach it has more to do with what the Department and the students need than with what might be optimal for the instructor. Advanced courses in particular can be taught at very long intervals.) She thought I might want to turn this course into a book. I didn’t say no, but I knew that turning the Foundations course into a book would be a lot of work, especially since I was keen to revise the course after offering it once. As I always tell junior faculty members, the second time you offer a course is often when you put the most work into it because by then, you actually know what you want to do. Getting back to the story, I didn’t say no, but I didn’t say yes. Instead, I pitched Nicki the idea that I would turn my nonlinear dynamics lecture notes into a book, and that we could talk about the Foundations of Chemical Kinetics book later. Nicki and the IOP agreed, and I got to work revising my notes and turning them into a short book. That process took a very short time. I added some examples to my notes, expanded the treatment in a few places, converted assignments from the course into problem sections in the book, and in just a few months I was done.

You can think of this one as a stop-and-start (staccato) time investment, with each period of work on the book lasting just a few months. So: a book produced andante, even though there was a long period from the first time I set fingers to keyboard to the completion of the manuscript.

Presto: Foundations of Chemical Kinetics: A Hands-On Approach

In May 2021, I received an email from John Navas at the IOP, who mentioned the possibility of a new edition of the nonlinear dynamics book. I took this as an opportunity to bring up the kinetics book again, since I was going to be teaching my Foundations course the following Fall term. I had in mind a complete reworking of the course with, as the title of the book suggests, more hands-on instruction and exercises than existing textbooks in the area provided. So my utterly daft plan was to write the book as the course was unfolding (with some work done in the summer to get ahead of the lectures). I would feed the chapters to the students as they were completed.

If you’ve ever taken a single-term University course, you will know that the term goes by quickly. It feels even faster for the course instructor, and not just because we’re older. The students quickly caught up to the little bit of a head start I had built up in the summer, and then the chapters were coming out just before we covered the material in class, and eventually a little bit after. But I got through it! The result of this initial round of writing was, as you can imagine, not very polished, but over a few months, I cleaned it up, and now it’s out into the world!

Definitely a book written at a presto tempo. Perhaps even vivace. I’m not sure I would recommend writing a book this way to anyone else. But it’s doable, provided you allow a few months afterwards to clean up your first draft.

Some reflections

I’m sure there are many other ways to write books, but these three definitely span the range of timescales over which one might write something worth reading: slowly refined over many years, written and refined over multiple short bursts, or the strike-while-the-iron-is-hot approach of Foundations of Chemical Kinetics. In the end, I think that there are a few keys to writing a book that all of these different scenarios share:

  1. You won’t write anything if you don’t actually sit down and start typing. Perhaps you don’t even intend to write a book, but anything you type and preserve is potentially material for a book, even if it’s just a set of lecture slides, or some original problems that you designed for your students.
  2. You may not have one when you first start out, but you eventually need to develop a clear concept of the book: who it’s for, what approach you will take, and the style you intend to use. My books tend to use an informal style and, as the titles of my two most recent books suggest, to include a fair bit of hands-on practice. I’m particularly keen on teaching students computing skills, which, weirdly at this point in the 21st century, remain a neglected dimension of their education. Both the Nonlinear Dynamics and Foundations of Chemical Kinetics books include instruction in some general computational skills (e.g. programming in Matlab/Octave or symbolic computing in Maple) and some instruction in discipline-specific software (Xppaut in one case, Gaussian in the other).
  3. At some point, you need to decide that you’re ready to crank out a book. When you contact a publisher, they’re looking for something they can publish sooner rather than later. At that point, you need to be able to set time aside to meet mutually agreed deadlines. The more is already done, the better shape you will be in to deliver. And note that they will generally want to see sample chapters before they offer you a contract, unless they already have a relationship with you.
  4. There are going to be some long nights, no matter what your starting point.

Writing the conclusions chapter of your thesis

“What do I need to put in the conclusions chapter of my thesis?”

This is probably the most commonly asked question about thesis writing other than questions about using the first person singular. (About the latter: it’s your thesis. Use the first person sparingly, but if you really want to emphasize that something is your opinion or your idea, go ahead, provided your thesis advisor doesn’t object. Some of them really have a problem with first-person writing. It was probably beaten into them as graduate students.) The good news is that it’s not that hard to write the conclusions chapter, but it is a bit of work because it requires that you go back to the beginning.

Summary

The first thing you’re going to want to do is to write a section that summarizes the major findings of your thesis. You should generally start this section by reminding your readers of the major question(s) or hypotheses that you started with. Go back to the part of your introduction where you laid out your questions or hypotheses. (You did have such a section, didn’t you? If not, you need to write that, probably near the end of your introductory chapter before you lay out the plan of your thesis. These questions or hypotheses should follow logically from your introduction to the problem area contained in the introduction. But I guess that could be the topic of another blog post.) Paraphrase your original question(s) or hypotheses, then summarize how your thesis addressed these. As you are writing this, keep notes about any ways in which your thesis may have stopped short of fully answering your question(s). You will need these later.

This section tends to be highly variable in length from one thesis to the next, depending on how efficiently you summarize your work. For some types of theses, this section can be a few paragraphs. In other cases, it runs to several pages. You want to review major lines of evidence (not every single calculation or experiment) and how they contribute to your conclusion. Your conclusion should be stated reasonably precisely. Your conclusion may be any of the following, depending on how things worked out:

  • Here is the answer to the question I asked or, analogously, I have proven/disproven my hypothesis.
  • My work provides a partial answer to the question I asked or, for hypothesis-driven work, my work supports my hypothesis. For this kind of conclusion, you want to make sure you summarize what parts of your questions were answered and therefore what gaps still exist. Don’t go into detail about those gaps here. Just acknowledge them. And again, the corresponding writing for a hypothesis-driven thesis would be to discuss how strongly your work supports the hypothesis.

Your work in context

Your work probably connects to many other issues in your field. If you can, it’s a good idea to try to tie things together a bit in your concluding chapter. This section (or these sections, depending on how much you have to say) will probably have a specialized title emphasizing how your work fits into your field. Is there similar work, perhaps mentioned in your introduction, that your work now puts in a different light? Are there other areas in your larger field where similar issues arise and where your work now provides at least some insights? For example, if you were working on object permanence in pigeons, you could have a section entitled “Object permanence in other vertebrates” where you discuss whether your work provides insights into this problem for the broader field. To do this properly, you would probably need to talk a bit about the evidence showing that object permanence functions similarly across a range of species. You probably did that in your introduction, so here you would briefly remind readers of this evidence before trying to argue that your conclusions might extend to non-pigeon vertebrates as well.

In some ways, this is an optional section, because it won’t always be obvious how your work connects to the rest of the field. I would really want to see some writing along these lines in a Ph.D. thesis. I would like to see it in an M.Sc. thesis, but because of the scope of M.Sc. projects, it might be harder to do there.

Future directions

Very, very few theses (or scientific studies of any sort) provide completely definitive answers applicable to a wide range of situations. You will want to discuss those limitations, but also indicate that this opens up avenues for future research. You may already have developed some ideas for this section while writing the summary section. However, you now need to go back and reread your entire thesis carefully. This is especially the case if your thesis contains papers to which others contributed. As you are reading, ask yourself these questions:

  • What gaps are left by my work? In other words, what parts of my original questions were only partially answered, and how might these gaps be addressed? For hypothesis-driven work, what are the pieces of evidence missing to fully confirm the hypothesis and, again, how might this evidence be gathered?
  • What are the assumptions that your work makes, or the approximations used? Might these assumptions or approximations have affected the answers you obtained? If so, what further studies could be done to determine if similar answers would result if these assumptions or approximations were removed or modified? Even if you don’t think that your assumptions affected your study, can you imagine studies that would answer different questions by removing these assumptions, or making different ones?
  • What questions come to mind as you are rereading your thesis? Sometimes, interesting questions will come to you while reading the review of the literature in your introduction. For example, it might occur to you that your work creates a foundation for studying problems you mentioned there. A discussion of these related problems and how they might be addressed by building on your work could go into the future directions section, or in the work in context section discussed above, but either way this is great material to include. It is very likely that you will find questions that you didn’t touch in the sections of your thesis that report on your work as well. How could they be addressed? You don’t need to write a lengthy and detailed proposal here, but do discuss questions raised, either directly or indirectly, by your work.
  • Could the methods or models you developed be built on and used to answer additional questions? For example, if you develop a mathematical model of a process, it is likely that other modeling studies could build on yours, either by applying your model in a different context, or by adding details you left out.

Your thesis contains the seeds of your concluding chapter

You will notice that I am essentially asking you to comment on things that are already in your thesis, in one way or another. At the point of sitting down and writing the concluding chapter, the raw material for writing this chapter is already written. You just have to go back and read your thesis with your critical and questioning faculties fully active. Take notes about things you might write about as you go, and then sit down and write the concluding chapter based on your notes.

As with everything else about writing a thesis, the concluding chapter is a highly individualized piece of writing. You should try to cover the points I am describing above, but you should feel free to organize the material in a way that makes sense to you, as long as it will make sense to your readers, too.

Lethbridge’s Covid-19 R number

I teach physical chemistry at the University of Lethbridge. I even wrote a textbook that we use in my class. The course includes a module on chemical kinetics, but as I explain to the students, kinetics shows up in a lot of places. With the Covid-19 pandemic being top of mind for everyone this year, and given that it’s a fairly straightforward extension to material I already teach in this class, I decided to teach the students how to compute those R numbers we keep hearing about in the news. The calculation is fairly easy (if you know a bit of kinetics), so as a public service, I’m going to be calculating weekly R numbers for Lethbridge and posting them here.

The R number is an estimate of how many new infections we are seeing for each infected individual, on average. Thus, an R number above 1 means that the number of infections is growing. An R number below 1 means that the number of infections is shrinking. Of course, we can just look at the daily case counts to get this information, but R gives you one simple number to look at, and moreover the calculation method has the effect of smoothing out the day-to-day fluctuations in case counts.

The method I initially used to calculate R was crude. The method had one free parameter, namely the average period of time that an individual is infectious. This parameter has considerable uncertainty, and it depends on behavior. For example, a person no longer counts as “infectious” if they are self-isolating. In order to estimate this parameter, I used provincial values for the number of active cases along with the province’s estimate of R to calculate an effective infectious period for each week from March 15 to April 30, inclusive. The mean infectious period calculated from these data was 3.8±2.3 days. To my surprise, this value is very low compared to the biological infectious period of about two weeks. But there it is. The low value suggests that most people are doing the right thing and staying away from other people when they think they might be infected.
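The crude calculation is easy to sketch: assume case counts grow (or shrink) exponentially, fit a straight line to the logarithm of the daily counts to get the daily growth rate r, then apply the standard approximation R ≈ 1 + rτ, where τ is the infectious period. The code and the case counts below are my own illustrative sketch of this idea, not the exact script used for the table:

```python
from math import log

def r_number_crude(daily_cases, infectious_period=3.8):
    """Estimate R from exponential growth of case counts:
    fit a line to log(cases) vs. day to get the daily growth rate r,
    then use the standard approximation R ~ 1 + r * tau."""
    n = len(daily_cases)
    xs = range(n)
    ys = [log(c) for c in daily_cases]
    x_mean = sum(xs) / n
    y_mean = sum(ys) / n
    # least-squares slope of log(cases) vs. day
    slope = (sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
             / sum((x - x_mean) ** 2 for x in xs))
    return 1.0 + slope * infectious_period

# hypothetical week of weekday case counts (weekend data excluded, as discussed below)
print(round(r_number_crude([10, 12, 15, 18, 22]), 2))  # → 1.75
```

Note that τ enters linearly, so the ±2.3-day uncertainty in the infectious period translates directly into a substantial uncertainty in R whenever the growth rate is far from zero.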

Eventually, I decided to use an SIR model to calculate R. This has the advantage that I don’t need to use the provincial data to calibrate any of the parameters. It has the disadvantage that a realistic model for Covid-19 is much more complicated than the simple SIR model would have it, so there is some amount of what we call “modelling error” in the estimate. R values starting the week of May 3 were calculated from an SIR model.

One more brief note on SIR models: the R variable in an SIR model (the “removed” class, not to be confused with the reproduction number) doesn’t distinguish between different ways of exiting the I class. Thus, R = recovered + dead. Because relatively few people die from Covid-19, the difference isn’t large, but it’s probably not insignificant.
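For readers who haven’t met it, the SIR model tracks the susceptible (S), infectious (I) and removed (R) fractions of the population, with the removed class lumping together recoveries and deaths. Here is a minimal forward-Euler sketch; the parameter values and initial conditions are purely illustrative, not the ones used for the Lethbridge estimates:

```python
def sir_step(s, i, r, beta, gamma, dt):
    """One forward-Euler step of the SIR model (s, i, r are population fractions)."""
    new_infections = beta * s * i * dt
    removals = gamma * i * dt          # recoveries and deaths, lumped together
    return s - new_infections, i + new_infections - removals, r + removals

def simulate(beta, gamma, days=60, dt=0.1):
    s, i, r = 0.999, 0.001, 0.0        # hypothetical initial seed of infection
    history = [(0.0, s, i, r)]
    steps = int(days / dt)
    for k in range(1, steps + 1):
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
        history.append((k * dt, s, i, r))
    return history

# illustrative parameters: transmission rate beta, removal rate gamma = 1/(infectious period)
beta, gamma = 0.5, 1 / 3.8
hist = simulate(beta, gamma)
# effective reproduction number at the outset: R = (beta/gamma) * S
print(round(beta / gamma * 0.999, 2))  # ~1.9, so this hypothetical epidemic initially grows
```

In this framework the effective reproduction number is (β/γ)·S, so fitting β and γ to the case data yields R without an externally calibrated infectious period, which is the advantage mentioned above.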

Note that I will not be providing confidence intervals, which I feel would give an undeserved air of statistical certainty to these calculations. Finally note that these calculations are retrospective. They do not necessarily predict what will happen next week.

I will be updating this table on a weekly basis. I’m using the data published daily in the Lethbridge Herald for a couple of reasons: it’s convenient, and it’s about the right amount of data to get a decent estimate of R. Because of the Herald’s publishing schedule, this also leaves out weekend data, which are often off-trend and might otherwise cause some statistical difficulties. (In principle, leaving out the weekend data shouldn’t affect the R value, which is based on how fast case counts are growing, and not on exactly when we started counting or how long a stretch of data we use. As an analogy, think about your speed as you go down the highway, which you can get by dividing distance by time. As long as you keep a constant speed, it doesn’t matter exactly when you start or stop measuring time and distance travelled. However, if you wanted to calculate a typical speed of travel, you wouldn’t include a period of time when you took your foot off the gas. Similarly, lower testing rates over the weekends would result in data that I would have to throw out because they are off the weekly trend line. The Monday “catch up” data point sometimes has to be discarded because it is off the trend line in the other direction.)

I would finally note that R values calculated from January 2022 onward are somewhat suspect because of the restriction of testing to select groups as the omicron wave overwhelmed the Province’s testing capacity.

Hopefully some of you will find these local R values useful.

R values for Lethbridge (2021). Asterisks indicate points with larger uncertainties, sometimes due to statutory holidays reducing the number of available data points for the week, and sometimes to unusual scatter or other data anomalies.

In praise of the late H. T. Banks

H. Thomas Banks is one of those people I wish I had had a chance to meet. Unfortunately, he died December 31st of last year, so that won’t be happening. Given that I greatly admired his work, and on the assumption that some young scientists read this blog, I thought I would say a few words about some of Banks’ papers that I particularly enjoyed.

H. T. Banks, for those of you who may not have heard of him, was an outstanding applied mathematician. He had wide interests, but most interesting to me was his extensive work on delay-differential equations, given my own interest in the subject.

The first Banks paper I read was a 1978 joint paper with Joseph Mahaffy on the stability analysis of a Goodwin model. Looking for oscillations in gene expression models was a popular pastime in those days. In some ways, it still is. This paper stood out for me as a careful piece of mathematical argument showing that a certain class of models could not oscillate. The paper also contained a solid discussion of the biological relevance of the results. Discovering oscillations in a model may be fun for those of us who enjoy a good bifurcation diagram, but most gene expression networks probably evolved not to oscillate. How much of that lovely discussion was due to Banks, and how much to Mahaffy, I cannot say. But a lot of Banks’ work was just as careful about the relevance of the results to the real world.

Much more recently, Banks was involved in a lovely piece of mathematics laying down the foundations for sensitivity analysis of systems with delays, particularly for sensitivity with respect to the delays. Sensitivity analysis is a key technique in a lot of areas of modelling. The basic idea is to calculate a coefficient that tells us how sensitive the solution of a dynamical system is to a parameter. There are many variations on sensitivity analysis, which you can read about in a nice introductory paper by Brian Ingalls. The Banks paper provided a basis for doing this with respect to delays, and was a key foundation stone for our own work on this topic.

Some years ago, we developed a method for simulating stochastic systems with delays. Our intention was for this method to be used to model gene expression networks. I was therefore pleased and surprised when I discovered that Banks had used our algorithm to study a pork production logistics problem. That just shows what an applied mathematician with broad interests can do with a piece of science developed in another context. Banks and his colleagues went a bit further than just studying one model, looking at models with different treatments of the delays, and finding that these led to different statistical properties, which would of course be of great interest if you were trying to optimize a supply chain.

The few examples above show a real breadth of interests, both mathematically and in terms of applications. You can get an even better idea of how broad his interests were by scanning his list of publications. There are papers there on control theory, on HIV therapeutic strategies, on magnetohydrodynamics, on acoustics, … Something for just about every taste in applied mathematics. There is a place for specialists in science, but often it’s the people who straddle different areas who can make the most important contributions by connecting ideas from different fields. I think that Banks was a great example of a mathematician who cultivated breadth, and was therefore able to have a really broad impact.

So I’m really sorry I never got to meet H.T. Banks. I think I would have enjoyed knowing him.

(If you’re wondering why I’m so late with this blog post: I found out about Banks’ passing from an obituary in the June SIAM News, which because of the pandemic I didn’t get my hands on until about a month ago.)

50 years of Physical Review A

In the beginning, there was the Physical Review, and it was good. So good in fact that it soon started to grow exponentially. At an event celebrating the 100th anniversary of the Physical Review in 1993, one unnamed physicist quipped that “The theory of relativity states that nothing can expand faster than the speed of light, unless it conveys no information. This accounts for the astonishing expansion rate of The Physical Review” (New York Times, April 20, 1993). (At the risk of sounding like Sheldon Cooper, if this physics joke went over your head, this post is probably not for you.) As a result of the rapid growth of the Physical Review, in 1970, it was split into four journals, Physical Review A, B, C and D. One factor that drove this split was that many scientists had personal subscriptions to print journals at that time. (I still have one, although not to a member of the Physical Review family.) In its last year, the old Physical Review published 60 issues averaging over 400 pages each. That’s another 400-page issue roughly every 6 days. Most of the material in each issue would have been completely irrelevant to any given reader. You can imagine the printing and shipping costs, the problem of storing these journals in a professor’s office, not to mention the time needed to identify the few items of interest in these rapidly accumulating issues. So splitting the Physical Review, which in some sense had started in 1958 when Physical Review Letters became a standalone journal, was perhaps inevitable.

The new journals spun out of the Physical Review were to be “more narrowly focused”, which is, of course, a relative thing. Four journals were still to cover the entire breadth of physics. Each of the sections was correspondingly broad: PRB covered solid-state physics, C covered nuclear physics, D covered particles and fields, and Phys. Rev. A covered… everything else: the official subtitle of PRA at the time was “General Physics”, which included atomic and molecular physics, optics, mathematical physics, statistical mechanics, and so on.

Physical Review A now describes itself as “covering atomic, molecular, and optical physics and quantum information”, other topics having over time been moved out to other journals. Physical Review E in particular was split out from PRA in 1993 to cover “Statistical Physics, Plasmas, Fluids, and Related Interdisciplinary Topics”. (That description has changed over the years as well, as the process of splitting and redefining journal subject matter continues. PRE is now said to cover “statistical, nonlinear, biological, and soft matter physics”. Physical Review Fluids was born in 2016 to pick up some of the material that would formerly have been in PRE.) Despite the evolution of PRA, one thing that hasn’t changed is that it has been an important journal for chemical physics right from the day it was born. With this year marking the 50th anniversary of Physical Review A, and given that I trained in chemical physics at Queen’s and at the University of Toronto, I thought it would be a good time to write a few words about this journal. As with all of my blog posts, this will be a highly idiosyncratic and personal history.

I thought it would be fun to start by looking at the contents of the very first issue of PRA. Atomic and molecular physics featured prominently in this issue, with several papers either reporting on the results of theoretical calculations, or on the development of computational methods for atomic and molecular physics. Interestingly, the entire issue contained just one experimental paper. I suspect that this is an artifact of the period of time in which this first issue appeared. The atomic and molecular spectroscopy experiments that could be done using conventional light sources had mostly been done, and lasers, which would revolutionize much of chemical physics in the decades to follow, were not yet widely available in physics and chemistry laboratories.

One of the things that struck me on looking at this first issue is how short papers were in 1970. Excluding comments and corrections, the first issue contained 27 papers in 206 pages, so the average length of a paper in this issue was just under 8 pages. The papers in the first issue ranged from just 2 pages to 16. Eleven of these papers ran to four pages or less. And remember, Physical Review Letters was spun out more than two decades earlier, so there was already a venue for short, high-priority communications. Other than in letters journals like PRL, we don’t see many short papers anymore, and even in PRL, two- or three-page papers are a rarity. The “least publishable quantum” has grown over time, and the ease with which graphics can be generated has resulted in an explosion of figures in modern papers. I suspect, too, that concise writing isn’t as highly valued now as it was in 1970.

As is often the case in anniversary years, Phys. Rev. A has created a list of milestone papers. This list includes several classic papers on laser-cooling of atoms, a technique for obtaining ultra-cold atoms in atom traps, i.e. atoms very close to their zero-point energy within the trap. Because this almost entirely eliminates thermal noise, this technique allows for very high precision spectroscopic measurements, and therefore for very sharp tests of physical theories. Interestingly, in ion traps, the mutual repulsion of the ions causes them to crystallize when they are sufficiently cooled, which was the topic of one of my two papers in Phys. Rev. A.

The list of milestone papers also includes Axel Becke’s classic paper on exchange functionals with correct asymptotic behaviour. I have mentioned Becke’s work in this blog before, in my post on the 100 most-cited papers of all time, a list on which two of his papers appear. And as I mentioned there, Axel Becke was the supervisor of my undergraduate senior research project, resulting in my first publication, which also appeared in Phys. Rev. A. If you pay any attention at all to lists of influential papers and people, Axel’s name keeps popping up, and not without reason. He has been one of the most creative people working in density-functional theory for some decades now. Interestingly, Axel has only published three times in PRA, and I’ve just mentioned two of those papers. (Axel’s favourite publication venue by far has been The Journal of Chemical Physics.) His only other paper in PRA, published in 1986, was on fully numerical local-density-approximation calculations in diatomic molecules.

Many beautiful papers in nonlinear dynamics were published in Phys. Rev. A before the launch of Phys. Rev. E. I will mention just one of the many, many great papers I could pick, namely a very early paper on chaotic synchronization by Pecora and Carroll. Chaotic synchronization, which has potential applications in encrypted communication, became a bit of a cottage industry after the publication of this paper. I believe that the Pecora and Carroll paper was the first to introduce conditional Lyapunov exponents, which measure the extent to which the response to chaotic driving is predictable.

Currently, my favourite Phys. Rev. A paper is a little-known paper on radiation damping by William L. Burke, from volume 2 of the journal. This is a wonderful study in the correct application of singular perturbation theory that also contains a nice lesson about what happens when the theory is applied incorrectly. If you teach singular perturbation theory, this might be a very fruitful case study to introduce to your students.

I could go on, but perhaps this is a good place to stop. PRA has been a central journal for chemical physics throughout its 50 years. While PRE picked up many topics of interest to chemical physicists, PRA remains a key journal in our field. Until the Physical Review is reconfigured again, I think it’s safe to say that PRA will continue to be a central journal in chemical physics.

Scientists’ social networks

In some older posts, I mentioned strategies for keeping up with the scientific literature, one of which was to use RSS. In recent years, social networks for scientists have emerged. These allow for both targeted and serendipitous discovery of literature that is relevant to you. I want to emphasize that these networks are not enough on their own. It’s still important to know how to search for specific information, for example. However, they nicely complement the other techniques I have mentioned and, as an added bonus, they can raise your profile in the scientific community too.

There are lots of specialized social networks for scientists, but only three that I know of that cover all of the sciences and are open to everyone: ResearchGate, Mendeley, and Academia.edu.

I’m not going to talk about less-specialized social networks, but of course, they have their uses too. In particular, if you’re eventually going to be looking for a job, a LinkedIn profile is not a bad thing to have. I have just one piece of advice for you there: if you do get a LinkedIn profile, make sure you maintain it. At the very least, make sure that your current employment is up to date. Potential employers will look you up on the web. Having an out-of-date LinkedIn profile makes it look like you’re not taking a professional approach to your career. If you don’t think that you can adequately maintain a LinkedIn profile, you would be better off not having one at all.

I should say before I go any further that this post reflects my views, based on what I’ve found effective for me. The choice of social networks is, in the end, a personal one.

ResearchGate

I like ResearchGate. It’s free. (They pay for themselves using ad revenue.) It’s easy to use. It doesn’t clutter your mailbox with lots of unwanted emails. And despite the fact that they support themselves with ads, the ads are neither intrusive nor excessive in number. I’m not alone in thinking that ResearchGate is the scientists’ social network of choice. Most of the scientists whose work I try to follow are on this site.

ResearchGate’s basic paradigm is not that different from Facebook’s: You follow researchers or specific research projects. Updates from these researchers or projects show up in your ResearchGate home page, so all you have to do is to check in once or twice a week to see what has been going on among the people you follow. Based on your activity, ResearchGate will add papers into your feed that it thinks you might find interesting. Most of those suggestions are quite reasonable and useful. Once in a while, you also get recommendations for projects or researchers you might want to follow. I personally find that a bit less useful, although once in a while someone will pop up that it would make sense for me to follow and that I wasn’t already following.

Like most social networks, ResearchGate will be most useful if you restrain your enthusiasm for following everybody in sight. Follow researchers whose ideas and research you find useful. Maybe follow a friend or two. Don’t automatically follow back everyone who follows you. If your home feed is full of useless junk, ResearchGate will become much less useful to you.

From the point of view of advertising your own presence, ResearchGate has some really nice features. You can add your publications manually, but it also scours the journals for papers you might have written. When you first sign up, you may find that you receive a lot of notifications that it may have found papers you authored. However, this dies down fairly quickly, and once it learns who you are (how you sign your papers, what universities you have worked at), it not only suggests fewer and fewer papers you didn’t author, it also tends to find your papers and suggest you add them before you have time to add them yourself.

ResearchGate also has question-and-answer forums, where you can ask questions (e.g. on techniques), or answer them. You can also follow questions when someone asks one that is of interest to you.

Mendeley

Mendeley is interesting because it’s not just a social networking site. It’s also a reference manager. I can’t say I’ve looked into it a lot. But I know that people who like it say very good things about it. It’s worth a look if you haven’t settled on a reference manager and want a Swiss-army knife that both keeps your bibliography and lets you find interesting references.

Academia.edu

I’m not a fan of this one. It has a free version that has very limited features, and a pay version they are forever trying to get you to sign up for. If you sign up for Academia.edu, you will receive many, many emails from them. It’s probably possible to control this behaviour, but Microsoft Outlook’s Clutter feature does a good job of keeping these emails out of my sight, so I haven’t bothered. I think that some universities have subscriptions to Academia.edu. I would tend to stay away from this one unless you work at a place that has a subscription.

Some tips for research scholarship applications

Last term, I sat on a graduate scholarship committee for the first time in a few years. I noticed a few common errors, and at the encouragement of a colleague, I have turned this experience into the blog post you are now reading.

Many scholarship applications will require a brief research proposal. Here are some things you should think about if you have to include a proposal in your application:

  1. The proposal has to be well written. If you’re not naturally a good writer, show your proposal to someone who is. Bad spelling and grammar reflect badly on you. Poorly constructed sentences and paragraphs that obscure the point you are trying to make are even worse. They suggest that you don’t care enough to proofread your work carefully and/or to get someone to proofread it for you. This advice of course extends to other parts of your application.
  2. It should be clear how your work fits in a larger context. Here’s a made-up example: Student X wants to synthesize molecules containing some weird new functional group. That’s great, but unless you explain it to me, I don’t know why anyone would want to do that. Are these molecules theoretically interesting? Do they have potential applications? Do they extend our knowledge of chemistry in a new direction, and if so, what is that direction and why should I care? This comment is, of course, more general than the example above, and would extend to a proposal to prove a theorem, to study distant galaxies, etc.
  3. Almost all scholarships and postdoctoral fellowships are judged by panels of non-experts, so write your proposal for a general scientific audience. In part, this connects to my previous point: It may seem self-evident to you why you would want to study protein Y, and perhaps it is to people in your field, but it may not be obvious to a scientist outside of your field. Beyond that, you need to define non-obvious abbreviations, avoid highly specialized jargon if possible, etc.
  4. The proposal’s scope should align with the level at which you are applying. Don’t propose 20 years of work in an M.Sc. application. Don’t propose something very limited (in time and/or intellectually) in a Ph.D. or postdoc proposal. The latter is a surprisingly common (and fatal) error. We might forgive the over-eager M.Sc. applicant, but we can’t forgive a Ph.D. applicant whose proposal doesn’t look exciting. If you are competing for a scholarship, you are competing with other people whose proposals have some real intellectual interest. If you are making systematic measurements of some property, unless you tell me otherwise, it might look like work for a technician. How does your work tie in to major theories in your field? What is the potential for it to change how we think about certain issues? Do you need to develop new measurement methods that will be more broadly applicable?

Some Canadian (especially Tri-Council) scholarship applications ask you to comment on your most significant contributions. Other scholarship competitions may ask for something like this with different wording. Such a section is not about why the work is significant to you. It is about the significance of your work to your field. In some cases, especially if you’re just getting started in research, your most significant contribution may be a conference presentation. If it is, nobody cares that you really enjoyed presenting your work to leaders in your field. What we care about is if your work represents a real advance. Interest from leaders in your field may be evidence of that, especially if they followed up with you after your talk. But the emphasis is on what they got out of it, not what you got out of it. If you can, try to tell us how your work requires new thinking about some issue or other in your field. Or maybe tell us how your work opens up new vistas. The same goes for publications. I’m sure it was exciting to get your paper published in the Elbonian Journal of Science, but what I really care about is the science in the paper, and whether you can tell me why it was important. (In fact, I probably care more about whether you are effectively communicating the importance of your work than whether I fully buy your argument. When I sit on these committees, I’m evaluating you. One of the things I want to know is whether you can craft a coherent argument.) Since you probably don’t have any experience writing this kind of text, it is imperative that you get an experienced pair of eyes (e.g. your supervisor’s) on this section of your application.

Many scholarship applications will ask for a summary of your most recent completed thesis (or equivalent). When an application has a section like this, we expect you to use most of the space to tell us about your past work. What did you do? How did you do it? Why was this a hard thing to do? What was learned? And yes, why was it important? If you write three lines when we gave you a page, that’s not good. You need to give us some details here. It’s your work. You should be able to wax poetic about it.

In fact, as a rule, you should use most of the space allowed for any given part of your application, provided of course the section is relevant. (On occasion, there will be sections that you can’t use. For example, if you’re asked to list publications and you don’t have any, you clearly can’t use this space.) Don’t make stuff up, but not having much to say about yourself or your work is generally considered a negative.

Academia is slowly becoming more progressive. Accordingly, most scholarship applications will have a section in which you can tell us about any obstacles life threw in your way that might have affected your performance. I know that some people are afraid of using these sections, but you should use them if there is something we should know about. We genuinely try to take life circumstances, among other things, into account when we evaluate scholarship applications. The kinds of things you might want to let us know about include having a disability (that you could document on request), taking time off to start a family, having to look after a sick parent or child, and so on. If anything has kept you from taking a full course load, prevented you from completing a degree in the “usual” amount of time, or negatively affected your grades over some period of time, let us know. We can’t take it into account if we don’t know about it.

Maybe I can close with a bit of general advice: The best way to learn to write good proposals is to work with someone who has been successful at it. Ask your supervisor or other mentors who are more advanced than you to look over what you have produced. Take their advice to heart. Don’t take it personally if they are very critical. In fact, you should especially thank the people who are very critical of your applications. They’re usually the ones giving you the most important feedback.

Running xppaut in Windows

Running xppaut in Windows is sometimes tricky. My new book on nonlinear dynamics gives brief instructions on installing the Cygwin X server and xppaut in Windows, but I’ve often had trouble getting xppaut to play nice with Cygwin/X. After playing around with it today, I think I’ve come up with a set of instructions that will work every time. And of course I expect to be proven wrong almost immediately… However, I’m still happy to share what I’ve learned.

What I’m trying to achieve here is a low-fuss installation that will let you run xppaut from the command line. Because I’m much more familiar with Unix shell programming than with DOS batch files, my solution involves the former. You’re going to be installing Cygwin anyway, so we might as well take advantage of its full power.

I will be leaving you to read the documentation for the details of how to accomplish some of the tasks below. None of them exceed the intelligence of an average person, and links to the documentation are given. Here are the steps:

  1. Install Cygwin. In the package installer, select the latest versions of xinit, xset and xhost for installation.
  2. Get the xppaut for Windows zip file. Unzip the package and put the xppall folder that it contains somewhere sensible. Bard Ermentrout recommends the top level of the boot (C:) drive, but I don’t think that’s necessary.
  3. Add the xppall folder’s location to your PATH environment variable. If you put this folder at the top level of your C: drive, you would add C:\xppall to your PATH variable.
  4. Create the following file in the xppall folder that you just installed, using a text editor (Windows Notepad, or a Unix editor such as vi or emacs; to use emacs, you must first install it with the Cygwin installer):
#!/bin/bash

# Script to run xppaut in Cygwin using the Cygwin/X server.
# You can call this script xpp, then invoke it on the command line as you would xppaut.

# Start X server if one isn't already running.
export DISPLAY=127.0.0.1:0.0
if ! xset q >&/dev/null; then
    startxwin -- -listen tcp >&/dev/null &
    # The following 5-second pause will slow down startup, but ensures that the
    # Xwindows server is up before trying to call xhost, which otherwise may hang.
    sleep 5
    xhost +127.0.0.1 >&/dev/null
fi


# Run xppaut, passing through any command-line arguments supplied to this script.
# Using "$@" preserves quoting and handles any number of arguments.
xppaut "$@"

I recommend that you save this file into the xppall folder, using the file name xpp. Now open a Cygwin terminal and issue the following commands (assuming you put xppall at the top level of the C: drive):

cd /cygdrive/c/xppall
chmod u+x xpp

This will make this file executable. (It may already have been, but you might as well make sure.) If all went well, you should now be able to run xppaut by typing ‘xpp file.ode’ in a terminal window where, obviously, ‘file.ode’ would be replaced by the name of an ode file in the current working directory. There are a bunch of ode files in xppall/ode. I usually test a new installation of xppaut using lorenz.ode.

Note that this will work provided you do not start the XWin Server from the Start menu.

By all means let me know if you try this and run into problems. Within reason, I will try to help.

Frequently confused words

Some words are very frequently confused. Sometimes, this makes the writer’s intent unclear. In other cases, the meaning of the sentence may be clear, but it’s still distracting to those readers who know the difference. So it matters.

This little blog entry focuses on words that commonly appear in scientific writing and that are often confused or misused. There is a longer list of words commonly confused in general writing here: http://writing2.richmond.edu/writing/wweb/conford.html. By all means consult this source in addition to this post.

Principle/principal: “Principle” is a noun that means a fundamental rule, truth or law. It is never an adjective. The adjectival form of this word is “principled”. “Principal” can be either an adjective or a noun. As an adjective, it means “main” or “most important”. So all of you PIs out there are “Principal Investigators”. I hope that you are also “principled investigators”, but “Principle Investigator” would mean someone who carries out research into principles, which I suppose might be applied to ethicists, although it would be unusual to do so. As a noun, “principal” can have one of two meanings: It can mean the main person involved in some affair or transaction, as in “the principal in a lawsuit”, who might be the main plaintiff or defendant, or it can be the title of the leader of an educational institution, e.g. the “Principal of Queen’s University”.

Adapt/adopt: A thing that is adapted is changed to suit some particular purpose. For example, a figure that was adapted from a source was not just copied. Some details of the figure were changed, or else the original was used as a model for a new figure that still retains some resemblance to the original. On the other hand, something that is adopted is just used as is, without modification. You can, for example, adopt the procedure of Smith et al. (1902), which means that you used their procedure exactly as they described it. You can also adapt Smith et al.’s (1902) procedure if you need to change it to use it in a new context, or to work with a different set of instruments, etc.

Affect/effect: This pair can be confusing because both of these words can either be a noun or a verb, but with different meanings. I’m going to focus here on the most common uses of these words in scientific writing. If you’re a psychologist, you’re going to need to do additional reading on this topic because in that discipline, the noun forms of these words have highly technical meanings that you simply have to get right.

Almost always in scientific writing outside of psychology, you’re going to use “affect” as a verb and “effect” as a noun. If you just remember that, you should be in good shape. The verb “affect” means “to produce an effect in”. (Note the use of the noun “effect” in the definition of the verb “affect”.) So, for example, the weather affects the timing of plant flowering. The noun “effect” designates a consequence of some causative event or agent. Late flowering is an effect of cool weather. Similarly, we talk of cause and effect, not cause and affect, unless you’re a psychologist.

Complimentary/complementary: In scientific writing, you want “complementary”. “Complimentary” refers to receiving praise, or being given something free-of-charge, as in “complimentary drinks”. “Complementary” has the sense of one thing completing another. Thus we have complementary angles, complementary base pairs, etc.

Infer/imply: All of the words we have looked at so far had similar spelling. This pair falls into a different category of words that are semantically related. Inferring is a logical deduction made by a person. Note that a person infers something. Lately, I’ve been noticing people using infer when they should be using imply. To imply something is to suggest it. Data can imply a particular conclusion. But only a person can infer that the data implies something. A person infers. Data implies.

Roll/role: “Roll” has to do with the action of rolling. For example, one can roll dice, or roll across the countryside in a car. A “role”, on the other hand, is a part that something plays. So mitochondria play a central role in the energy metabolism of a cell, for example.

Refute: This word isn’t a member of a simple pair, but lately I have noticed it being misused quite a lot. “Refute” has exactly one meaning: to prove an argument or hypothesis wrong. Note the word “prove”. To refute something is not merely to argue against it, or to provide a counterargument, or to present contradictory data. If you have refuted a hypothesis, it’s dead. It’s a very strong word, and rarely applicable. But good for you if you have managed to refute something. It’s probably a significant achievement. If the thing is still debatable, then you need a different word. It’s hard to give specific advice here, because there are many possible nuances, but here are some possible phrases you might use: “argue instead/against”, “provide a counterargument/rebuttal”, “reply”, “respond”, “cite as evidence against”, “deny”, “contradict”, “dissent”, “reject”. The variety of nuance in just these options hopefully suggests one of the problems with misusing “refute”: if it’s clear you don’t actually mean that something was conclusively disproved, what do you in fact mean? If you’re tempted to use “refute”, I would strongly suggest that you think carefully about what you really mean, and then use plain language, which may involve a complete rewriting of your sentence. For example, “Jones and Wang (2001) refuted Amato and Sveshnikov’s (1998) hypothesis”, if it doesn’t actually mean that they disproved the hypothesis, might be rewritten in any of the following ways, among many others, depending on what you’re trying to say: “Amato and Sveshnikov’s (1998) hypothesis was contradicted by Jones and Wang’s (2001) interpretation of the data”; “Jones and Wang (2001) showed that the data were more plausibly consistent with…”; “Jones and Wang (2001) argued that Amato and Sveshnikov’s (1998) hypothesis was incompatible with…”

Delivery of a clear message requires clear language, and that means using the right words to express a thought.

Climate change mitigation measured in gas tanks

A lot of the discussion around what we need to do to slow down climate change is described to us in tonnes of CO2. The trouble is of course that most of us don’t know what a tonne of CO2 looks like. I thought I would try to bring this discussion into terms that most of us would understand by rephrasing it in terms of gas tanks. Keeping in mind that not all carbon emissions come from burning gasoline in a car, a gas tank is still probably a more useful visualization for most of us than a tonne of CO2. Note also that what we really care about is the total warming potential of all greenhouse gases released into the atmosphere, which is usually measured in CO2 equivalents. But since the basic unit of measure is still a tonne of CO2, the discussion below is framed in terms of CO2.

First, of course, we have to decide how big a tank we’re going to use. Because there’s a precedent for using a 50 L tank, that’s what I’m going to use as my standard tank. That’s the size of tank you have in a typical smaller car. At 2.3 kg of CO2 per litre of gasoline, a 50 L tank will produce 115 kg of CO2 when burned in your automobile engine. Conversely, a tonne of CO2 would be equivalent to about 8.7 tanks.

To meet its Paris accord commitments, Canada needs to cut its emissions by about 205 million tonnes of CO2 between now and 2030. (Some of you will say, “but our Paris commitments aren’t enough!” You’re right, of course, but it’s a baseline to aspire to in the short run.) As I write this, the population of Canada is about 37.6 million, so that’s 5.5 tonnes per Canadian per year. That’s about 48 gas tanks per person per year. Note that this figure includes CO2 emissions from industry and from private use, but keep in mind too that this does not include all of the carbon emissions you are responsible for through your purchases of foreign-made goods, which are accounted for in the country where these emissions are produced. So, for example, if you buy a pair of shoes made in Vietnam, those are Vietnam’s emissions, even though you are the person driving these emissions. StatsCan tried to estimate household contributions to greenhouse gas emissions (not including foreign emissions for goods imported into and consumed in Canada) a bit over a decade ago, and found that households were responsible for about 46% of Canada’s greenhouse gas emissions, either directly or indirectly. Assuming a similar ratio still holds, each of us is on the hook for about 22 gas tanks per year.
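For the numerically inclined, the whole chain of arithmetic above can be checked with a quick back-of-the-envelope script. This is just a sketch (a shell one-liner using awk, since awk handles the floating-point arithmetic), and all of the input figures are the ones quoted above; it rounds at each step, as the text does:

```shell
# Check the gas-tank arithmetic above, rounding at each step as in the text.
awk 'BEGIN {
    kg_per_tank  = 2.3 * 50                                  # 115 kg of CO2 per 50 L tank
    tanks_per_t  = sprintf("%.1f", 1000 / kg_per_tank) + 0   # 8.7 tanks per tonne
    t_per_person = sprintf("%.1f", 205e6 / 37.6e6) + 0       # 5.5 tonnes per person per year
    tanks_pp     = t_per_person * tanks_per_t                # about 48 tanks per person
    household    = 0.46 * tanks_pp                           # about 22 are household emissions

    printf "CO2 per tank:            %.0f kg\n", kg_per_tank
    printf "Tanks per tonne:         %.1f\n", tanks_per_t
    printf "Tanks per person, year:  %.0f\n", tanks_pp
    printf "Household share:         %.0f tanks\n", household
}'
```

The sprintf trick just rounds the intermediate results to one decimal place, which is why the totals match the rounded figures quoted in the text.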

I don’t know about you, but I don’t think I fill up my gas tank 22 times per year. Remember: those are 50 L tankfuls. A lot of the times I “fill” my tank, I’m only buying 30 or 40 L of fuel. So I could stop driving completely, and that wouldn’t do it, especially when you consider that I live in a three-person household with just one car, so I can’t count on my wife and son to cut 22 fill-ups of cars they don’t have! The idea here isn’t to think in terms of literal gas tanks, but in terms of gas-tank equivalents. Between the three of us, my wife, son and myself need to cut about 66 gas-tank equivalents out of the emissions we’re responsible for.

There are plenty of web sites that will tell you what you can do to reduce your personal carbon emissions. Clearly, if I can drive less and use my bike or public transit more, that helps. Equally clearly, that alone won’t get us there. One of the things that will make a big difference that politicians don’t like to talk about is that we’re probably all going to have to just buy less stuff. I’m going to pull a few figures from Mike Berners-Lee’s excellent book How Bad Are Bananas? to make this point.

Let’s say that building the car you want to buy will produce 15 tonnes of CO2, about what it takes to build a midsize car. That’s 130 gas tanks. You could of course avoid causing those emissions by buying a used car, which won’t cause any extra emissions. But of course, eventually someone has to buy a new car (assuming we don’t all start riding public transit, but that only works for urban dwellers), and let’s suppose that you decide that you really want a new car. You could just buy a smaller car. Some cars have an emissions impact of as little as 6 tonnes of CO2, or 52 gas tanks. Even if you don’t go to the smallest car available, you could easily shave 30 or 40 gas tanks from your emissions just by buying a smaller car.

But wait! Those emissions should be amortized over the time you own the car, right? The average Canadian owns a new car for about 6 years before trading it in. So the impact of your 130-tank car over your period of ownership is about 22 gas tanks per year. Coincidentally, this is how much you need to cut out of your annual emissions, so if you can go car-free, you’ve pretty much done your part (but you might have to find other reductions if a family is sharing a car, as in our case). Going to a smaller car might save 7 gas tanks per year, which is about a third of the 22 tanks per year you need to cut out of your lifestyle. Not bad! But what if you really want that 130-tank car? If you keep it an extra two years, the impact of your new car becomes about 16 tanks per year, so you are reducing your carbon emissions by about the same amount as you would by buying a smaller car, just by keeping your car a bit longer. And obviously, this emissions reduction strategy just gets better the longer you keep the car.
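The amortization argument can be checked the same way. Again, this is only a sketch in shell/awk, and the 130- and 52-tank figures are the ones quoted above:

```shell
# Amortize the embodied emissions of a new car over the years you own it.
awk 'BEGIN {
    midsize = 130   # tanks of gas equivalent to building a midsize car (15 tonnes of CO2)
    small   =  52   # tanks equivalent to building a small car (6 tonnes of CO2)

    printf "Midsize car, 6 years: %.0f tanks/year\n", midsize / 6
    printf "Midsize car, 8 years: %.0f tanks/year\n", midsize / 8
    printf "Small car, 6 years:   %.0f tanks/year\n", small / 6
}'
```

Running this gives about 22, 16, and 9 tanks per year respectively, matching the figures in the text: keeping the midsize car two extra years saves roughly as much per year as downsizing would.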

And what about those Vietnamese shoes I mentioned earlier? Making the average pair of shoes and transporting it to a store near you results in emissions of about 11.5 kg of CO2, or about a tenth of a tank of gas. I probably buy two to three pairs of shoes per year, so for me, this isn’t worth thinking about. But if you’re a shopaholic who loves shoes, well, I’ll let you do your own calculation…

I suspect that if you’re going to buy clothes, shoes and accessories and are actually going to wear them until they’re ready for disposal, there probably aren’t significant emissions savings to be made by changing your shopping habits. However, some of us, and you know who you are, do buy stuff we won’t wear much before putting it into the basement. Then those fractions of a gas tank really start to add up. As a general rule, buy less, and buy used if you want to cut your carbon footprint. This applies not only to clothes, but to anything else we buy on a whim and then barely use.

And the general idea of buying what you need and using it applies to food, too. Food waste is a massive contributor to greenhouse gas emissions: Because food is wasted, it is necessary to overproduce food, which leads to deforestation, i.e. loss of an important carbon sink. Moreover, agriculture has a direct energy cost, so more food grown means more emissions from the agriculture sector. Then there is the transport of food that will never be eaten. And rotting food often produces methane, an even more potent greenhouse gas than carbon dioxide. A rough estimate is that household food waste (as opposed to food that is wasted somewhere in the supply chain) amounts to about a quarter tonne of CO2 per person per year in Canada, or 2.2 gas tanks. Not a huge number, but still about 10% of the emissions you need to cut per year. Roughly speaking, to reduce the amount of food you waste, you have to buy things you plan to eat, and then make sure you actually do eat them before they go bad. Sounds simple, but it does take a bit of a mental adjustment to our shopping and cooking habits.

So there you have it. Climate footprint and emissions reductions conceptualized in gas-tank equivalents. Hopefully this helps you understand the size of the problem a bit better, and also puts in perspective some of the things you can do to reduce your climate impact. A lot of the advice comes down to buying less stuff and using it for longer (or using it at all in the case of food). And as an added bonus, if you spend less, you’ll have more money in your bank account for a rainy day. Win-win.