Navigating alphabet soup

Navigating alphabet soup isn’t easy, says this item in the Virginia Railway Express Ride magazine. In the first two paragraphs, it hits the reader with 12 unexplained abbreviations, initials, and acronyms (acronyms are abbreviations that are pronounced like a word, not by saying the letters, as the late William Safire, the “On Language” columnist of the New York Times, explained so well; FAMPO in this item is an acronym, pronounced as though it were a word; VRE is not: it’s pronounced by saying the letters V-R-E).

“Navigating alphabet soup” makes me picture someone floating on a cracker and paddling across the soup.

My good old Associated Press Stylebook and Libel Manual also refers to “alphabet soup”: “In general, avoid alphabet soup. Do not use abbreviations or acronyms which the reader would not quickly recognize.” In this Ride magazine item, readers would quickly recognize VRE: it’s the initials of the commuter rail service, Virginia Railway Express. Readers might assume that CEO means “chief executive officer,” but there was plenty of room to spell it out, and the fascinating website Acronym Finder offered 72 possible meanings, including “customer experience officer,” “corporate executive officer,” “chief ethics officer,” and “chairman and executive officer”—all plausible. Readers also would know CSX, the name of a railroad over which VRE trains operate. It was established in 1980 in a merger of two railroad systems, Chessie and Seaboard. “C can stand for Chessie, S for Seaboard, and X actually has no meaning,” according to a 2016 article by William C. Vantuono in Railway Age, so trying to spell out CSX would be pointless. RIDE, despite being presented in all capital letters, is not an acronym, as far as I know.

Although I’ve been a rail passenger advocate in Virginia for more than 20 years, I could not identify with certainty the exact names represented by all the other initials, only most of them.

If writers want to be understood, they need to match their vocabulary to their readers’ knowledge. Editors often are a bridge between writer and reader, and we editors need to be sure that readers aren’t left paddling on a cracker, trying to navigate the alphabet soup.

What Day Is This Anyway?

Virginia Railway Express sent out these Sunday alerts a few weeks in a row. The problem? VRE doesn’t run on Sunday.

The Weather Channel spread an alert about possible snow on a Monday, with the alert expiring Sunday afternoon.

Maybe Sunday is a bad time for composing, editing, and sending out alerts. Before you send a message out to the world, it’s important to read what you have written and see whether it is correct. It’s distressing to see how many people do not do this.

More tiny important info

When I posted about tiny important information in December 2020, I wondered how it gets proofread. Maybe it doesn’t. This coupon for a free package of grape tomatoes from Safeway says, in tiny words that I could almost read without assistance, that the coupon can’t be used to get alcohol, tobacco, lottery tickets, or many other things. But I wanted fermented grape tomatoes that I could smoke!

Is it really necessary to say that a coupon for tomatoes can’t be used to obtain a hunting license or amusement park tickets? It appears that the coupon wasn’t proofread at all. Or maybe an editor questioned the wording but was told that a lawyer insisted on it.

Socially distanced hike

“What’s wrong with this picture?” you might ask. But this blog is about editing, so instead I’ll ask, “What’s wrong with this caption?” In case you can’t read it, the relevant part says, “Members of a St. Charles men’s small group take a socially distanced hike …”

Does this photo, which appeared on the Arlington, Virginia, Catholic Herald website on December 9, 2020, show men taking a hike? No, it shows them standing in a meadow. “On a hike” would be OK; hikers might stand in a meadow while on a hike.

Much more important, does it show a group hiking safely during a pandemic? They aren’t standing 6 feet apart, and only one is wearing a mask. I think it would be right and responsible for an editor to question whether the hikers indeed practiced safety to protect themselves from infection and add that information to the story or the caption. Maybe the caption could say that the hikers took a short photo break but for the rest of the day kept a safe distance apart and wore masks—if that is true. (The story in the Herald didn’t say anything about the hike.)

When so many people are careless about spreading infection, let’s make sure, when editing, that “socially distanced” indicates actual precautions and is not just a phrase, especially when a photo suggests otherwise.

Who gets to be called ‘doctor’ in the news?

Editors need to know! If the answer is “anyone who wants to be called ‘doctor,’” we’re going to have a big communication problem.

Please pardon me for quoting my book The Editor’s Companion, but I can’t think of a better way to say it: “I think the Associated Press style is sensible—refer to people by last name unless further identification is important, and don’t call people ‘doctor’ without specifying the degree unless the person is a medical doctor or dentist, because that’s what most readers take ‘doctor’ to mean. And don’t add ‘Dr.’ in front of a name that is followed by a degree; that would be redundant.”

Naturally, some people disagree—strongly. The Chief Executive Officer of the American Psychological Association wrote in 2008: “The use of the term ‘doctor’ recognizes psychologists’ extensive education and training as well as their positions in medical settings as supervisors and managers of patient care at the highest levels.” I think it’s fair to sum up the argument as “psychologists want recognition.” Please note: it’s not a question of whether their patients or students or colleagues call them “doctor.” It’s whether news stories refer to them as “doctor” without saying what their doctoral degree is in. (By the way, I don’t have the choice here of putting Dr. in front of the CEO’s name, because the name wasn’t given with the commentary. Maybe the Chief Executive Officer of the American Psychological Association was so famous that the person’s name didn’t need to be given. Would “Doctor Who?” be sufficient?)

Another perspective: The Associated Press “rule assumes that people aren’t smart enough to differentiate medical professionals from subject matter experts and that a title alone means you can trust one opinion over another,” wrote Mariana Grohowski in “Is There a Doctor in the House?” on the Michigan Tech Unscripted Research Blog, March 26, 2018.

You might think she was about to dismiss the Associated Press rule as dumb, but, no, she followed up with informative insights. She gave reasons for using Dr. in news stories involving people who aren’t medical professionals: to “garner respect,” to signify authority, to “acknowledge … hard work and expertise,” and to “illustrate the diversity of PhD holders, which is especially important for women and minorities.” I think it’s fair to say that at least three of those four reasons are about recognition.

“Some doctorate holders see the title as a failsafe for garnering respect from students and colleagues,” said Grohowski, citing Chronicle of Higher Education writer Stacy Patton, but “others consider it a graceless method of asserting an otherwise ignored or devalued status.” In 2007, a reader told Judith Martin (Miss Manners) about getting a curt correction from a cousin for addressing a Christmas card to Mr. rather than Dr.; Miss Manners answered that “in the higher levels of the academic world, it is taken for granted that one has a Ph.D. and considered silly for anyone not in the medical field to use the title of doctor.”

Grohowski also gave “reasons not to use Dr. for PhD holders”: “so as not to mislead or confuse vulnerable individuals seeking advice,” to avoid “appearing snobby,” to avoid aloofness, or to avoid “exerting authority”; she noted that this “doesn’t matter much outside academia.”

As Grohowski noted, use of Dr. for non-medical doctors could “mislead or confuse vulnerable individuals”; she noted that “physicians in Arizona, Delaware, Florida and New York” were against “individuals with doctorates in nursing introducing themselves as doctor to patients.” So it’s not just that some “people aren’t smart enough.” Not everybody who wants to be called “doctor” ought to be, at least not in all situations.

The Associated Press also stated (my edition is from 1996, but I’m sure the rule still applies): “Do not use Dr. before the names of individuals who hold only honorary doctorates.” I’ve encountered people who wanted to be called “doctor,” and then I learned that their doctorates were honorary. Editors need to beware of that. I personally have a diploma saying I’m a doctor of philosophy in theology; it may be genuine, but I certainly didn’t earn it.

Another trap for editors involves people with Ph.D.’s who want to be called “doctor” when others who have doctorates don’t call attention to their degrees. Accommodating the squeaky wheel could make it appear that the noisy person is the only one in a story who has a doctorate.

And does “a title alone” mean that “you can trust one opinion over another”? I’ve seen doctors cited as authorities with no mention of what their doctorate was in. Having a Ph.D. doesn’t make you an authority on everything. Remember Dr. Linus Pauling? He won two Nobel Prizes, and he also became famous for claiming that large doses of vitamin C could prevent colds. Some people insist that he was right, but he was a doctor of chemistry, not of medicine.

It sounds great that in a school system that has students returning to classrooms during a pandemic, the superintendent of schools is a doctor. In a Feb. 16, 2021, story on the website of WTOP, Washington, DC, “How DC-Area Catholic Schools Are Faring with in-Person Learning,” by Dick Uliano, we are told that “Dr. Joseph Vorbach” is “superintendent of schools for the Catholic Diocese of Arlington” in Virginia. Nowhere does this news story say what Vorbach is a doctor of (international relations, according to the diocesan website). I don’t think people read a news story about schools during a pandemic to see the superintendent recognized for his “extensive education,” which is about all the reference to Dr. Vorbach accomplishes; this seems to me like implying greater knowledge of public health than the man actually may have, kind of like giving a placebo to readers.

My suggestions for editors:

  • If you have a style guide, follow it
  • Keep the reader in mind; maybe in your situation, you have to communicate respect or deference, but make sure you communicate facts
  • If something you’re editing mentions a doctor, be sure to specify what the person is a doctor of
  • If someone has an honorary doctorate and insists on mentioning it, list it as additional information, not by attaching Dr. or Ph.D. (or anything else that suggests an earned doctorate) to the person’s name

And take your vitamin C, as Dr. Pauling told you to. Listen to Dr. Dunham! And (seriously) do listen to Miss Manners.

Tiny important info

Can you read the “Warning!” on the Kidde smoke detector or the footnote associated with “free*”?

This looks like important information, but it’s presented in a way that makes it hard to read. I could barely read the Giant footnote with a magnifying glass. I couldn’t read the smoke detector warning. Maybe it said, “Do not place dimes next to the lid!” Actually, the same warning (I think) was on a piece of paper attached to the smoke detector (in tiny type, like the Giant footnote).

Was this important information proofread? How?

An editor who gets to check important information should also check its presentation. Will it be legible to the people who receive it? When people give me exclusive offers, I wonder who is excluded. Maybe anyone who can’t read the footnote is excluded.

What do the Covid-19 numbers say?

With numbers about the Covid-19 coronavirus pandemic in the news every day, writers want to say something. Editors should be aware of what the numbers actually measure and make sure that writers don’t make statements that aren’t backed up by the numbers.

An example of numbers in the news:

The Washington Post graph for Nov. 27, 2020, showed 31 known deaths and cases (oddly lumped together) for Virginia. If you clicked on the graph, the data were revealed to represent 29 “new reported” cases and 0.23 deaths (averaged for the past seven days) per 100,000 population. Virginia’s population is about 8 and a half million, so that’s about 2,465 new reported cases per day and 62 deaths per day.
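The per-100,000 conversion behind those totals is easy to verify. Here is a quick sketch of the arithmetic (the 8.5 million population figure is the rough one used above):

```python
# Convert a per-100,000 daily rate into an approximate statewide total.
def rate_to_total(rate_per_100k: float, population: int) -> float:
    return rate_per_100k * population / 100_000

VIRGINIA_POPULATION = 8_500_000  # the article's approximate figure

cases_per_day = rate_to_total(29, VIRGINIA_POPULATION)
print(round(cases_per_day))  # 2465 -- about 2,465 new reported cases per day
```

An editor who runs the reader's likely mental arithmetic like this can catch a graph label that doesn't match its underlying data.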

I’m guessing that just about all the deaths are being reported but not all the new infections. The reported number of cases does not give a clear picture of how many people are infected with the virus; the numbers in the news rarely say who was tested. As Denise Dunbar explained in an excellent story, “Analysis: What to make of COVID-19 data?” in the Nov. 24, 2020, Alexandria (Virginia) Times, the testing is not a scientific sample of the whole population:

Case numbers are largely a function of how widespread testing is in any state or locality. As more testing is done, a larger number of cases are going to be recorded. Conversely, if a wider net is cast on testing, particularly if asymptomatic people are tested, the positivity rate should go down.…

As more tests are administered, we will get closer to an accurate sense of how many people within each community actually have COVID-19, since the positivity rate merely tells the percentage of those tested who have the disease, not the percentage of overall residents who have it.…

[The] numbers don’t account for people who were asymptomatic but infected and weren’t tested, or for residents with symptoms who haven’t sought testing or medical care. This means the actual percentage of residents who either have had or currently have COVID-19 in Alexandria is almost certainly higher.

I think it’s fair to say that the people being tested for the coronavirus are mostly self-selected: they ask to be tested because they have symptoms or because they want to be sure they are not infected, thinking they can then confidently travel or visit other people safely. I said mostly because some people are chosen for testing: anyone admitted to a hospital for any reason, for example.

So editors should be sure that writers do not state how many people have the virus; all they can do is guess, and guesses should not be presented as facts.

Pasta for Pilgrims

Pasta with an expiration date of July 1623? That would be three years after the Pilgrims landed. And the “Hag” part doesn’t sound nice.

Actually, because the letters and numerals are jammed together, you can read them any way you like. But would an expiration date of July 16, 2023, make any more sense? What kind of pasta should be good one day and not the next? Maybe I should have put the pasta aside to see what would happen to it on July 17, 2023.

Round suspects

“Round up the usual suspects,” said the character Captain Renault in the film Casablanca. As an editor, when I see round numbers, especially round percentages, I am suspicious. People casually say 90% all the time when they mean “almost all.” But when it comes to science or surveys, a percentage indicates a level of specificity. When I see 90% or other round numbers in something that’s supposed to be factual, I immediately wonder whether there are any actual measurements behind the percentage or whether it’s somebody’s wild guess.

After I saw the sign that is partly shown above on the outside of a Washington, DC, Metrobus (the 80% part is enlarged so you can read it), I wrote to the Nearest Green Foundation asking where the number 80% came from. I also mentioned that it doesn’t say 80% of deaths from the Covid-19 pandemic, just 80% of deaths. I politely asked for the numbers behind the percentage: total deaths and total black and Latino deaths. The foundation, or at least the computer running its website, assured me of a prompt response. That was almost a month ago. (Why didn’t I capitalize Black just now? I don’t always follow Associated Press style, but the guidance, which I discussed in another post, is to capitalize it when referring to people who self-identify as Black, and who knows whether all the people labeled “black” in the death count identified themselves that way?)

Meanwhile, I started searching on my own for the source of the 80% number, because I saw it mentioned in other places but without any further information. Georgetown University kept coming up, and I found a June 9, 2020, article in the Georgetown Hoya, “Georgetown Report Highlights Racial Disparities in Health in DC,” and a June 2, 2020, press release from the Georgetown University School of Nursing & Health Studies, “New Georgetown Report Highlights Health Disparities and Calls for Racial Equity in the District of Columbia.” Both the Hoya article and the press release cited a report from the School of Nursing & Health Studies: Health Disparities in the Black Community: An Imperative for Racial Equity in the District of Columbia, dated 2020 but, as the Hoya article notes, “prepared from findings gathered before the COVID-19 pandemic.”

The report “notes that Black residents account for 80% of deaths caused by the coronavirus in the District [of Columbia],” stated the Hoya article.

The School of Nursing & Health Studies had a different take on what Health Disparities in the Black Community says: “Approximately three quarters of the deaths associated with COVID-19 in the nation’s capital have been among the African American community.”

What the report actually says is “At the time of this report, Black residents represented close to 80% of deaths caused by the virus in the District.” It also states, “The report is a synthesis of findings and does not include new quantitative content.” The only source given for the figure of 80% is an end note: “Coronavirus Data | coronavirus. Accessed April 17, 2020. https://coronavirus.dc.gov/page/coronavirus-data.” “Coronavirus Data” is a vague source, and the hyperlink is dead. In this case, the Health Disparities in the Black Community authors got lucky—sort of. The dead link has a redirect to another web page of the Washington, DC, government, but the information for April 17, 2020, consists only of the numbers of tests, positives, lives lost, and recovered; it says nothing about ethnicity. The web page has a link to a spreadsheet with more numbers but no mention of black, white, or other racial data. If the Washington, DC, website cited by Health Disparities in the Black Community ever had such information, it seems to be gone.

But there are other sources (or suspects).

The DCist website has a May 6, 2020, article by Becky Harlan of WAMU, the radio station of American University in Washington, DC: “Black Washingtonians Make Up Less than Half of D.C.’s Population, but 80% of Coronavirus Deaths.” However, the article includes a pie chart indicating that 86%, not 80%, of the people in Washington, DC, who died from Covid-19 were black. (My book The Editor’s Companion has a sample editing checklist; one item on the list is “Check repeated information.” This is a good example of repeated information: the title says 80%, but the pie chart says 86%.)

Another news story, on July 16, 2020, by Biba Adams in The Grio, “Washington D.C. Has the Worst Racial Disparity in COVID-19 Deaths in US: Report,” stated, “In the District of Columbia, in the shadow of The White House, more than 550 people have died from COVID-19, more than 74% of them are Black.” If you’re going to die in the shadows, it might as well be the shadow of the White House. Leaving aside the poor grammar and the political complaint tossed in by mentioning the shadow of the White House, did the proportion of black people dying of Covid-19 really go from 80% in April to (maybe) 86% in May and down to 74% in July? The Grio in turn cited a July 15, 2020, news story from APM (American Public Media) Reports, “Failing to Protect Black Lives” by Christopher Peak, which said that “the coronavirus had left … 570 dead in Washington, D.C.” So The Grio rendered 570 as “more than 550,” which technically it is, but the APM Reports story had numbers of black and white deaths (though you have to place the cursor over the bar chart to see the numbers). It said there were “421 Black deaths,” and 421 is 73.9% of 570, so the percentage (if those numbers were correct) was slightly less than 74%, not “more than 74%,” as The Grio put it. (The Grio certainly got its information from the APM Reports story, which The Grio cited, because the APM Reports story said, “The fatality rate among the city’s Black residents is 5.9 times higher than for white residents,” which The Grio changed to “The fatality rate for Black residents in D.C. was 5.9 higher,” leaving out the word times.)
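Checking “repeated information” like this is just division. A quick sketch, using the raw counts quoted above from the APM Reports story:

```python
# Recompute the percentage from the raw counts in the APM Reports story.
black_deaths = 421
total_deaths = 570

pct = black_deaths / total_deaths * 100
print(f"{pct:.1f}%")  # 73.9% -- slightly less than 74%, not "more than 74%"
```

Thirty seconds with a calculator would have caught The Grio’s overstatement.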

At least we got away from the round numbers. But who got left behind on the bus? The sign sponsored by the Nearest Green Foundation said that the 80% included Latinos.

And I found statistics that included them, on the COVID Tracking Project’s Racial Data Dashboard, updated twice weekly with data reported by U.S. states and territories (and the District of Columbia). As of October 21, 2020, 75% of those who had died from Covid-19 in Washington, DC, were “Black or African American” people, and 13% were “Hispanic or Latino” people. So the big, scary round number on the side of the bus actually understated the problem: of the people in Washington, DC, who have died from Covid-19, approximately 88%, not 80%, were black or Latino.
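Combining the dashboard’s two figures is simple addition (the numbers are the ones reported above):

```python
# Share of DC Covid-19 deaths by group, per the COVID Tracking Project
# Racial Data Dashboard as of October 21, 2020.
shares = {"Black or African American": 75, "Hispanic or Latino": 13}

combined = sum(shares.values())
print(f"{combined}%")  # 88% -- the sign's round 80% understated the problem
```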

The Georgetown School of Nursing & Health Studies press release was correct after all in stating that “approximately three quarters of the deaths associated with COVID-19 in the nation’s capital” were among blacks, but I don’t give the school credit for accuracy in this case, because the sources the school’s press release cited did not contain that information.

In the case of black and Latino people in Washington, DC, there were actual numbers behind the percentages that some people were reciting, but in other cases I doubt that there are any real numbers involved. For example, I saw these statements in print:

“More than 1 in 5 mobile searches are pornographic in nature.”

“50% of Christian men and 20% of Christian women admit being addicted to pornography.”

Maybe I have underestimated the prevalence of pornography, but the numbers seemed high. I tried to find the sources that were cited to find out what they said.

The first statement had a footnote citing “A Large Scale Study of Wireless Search Behavior: Google Mobile Search,” by Maryam Kamvar and Shumeet Baluja of Google. The copy I found online was undated, but the most recent source it cited was from 2006, so it may have been published soon after that. The data for the report were sampled in 2005. It examined only “Google’s mobile search interface.” Maybe the results would have applied to all search engines in the first years of this century, but the report doesn’t say that. The report did indeed say that more than 20% of Google mobile searches were for “adult” content (I’m not using adult as a euphemism for pornographic; I put the word in quotation marks because it’s the word the report used for the “most popular type of query that users performed.”) However, the report also cited a 2002 article in IEEE (Institute of Electrical and Electronics Engineers) Computer, according to which, in the words of the Google report, “pornographic queries only accounted for less than 10%” and “found that the proportion of pornographic queries declined 50% from 1997 to 2000.… The high percentage of pornographic queries may be on a declining curve.”

If such queries may have been on a declining curve whenever the Google report was published (maybe more than ten years ago), I don’t think that the statement “More than 1 in 5 mobile searches are pornographic” is supported today.

The other source was more problematic. The percentages are suspiciously round: exactly half of men (50%) and exactly one-fifth of women (20%) supposedly admit being addicted. The footnote read, “ChristiaNet, Inc., ‘ChristiaNet Poll Finds that Evangelicals Are Addicted to Porn.’ Marketwire, Aug. 7, 2006. http://www.marketwire.com/press-release/christianet-poll-finds-that-evangelicals-are-addicted-to-porn-703951.htm (accessed Dec. 27, 2012).” The link for the 2006 Marketwire press release is dead, but I found a ChristiaNet press release online that says, “Copyright© 2017”; however, it might be the press release from 2006; maybe 2017 is when the web page was last updated. This press release says that ChristiaNet “conducted a survey asking site visitors eleven questions about their personal sexual conduct … there were one thousand responses” (another suspiciously round number). The other results mentioned in the press release are similar: 60% and 40%. The press release doesn’t say what the eleven questions were; whether the answers were yes or no, open ended, or multiple choice; or how the one thousand people were led to the poll: was it on the website’s home page or maybe associated with an article aimed at people who use pornography? The people who were polled may not have been representative of average Christians: the press release says it was a poll of Evangelicals, which it equates with Christians, but there are many other kinds of Christians, and even if the poll was valid, it might not be representative of Christians in general.

I could not find the poll itself or the results. With the percentages of responses being all round numbers, it sounds like something clumsily made up. ChristiaNet seems to be defunct too. As an editor, I would say that the claims are unsubstantiated.

So after chasing some suspicious round numbers down rabbit holes, my advice to other editors is this: ask for the sources that support the numbers. The round numbers might be correct, they might be someone’s wild guess, or they might be entirely made up. In fact, I think that 90% of them are made up. (That’s a joke.) And make sure the reference notes have full details about the source, because if they rely on a hyperlink, the link might be dead soon.