Dimitri Rotov has a fiery post up that evaluates Joseph Glatthaar’s recent scholarship – specifically his use of statistical analysis in his recent studies. It’s a worthwhile read, though Rotov chose to embed his analysis in his vaguely defined “Centennialist” school paradigm. He begins with this little gem:
“Joseph T. Glatthaar is an early middle-aged Centennialist being groomed by Gary Gallagher to walk in the shoes of himself, Sears, McPherson, and the old storytellers – Williams, Williams, Catton, etc.”
I’m sure Glatthaar would find such an evaluation of his career laughable, but this sort of critique is standard in Rotov’s arsenal. In the end, it fails to shed any light at all on Glatthaar’s scholarship. We do get closer to a formal critique re: Glatthaar’s citing of casualty figures in General Lee’s Army: From Victory to Collapse. Rotov begins by taking Glatthaar to task for his imprecise citation of casualty figures and his failure to utilize Thomas Livermore’s Numbers and Losses. Rotov didn’t bother to look up Glatthaar’s references for his Cedar
Creek Mountain, but it only takes a few seconds to learn that they were pulled out of one of the appendices in Robert K. Krick’s Stonewall Jackson at Cedar Mountain. It’s not clear to me what exactly is problematic with citing one of the authorities on this particular battle.
The real target, however, is Glatthaar’s companion volume to General Lee’s Army, which includes all of the statistical tables compiled during the research process. I have not yet received my review copy of Soldiering in the Army of Northern Virginia: A Statistical Portrait of the Troops Who Served under Robert E. Lee, but I freely admit that I am not trained in statistics, which is why I believe it is worth reading Rotov’s review. At the same time, it would be easier to wade through without the snide remarks, but that is something we’ve all come to accept about his blog. I guess what I am looking for is a review of the review. I assume that the academic journals will find a qualified reviewer to examine the methodology and analysis contained in the book, but that will take some time. In the meantime I don’t expect the more popular review forums to do much beyond surface analysis.
Finally, I find Rotov’s post to be an excellent example of the potential of blogging to speed up the formal review process.
It seems odd to be chiming in on an ancient post, but the criticism that I am gratuitously caustic is more than fair, and my mental bookmark is “am I annoying Kevin Levin and his readers?” That bookmark often gets lost.
Having said that, the blog is for me, and if there are like-minded readers out there, so much the better. If the non-like-minded also get the occasional morsel of worth, that’s a bonus.
I encourage those who reject my views and personal style to get writing, get indexed, and they will get Googled.
Kevin’s description of Dimitri Rotov’s critique of Professor Joseph Glatthaar’s “Soldiering in the Army of Northern Virginia: A Statistical Portrait of the Troops Who Served under Robert E. Lee” accurately notes that Rotov utterly failed to evaluate the central question, which is the reliability of Glatthaar’s sample and its capacity to produce valid conclusions about the Army of Northern Virginia (ANV). As a university professor who teaches social science methodology and statistics at the undergraduate and graduate level, and who publishes on Civil War and other topics, I examined the work critically and had a number of questions. Ultimately, all questions were answered to my satisfaction. If you take the time to understand how the data were derived, I think you will join me in appreciating Glatthaar’s two volumes as excellent contributions to our understanding of Lee’s army.
First, Glatthaar compiled a list of all infantry and cavalry regiments, and artillery batteries, that served in the ANV and its 1861 predecessor, the Army of the Potomac. Using a random number table, he then chose 75 infantry, 50 cavalry, and 50 artillery units to provide soldiers for analysis. From those units, he randomly selected 4 foot soldiers, 3 cavalry troopers, or 3 artillerymen to produce a sample of n=600 Confederates (4*75 + 3*50 + 3*50). Next, Glatthaar researched the Compiled Service Records (CSR) for details about each soldier’s military history, such as promotions, wounds, desertion, etc. He also combed the U.S. Census records for information on each soldier’s background, such as whether he came from poverty or lived in a slave-holding household. Once that information was obtained, the figures were weighted by the proportion of soldiers in each branch of service, when relevant.
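For readers who find a design easier to follow in code, here is a minimal sketch of the two-stage draw described above. Only the shape of the draw – 75 infantry, 50 cavalry, and 50 artillery units, then 4, 3, and 3 men per unit – comes from the description; the unit counts and names below are hypothetical placeholders, and Glatthaar of course used a random number table against real rosters, not a software generator.

```python
import random

random.seed(0)  # reproducibility for this sketch only

# Hypothetical unit lists standing in for Glatthaar's actual roster of ANV units.
units = {
    "infantry": [f"inf_reg_{i}" for i in range(1, 201)],
    "cavalry": [f"cav_reg_{i}" for i in range(1, 61)],
    "artillery": [f"arty_bty_{i}" for i in range(1, 251)],
}

# (units drawn, soldiers drawn per unit) for each branch, per the description above.
plan = {
    "infantry": (75, 4),
    "cavalry": (50, 3),
    "artillery": (50, 3),
}

sample = []
for branch, (n_units, per_unit) in plan.items():
    # Stage 1: randomly select units within the branch.
    for unit in random.sample(units[branch], n_units):
        # Stage 2: stand-in for pulling named soldiers off the unit's roster.
        sample.extend((branch, unit, k) for k in range(per_unit))

print(len(sample))  # 600 = 75*4 + 50*3 + 50*3
```

Note that this yields 300 infantrymen, 150 cavalrymen, and 150 artillerymen, which is why the branch weights discussed below are needed before drawing army-wide conclusions.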
Glatthaar analyzed the composition of Lee’s army from 1861 to 1865, and determined that 81.8% of the men served in the infantry, 11.3% rode with the cavalry and 6.9% manned the guns in the artillery. I did a first pass at verification by examining the ANV’s order of battle at Gettysburg, and counted 172 infantry regiments, 19 cavalry regiments, and 70 artillery batteries. At full strength, the infantry represents 81.7% of total manpower, the cavalry represents 13.8% and the artillery represents 4.6%. My numbers from Gettysburg were close enough to Glatthaar’s figures for the entire war that I found his weights to be highly credible.
The weights were needed to adjust the data because only 6.9% of ANV soldiers were in the artillery, for example, yet 25% of Glatthaar’s sample were artillerymen. If Glatthaar’s sample had been limited to the strict proportions of the ANV, there would have been only 41 artillerymen (6.9% of 600), which is too few to be fully reliable. As a consequence, the reported figures for the army as a whole are adjusted by these weights. For example, Glatthaar’s census data revealed that 42.3% of the infantrymen whom he could locate in the census came from slaveholding households, compared to 54% of the households of the cavalry and artillery soldiers. Producing the slaveholding figure for the army as a whole required applying the weights to the separate outcomes for the three branches of service, i.e. [81.8 * (42.3% infantry from slaveholding households) + 11.3 * (54.0% cavalry) + 6.9 * (53.5% artillery)]/100 = 44.4%. Such figures are important because they offer a different perspective on the circumstances of Lee’s troops compared to merely knowing that only 13% of soldiers personally owned slaves. Incidentally, when differences between branches of service occurred, Glatthaar conducted chi-square tests to determine whether such differences were statistically significant (a probability of 5 in 100, or smaller, of arising by chance) or fell within the margin of error and could simply have occurred by chance.
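The weighting arithmetic in the bracketed expression above can be checked in a few lines; the branch weights and slaveholding rates are the figures just quoted, and the rest is simply a weighted mean:

```python
# Branch weights (each branch's share of the ANV, in percent) and
# branch-level slaveholding-household rates, as quoted above.
weights = {"infantry": 81.8, "cavalry": 11.3, "artillery": 6.9}
slaveholding = {"infantry": 42.3, "cavalry": 54.0, "artillery": 53.5}

# Army-wide figure = weighted mean of the branch-level rates.
army_wide = sum(weights[b] * slaveholding[b] for b in weights) / 100
print(round(army_wide, 1))  # 44.4
```

The same weighted-mean step applies to any branch-level statistic that must be reported for the army as a whole.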
I am very satisfied with Glatthaar’s sample and statistical analyses. That does not mean that this is the last statistical study on the ANV. For example, regimental line and staff officers were included only if they emerged through random selection. The n=44 who were in Glatthaar’s sample appear to be in proper proportion to the 600 men overall, but may be too few to be completely reliable. The figure of 24.7% of ANV officers killed, compared to 13.7% infantry and 10.7% across the three branches, for example, will require confirmation with a larger sample of officers (such as n=150). Yet, anyone who undertakes that study will quickly recognize the substantial time and effort that Glatthaar devoted to obtaining lists of units, personnel rosters, CSRs and census data for the 175 units and 600 soldiers in “Soldiering in the Army of Northern Virginia”. It is the most accurate representation we have of the background and experience of the ANV rank and file, and is already being recognized as a major milestone in Civil War historical research.
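To put a rough number on “too few to be completely reliable”: a standard approximate 95% margin of error for a sample proportion is 1.96·sqrt(p(1−p)/n). The formula is textbook statistics, not something taken from the book; plugging in the 24.7% officer figure above shows why n=44 is shaky and why something like n=150 would be a real improvement.

```python
import math

def moe_95(p, n):
    """Approximate 95% margin of error for a sample proportion p with sample size n."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Officer death rate of 24.7% at different sample sizes (in percentage points):
for n in (44, 150, 600):
    print(n, round(100 * moe_95(0.247, n), 1))
# n=44 gives roughly +/- 12.7 points; n=150 about +/- 6.9; n=600 about +/- 3.5
```

At n=44 the true officer death rate could plausibly sit anywhere from the low teens to the high thirties, which is exactly why the 24.7% figure awaits confirmation from a larger sample.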
Thanks for taking the time to share your thoughts re: Glatthaar’s sample. It was very helpful.
Thanks for this. His methodology in his earlier book seemed reasonable to me, based on my very rudimentary understanding of stats and sampling, but your explanation makes it much clearer.
I will only add a bit of advice a colleague of mine, a professor in biostats, gave me once: any time someone complains about how statistical sampling isn’t a viable technique for drawing larger conclusions, tell them next time they go to the doctor, and they want to do a blood test, make them take it all!
I’m just so put off by the snarky tone and air of superiority in the first several paragraphs that I refuse to read any further. I could respect Rotov if he allowed comments on his blog—obviously he is not interested in having his opinions challenged.
As a book review editor, I tended to give my reviewers wide latitude, which on one occasion resulted in my being threatened with lawyers. But even I would not have printed Mr. Rotov’s attack on Joe Glatthaar. Others have referred to the most obvious problems: the birtherish notion that Centennial Skull and Bones has designated Joe Glatthaar as some heir apparent, the entirely unsupported and potentially libelous notion that others did the author’s work, and the other needless personal attacks all written from within a “no comments” Green Zone. No reputable book review section would allow that. What I found just as disturbing, however, was his assertion that Glatthaar should have had a qualified social scientist review the numbers, followed by his mean-spirited dismissal of the eminently qualified social scientist who helped build the sample in the first place, complete with a gratuitous link to the man’s Rate My Professors page. I’ve sometimes profited from reading Rotov over the years, but “this is rum business” that in the end sadly marginalizes both the reviewer and his medium.
I appreciate the comment and I completely agree with the thrust of it. There seems to be some question – no doubt fueled by a sentence that did not convey my point – that I am somehow endorsing D’s assessment of Glatthaar. Nothing could be further from the truth. I’ve highlighted and condemned his treatment of certain individuals on a number of occasions.
All I wanted to do here was ask if there was anything to his claims about the G’s handling of the statistics. I chose to do so, in part, because D does not allow comments on his blog. I was hoping that someone would call him out on it if the claims didn’t merit serious attention. A few readers have suggested just that.
Thanks again for the comment, Ken.
Not to worry, I didn’t think you were endorsing him. My only bone to pick was your last comment about speeding the formal review process. This has little in common with professional reviews, which embody rules and standards, chief of which is reviewing the book and not the reviewer.
As a grad student, I took a course on historical statistics with Vernon Burton, a master of the craft. I was not so proficient that I worked chi-squares into my last, numbers-based book, but I know what to look for, and I have no problem with the methodology in General Lee’s Army.
You said: “This has little in common with professional reviews, which embody rules and standards, chief of which is reviewing the book and not the reviewer.”
Agreed, but as we’ve learned even academic historians are not immune from failing to abide by such advice: http://cwmemory.com/2010/03/25/john-stauffer-strikes-again/
I have some basic training in statistics, enough to have a feel for sample size and what you can and can’t derive from that. I’d want to go back and read Glatthaar’s appendix on his sample and it’s currently packed away in a moving box. And of course I haven’t read Soldiering yet.
Overall, Rotov’s critique is ill-founded as I read it. Sure, a larger sample size is always better. Sure, there can be odd effects in any sample less than the whole. Yes, artillery and cavalry are over-represented with respect to the total population. But the sample size is large enough to indicate basic demographic trends, to tell us when soldiers entered the army and what broad-stroke differences there were.
And Glatthaar is correct that you needed more artillerymen and cavalrymen to get usable numbers of individuals.
As Brooks points out, complaining about quoting round figures about the Civil War is rather silly. The numbers we get are imprecise. Are the cited numbers correct to within 10%? If so, then we’re doing well. To pretend otherwise is as bad as what Rotov is complaining about.
Thank for the comment, Stephen. This is the kind of response that I was hoping to get.
First, someone’s confusing Cedar Creek and Cedar Mountain.
Second, I don’t see how Dimitri Rotov’s blog post speeds up the formal review process. Reviews by whom and for whom? What are his credentials in this regard? Has he even read the book?
Third, I don’t understand the link between the lack of formal training in quantitative analysis and an interest in Dimitri’s opinion of a book he has not read. The body of the post concerns General Lee’s Army and the fact that Dimitri does not like it.
Fourth, Livermore is not authoritative. We’ve seen that as book after book has looked over the records to give different results as to numbers and losses. Sometimes, as in the case of the June 3 assault at Cold Harbor, the digging of Gordon Rhea and Robert Krick has revealed different results, although those findings have not yet made it into some ill-informed mainstream accounts.
As always, I await Dimitri’s first book.
I wasn’t suggesting that Dimitri’s blog per se speeds up the review process, just that a properly written blog post could do just that. Second, I was assuming that D had an advance copy of the book. Of course, I could be wrong. I was not making any kind of claim about Livermore.
Finally, I wouldn’t hold your breath for a forthcoming book from Dimitri 🙂
You may not have been making any claim about Livermore, but Dimitri was, and it seems that opens him to criticism. Indeed, the problem here is that you’ve posted something, given it a context, and, in the absence of Dimitri having a comments section, some comments here are going to be directed at Dimitri’s post, and some at your use of it.
I see no evidence from Dimitri’s post that he has an advance copy of Joe’s book. However, the publication date is days away. I see several shots taken by Dimitri at Joe’s previous work. If the only thing of substance is Joe’s supposedly imprecise numbers, well, what of it? Are Livermore’s more precise, or merely more precise-sounding because they don’t end in “0”? Would Joe’s argument change? And, since Dimitri dismisses claims about imprecise numbers with a disdainful shrug, well, what are we to make of that?
For some reason, I recall that Union losses at Fredericksburg are 12,653, and at the Wilderness they are 17,666 … according to some sources. But I know that there’s an argument about Confederate losses at both battles, with the Wilderness estimate ranging from 7,000 to 11,400 … and even more. I don’t think that makes me a better historian than Joe. There have been various estimates of the losses on both sides at Gettysburg, the war’s most oft-studied battle. What should this tell us? And, knowing that, what really is the substance of Dimitri’s remark?
Should I read much into mistaking Cedar Creek for Cedar Mountain? Would Dimitri make much of that, even if it’s just in a blog post? I once had someone say I knew nothing about Andrew Johnson because somehow in copyediting the extra “e” fell out of Greeneville, Tennessee. What should we make of that comment? I think someone using Dimitri’s standards might make much of it.
And so I see some merit in Matt’s complaint, to put it gently. Yes, Dimitri’s often cited by certain people in the blogosphere, and sometimes he’s got interesting things to say. Sometimes he does not. The ruminations about how Joe goes about his work seem to me to be irresponsible, and that needs to be highlighted. You’ve brought Dimitri’s post to a larger audience, and yet you seem torn between laughing at him and taking his potential review of Joe’s forthcoming book seriously. What if someone decided to review your forthcoming work on the Crater by highlighting a few blog posts and then twisting the assessment to fit some strange agenda? That would not be a review of your book: it would be an attack, and one made by someone not willing to take direct criticism in turn.
“All I am asking is whether there is anything to D’s specific points re: Glatthaar’s statistical analysis.” What specific points? I see none. Indeed, I don’t see that he understands what Joe analyzed. It’s not battle losses.
From Amazon’s publisher’s notes: “While gathering research materials for General Lee’s Army, Glatthaar compiled quantitative data on the background and service of 600 randomly selected soldiers–150 artillerists, 150 cavalrymen, and 300 infantrymen–affording him fascinating insight into the prewar and wartime experience of Lee’s troops. Soldiering in the Army of Northern Virginia presents the full details of this fresh, important primary research in a way that is useful to scholars and students and appeals to anyone with a serious interest in the Civil War. While confirming much of what is believed about the army, Glatthaar’s evidence challenges some conventional thinking in significant ways, such as showing that nearly half of all Lee’s soldiers lived in slaveholding households (a number higher than previously thought), and provides a broader and fuller portrait of the men who served under General Lee.”
Dimitri has nothing to say about that, Kevin, and there’s no evidence that he has any training in quantitative methods.
So why are you waiting in anticipation of what he might have to say about the book? You say: “I freely admit that I am not trained in statistics, which is why I believe it is worth reading Rotov’s review.” That makes no sense to me, Kevin.
You conclude by saying: “Finally, I find Rotov’s post to be an excellent example of the potential of blogging to speed up the formal review process.” How so? I don’t see it in this case.
All I was asking for was someone to help me sort through a few specific claims. The publication date may be forthcoming, but I receive most UNC books weeks in advance and I thought D may have received a review copy. I am not torn at all. In fact, we pretty much agree on the value of his blog. All I was asking for was a little help in sorting through the claims given that I am not able to make an evaluation. Honestly, I fail to see the problem with that. I clearly stated the following in the post: “I guess what I am looking for is a review of the review.” There is nothing more here than that. I am in no way endorsing his assessment of Glatthaar; in fact, I went out of my way to call him on it as I have done in the past.
Again, as for the potential of blogging to speed up the process I was not necessarily referring to D’s post. Rather, I am simply pointing out that a well-informed blog post could speed up the process. Sorry for the confusion.
Kevin, I asked and you answered! I honestly don’t see how you can celebrate an intellectual community that embraces people who are posting completely unfair personal attacks unfettered by any sort of review process. Seems to me that the community members would be a bit more rigorous about rejecting that sort of thing. That is why I asked.
TF Smith: I think you are right that if certain errors were made, that would be significant and worthy of discussion. But Joe G – who I admit is a friend of mine – is one of the most careful scholars in the business, so it seems to me unlikely that he made these sorts of errors.
WIll H: We would have to agree to disagree about Fogel and Engerman. They are both brilliant scholars. TotC had a huge positive impact on the scholarship on slavery. As far as I know, all of their core arguments (which they listed at the outset) have withstood thousands of hours of research by dozens of scholars. They made some errors along the way, but TotC was presented as early findings as a way of generating scholarly discussion. As far as I recall, their arguments about slave diet were largely about the caloric content of that diet. Not surprisingly, slave owners fed their laborers the calories needed to do the job. Surely many of the crucial arguments about the history of slavery have to be made through the analysis of quantitative data.
Seems to me that Joe G has been trying to generate a sophisticated analysis combining “traditional” qualitative evidence and quantitative analysis. My feeling is that he has succeeded nicely. But I think that the larger point is that good quantitative work (which so few scholars attempt any more) can in fact answer questions that cannot be addressed through other means, especially when examined in conjunction with other evidence. Surely it is possible to produce distorted arguments using numbers. And fewer readers will be able to judge when that is happening. But it seems like a mistake to reject the notion of quantitative analysis on its face.
You said: “I honestly don’t see how you can celebrate an intellectual community that embraces people who are posting completely unfair personal attacks unfettered by any sort of review process.”
Your commentary is part of that review process.
Thank you, Matt. I know Dr. G is a well-respected scholar with a strong publishing history; I don’t know much about Mr. Rotov, other than that he is certainly a polemicist. I will add Dr. G’s work to my reading list, which expands on a daily basis, but will be very interested in his discussion of methodology. I am working toward a project that will look at labor force demographic trends over five decades in relation to federal legislation, so I need to see how professionals have dealt with these issues.
To give Kevin his due, he did write that “In the end, it (Rotov’s post) fails to shed any light at all on Glatthaar’s scholarship.” Seems like a fair summary.
General Lee’s Army is a must read, but I also highly recommend Forged in Battle and March to the Sea and Beyond.
Many thanks – I own and have read Forged in Battle; very impressive. At the moment, I am half-way through Charles Lane’s The Day Freedom Died on the Colfax massacre.
I thought Lane did a pretty good job with that book. I also recommend LeAnna Keith’s book, The Colfax Massacre.
Thanks – I’ll look for that one, as well.
Dimitri apparently has a large following, so I think Kevin is justified in discussing the review, just as he does, say, Black Confederate websites. While I haven’t read Prof Glatthaar’s latest, I think Dimitri offers a few good points buried in all of his infantile remarks. But then, I’ve been leery of historians basing their arguments on statistics ever since reading “Time on the Cross” in which Fogel and Engerman used statistics to “prove” that slaves were well-fed.
Thanks for the comment, Will.
I share all kinds of commentary concerning the Civil War on the blog, the CWM Facebook page, and my Twitter stream. One of the things I value about the blogging format is the ability to feature a wide range of content. I see nothing wrong with sharing it, responding to it and allowing others to sort through it.
While, as I’ve said, statistics don’t interest me all that much, it is a valid scholarly question as to the amount of insight into the Civil War that statistical analysis can provide to us.
The tone of the post and the site, overall, marks it as polemic; that much is clear. In some ways, it reminds me of the BCM sites in terms of the language used, and the lack of a comments feature, as far as I can tell.
Having said that, after reading through, it seems there are at least a couple of potentially valid questions raised in terms of social and demographic history (good old cliometrics), primarily around the basic question of comparing like to like. First, if in fact the data from company-, battalion-, and (Civil War era) regimental-sized units are mixed, and the samples are different (3 names from a FA battery of ~100 men vs. 4 names drawn from an infantry regiment of ~500 – rough orders of magnitude in both cases – as a hypothetical), then if the statistical conclusions being drawn do not differentiate between them, I can see a margin-of-error (MOE) issue coming into play. Also, depending on the criteria used, if the names drawn randomly for research are aggregated to support the “army-level” conclusions, rather than remaining within the different subsets (infantry, cavalry, artillery), I can see some issues there as well. One other, simply more intuitive, point is that, given the educational and financial resources likely needed among (for example) officers in cavalry and field artillery units as compared to infantry units, there could be the point of confirming the obvious in terms of (for example) slave-holding and/or wealth.
I have not read Dr. Glatthaar’s work, so I would not venture that any of these potential issues are, in fact, in play; those were simply the possible questions that arose as I read the review. Having said that, I think I’d still wait for the peer review before drawing any conclusions.
Best to all this Memorial Day.
Kevin, I guess my point is that I don’t see why you give this garbage a platform. I can see some virtue in being unfettered by the normal process of academic publishing, but this person would never have published this piece in any newspaper or journal, nor would he have dared present it at any academic conference, and i would pay good money to see him try to read it to Joe Glatthaar. The world of blogs means anybody can write whatever they want and post it. But aren’t there any standards in play when you link something and in a sense recommend it?
As for the quantitative analysis: as far as I know, the book isn’t out yet, so I doubt many of your readers have read it. There is nothing in this review that can be subjected to intelligent analysis without the book in hand.
In general, I would say that there are three separate questions one should ask of the application of statistical methods to history:
(1) Are the methods (sampling, tests etc) appropriate to the questions? That is, you can’t determine that a sample is inherently bad until you know the purpose for which it is being used.
(2) Is the analysis appropriate to the evidence? That is, once you have presented your questions and your evidence, are you over-reaching in presenting the analysis? Often the data are suggestive and interesting but fall short of convincing proof. The key is in the discussion.
(3) Have the methods and results been explained clearly? This is sometimes a stumbling block. When you are writing a footnote or appendix or entire volume of explanatory evidence, who really is the audience? How much of a tutorial do you present in reading regressions, for instance?
I don’t see anything in the review portion of this review that opens up a serious conversation about the quantitative methods, especially when the book is not yet distributed.
I completely understand your concerns and I even tend to agree with you. At the same time, I am committed to casting a wide net when it comes to referencing online commentary. Again, most of D’s critique is absolutely useless, which I pointed out in the post. At the same time, I made the call to highlight the brief commentary toward the end because I thought it might be worth a response, and D does not allow for comments. I suspect that he has an advance copy of the book. I suspect that my advance copy will arrive soon. When others get it they can weigh in and offer their own conclusions.
My brain does not function in mathematical constructs, but I would think that if there is any discipline that has the capacity to be snark-free, it’s statistics. It should be possible to express disagreement without personal attack.
I don’t think anyone will disagree with you on that count. All I am asking is whether there is anything to D’s specific points re: Glatthaar’s statistical analysis.
Unfortunately, I’m the last person who’d be able to give you the answer to that one.
Okay Kevin, I’ll bite.
You have indicated that there is all sorts of good stuff here in the blogosphere that academic historians really would profit from reading. So I’ve been dipping in now and again to see what I’m missing.
You choose to link this review and propose a discussion of it. You even offer it as an “excellent example of the potential of blogging to speed up the formal review process.”
So I read it. It is, after all, a holiday.
Is it possible that you don’t realize that this review that you have linked is insulting garbage? I do not know anything about the reviewer. I checked for him in Project Muse, History Cooperative, H-Net and Google Scholar and turned up nothing. Although I guess he is a blog celebrity.
In any case, what I read was a long string of absurd attacks on Joe Glatthaar. There is no point in listing them here. And the insults are not the sort that clever comics use to exaggerate truths. They are dishonest insults that indicate no familiarity at all with Glatthaar or his scholarly methods. The review even includes a mocking link to JG’s colleague’s “ratemyprofessor” page. What sort of person does that?
I gather that your point is that your readers should brush aside the attacks as Rotov just being Rotov and that you consider the rest of the review “worth reading.”
I read it on your recommendation and didn’t find any real analysis or review here that could be described as worth reading. You note that his comments on Cedar Mountain make no sense. He begins one paragraph with “Look at this sentence.” Well, as I read the sentence he quotes it seemed perfectly clear and his critique nonsensical. Later on he cites footnotes as evidence that JG is “complimenting himself” when the quoted footnotes appear to be perfectly normal reports on significance tests. And so on.
The review does seem to have one fairly coherent set of critiques. He believes that JG is unclear in explaining his sampling techniques and incomplete in summarizing his statistical methodologies. But the reader of the review is really in no position to agree or disagree with those critiques given the information presented here. And since we have already read a series of ad hominem attacks and problematic critiques of direct quotes, it is hard to imagine why we should trust the reviewer’s assessment of these points.
I guess I am just baffled by why you choose to call your readers’ attention to this review.
Surely it is his right to think ill of Joe Glatthaar and to declare that he lacks an understanding of history, and so on. But beyond the fact that he owns a computer and thus can run his own blog, why is it that you think that we should care what he thinks?
It sounds like we are pretty much in agreement about Dimitri’s review. I agree with every point you’ve made re: the absurd claims made about how Glatthaar produced General Lee’s Army. As you know, I indicated as much in the post. All I am asking readers to consider are the comments related to G’s handling of the statistical data.
I have yet heard from anyone who is willing to consider the specifics of D’s argument, which I have admitted I am not qualified to consider.
Dimitri tries to be clever but fails. He attacks Glatthaar for the casualty figures he uses for Cedar Mountain, but he also admits “I didn’t bother to look up the citation he gave”. Adding to his lack of knowledge of the source Glatthaar uses, he makes several mistakes, such as when Dimitri writes “Where Glatthaar puts Jackson’s casualties at 1,400” when Glatthaar actually wrote “over” 1,400, not “at”. Dimitri also wrote “Glatthaar puts Banks’ casualties at 2,400” when Glatthaar does not specify that all 2,400 are Banks’s. If one follows the citation Glatthaar uses, we see that Krick’s research put Banks’s losses at 2,222, but he also included losses experienced by McDowell’s men, who arrived at the end of the battle, something Livermore chose not to include.
It is clear that Dimitri did a hatchet job early on in the post, but I am more interested in his evaluation of Soldiering. Thanks for the comment.
As always, Rotov’s heaping helping of sarcasm and constant “centennialist” references will turn off many readers, especially those of us who know Dr. Glatthaar. But I think he has some valid quibbles. Works that rely on statistics written by historians not trained in their use always make me suspicious. His method of choosing his sample also has me scratching my head. And I thought American historians had gotten over their fascination with the Engerman-Fogel brand of pretentious statistics mumbo-jumbo. One of my reasons for choosing to study history was to avoid “chi tests” and the like.
While I find Rotov sometimes interesting to read, this is the kind of statement I find galling, and silly:
“My suspicion is that Glatthaar designed the book and had graduate students (or other helpers) develop it while he supervised them. The parts in which he had been indoctrinated (master narrative, numbers and losses, major themes and ideas) reflect his personal involvement to ensure conformity with the canon; meanwhile, the non-battle numeric detail represents the work of others.”
The cavalier use of words like “indoctrinated” strikes me as unnecessarily caustic and ungenerous. Perhaps it is the Internet effect, but I wonder if he would use such dismissive rhetoric face to face with Glatthaar, or even McPherson.
It adds nothing to the substance of his review, which is what I am much more interested in evaluating. It makes him look rather silly.
McPherson? Would that be the same McPherson who sold out and wrote a foreword for Tom Carhart’s book, a book that many ridicule, just to help Carhart sell his book? Is that the same McPherson? And yes, I will say that to his face.