The truth of the matter is, there are a lot of different paths to getting a B-, far more paths than there are to getting an A or a D. Shows tend to get a B- in one of several situations:
(1) Critics were seriously divided on a show. All My Sons is a good example of this one. Some people thought it was a revelation; others hated it. Because there are more publications that tend to give out very favorable Broadway reviews (the Philadelphia Inquirer comes to mind, although they didn't review AMS), and because even people who really dislike a show rarely hate it enough to give it an F, the grades in this situation tend to end up in the B- range.
(2) No one really loved it, some people hated it. A somewhat flawed show will suffer this B- fate. A few critics will like it, but not love it (or even like like it), for X, Y, and Z reasons, while a couple of reviewers (John Simon, for example) will tear it apart based on the same faults. So it gets a cluster of B+s and then one or two D-s or Fs, and we're right back in the B- zone.
(3) No one really hated it, some people loved it. Same thing in reverse. Critics in general don't care for it, putting their grades in the C-/D+ range, but one or two people loved it, and their enthusiasm pulls the rating up to a B-.
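To see how scenarios (2) and (3) play out arithmetically, here's a minimal sketch. Our actual letter-to-number scale isn't spelled out above, so the GPA-style mapping below is an assumption for illustration only:

```python
# Hypothetical letter-grade scale (GPA-style); the site's real mapping
# is an assumption here, used only to illustrate the averaging.
SCALE = {
    "A+": 4.3, "A": 4.0, "A-": 3.7,
    "B+": 3.3, "B": 3.0, "B-": 2.7,
    "C+": 2.3, "C": 2.0, "C-": 1.7,
    "D+": 1.3, "D": 1.0, "D-": 0.7,
    "F": 0.0,
}

def letter(score):
    """Map a numeric average back to the nearest letter grade."""
    return min(SCALE, key=lambda g: abs(SCALE[g] - score))

def average_grade(grades):
    """Average a list of letter grades and round to the nearest letter."""
    scores = [SCALE[g] for g in grades]
    return letter(sum(scores) / len(scores))

# Scenario (2): a cluster of B+s dragged down by one pan.
print(average_grade(["B+", "B+", "B+", "B+", "D-"]))  # -> B-

# Scenario (3): lukewarm C-range grades pulled up by two raves.
print(average_grade(["C-", "C-", "C-", "A+", "A+"]))  # -> B-
```

Two very different distributions, one identical B-. That's the whole story of why the grade proliferates.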
C is our benchmark for "average." The B- grade is so common because critical disagreements are so common. Very rarely is there a B- critical consensus on a show. Unlike shows that land in the B+ to A+ range or the C- to F range, a show doesn't need a critical consensus to get to a B-. If anything, arguments amongst the critics favor a B- outcome.
In other words, a B- grade is not an inflated C. We are actively endeavoring to avoid grade inflation here, where reviews that really should be a C or a C+ are called a B- because we feel bad.
The only way to avoid the proliferation of B- grades would be to start weighting the various critics, something neither Rob nor I is keen to do, because the determination of which reviewers are worth reading and which are not is so purely subjective. When Nate Silver weights the polling on 538, he weights it according to empirical evidence, namely some combination of how good each poll has been historically at predicting results, demographic data about respondents versus the electorate, and some evaluation of its methodology. There is no empirical evidence about how accurate a theatre review is, or at least there isn't yet. If, after we've been doing this for a while, we discover that some reviewers are relentless boosters or unfair haters, we'll revisit the weighting question then.
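For the curious, the weighting we're declining to do would amount to a weighted mean. Here's a sketch; the weights are entirely made up, which is exactly the problem:

```python
# A weighted mean over (numeric_grade, weight) pairs. The weights here
# are hypothetical -- there's no empirical basis for picking them, which
# is why we don't do this.
def weighted_average(graded_reviews):
    """graded_reviews: list of (numeric_grade, weight) pairs."""
    total = sum(score * weight for score, weight in graded_reviews)
    return total / sum(weight for _, weight in graded_reviews)

# Unweighted, a B+ (3.3 on a GPA-style scale) and an F (0.0) average
# to 1.65. Give the rave half the weight of the pan and it drops further:
reviews = [(3.3, 0.5), (0.0, 1.0)]
print(round(weighted_average(reviews), 2))  # -> 1.1
```

Change the weights and the aggregate grade moves wherever you like, which is precisely why picking them subjectively would undermine the whole exercise.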
PS: There are fewer B- grades on the sidebar to your right today than there were last week. This is because Rob went through and took out shows that have closed (although you can still find them on the site). Much of what we cover is commercial theatre; as you can imagine, shows that tend towards a B-, particularly if that's due to a Times review, are likely not to run as long and will therefore disappear from the rankings faster. I'm sure January will bring a new crop of B- shows for your reading pleasure and our compiling consternation.