A Better Way to a Top 100?

When did you first learn about All About Romance’s Top 100 poll—maybe when it was first assembled in 1998, or when it was recompiled in 2000, 2004, 2007, 2010, 2013, and most recently in 2018?

In my paper “History Ever After: Fabricated Historical Chronotopes in Romance Genre Fiction,” I examined the approach that one AAR reviewer took to historical romance set outside the Regency. This kind of perceived “accuracy” is, I believe, why the top ten (above) chosen by their readers skewed toward white British peers in historicals. How a site reviews books shapes its readers’ opinions.

[Embedded presentation: History-Ever-After-IASPR-presentation-2018]

For another examination of the final list, check out Book Thingo’s recent podcast “All About Romance Lists,” with the always-entertaining Kat, Gabby, and Rudi. (Click on the image below for the link.) These reviewers make great points, but notice that they do not question the idea of making a list. In fact, Gabby and Rudi reminisce about using the AAR list like a reading challenge when they were younger. I did the same when I first discovered romance. People love lists, even when we know they are imperfect. How many articles are headlined, “Ten romances for the summer / winter / fall / dentist’s office / to read while avoiding your taxes” and so on? At their best, lists can bring new titles to romance readers everywhere.

[Image: Jennifer-Hallock-Sydney-IASPR-2018]

But just as important as the poll itself is how you compile it. Let’s look at the AAR process from a social science perspective. In stage one, AAR’s initial list did not include a single book by an African-American author, even though the site has given qualifying books A grades. When this was pointed out, AAR immediately pulled the first list and added books by several authors of color. Unfortunately, they misspelled a few of those authors’ names in the process.

[Image: AAR-top-100-problems-process-2018]

A closer look at the first list distributed online:

[Image: AAR-top-100-problems-process-2018]

And here is a snapshot of the second list before all of the spelling corrections were made:

[Image: AAR-top-100-problems-process-2018]

Here’s the thing: AAR should not have started with a predetermined list in the first place. Not only would that have saved them a lot of headache, it would have created a more objective poll. A predetermined list inevitably reflects the reviewers’ bias, and everyone has bias. Everyone. That is the foundation of social science research theory. AAR stated that their list was drawn from: (1) past winners; (2) staff feedback, meaning books their reviewers believed had merit; and (3) public reviews on Amazon and Goodreads.

The first two criteria are the problem. They bake into the poll a bias toward incumbents (predominantly white, cishet, traditionally published books, because look at romance publishing) and toward books that have scored high on AAR’s own subjective site. Yes, all reviews are subjective, and that is okay. (We writers need to remember that, as well as readers.) The problem here is not that AAR reviewers had opinions; it is that those opinions were conflated with gatekeeping. AAR did not blend reader suggestions with its own predetermined list until the third stage of voting, which was too late.

I think there is a better way. Now, to be clear, I am not a professional polling consultant. These are merely my humble amateur ideas drawn from a background in social science (bachelor’s and master’s degrees in International Affairs) and teaching 25 years’ worth of high school students how to assess the reliability of their sources.

Moreover, I am not volunteering to take AAR’s place. As an author, maybe I should not even be suggesting any of this, but the social science teacher in me could not help but come up with these ideas. And of course I do not have enough of a blog platform to make these ideas work. But if someone wants to try it again in a few years, please consider these suggestions.

[Image: AAR-top-100-problems-process-2018]

Assembling a Reader Top 100 Romance Poll

Before you begin: For the year leading up to your poll, make sure that you are publicizing a wide variety of books, including a representative slate of authors, characters, subgenres, tropes, and publishers (indie among them). Keep a careful eye on your reviews to make sure that your coverage is balanced and open-minded. This gives visibility to a wide range of books and authors, and it attracts a nice mix of readers with a spectrum of tastes and preferences to your site.

  1. Open your poll by asking each reader to nominate up to 20 books. Start with your readers’ suggestions. Is this still bias? Yes, but it is the readers’ bias, and this is a readers’ poll. Moreover, it is this year’s bias, not the last poll’s. Reader preferences change as social mores and sensibilities change.
  2. Take the top 150 suggestions by rank—but do not release anything yet.
  3. Now it is time for you, the professional, to check your readers’ bias. Pick 20 to 50 more books to fill gaps in representation, subgenre, and publishing market. Do not add just your faves; add what is missing. There is a difference.
  4. Release this list of up to 200 books to your readers for the second round of voting. I know 200 is a lot of books, but consider the alternative: should the candidate list keep growing mid-vote, as AAR’s did? That means books get added after some participants have already voted, so not everyone weighs the same choices. Maybe voters would have liked those additions better than their own picks, or maybe not, but you cannot get an objective survey of opinion without giving everyone the same slate. Unwieldy or not, the list should be cut to 100 by asking your readers to pick up to 30 books from it, about one of every seven listed. This forces tough choices, and people will only be able to advocate for their very favorites. (A sketch of the tallying logic follows this list.)

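To make the mechanics concrete, here is a minimal sketch of the tallying logic in Python. This is an illustration under stated assumptions, not a production polling system: the function names are my own invention, write-ins in the second round are simply discarded, and ties at the cutoffs are glossed over.

```python
from collections import Counter

def first_round_shortlist(ballots, shortlist_size=150, max_noms=20):
    """Steps 1 and 2: tally nomination ballots (up to max_noms
    titles each) and return the most-nominated titles."""
    counts = Counter()
    for ballot in ballots:
        # Count each title once per ballot, even if a reader
        # accidentally lists it twice.
        counts.update(set(ballot[:max_noms]))
    return [title for title, _ in counts.most_common(shortlist_size)]

def second_round_top_list(candidates, ballots, winners=100, max_picks=30):
    """Step 4: count final votes (up to max_picks per reader,
    drawn only from the released slate) and return the ranked winners."""
    candidate_set = set(candidates)
    counts = Counter()
    for ballot in ballots:
        picks = []
        for title in ballot:
            # Discard write-ins and duplicates: everyone votes on
            # the same slate, which is the whole point.
            if title in candidate_set and title not in picks:
                picks.append(title)
        counts.update(picks[:max_picks])
    return [title for title, _ in counts.most_common(winners)]

# Putting the stages together; editor_additions is step 3,
# the professional's 20 to 50 gap-filling picks:
#   slate = first_round_shortlist(nomination_ballots)
#   slate += [b for b in editor_additions if b not in slate]
#   top_100 = second_round_top_list(slate, final_ballots)
```

One design note: Counter.most_common breaks ties arbitrarily, so a real poll would need an explicit tie-breaking rule at both the 150- and the 100-book cutoff.
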
You can now count the votes and release the ranked top 100 romance novels. You could hold another vote for the top ten, or you could simply release the rankings from step four above.

Personally, I would keep the number of voting solicitations down. In the process above, readers vote only twice, which is less of a burden and therefore more likely to give you even participation across the whole survey. (Four voting steps, which is what AAR attempted, are certainly too many to ask of your audience. If interest dwindles by the third or fourth step, your results are less accurate. Because of the early mistakes AAR made, I suspect that participation from readers who enjoy diverse books dropped off. And in a reader poll, the readers who vote most often get to define the list.)

Someone with a big platform could make this happen, or something like it. Good luck!