The Fault in Our Stars

In 2014, one of the breakout movies of the year was The Fault in Our Stars, a romantic drama based on a book written in 2012.  I didn’t see the movie or read the book, but I did notice the box office receipts of $307 million compared to the production budget of $12 million – now that’s a solid return on investment!

I also liked the sound of the title, even though I didn’t know what it meant.  When I looked it up on Wikipedia, I found out that it refers to Act 1, Scene 2 of Shakespeare’s Julius Caesar, in which Cassius says to Brutus, ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’

Truth be told, I didn’t read Julius Caesar either, but I think that I understand what Cassius is saying: it’s not fate, it’s our character.

I was reminded of this phrase upon reading an article in the Wall Street Journal about Morningstar’s star rating system for mutual funds and ETFs (they rate stocks with a star system as well, but since that methodology is totally different, I won’t address it today).  You can see the article, ‘The Morningstar Mirage,’ by clicking here, although a subscription may be required.

The thrust of the WSJ article is that investors think that a five-star rating from Morningstar means that a mutual fund will continue to be a five-star fund in the future, which isn’t true.  The article goes on to show that funds with five-star ratings don’t generally retain their ratings in subsequent years and that money floods into five-star funds and out of one-star funds, and it quotes investors and advisors who misuse the ratings.

The most damning accusation, however, is that Morningstar knows that investors misuse the star system but leaves it intact because it receives licensing revenue from mutual fund companies that use the stars to promote their funds.  Morningstar says that this accounts for just four percent of its revenue, but by my calculations, that still adds up to $35 million a year.
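That figure is easy to sanity-check by running the arithmetic backward: $35 million / 0.04 ≈ $875 million in total annual revenue, so the licensing stream is small as a percentage of the business but hardly trivial in dollars.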

I’ve said before that I’ve got a love/hate relationship with Morningstar, but, in this case, I think the WSJ missed the mark.  Although there are some truly legitimate issues raised in the article, I think it’s a bit of a hatchet job.  Fake news, some might say.

Let’s start with how the star system works in broad terms (the details can be found here).  First, Morningstar sorts all of the thousands of mutual funds in existence into what looks like about 80 or 90 categories (I didn’t count them, but this is the source document).

Then, Morningstar calculates a risk-adjusted return for each fund in the category over the prior three years and forces the results into a normal distribution curve.  The top 10 percent are rated five stars and the bottom 10 percent get a one-star rating.  The middle third get a three-star rating and the rest get either two or four stars.

So, all the rating tells you is which funds had the best risk-adjusted returns over the past few years within each category, nothing more.

Unfortunately, the fund industry uses the star ratings to promote its funds, and while every ad says something like, ‘past performance does not guarantee future performance,’ the general investing public reads five stars as ‘good fund’ and one star as ‘bad fund.’

What’s interesting about the WSJ article is that even though it criticizes Morningstar for implying that five-star funds are better than one-star funds, the research included in the article actually supports that very idea.

The Journal’s research shows that, ten years later, five-star funds have higher average ratings than four-star funds, which have higher ratings than three-star funds, which have higher ratings than two-star funds, which have higher ratings than one-star funds.  It would appear from the WSJ’s own analysis that picking five-star funds over lesser-starred funds leads to the best result.

The WSJ’s argument is that five-star funds aren’t five-star funds ten years later, and while that’s true, I don’t think that Morningstar (or even the fund companies that promote their star ratings) has ever made such a claim.  In fact, Morningstar is very transparent about the results of its star system.

Morningstar has said as recently as last year that its star rating system has ‘some moderate predictive power’ under one method in one study, while another method in the same study shows even weaker results.  Here’s a link to their 36-page report (lots of details and nuanced insights!).

Morningstar has published other reports showing that there are more important predictors of future success, most notably the fund expense ratio (click here for more).  A few years ago, they even introduced a qualitative rating system in which their fund analysts assign each fund a Gold, Silver, Bronze, Neutral or Negative rating.

You might think that I’m defending Morningstar, when really, I don’t have a dog in the hunt.  We subscribe to Morningstar research, but we don’t use the star ratings at all in our analysis.  Personally, I wish they didn’t have the star ratings because, periodically, clients will ask us why we own a fund that has a two- or three-star rating.

Thankfully, I don’t think we’ve ever owned anything with a one-star rating, and the few times we’ve held something with two stars, the fund didn’t fit well in its category, meaning the rating had more to do with the category than with our fund.  (The article I referenced above about having a love/hate relationship with Morningstar is about how Vanguard’s bond market index fund looks odd in its category: the category is mostly corporate bond funds, while the Vanguard fund is only 25-30 percent corporate.)

I was amazed, and appalled, by the advisors the WSJ quoted describing how much they rely on the star system.  In my opinion, that amounts to malpractice.  Any advisor who relies so heavily on such a simplistic system should retire from the business, because they are doing their clients a disservice.

Now, the last question, and the most damning, is whether Morningstar has a pay-to-play system whereby big fund companies like Fidelity or BlackRock use their money and influence to get a better rating.

This one is a little tricky.  On the one hand, I do think that senior management at Morningstar knows that the fund companies butter their bread, in the same way that news organizations know that advertisers pay the bills, and that this knowledge influences their decisions to some extent.

On the other hand, I’ve gotten to know a number of Morningstar analysts over the years, and I don’t think they feel any pressure to do anything other than what they think is best.

One of the people that I know best is a former analyst, Sam Lee, who left a few years ago and now has an advisory firm in Chicago.  He wrote an article for Mutual Fund Observer last week and said, ‘I worked as a fund analyst for years.  My former colleagues were sincere.  No one brought up advertising or licensing revenue from such and such client when discussing a fund and I felt no implicit pressure to be nice to our biggest clients.’

I think Sam is as straight up as they come, and I believe him.  Now, I do think that Morningstar could make a positive change by requiring advertisers that license the stars to include Morningstar’s qualitative rating alongside them.

Of course, I know what would happen: fund companies would advertise their Gold- or Silver-rated funds that also had five stars, and no one would advertise a five-star fund with a Negative rating.  It wouldn’t solve the problem, just make it smaller.

The reality, though, is that nothing is going to solve the problem because, as Cassius said, it’s not the stars, it’s us.  Maybe that’s who inspired Pogo, who said, ‘We have met the enemy and he is us.’