Dealing with Corrupt Files in Hadoop

As I've been working with Hadoop a lot in the last several months, I've come to realize that it doesn't deal gracefully with corrupt files (e.g., malformed gzip files). I would throw a cluster at a couple hundred thousand files (of which one or two were bad), and the job would die two hours into execution, throwing EOFException errors all over the place. If I were only processing one file, I suppose that's a reasonably acceptable outcome. But when 99.9% of your files are fine, and the corrupt ones aren't recoverable anyway, there's no sense in blowing up the whole job just because a trivial portion of the data was bad.

[Image: an elephant losing its balance. Caption: Hadoop's relatively unknown LostBalance exception.]

Turns out, it's not too hard to catch those exceptions within a custom record reader, log a warning message, gracefully ignore the files in question, and go about your business. Here's how to do it.
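The full write-up has the details, but the shape of the fix looks roughly like this: a record reader (the class name SafeLineRecordReader below is just for illustration, not necessarily what the post uses) that delegates to Hadoop's LineRecordReader from the mapreduce API and, when a corrupt file throws an EOFException mid-read, logs a warning and reports "no more records" instead of killing the task. Treat it as a sketch of the idea rather than the exact code.

```java
import java.io.EOFException;
import java.io.IOException;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.LineRecordReader;

/**
 * Wraps LineRecordReader so that a corrupt (e.g., truncated gzip) file
 * ends the split gracefully instead of failing the whole task.
 * Illustrative sketch; not the exact reader from the original post.
 */
public class SafeLineRecordReader extends RecordReader<LongWritable, Text> {

    private static final Log LOG = LogFactory.getLog(SafeLineRecordReader.class);

    private final LineRecordReader delegate = new LineRecordReader();

    @Override
    public void initialize(InputSplit split, TaskAttemptContext context)
            throws IOException, InterruptedException {
        delegate.initialize(split, context);
    }

    @Override
    public boolean nextKeyValue() throws IOException, InterruptedException {
        try {
            return delegate.nextKeyValue();
        } catch (EOFException e) {
            // Corrupt or truncated input: log a warning and pretend the file is done,
            // so the other 99.9% of the data still gets processed.
            LOG.warn("Ignoring corrupt input file: " + e.getMessage(), e);
            return false;
        }
    }

    @Override
    public LongWritable getCurrentKey() throws IOException, InterruptedException {
        return delegate.getCurrentKey();
    }

    @Override
    public Text getCurrentValue() throws IOException, InterruptedException {
        return delegate.getCurrentValue();
    }

    @Override
    public float getProgress() throws IOException, InterruptedException {
        return delegate.getProgress();
    }

    @Override
    public void close() throws IOException {
        delegate.close();
    }
}
```

To wire it in, you'd subclass TextInputFormat so that createRecordReader() returns this reader, and point the job at it with job.setInputFormatClass().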


Analyzing Customer Churn – Competing Risks

Every survival analysis method I've talked about so far in this series has had one thing in common: we've only looked at one event in a customer lifetime (churn). In many cases, that's a perfectly fine way to go about things... we want our customers to stick with us, so churn is the event of interest. So why would we ever need to think about competing risks?

[Image: Sharknado. Caption: You know, competing risks. Will you die by tornado, or by shark?]

There's actually a critical assumption undergirding most survival analysis methods for right-censored data: that censored individuals have the same likelihood of experiencing the event of interest as individuals who were never censored. If that assumption is violated, things like Kaplan-Meier estimators can become wildly inaccurate. (If you need a refresher on Kaplan-Meier curves and other concepts, take a look at my earlier post on basic survival analysis.)
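For concreteness, here's the standard textbook Kaplan-Meier estimator (the usual definition, not a formula lifted from this post), which makes it easier to see where that assumption bites:

```latex
% Kaplan-Meier estimate of the survival function S(t):
%   t_i = distinct observed event (churn) times
%   d_i = number of events observed at t_i
%   n_i = number of customers still at risk just before t_i
%         (censored customers silently drop out of n_i)
\hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right)
```

Censored customers simply leave the risk sets n_i. If the customers who get censored are actually more (or less) likely to churn than the ones who remain under observation, those risk sets no longer represent the population, and the estimated survival curve ends up biased.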
