July 29th, 2014


The danger of automated news

Interesting and very true article over on TechCrunch a few days ago. It's worth reading (it's not terribly long); it's mainly about the way the automated algorithms on Facebook et al, which mostly just keep feeding us more of whatever we click on, can wind up providing not just a skewed view of the world but an *incredibly* depressing one. Since doom and disaster are what we (collectively) are most likely to click on, these algorithms keep feeding us more and more of it, until it becomes too unpleasant to even look at the screen.

A much less unpleasant but still irritating example: at some point on Twitter, I apparently clicked on a couple of links relating to Neil Gaiman, and something, somewhere, decided that he was *the* topic I was most interested in. It is the wildest of wild overkill -- I mean, I like the guy's work, but literally *half* of the suggestions for tweets I might be interested in are about him. I've had to make a discipline of *never* clicking on any of those, simply to avoid feeding this bug any more evidence.

None of which says that it is *impossible* to come up with personalization algorithms that produce a better view of the news. But it's a good reminder that the ones we have today are, by and large, still pretty terrible...