Does creating content make you dumb?

[photo: the author’s worn computer keyboard]

Something I have been thinking about lately … does being a prolific content creator put you at a competitive disadvantage?

I read someplace that only 2 percent of the people on the web are content creators. The rest are simply consumers of content. Is that true? I don’t know. I am too lazy to look it up and check that fact. But for argument’s sake, let’s go with it. Whatever the exact number, most of the world consumes. And in the process, those people are becoming smarter than me. Let’s unpack that observation.

Creating content is a time suck

I am clearly in that “creation” category, and, as you can tell by the photo of my worn computer keyboard above, I have been prolific.

I’ve written five full-length books in five years. I turned The Content Code and Social Media Explained into audio books this year and narrated them myself (not as easy as it seems!).

I’ve written hundreds of blog posts and recorded nearly 60 episodes of The Marketing Companion with my friend Tom Webster. I occasionally create videos, Slideshare presentations, and webinars.

I am constantly updating material for college classes and speeches. I have given about 150 interviews this year. I post daily on Facebook, Twitter and LinkedIn.

Now all that creating takes TIME. It might take me six months to write a book and four weeks to create a brand-new speech, and I spend around 5-6 hours a week on this blog alone.

I’m not complaining at all. I love creating all this stuff and have so many ideas I could probably double the output if I had the time. Undoubtedly the act of sharing my ideas with the world has driven significant business benefits. People get to know me through my content and this leads to speaking engagements, consulting assignments, and wonderful friendships around the world. It all leads to money. I am fond of money.

The content creator at risk

I have built a successful business entirely through content. I have not spent a dime on any form of advertising since 2008 (although Phoebe The Amazing Intern is messing around with Google AdWords as part of her assignment this month).

But this output does come at a cost. I often joke in my workshops that I learned long ago that I could either blog or watch TV — you can’t do both!

And there is truth to this. I am more or less a pop culture illiterate. Even more concerning, I spend very little time consuming professional content like the podcasts and blog posts of others. You could make an argument (and I am) that voracious consumers of content have a form of competitive advantage in the marketplace: they are better informed, more broadly educated, more connected, and more relevant because they are reading more than me and my fellow creators.

So, my membership among the content creating elite also dumbs me down.

This seems like a risky position to be in. The world is changing so very fast. By creating instead of consuming, am I grinding myself into obsolescence?

An interesting and sobering thought, right?

I’m lucky that I typically operate at a very high, strategic level. I don’t have to know the latest Pinterest or Snapchat tips and tricks to successfully provide a brand strategy or corporate marketing plan. So if the world of Facebook nuance and LinkedIn subtleties passes me by, I’m still good to go as long as I have a keen and accurate view of the big picture.

“Consuming” as competitive advantage

But this competitive situation is something to think about, isn’t it? Even if a “consumer” absorbs just two hours of additional education a week compared to a “creator,” she would have accumulated more than 100 additional hours of information in a year (two hours a week for 52 weeks is 104 hours). That is huge — the time equivalent of attending three college classes.

Fellow content creators, how are you establishing a balance that keeps you informed and relevant while pumping out all that good and useful stuff? How are you staying firmly placed on the learning curve and at least one step ahead of your customers and competitors?

Facebook thinks you’re too dumb to realize its scientific papers are really just PR

Yesterday, Technology Review was one of many outlets that published a headline, or some variation thereof, that read, “Facebook Says You Filter News More Than Its Algorithm Does.” This revelation was lifted from a paper Facebook’s researchers published the same day in the scientific journal Science. The study sought to determine whether the social network’s News Feed algorithm — in aiming to serve up stories it predicts users will want to read — limits people’s exposure to political viewpoints they disagree with, creating an ideological bubble and contributing to political polarization. It’s an important question, particularly ahead of the 2016 presidential election and in light of Facebook’s growing influence over how news reaches audiences. According to recent studies, almost half of all web-using adults — and 88 percent of Millennials — use Facebook to find news.

But this research paper is less a piece of objective scientific inquiry and more the work of corporate-commissioned data tricksters — a rancid pile of pro-Facebook propaganda that derives and frames its conclusions with the sole purpose of making Facebook look good.

This isn’t science. It’s PR.

And because press releases — even ones with funky algebra and annotations — are never allowed to reflect poorly on a company, it’s not hard to predict what this Facebook-commissioned study, carried out by Facebook researchers, concluded when investigating whether Facebook’s algorithms contribute to political polarization.

The paper concluded that the News Feed algorithm — which helps determine the stories users are most likely to see on Facebook — had but a minuscule effect on limiting users’ exposure to viewpoints different from their own. The real perpetrators of political echo chambers on Facebook, the researchers found, were users themselves, because they fail to click on stories they disagree with and their friends are predominantly like-minded politically. The study essentially asserts that if you’re a liberal and all you see when you log into Facebook are stories that preach to your bleeding-heart choir, it’s not Facebook’s fault — it’s your own for being too closed-minded and for not getting along with conservatives.

If true, that conclusion holds great significance. More and more, algorithms are responsible for the media we consume — whether it’s the stories served up by Facebook or the films Netflix recommends — and the products we buy, as companies like Amazon and Google strive to know what products users want before users know it themselves. From Mark Zuckerberg’s high-minded praise of journalistic institutions to the company’s grand ambitions to directly host content supplied by newspapers and magazines, Facebook endeavors to become the predominant platform and filter for news — and by “news” I don’t just mean what the guy who sat next to you ten years ago in Chemistry class had for breakfast. And therefore it’s crucial to hold Facebook accountable for providing factual and balanced streams of information, just as if it were the New York Times or NPR.

A cursory glance at this study, along with many of the aggregated news stories that summarized it, suggests that in this respect, at least, Facebook is a responsible steward of the world’s news. But the fact that Facebook itself is the one absolving Facebook of damaging public discourse demands greater scrutiny. And on closer examination, much of the data and many of the assumptions that informed the researchers’ conclusions fail to qualify as sound scientific inquiry.

First, let’s look at the hard data Facebook unearthed — which, admittedly, is in pretty short supply. Researchers found that after bringing its News Feed algorithm to bear on news stories posted by friends and pages, conservatives are 5 percent less likely to see stories they disagree with, while liberals are 8 percent less likely. That’s not a huge differential, perhaps smaller than we might expect. Nevertheless, the data undoubtedly shows that Facebook’s algorithms do limit users’ exposure to stories they are likely to disagree with, or, as Facebook calls them in the paper, “cross-cutting” stories.

But the researchers all but dismiss this inescapable fact by arguing that the real culprits behind a lack of cross-cutting content in Facebook feeds are the users themselves, either because they don’t have enough friends with dissimilar views or because they don’t like clicking on stories that run counter to their beliefs. The friends of users who list their political affiliation as “liberal,” for example, only share conservative-leaning stories 24 percent of the time. The friends of users who list their political affiliation as “conservative,” meanwhile, only share liberal-leaning stories 35 percent of the time. Making matters worse, liberals only click on cross-cutting stories 7 percent of the time, while conservatives do so 17 percent of the time. So even without the influence of Facebook’s algorithmic twitches, liberals and conservatives both experience the effects of an echo chamber on social media.
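To see how lopsided that framing is, here is a back-of-the-envelope sketch in Python using the percentages above. The compounding model is my assumption, not the paper’s (the researchers never claim these filters act independently and multiply), but it shows why the user-driven numbers look so dominant next to the algorithm’s 8 percent trim.

# A naive sketch (my assumption, not the paper's): treat the three
# filters as independent and multiply them, for a self-identified
# liberal user, using the rates reported in the study.

friend_share_rate = 0.24        # friends share conservative-leaning stories 24% of the time
algorithm_pass_rate = 1 - 0.08  # the News Feed algorithm trims cross-cutting exposure by 8%
click_rate = 0.07               # the user clicks on cross-cutting stories 7% of the time

after_friends = friend_share_rate                      # ~24.0% of shared stories are cross-cutting
after_algorithm = after_friends * algorithm_pass_rate  # ~22.1% survive the algorithm
actually_read = after_algorithm * click_rate           # ~1.5% actually get clicked

print(f"shared by friends:      {after_friends:.1%}")
print(f"survives the algorithm: {after_algorithm:.1%}")
print(f"actually clicked:       {actually_read:.1%}")

On these naive numbers, the friends-and-clicks filters strip away far more cross-cutting content than the algorithm’s trim does, which is exactly the comparison the paper invites and, as argued below, exactly the comparison that deserves suspicion.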

Even if we take the study at face value, just because user choices weigh heavily in limiting one’s exposure to diverse political views, that doesn’t exonerate Facebook. Its algorithms do the same, just not as strongly. But even this conclusion is suspect because the data set used by researchers is enormously limited. Communications professor Christian Sandvig, writing at Microsoft Research’s Social Media Collective blog, notes that researchers only evaluated feeds belonging to users who share a discernible political affiliation on their Facebook profile — which the professor estimates makes up only 4 percent of users. And because that’s such a specific behavioral trait, the test group is hardly what one would call “representative” of Facebook’s larger user base. With that in mind, this data — whether it’s interpreted in Facebook’s favor or against it — doesn’t tell us much of anything at all.

The researchers also betray their motives by adopting a tone that is self-serving and even defensive, suggesting that in spite of whatever limiting effects Facebook’s algorithm brings to bear on cross-cutting content, it shouldn’t matter because social media users are ultimately “exposed to more cross-cutting discourse in social media than they would be under the digital reality envisioned by some,” before linking to a 2001 book about how hyper-personalization may threaten democracy.

“Perhaps this could be a new Facebook motto used in advertising,” Sandvig writes. “‘Facebook: Better than one speculative dystopian future!’”

So a corporation smuggled some PR into a scientific paper and some journalists fell for it. Somewhere, an angel got his wings and a publicist got a promotion.

But there’s nothing trivial about Facebook’s ambitions to become the predominant source for news content and a gatekeeper of information that rivals Google in its power over what we know. And even if Facebook’s algorithms play only a negligible role in contributing to echo chambers and polarization, there’s an argument to be made that Facebook, which has never been shy in the past about policing what content users see, has a responsibility as a major news distributor to use its algorithm to counteract the effects of users’ self-made bubbles, placing more weight on cross-cutting stories in News Feeds belonging to users with stated political affiliations. Supporters of completely free and open content networks may bristle at that suggestion, but Facebook already exerts a great deal of control over what users see, as it constantly tweaks and tinkers with its News Feed algorithm. And if it must continue to mold and alter the shape of our News Feeds, then perhaps it could do so in ways that make its users better news consumers, deemphasizing or even excising false or plagiarized stories while promoting a measure of ideological balance.

But anyone with a knowledge of Facebook’s brief history in public relations knows exactly how the company would respond to this suggestion. It would raise the same defense it always does when critics foolishly attempt to hold Facebook to standards of journalistic ethics: by claiming it’s “user-first.”

As recently as last month, Facebook played the “user-first” card in response to criticism over changes it made to its News Feed algorithm that deemphasized content shared by pages belonging to brands — including news organizations. It doesn’t take a wizard of business to know why Facebook would make this change. As engagement and referrals surrounding this content inevitably falls, news organizations that have become reliant on Facebook for traffic will need that fix and do whatever’s necessary to get it. Chiefly, that means running paid promotions on Facebook to ensure users see these posts — promotions that make Facebook even richer.

But there’s a darker side to this: Changes like this create a “pay-to-play” paradigm wherein only the most well-funded news organizations are afforded the enormous reach Facebook can offer. This state of affairs tends to crowd out smaller outlets — many of which are smaller because they value journalistic bravery over brand-friendliness — which is as harmful to public discourse as the political polarization Facebook examined in yesterday’s study. And just like it did in its study on polarization, Facebook blamed users for the controversial News Feed change, declaring that its data team had crunched the numbers and found that users liked it better when they saw fewer posts from news sites. Of course, by crunching the numbers, Facebook merely meant that it ran a survey, and the questions it asked of users were magnificently leading. Questions like, “Are you worried about missing important updates from the friends you care about?” seemed to be carefully designed to goad users into saying they preferred an outcome that just so happened to be perfectly aligned with Facebook’s business interests.

So of course, everyone’s going to answer yes to that. But does that mean users don’t care about other types of posts? It doesn’t matter. This is what Facebook does, and this is what it just did with its latest scientific paper. Whenever it receives criticism or wishes to carry out something potentially controversial, it saves face by taking messy or incomplete or subjective data and twisting it so that users shoulder the blame.

And therein lies the insidious brilliance of the News Feed algorithm. Contrary to what its polarization study suggests, Facebook’s algorithms and the behavior of its users are not two separate and discrete forces. The work done by the algorithm is heavily informed by user behavior, but not entirely. And to what degree and under what circumstances user behavior holds sway is not always clear. This makes any comparison between the two highly muddled and confused — which is just how Facebook likes it. That way, the company can attribute virtually any negative consequence of any algorithm-driven efforts to maximize profit or influence to the behavioral whims of its users. The News Feed algorithm is at once a black box and a magic wand.

Facebook wants to be taken seriously by journalistic organizations as part of its play to host news content directly. But whenever critics raise concerns about changes made to its algorithms which, incidentally or purposefully, limit users’ exposure to certain types of news content, Facebook casts off its responsibility to any higher ideals of journalistic integrity. Thanks to the knot of human and algorithmic influences that impact the News Feed, which by design are impossible for outsiders to untangle, it’s able to argue — with science! — that it’s beholden only to users and the “user experience.” If politically polarized users want politically polarized content, Facebook won’t disappoint them.

But this user-first defense is disingenuous, particularly in light of Facebook’s broader monetization strategies and its ambitions to control how the news media reaches audiences. It’s old hat to say that if you’re not paying for a product, the product is you. But when it comes to Facebook, few clichés hold truer. The company’s core constituencies are advertisers and other brands that pay for exposure on its platform, and many of Facebook’s product changes — like its most recent News Feed tweak, which encourages organizations to pay or partner with the company to reach users — are aligned with its endeavors to boost its revenue and influence and not, as Facebook innocently claims, to create a “better user experience.”

In what’s becoming an enormously troubling trend among corporations, Facebook will only continue to use slippery data as a weapon in its war with the public and its competitors over the company’s public image. The Internet has made it possible to fact-check anything, and laypeople have become more adept than ever at identifying spin and other deceitful rhetorical techniques native to public relations. That’s why it’s so brilliant and insidious to see Facebook farm out what’s traditionally the work of PR specialists to data scientists. On top of the intellectual cachet society affords them as Silicon Valley geek-idols, these whiz kids breathe the rarefied air of academics, appearing in Science, which — despite a reputation that’s waned a bit, probably owing to its willingness to give lousy corporate data research like this a pass — is still exponentially more credible than a rewritten press release at Techcrunch. Most readers won’t think twice, nor will many tech journalists who will blindly rephrase studies like these without really examining the quality of the data or the subjective assumptions put forth by the authors.

But don’t be fooled by the fancy diagrams and annotations. This isn’t science. It’s pro-Facebook propaganda.

[illustration by Brad Jonas]
