January 07, 2018
Behavioral hacking (BH) is pervasive in social media and has profound effects on individuals, organizations, and politics. There is evidence that BH techniques can lead to mood changes, addictive and compulsive behavior, misinformation, and diminished quality of attention and work. These techniques persist because social media is built on advertising and targeted marketing, both of which profit from user attention. I suggest a “politics of behavioral hacking” that should be adopted to raise awareness, provide tools to combat attention-stealing business and design practices, and help those who may fall victim to them.
Behavioral hacking (BH) is defined as the methods, techniques, and designs that trigger parts of our biology to encourage certain behaviors. BH can be found to a startling degree in widely pervasive social systems and technologies. BH could be detrimental to public mental and emotional health through social media 1, harmful to public knowledge by promoting misinformation through methods like “clickbait” articles 2, and it contributes to the development of techniques that exploit human psychology for political and/or monetary gain. 3
The knowledge and use of traditional BH techniques is not new; marketers have been using them for decades, and these traditional types of marketing have spilled over onto the Internet. But technology has enabled wider ethical issues for Internet marketing that we as a society have never seen before; web analytics, for example. Analytics are extremely powerful (if not necessary) tools for modern, especially online, businesses to isolate a target market and build an effective “sales funnel”. They are used extensively across the web to track user demographics, behaviors, location, device, and more. Daniel Palmer has examined how the Internet has altered concepts of privacy and property, and how it tends to give businesses most of the control in dictating the extent to which consumers are subject to their marketing techniques. This makes it very easy for companies to access consumer information, while making it relatively difficult for consumers to prevent such access. 8 Combined, easy access to consumer information and effective BH techniques are powerful for a business agenda, while potentially damaging to exposed individuals or groups. Examples include highly targeted marketing based on demonstrated interests and quantified social validation, both of which, speculatively, could invoke or uncover predispositions toward impulsive behavior (similar to how an addicted gambler might spend more money than initially intended). Social media platforms like Facebook have combined these marketing techniques with psychological triggers in order to grab user attention and profit from it. 3
A study done at Ramon Llull University in Barcelona, Spain showed that Internet and mobile phone use among college students has resulted in greater loneliness, depression, anxiety, and sleep disturbance. Because women use their phones to establish and maintain social relationships more than men do, the authors also speculate that “women are more vulnerable to experiment psychological distress because they deal with more emotion-loaded issues, and tend to engage in ruminating behavior about these issues.” 4 These factors may make women particularly vulnerable to the effects of social media. In another study, brief exposure to Facebook led to a more negative mood in young women. 6
If people who use Facebook develop a more negative mood, why would they continue to use it? Facebook knew this was an issue that would affect user retention; it is in the company’s best interest to keep people feeling positive when they visit the site. So Facebook conducted a study with researchers from Cornell University in New York, which found that the emotional states (negative or otherwise) of Facebook users could be contagious and transmittable. Users whose news feeds displayed friends posting negative statuses were more likely to post negative statuses themselves. 7 The researchers note that users could always visit a friend’s profile to view a status directly; the experiment only curated which friends’ posts appeared in the news feed in order to elicit a reaction.
These kinds of practices can cause problems, however, when users in negative emotional states engage in addictive and compulsive behaviors. With the power of targeted marketing, the primary social media business model, an entity can target individuals who are predisposed to conspiracies and mistrust. One study found that the more empathetic people are with those around them, the more predisposed they are to manipulation; there is also a strong correlation with high interpersonal dependence. 9 If women and men in our strongly networked society maintain their relationships through the Internet and social media, one can only imagine the dangers that BH techniques pose through targeted marketing and advertising.
On social media, it is very easy to find groups who share your interests and beliefs, and it is possible to share and engage with information free from outside criticism, in communities whose members support and validate each other’s beliefs (“echo chambers”). But even without actively searching for these groups, a rational person might treat someone sharing conspiracy theories and misinformation differently if that person were a friend or family member rather than an unknown entity. They might even place their trust in that person because of the relationship, regardless of the true legitimacy of the information.
One disturbing example of BH techniques in use on the Internet today is the prevalence of “clickbait” articles: news stories with provocative headlines that elicit uncertainty in order to draw a reader in. 2 These headlines can be, and often are, misleading or entirely false, written with the expectation that the reader will continue past the headline to the full article. That expectation is problematic for many reasons, the most prominent being the assumption that readers will actually read the full article to digest and understand the truth, if there is one at all. Compounding these articles with persons predisposed to manipulation could enable far-reaching misinformation campaigns (a.k.a. “fake news”). This type of news is rampant on social media, and social media seems to encourage such articles if only because they capture your attention.
I will never forget a young woman who sat next to me on the bus one rainy Seattle day in November. Her eyes were glued to her iPhone as she rapidly twitched her right thumb up and down, swiping through her Instagram feed and occasionally double-clicking to multitask, all with a neurotic, compulsive rhythm. Her movements were slightly disturbing; if the context were a casino and her phone a slot machine, the stigma would be much worse. But as I looked up to survey the rest of the passengers, I noticed that they too were on their smartphones, many of them on social media, some with the same erratic, obsessive movements.
Roger McNamee, a venture capitalist who invested in Google and Facebook, argues that both companies’ initial missions changed with the advent of the smartphone, at which point the business became an arms race for people’s attention. “Facebook and Google assert with merit that they are giving users what they want…The same can be said about tobacco companies and drug dealers.” 11, p.8 Justin Rosenstein, co-creator of Facebook’s “like” button, believes there may be a legal case for state regulation of “psychologically manipulative advertising”. “If we only care about profit maximization… we will go rapidly into dystopia.” 11, p.9
Social media is inherently biased: as a business, its goal is to capture and monetize attention, which implies that whatever fails to grab your attention is bad for business. The designs built into social media software encourage users to engage with one another on the platform; it makes no business sense to distribute content no one will engage with. Unfortunately, people tend to engage most with content that provokes their cognitive biases. This renders social media, despite best intentions, a haven for misinformation and “clickbait”.
The focal effects of BH in social media are manifested in people using the platforms compulsively and addictively. The woman on the bus was clearly anxiously awaiting her next dopamine hit, incessantly swiping down to refresh and switching tasks when nothing came. The non-focal effects are harder to study because of their pervasive and psychological nature, but they might be observed in the form of emotional swings, disconnection, radical conspiratorial ideas (in people who were neither interested nor prone to believe them beforehand), and general psychological changes that follow after a person starts to use social media.
So far I have treated BH as an issue of social media and its domain. But social media has also had a considerable impact on fundamental ways we do business, from targeted marketing to recruiting. In academia, there are real concerns about universities being subject to the same scrutiny and expectations as businesses, competing for resources by publishing papers and accumulating citations (i.e., incentives). Portia Roelofs and Max Gallien write in the London School of Economics blog, “Academia is replicating the structure of the mass media. Academic articles are now evaluated according to essentially the same metrics as Buzzfeed posts and Instagram selfies.” They also note that promotions, jobs, and research funding are all tied up in these citation metrics, and that the metrics include no “downvotes” when a paper is published critiquing another. 5 If the measure of an academic’s impact is producing research that garners citations, we can assume the system will be abused for the sake of career advancement.
I have made apparent that behavioral hacking techniques are an ethical issue, giving examples of emotional contagion through social networks, design practices that promote addictive and compulsive behaviors for monetary gain, and the widespread attention-grabbing method of “clickbait” articles found across the Internet and, more recently, academia. I will construct my argument for a solution to BH on the ethical standpoint of utilitarianism: I consider the greater good of the public, that is, the mental and emotional health of the software’s users, to be more important than the business or organization practicing BH techniques. From this basis, we must consider how to develop policy that curbs the negative consequences of behavioral hacking. One approach, and the one I recommend, is a “politics of behavioral hacking”, similar to James Boyle’s “politics of intellectual property.” 8 This means further study of how BH techniques negatively affect groups of people at the individual and organizational level. An obvious consequence of BH in social media is the “arms race for attention”. These negative consequences should be debated within the contexts of their own industries and implementations.
To reduce the impact of current and potential negative consequences, we should create a sense of increasing urgency. There have been recent efforts to bring awareness to how social media and smartphones negatively affect our attention (non-profits like TimeWellSpent.io), but the evidence of these consequences must be strong enough to convince the public that change is worthwhile.
Once these consequences are isolated, organizational changes or policies for minimizing them should be proposed and evaluated. With public awareness and acceptance of the problem, we can create accountability, and from accountability, solutions. If the wider public becomes aware that Facebook employs these techniques quite deliberately, there may be a growing user base who “want out”. This “politics of behavioral hacking” should raise public awareness and understanding of our innate predispositions to manipulation, and empower the consumers who may fall victim to them.
Patrick Eddy lives in Seattle, watches the rain fall, and occasionally likes to muse over things.